diff --git a/.changelog/10129.txt b/.changelog/10129.txt index 9a3709212173..2ea53d9627bb 100644 --- a/.changelog/10129.txt +++ b/.changelog/10129.txt @@ -1,4 +1,4 @@ ```release-note:improvement raft: allow reloading of raft trailing logs and snapshot timing to allow recovery from some [replication failure modes](https://github.com/hashicorp/consul/issues/9609). -telemetry: add metrics and documentation for [monitoring for replication issues](https://consul.io/docs/agent/telemetry#raft-replication-capacity-issues). +telemetry: add metrics and documentation for [monitoring for replication issues](https://developer.hashicorp.com/consul/docs/reference/agent/telemetry#raft-replication-capacity-issues). ``` \ No newline at end of file diff --git a/.changelog/11855.txt b/.changelog/11855.txt index ea77b6b841c0..fa89b2adc114 100644 --- a/.changelog/11855.txt +++ b/.changelog/11855.txt @@ -1,4 +1,4 @@ ```release-note:feature Admin Partitions (Consul Enterprise only) This version adds admin partitions, a new entity defining administrative and networking boundaries within a Consul deployment. For more information refer to the - [Admin Partition](https://www.consul.io/docs/enterprise/admin-partitions) documentation. + [Admin Partition](https://developer.hashicorp.com/consul/docs/enterprise/admin-partitions) documentation. ``` diff --git a/.changelog/14423.txt b/.changelog/14423.txt index fd4033945890..32c8df6a58e3 100644 --- a/.changelog/14423.txt +++ b/.changelog/14423.txt @@ -1,3 +1,3 @@ ```release-note:feature -cli: Adds new subcommands for `peering` workflows. Refer to the [CLI docs](https://www.consul.io/commands/peering) for more information. +cli: Adds new subcommands for `peering` workflows. Refer to the [CLI docs](https://developer.hashicorp.com/consul/commands/peering) for more information. ``` diff --git a/.changelog/14474.txt b/.changelog/14474.txt index fcc326547833..589104f89efd 100644 --- a/.changelog/14474.txt +++ b/.changelog/14474.txt @@ -1,3 +1,3 @@ ```release-note:feature -http: Add new `get-or-empty` operation to the txn api. Refer to the [API docs](https://www.consul.io/api-docs/txn#kv-operations) for more information. +http: Add new `get-or-empty` operation to the txn api. Refer to the [API docs](https://developer.hashicorp.com/consul/api-docs/txn#kv-operations) for more information. ``` \ No newline at end of file diff --git a/.changelog/22189.txt b/.changelog/22189.txt new file mode 100644 index 000000000000..f7936e2a1b89 --- /dev/null +++ b/.changelog/22189.txt @@ -0,0 +1,3 @@ +```release-note:improvement +http: Add peer query param on catalog service API +``` diff --git a/.changelog/22204.txt b/.changelog/22204.txt new file mode 100644 index 000000000000..cafbe014a6b5 --- /dev/null +++ b/.changelog/22204.txt @@ -0,0 +1,3 @@ +```release-note:security +Upgrade Go to 1.23.6. +``` \ No newline at end of file diff --git a/.changelog/22207.txt b/.changelog/22207.txt new file mode 100644 index 000000000000..a876c7cf42b8 --- /dev/null +++ b/.changelog/22207.txt @@ -0,0 +1,5 @@ +```release-note:security +Update `golang.org/x/crypto` to v0.35.0 to address [GO-2025-3487](https://pkg.go.dev/vuln/GO-2025-3487). +Update `golang.org/x/oauth2` to v0.27.0 to address [GO-2025-3488](https://pkg.go.dev/vuln/GO-2025-3488). +Update `github.com/go-jose/go-jose/v3` to v3.0.4 to address [GO-2025-3485](https://pkg.go.dev/vuln/GO-2025-3485).
+``` \ No newline at end of file diff --git a/.changelog/22220.txt b/.changelog/22220.txt new file mode 100644 index 000000000000..fa660fefcb01 --- /dev/null +++ b/.changelog/22220.txt @@ -0,0 +1,3 @@ +```release-note:bug +agent: Add the missing Service TaggedAddresses and Check Type fields to Txn API. +``` diff --git a/.changelog/22226.txt b/.changelog/22226.txt new file mode 100644 index 000000000000..7c6e65967f85 --- /dev/null +++ b/.changelog/22226.txt @@ -0,0 +1,3 @@ +```release-note:bug +wan-federation: Fixed an issue where advertised IPv6 addresses were causing WAN federation to fail. +``` diff --git a/.changelog/22227.txt b/.changelog/22227.txt new file mode 100644 index 000000000000..ae08904a16af --- /dev/null +++ b/.changelog/22227.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +Added support for Consul Session to update the state of a Health Check, allowing for more dynamic and responsive health monitoring within the Consul ecosystem. This feature enables sessions to directly influence health check statuses, improving the overall reliability and accuracy of service health assessments. +``` diff --git a/.changelog/22237.txt b/.changelog/22237.txt new file mode 100644 index 000000000000..a75b56106128 --- /dev/null +++ b/.changelog/22237.txt @@ -0,0 +1,3 @@ +```release-note:security +Upgrade Go to 1.23.7 and bump X-Repositories to latest. +``` diff --git a/.changelog/22268.txt b/.changelog/22268.txt new file mode 100644 index 000000000000..adbcbeba7584 --- /dev/null +++ b/.changelog/22268.txt @@ -0,0 +1,5 @@ +```release-note:security +Update `golang.org/x/net` to v0.38.0 to address [GHSA-vvgc-356p-c3xw](https://github.com/advisories/GHSA-vvgc-356p-c3xw) and [GO-2025-3595](https://pkg.go.dev/vuln/GO-2025-3595). +Update `github.com/golang-jwt/jwt/v4` to v4.5.2 to address [GO-2025-3553](https://pkg.go.dev/vuln/GO-2025-3553) and [GHSA-mh63-6h87-95cp](https://github.com/advisories/GHSA-mh63-6h87-95cp). +Update `Go` to v1.23.8 to address [GO-2025-3563](https://pkg.go.dev/vuln/GO-2025-3563). +``` \ No newline at end of file diff --git a/.changelog/22286.txt b/.changelog/22286.txt new file mode 100644 index 000000000000..862a91f2c736 --- /dev/null +++ b/.changelog/22286.txt @@ -0,0 +1,3 @@ +```release-note:security +cli: update tls ca and cert create to reduce excessive file perms for generated public files +``` diff --git a/.changelog/22299.txt b/.changelog/22299.txt new file mode 100644 index 000000000000..dfd70b8f4ca1 --- /dev/null +++ b/.changelog/22299.txt @@ -0,0 +1,3 @@ +```release-note:feature +xds: provide a configurable value to disable XDS session load balancing, so that deployments with a load balancer in front of the Consul servers can disable the internal load balancing +``` diff --git a/.changelog/22359.txt b/.changelog/22359.txt new file mode 100644 index 000000000000..77eeae0b2c00 --- /dev/null +++ b/.changelog/22359.txt @@ -0,0 +1,3 @@ +```release-note:improvement +connect: Use net.JoinHostPort for host:port formatting to handle IPv6. +``` \ No newline at end of file diff --git a/.changelog/22376.txt b/.changelog/22376.txt new file mode 100644 index 000000000000..52ff92d31b97 --- /dev/null +++ b/.changelog/22376.txt @@ -0,0 +1,3 @@ +```release-note:security +connect: Added non-default namespace and partition checks to ConnectCA CSR requests.
+``` diff --git a/.changelog/22381.txt b/.changelog/22381.txt new file mode 100644 index 000000000000..4d2af0902bc8 --- /dev/null +++ b/.changelog/22381.txt @@ -0,0 +1,3 @@ +```release-note:bug +http: return a clear error when both Service.Service and Service.ID are missing during catalog registration +``` \ No newline at end of file diff --git a/.changelog/22382.txt b/.changelog/22382.txt new file mode 100644 index 000000000000..babd741d6f1e --- /dev/null +++ b/.changelog/22382.txt @@ -0,0 +1,3 @@ +```release-note:improvement +config: Warn about invalid characters in `datacenter` resulting in non-generation of X.509 certificates when using external CA for agent TLS communication. +``` diff --git a/.changelog/22409.txt b/.changelog/22409.txt new file mode 100644 index 000000000000..b98258c8a83f --- /dev/null +++ b/.changelog/22409.txt @@ -0,0 +1,11 @@ +```release-note:security +Upgrade UBI base image version to address CVE +[CVE-2025-4802](https://access.redhat.com/security/cve/cve-2025-4802) +[CVE-2024-40896](https://access.redhat.com/security/cve/cve-2024-40896) +[CVE-2024-12243](https://nvd.nist.gov/vuln/detail/CVE-2024-12243) +[CVE-2025-24528](https://access.redhat.com/security/cve/cve-2025-24528) +[CVE-2025-3277](https://access.redhat.com/security/cve/cve-2025-3277) +[CVE-2024-12133](https://access.redhat.com/security/cve/cve-2024-12133) +[CVE-2024-57970](https://access.redhat.com/security/cve/cve-2024-57970) +[CVE-2025-31115](https://access.redhat.com/security/cve/cve-2025-31115) +``` diff --git a/.changelog/22412.txt b/.changelog/22412.txt new file mode 100644 index 000000000000..4d7f735fb32a --- /dev/null +++ b/.changelog/22412.txt @@ -0,0 +1,3 @@ +```release-note:security +security: Upgrade Go to 1.23.10. +``` diff --git a/.changelog/22423.txt b/.changelog/22423.txt new file mode 100644 index 000000000000..661efd31f1f0 --- /dev/null +++ b/.changelog/22423.txt @@ -0,0 +1,3 @@ +```release-note:bug +ui: display IPv6 addresses with proper bracketed formatting +``` \ No newline at end of file diff --git a/.changelog/22467.txt b/.changelog/22467.txt new file mode 100644 index 000000000000..8c6b8b3fd6da --- /dev/null +++ b/.changelog/22467.txt @@ -0,0 +1,3 @@ +```release-note:bug +cli: validate IP address in service registration to prevent invalid IPs in service and tagged addresses. +``` \ No newline at end of file diff --git a/.changelog/22468.txt b/.changelog/22468.txt new file mode 100644 index 000000000000..758a327dadb7 --- /dev/null +++ b/.changelog/22468.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +ui: Improved display and handling of IPv6 addresses for better readability and usability in the Consul web interface. +``` \ No newline at end of file diff --git a/.changelog/22513.txt b/.changelog/22513.txt new file mode 100644 index 000000000000..b3cbbe2e1119 --- /dev/null +++ b/.changelog/22513.txt @@ -0,0 +1,3 @@ +```release-note:improvement +ui: Replaced internal code editor with HDS (HashiCorp Design System) code editor and code block components for improved accessibility and maintainability across the Consul UI. 
+``` \ No newline at end of file diff --git a/.changelog/22547.txt b/.changelog/22547.txt new file mode 100644 index 000000000000..f2671887694d --- /dev/null +++ b/.changelog/22547.txt @@ -0,0 +1,3 @@ +```release-note:security +security: Update Go to 1.23.12 to address CVE-2025-47906 +``` diff --git a/.changelog/22552.txt b/.changelog/22552.txt new file mode 100644 index 000000000000..e41e2ddf7c41 --- /dev/null +++ b/.changelog/22552.txt @@ -0,0 +1,3 @@ +```release-note:bug +cli: capture pprof when ACL is enabled and a token with operator:read is used, even if enable_debug config is not explicitly set. +``` \ No newline at end of file diff --git a/.changelog/22598.txt b/.changelog/22598.txt new file mode 100644 index 000000000000..e0ac7e07fee6 --- /dev/null +++ b/.changelog/22598.txt @@ -0,0 +1,3 @@ +```release-note:security +api: add charset in all applicable content-types. +``` \ No newline at end of file diff --git a/.github/CODEOWNERS b/.github/CODEOWNERS index c10989168c87..b5e61512b753 100644 --- a/.github/CODEOWNERS +++ b/.github/CODEOWNERS @@ -3,18 +3,18 @@ # engineering and web presence get notified of, and can approve changes to web tooling, but not content. -/website/ @hashicorp/web-presence @hashicorp/consul-selfmanage-maintainers +/website/ @hashicorp/consul-selfmanage-maintainers @hashicorp/web-presence /website/data/ /website/public/ /website/content/ # education and engineering get notified of, and can approve changes to web content. -/website/data/ @hashicorp/consul-docs @hashicorp/consul-selfmanage-maintainers -/website/public/ @hashicorp/consul-docs @hashicorp/consul-selfmanage-maintainers -/website/content/ @hashicorp/consul-docs @hashicorp/consul-selfmanage-maintainers +/website/data/ @hashicorp/consul-selfmanage-maintainers @hashicorp/consul-docs +/website/public/ @hashicorp/consul-selfmanage-maintainers @hashicorp/consul-docs +/website/content/ @hashicorp/consul-selfmanage-maintainers @hashicorp/consul-docs # release configuration -/.release/ @hashicorp/team-selfmanaged-releng @hashicorp/consul-selfmanage-maintainers -/.github/workflows/build.yml @hashicorp/team-selfmanaged-releng @hashicorp/consul-selfmanage-maintainers +/.release/ @hashicorp/consul-selfmanage-maintainers @hashicorp/team-selfmanaged-releng +/.github/workflows/build.yml @hashicorp/consul-selfmanage-maintainers @hashicorp/team-selfmanaged-releng diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md index 8d5dd6dc4ba9..c7248ae3717b 100644 --- a/.github/CONTRIBUTING.md +++ b/.github/CONTRIBUTING.md @@ -23,15 +23,13 @@ work on an issue, comment on it first and tell us the approach you want to take. * Increase our test coverage. * Fix a [bug](https://github.com/hashicorp/consul/labels/type/bug). * Implement a requested [enhancement](https://github.com/hashicorp/consul/labels/type/enhancement). -* Improve our guides and documentation. Consul's [Guides](https://www.consul.io/docs/guides/index.html), [Docs](https://www.consul.io/docs/index.html), and [api godoc](https://godoc.org/github.com/hashicorp/consul/api) +* Improve our documentation and tutorials. Consul's [Documentation](https://developer.hashicorp.com/consul/docs) and [api godoc](https://godoc.org/github.com/hashicorp/consul/api) are deployed from this repo. * Respond to questions about usage on the issue tracker or the Consul section of the [HashiCorp forum]: (https://discuss.hashicorp.com/c/consul) ### Reporting an Issue >Note: Issues on GitHub for Consul are intended to be related to bugs or feature requests. 
->Questions should be directed to other community resources such as the: [Discuss Forum](https://discuss.hashicorp.com/c/consul/29), [FAQ](https://www.consul.io/docs/faq.html), or [Guides](https://www.consul.io/docs/guides/index.html). - * Make sure you test against the latest released version. It is possible we already fixed the bug you're experiencing. However, if you are on an older version of Consul and feel the issue is critical, do let us know. @@ -163,7 +161,7 @@ When you're ready to submit a pull request: | `pr/no-changelog` | This PR does not have an intended changelog entry | | `pr/no-backport` | This PR does not have an intended backport target | | `pr/no-metrics-test` | This PR does not require any testing for metrics | - | `backport/1.12.x` | Backport the changes in this PR to the targeted release branch. Consult the [Consul Release Notes](https://www.consul.io/docs/release-notes) page and [`versions.hcl`](/.release/versions.hcl) to view active releases. Website documentation merged to the latest release branch is deployed immediately. See [backport policy](#backport-policy) for more information. | + | `backport/1.12.x` | Backport the changes in this PR to the targeted release branch. Consult the [Consul Release Notes](https://developer.hashicorp.com/consul/docs/release-notes) page and [`versions.hcl`](/.release/versions.hcl) to view active releases. Website documentation merged to the latest release branch is deployed immediately. See [backport policy](#backport-policy) for more information. | | `backport/all` | If contributing a bug fix or other change applicable to all branches, use `backport/all` to target all active branches automatically. See [backport policy](#backport-policy) for more information. | 7. After you submit, the Consul maintainers team needs time to carefully review your diff --git a/.github/workflows/backport-assistant.yml b/.github/workflows/backport-assistant.yml index d004f220bf8e..542607e7afd3 100644 --- a/.github/workflows/backport-assistant.yml +++ b/.github/workflows/backport-assistant.yml @@ -19,7 +19,7 @@ jobs: backport: if: github.event.pull_request.merged runs-on: ubuntu-latest - container: hashicorpdev/backport-assistant:0.4.4 + container: hashicorpdev/backport-assistant:v0.5.8 steps: - name: Run Backport Assistant for release branches run: | diff --git a/.github/workflows/build-distros.yml b/.github/workflows/build-distros.yml index 3b29a4e47ce3..8e1f99d58f91 100644 --- a/.github/workflows/build-distros.yml +++ b/.github/workflows/build-distros.yml @@ -130,25 +130,25 @@ jobs: - run: CC=aarch64-linux-gnu-gcc GOARCH=arm64 go build -tags "${{ env.GOTAGS }}" - build-s390x: - if: ${{ endsWith(github.repository, '-enterprise') }} - needs: - - setup - - get-go-version - - check-go-mod - runs-on: ${{ fromJSON(needs.setup.outputs.compute-xl) }} - steps: - - uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b # v4.1.4 - - # NOTE: This step is specifically needed for ENT. It allows us to access the required private HashiCorp repos.
- - name: Setup Git - run: git config --global url."https://${{ secrets.ELEVATED_GITHUB_TOKEN }}:@github.com".insteadOf "https://github.com" - - - uses: actions/setup-go@0c52d547c9bc32b1aa3301fd7a9cb496313a4491 # v5.0.0 - with: - go-version: ${{ needs.get-go-version.outputs.go-version }} - - name: Build - run: GOOS=linux GOARCH=s390x CGO_ENABLED=0 go build -tags "${{ env.GOTAGS }}" +# build-s390x: +# if: ${{ endsWith(github.repository, '-enterprise') }} +# needs: +# - setup +# - get-go-version +# - check-go-mod +# runs-on: ${{ fromJSON(needs.setup.outputs.compute-xl) }} +# steps: +# - uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b # v4.1.4 +# +# # NOTE: This step is specifically needed for ENT. It allows us to access the required private HashiCorp repos. +# - name: Setup Git +# run: git config --global url."https://${{ secrets.ELEVATED_GITHUB_TOKEN }}:@github.com".insteadOf "https://github.com" +# +# - uses: actions/setup-go@0c52d547c9bc32b1aa3301fd7a9cb496313a4491 # v5.0.0 +# with: +# go-version: ${{ needs.get-go-version.outputs.go-version }} +# - name: Build +# run: GOOS=linux GOARCH=s390x CGO_ENABLED=0 go build -tags "${{ env.GOTAGS }}" # This is job is required for branch protection as a required gihub check # because GitHub actions show up as checks at the job level and not the @@ -170,7 +170,7 @@ jobs: - build-386 - build-amd64 - build-arm - - build-s390x +# - build-s390x runs-on: ${{ fromJSON(needs.setup.outputs.compute-small) }} if: ${{ always() }} steps: diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml index 78481f2a2056..c6f34aaf4ed9 100644 --- a/.github/workflows/build.yml +++ b/.github/workflows/build.yml @@ -111,7 +111,7 @@ jobs: - name: Setup with node and yarn uses: actions/setup-node@60edb5dd545a775178f52524783378180af0d1f8 # v4.0.2 with: - node-version: '18' + node-version: '20' cache: 'yarn' cache-dependency-path: 'ui/yarn.lock' @@ -134,7 +134,7 @@ jobs: PRERELEASE_VERSION: ${{ needs.set-product-version.outputs.pre-version }} CGO_ENABLED: "0" GOLDFLAGS: "${{needs.set-product-version.outputs.shared-ldflags}}" - uses: hashicorp/actions-go-build@make-clean-flag-optional + uses: hashicorp/actions-go-build@b9e2cfba3013adccdc112b01cba922d83c78fac5 # v1.1.1 with: product_name: ${{ env.PKG_NAME }} product_version: ${{ needs.set-product-version.outputs.product-version }} @@ -193,60 +193,60 @@ jobs: name: ${{ env.DEB_PACKAGE }} path: out/${{ env.DEB_PACKAGE }} - build-s390x: - needs: - - set-product-version - - get-go-version - if: ${{ endsWith(github.repository, '-enterprise') }} - runs-on: ubuntu-latest - strategy: - matrix: - include: - - {goos: "linux", goarch: "s390x"} - fail-fast: true - - name: Go ${{ needs.get-go-version.outputs.go-version }} ${{ matrix.goos }} ${{ matrix.goarch }} build - steps: - - uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b # v4.1.4 - - - name: Setup with node and yarn - uses: actions/setup-node@60edb5dd545a775178f52524783378180af0d1f8 # v4.0.2 - with: - node-version: '18' - cache: 'yarn' - cache-dependency-path: 'ui/yarn.lock' - - - name: Build UI - run: | - CONSUL_VERSION=${{ needs.set-product-version.outputs.product-version }} - CONSUL_DATE=${{ needs.set-product-version.outputs.product-date }} - CONSUL_BINARY_TYPE=${CONSUL_BINARY_TYPE} - CONSUL_COPYRIGHT_YEAR=$(git show -s --format=%cd --date=format:%Y HEAD) - echo "consul_version is ${CONSUL_VERSION}" - echo "consul_date is ${CONSUL_DATE}" - echo "consul binary type is ${CONSUL_BINARY_TYPE}" - echo "consul copyright year is 
${CONSUL_COPYRIGHT_YEAR}" - cd ui && make && cd .. - rm -rf agent/uiserver/dist - mv ui/packages/consul-ui/dist agent/uiserver/ - - name: Go Build - env: - PRODUCT_VERSION: ${{ needs.set-product-version.outputs.product-version }} - PRERELEASE_VERSION: ${{ needs.set-product-version.outputs.pre-version }} - CGO_ENABLED: "0" - GOLDFLAGS: "${{needs.set-product-version.outputs.shared-ldflags}}" - uses: hashicorp/actions-go-build@make-clean-flag-optional - with: - product_name: ${{ env.PKG_NAME }} - product_version: ${{ needs.set-product-version.outputs.product-version }} - go_version: ${{ needs.get-go-version.outputs.go-version }} - os: ${{ matrix.goos }} - arch: ${{ matrix.goarch }} - reproducible: nope - clean: false - instructions: |- - cp LICENSE $TARGET_DIR/LICENSE.txt - go build -ldflags="$GOLDFLAGS" -o "$BIN_PATH" -trimpath -buildvcs=false +# build-s390x: +# needs: +# - set-product-version +# - get-go-version +# if: ${{ endsWith(github.repository, '-enterprise') }} +# runs-on: ubuntu-latest +# strategy: +# matrix: +# include: +# - {goos: "linux", goarch: "s390x"} +# fail-fast: true +# +# name: Go ${{ needs.get-go-version.outputs.go-version }} ${{ matrix.goos }} ${{ matrix.goarch }} build +# steps: +# - uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b # v4.1.4 +# +# - name: Setup with node and yarn +# uses: actions/setup-node@60edb5dd545a775178f52524783378180af0d1f8 # v4.0.2 +# with: +# node-version: '18' +# cache: 'yarn' +# cache-dependency-path: 'ui/yarn.lock' +# +# - name: Build UI +# run: | +# CONSUL_VERSION=${{ needs.set-product-version.outputs.product-version }} +# CONSUL_DATE=${{ needs.set-product-version.outputs.product-date }} +# CONSUL_BINARY_TYPE=${CONSUL_BINARY_TYPE} +# CONSUL_COPYRIGHT_YEAR=$(git show -s --format=%cd --date=format:%Y HEAD) +# echo "consul_version is ${CONSUL_VERSION}" +# echo "consul_date is ${CONSUL_DATE}" +# echo "consul binary type is ${CONSUL_BINARY_TYPE}" +# echo "consul copyright year is ${CONSUL_COPYRIGHT_YEAR}" +# cd ui && make && cd .. 
+# rm -rf agent/uiserver/dist +# mv ui/packages/consul-ui/dist agent/uiserver/ +# - name: Go Build +# env: +# PRODUCT_VERSION: ${{ needs.set-product-version.outputs.product-version }} +# PRERELEASE_VERSION: ${{ needs.set-product-version.outputs.pre-version }} +# CGO_ENABLED: "0" +# GOLDFLAGS: "${{needs.set-product-version.outputs.shared-ldflags}}" +# uses: hashicorp/actions-go-build@b9e2cfba3013adccdc112b01cba922d83c78fac5 # v1.1.1 +# with: +# product_name: ${{ env.PKG_NAME }} +# product_version: ${{ needs.set-product-version.outputs.product-version }} +# go_version: ${{ needs.get-go-version.outputs.go-version }} +# os: ${{ matrix.goos }} +# arch: ${{ matrix.goarch }} +# reproducible: nope +# clean: false +# instructions: |- +# cp LICENSE $TARGET_DIR/LICENSE.txt +# go build -ldflags="$GOLDFLAGS" -o "$BIN_PATH" -trimpath -buildvcs=false build-docker: name: Docker ${{ matrix.arch }} build diff --git a/.github/workflows/frontend.yml b/.github/workflows/frontend.yml index 93f8ee0bd050..596549ea8dbe 100644 --- a/.github/workflows/frontend.yml +++ b/.github/workflows/frontend.yml @@ -37,7 +37,7 @@ jobs: - uses: actions/setup-node@60edb5dd545a775178f52524783378180af0d1f8 # v4.0.2 with: - node-version: '18' + node-version: '20' - name: Install Yarn run: corepack enable @@ -57,7 +57,7 @@ jobs: - uses: actions/setup-node@60edb5dd545a775178f52524783378180af0d1f8 # v4.0.2 with: - node-version: '18' + node-version: '20' - name: Install Yarn run: corepack enable @@ -87,7 +87,7 @@ jobs: - uses: actions/setup-node@60edb5dd545a775178f52524783378180af0d1f8 # v4.0.2 with: - node-version: '18' + node-version: '20' - name: Install Yarn run: corepack enable @@ -127,7 +127,7 @@ jobs: - uses: actions/setup-node@60edb5dd545a775178f52524783378180af0d1f8 # v4.0.2 with: - node-version: '18' + node-version: '20' - name: Install Yarn run: corepack enable diff --git a/.github/workflows/jira-pr.yaml b/.github/workflows/jira-pr.yaml index 3a1aa5d6f1c3..ea48b5b4929f 100644 --- a/.github/workflows/jira-pr.yaml +++ b/.github/workflows/jira-pr.yaml @@ -61,7 +61,7 @@ jobs: if: ( github.event.action == 'opened' && steps.is-team-member.outputs.MESSAGE == 'false' ) uses: tomhjp/gh-action-jira-create@3ed1789cad3521292e591a7cfa703215ec1348bf # v0.2.1 with: - project: NET + project: CSL issuetype: "${{ steps.set-ticket-type.outputs.TYPE }}" summary: "${{ github.event.repository.name }} [${{ steps.set-ticket-type.outputs.TYPE }} #${{ github.event.pull_request.number }}]: ${{ github.event.pull_request.title }}" description: "${{ github.event.issue.body || github.event.pull_request.body }}\n\n_Created in GitHub by ${{ github.actor }}._" diff --git a/.github/workflows/nightly-test-1.18.x.yaml b/.github/workflows/nightly-test-1.18.x.yaml index ca627b013932..72bfbc54e591 100644 --- a/.github/workflows/nightly-test-1.18.x.yaml +++ b/.github/workflows/nightly-test-1.18.x.yaml @@ -31,7 +31,7 @@ jobs: # Not necessary to use yarn, but enables caching - uses: actions/setup-node@60edb5dd545a775178f52524783378180af0d1f8 # v4.0.2 with: - node-version: 18 + node-version: 20 cache: 'yarn' cache-dependency-path: ./ui/yarn.lock @@ -64,7 +64,7 @@ jobs: # Not necessary to use yarn, but enables caching - uses: actions/setup-node@60edb5dd545a775178f52524783378180af0d1f8 # v4.0.2 with: - node-version: 18 + node-version: 20 cache: 'yarn' cache-dependency-path: ./ui/yarn.lock @@ -103,7 +103,7 @@ jobs: # Not necessary to use yarn, but enables caching - uses: actions/setup-node@60edb5dd545a775178f52524783378180af0d1f8 # v4.0.2 with: - node-version: 18 + 
node-version: 20 cache: 'yarn' cache-dependency-path: ./ui/yarn.lock @@ -137,7 +137,7 @@ jobs: # Not necessary to use yarn, but enables caching - uses: actions/setup-node@60edb5dd545a775178f52524783378180af0d1f8 # v4.0.2 with: - node-version: 18 + node-version: 20 cache: 'yarn' cache-dependency-path: ./ui/yarn.lock @@ -176,7 +176,7 @@ jobs: # Not necessary to use yarn, but enables caching - uses: actions/setup-node@60edb5dd545a775178f52524783378180af0d1f8 # v4.0.2 with: - node-version: 18 + node-version: 20 cache: 'yarn' cache-dependency-path: ./ui/yarn.lock @@ -207,7 +207,7 @@ jobs: # Not necessary to use yarn, but enables caching - uses: actions/setup-node@60edb5dd545a775178f52524783378180af0d1f8 # v4.0.2 with: - node-version: 18 + node-version: 20 cache: 'yarn' cache-dependency-path: ./ui/yarn.lock diff --git a/.github/workflows/nightly-test-1.19.x.yaml b/.github/workflows/nightly-test-1.19.x.yaml index 20c80dcb2395..459cc1bdeba7 100644 --- a/.github/workflows/nightly-test-1.19.x.yaml +++ b/.github/workflows/nightly-test-1.19.x.yaml @@ -31,7 +31,7 @@ jobs: # Not necessary to use yarn, but enables caching - uses: actions/setup-node@60edb5dd545a775178f52524783378180af0d1f8 # v4.0.2 with: - node-version: 18 + node-version: 20 cache: 'yarn' cache-dependency-path: ./ui/yarn.lock @@ -64,7 +64,7 @@ jobs: # Not necessary to use yarn, but enables caching - uses: actions/setup-node@60edb5dd545a775178f52524783378180af0d1f8 # v4.0.2 with: - node-version: 18 + node-version: 20 cache: 'yarn' cache-dependency-path: ./ui/yarn.lock @@ -103,7 +103,7 @@ jobs: # Not necessary to use yarn, but enables caching - uses: actions/setup-node@60edb5dd545a775178f52524783378180af0d1f8 # v4.0.2 with: - node-version: 18 + node-version: 20 cache: 'yarn' cache-dependency-path: ./ui/yarn.lock @@ -137,7 +137,7 @@ jobs: # Not necessary to use yarn, but enables caching - uses: actions/setup-node@60edb5dd545a775178f52524783378180af0d1f8 # v4.0.2 with: - node-version: 18 + node-version: 20 cache: 'yarn' cache-dependency-path: ./ui/yarn.lock @@ -176,7 +176,7 @@ jobs: # Not necessary to use yarn, but enables caching - uses: actions/setup-node@60edb5dd545a775178f52524783378180af0d1f8 # v4.0.2 with: - node-version: 18 + node-version: 20 cache: 'yarn' cache-dependency-path: ./ui/yarn.lock @@ -207,7 +207,7 @@ jobs: # Not necessary to use yarn, but enables caching - uses: actions/setup-node@60edb5dd545a775178f52524783378180af0d1f8 # v4.0.2 with: - node-version: 18 + node-version: 20 cache: 'yarn' cache-dependency-path: ./ui/yarn.lock diff --git a/.github/workflows/nightly-test-1.20.x.yaml b/.github/workflows/nightly-test-1.20.x.yaml index 37f035def29b..09f3f824f865 100644 --- a/.github/workflows/nightly-test-1.20.x.yaml +++ b/.github/workflows/nightly-test-1.20.x.yaml @@ -31,7 +31,7 @@ jobs: # Not necessary to use yarn, but enables caching - uses: actions/setup-node@60edb5dd545a775178f52524783378180af0d1f8 # v4.0.2 with: - node-version: 18 + node-version: 20 cache: 'yarn' cache-dependency-path: ./ui/yarn.lock @@ -64,7 +64,7 @@ jobs: # Not necessary to use yarn, but enables caching - uses: actions/setup-node@60edb5dd545a775178f52524783378180af0d1f8 # v4.0.2 with: - node-version: 18 + node-version: 20 cache: 'yarn' cache-dependency-path: ./ui/yarn.lock @@ -103,7 +103,7 @@ jobs: # Not necessary to use yarn, but enables caching - uses: actions/setup-node@60edb5dd545a775178f52524783378180af0d1f8 # v4.0.2 with: - node-version: 18 + node-version: 20 cache: 'yarn' cache-dependency-path: ./ui/yarn.lock @@ -137,7 +137,7 @@ jobs: # Not 
necessary to use yarn, but enables caching - uses: actions/setup-node@60edb5dd545a775178f52524783378180af0d1f8 # v4.0.2 with: - node-version: 18 + node-version: 20 cache: 'yarn' cache-dependency-path: ./ui/yarn.lock @@ -176,7 +176,7 @@ jobs: # Not necessary to use yarn, but enables caching - uses: actions/setup-node@60edb5dd545a775178f52524783378180af0d1f8 # v4.0.2 with: - node-version: 18 + node-version: 20 cache: 'yarn' cache-dependency-path: ./ui/yarn.lock @@ -207,7 +207,7 @@ jobs: # Not necessary to use yarn, but enables caching - uses: actions/setup-node@60edb5dd545a775178f52524783378180af0d1f8 # v4.0.2 with: - node-version: 18 + node-version: 20 cache: 'yarn' cache-dependency-path: ./ui/yarn.lock diff --git a/.github/workflows/nightly-test-integrations-1.21.x.yml b/.github/workflows/nightly-test-integrations-1.21.x.yml new file mode 100644 index 000000000000..631785658b2b --- /dev/null +++ b/.github/workflows/nightly-test-integrations-1.21.x.yml @@ -0,0 +1,474 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: MPL-2.0 + +name: Nightly test-integrations 1.21.x + +on: + schedule: + # Run nightly at 1AM UTC/9PM EST/6PM PST + - cron: '0 1 * * *' + workflow_dispatch: {} + +env: + TEST_RESULTS_DIR: /tmp/test-results + TEST_RESULTS_ARTIFACT_NAME: test-results + CONSUL_LICENSE: ${{ secrets.CONSUL_LICENSE }} + GOTAGS: ${{ endsWith(github.repository, '-enterprise') && 'consulent' || '' }} + GOTESTSUM_VERSION: "1.11.0" + CONSUL_BINARY_UPLOAD_NAME: consul-bin + # strip the hashicorp/ off the front of github.repository for consul + CONSUL_LATEST_IMAGE_NAME: ${{ endsWith(github.repository, '-enterprise') && github.repository || 'hashicorp/consul' }} + GOPRIVATE: github.com/hashicorp # Required for enterprise deps + BRANCH: "release/1.21.x" + BRANCH_NAME: "release-1.21.x" # Used for naming artifacts + +jobs: + setup: + runs-on: ubuntu-latest + name: Setup + outputs: + compute-small: ${{ steps.runners.outputs.compute-small }} + compute-medium: ${{ steps.runners.outputs.compute-medium }} + compute-large: ${{ steps.runners.outputs.compute-large }} + compute-xl: ${{ steps.runners.outputs.compute-xl }} + enterprise: ${{ steps.runners.outputs.enterprise }} + steps: + - name: Checkout code + uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b # v4.1.4 + with: + ref: ${{ env.BRANCH }} + - id: runners + run: .github/scripts/get_runner_classes.sh + + get-go-version: + uses: ./.github/workflows/reusable-get-go-version.yml + with: + ref: release/1.21.x + + get-envoy-versions: + uses: ./.github/workflows/reusable-get-envoy-versions.yml + with: + ref: release/1.21.x + + dev-build: + needs: + - setup + - get-go-version + uses: ./.github/workflows/reusable-dev-build.yml + with: + runs-on: ${{ needs.setup.outputs.compute-large }} + repository-name: ${{ github.repository }} + uploaded-binary-name: 'consul-bin' + branch-name: "release/1.21.x" + go-version: ${{ needs.get-go-version.outputs.go-version }} + secrets: + elevated-github-token: ${{ secrets.ELEVATED_GITHUB_TOKEN }} + + generate-envoy-job-matrices: + needs: [setup] + runs-on: ${{ fromJSON(needs.setup.outputs.compute-small) }} + name: Generate Envoy Job Matrices + outputs: + envoy-matrix: ${{ steps.set-matrix.outputs.envoy-matrix }} + steps: + - name: Checkout code + uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b # v4.1.4 + with: + ref: ${{ env.BRANCH }} + - name: Generate Envoy Job Matrix + id: set-matrix + env: + # TEST_SPLITS sets the number of test case splits to use in the matrix. 
This will be + # further multiplied in envoy-integration tests by the other dimensions in the matrix + # to determine the total number of runners used. + TEST_SPLITS: 4 + JQ_SLICER: '[ inputs ] | [_nwise(length / $runnercount | floor)]' + run: | + NUM_DIRS=$(find ./test/integration/connect/envoy -mindepth 1 -maxdepth 1 -type d | wc -l) + + if [ "$NUM_DIRS" -lt "$TEST_SPLITS" ]; then + echo "TEST_SPLITS is larger than the number of tests/packages to split." + TEST_SPLITS=$((NUM_DIRS-1)) + fi + # fix issue where test splitting calculation generates 1 more split than TEST_SPLITS. + TEST_SPLITS=$((TEST_SPLITS-1)) + { + echo -n "envoy-matrix=" + find ./test/integration/connect/envoy -maxdepth 1 -type d -print0 \ + | xargs -0 -n 1 basename \ + | jq --raw-input --argjson runnercount "$TEST_SPLITS" "$JQ_SLICER" \ + | jq --compact-output 'map(join("|"))' + } >> "$GITHUB_OUTPUT" + + envoy-integration-test: + runs-on: ${{ fromJSON(needs.setup.outputs.compute-large) }} + needs: + - setup + - get-go-version + - get-envoy-versions + - generate-envoy-job-matrices + - dev-build + permissions: + id-token: write # NOTE: this permission is explicitly required for Vault auth. + contents: read + strategy: + fail-fast: false + matrix: + envoy-version: ${{ fromJSON(needs.get-envoy-versions.outputs.envoy-versions-json) }} + xds-target: ["server", "client"] + test-cases: ${{ fromJSON(needs.generate-envoy-job-matrices.outputs.envoy-matrix) }} + env: + ENVOY_VERSION: ${{ matrix.envoy-version }} + XDS_TARGET: ${{ matrix.xds-target }} + AWS_LAMBDA_REGION: us-west-2 + steps: + - name: Checkout code + uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b # v4.1.4 + with: + ref: ${{ env.BRANCH }} + - uses: actions/setup-go@0c52d547c9bc32b1aa3301fd7a9cb496313a4491 # v5.0.0 + with: + go-version: ${{ needs.get-go-version.outputs.go-version }} + + - name: fetch binary + uses: actions/download-artifact@65a9edc5881444af0b9093a5e628f2fe47ea3b2e # v4.1.7 + with: + name: '${{ env.CONSUL_BINARY_UPLOAD_NAME }}' + path: ./bin + - name: restore mode+x + run: chmod +x ./bin/consul + + - name: Set up Docker Buildx + uses: docker/setup-buildx-action@d70bba72b1f3fd22344832f00baa16ece964efeb # v3.3.0 + + - name: Docker build + run: docker build -t consul:local -f ./build-support/docker/Consul-Dev.dockerfile ./bin + + - name: Envoy Integration Tests + id: envoy-integration-tests + env: + GOTESTSUM_JUNITFILE: ${{ env.TEST_RESULTS_DIR }}/results.xml + GOTESTSUM_FORMAT: standard-verbose + COMPOSE_INTERACTIVE_NO_CLI: 1 + LAMBDA_TESTS_ENABLED: "true" + # tput complains if this isn't set to something. + TERM: ansi + run: | + # shellcheck disable=SC2001 + echo "Running $(sed 's,|, ,g' <<< "${{ matrix.test-cases }}" |wc -w) subtests" + # shellcheck disable=SC2001 + sed 's,|,\n,g' <<< "${{ matrix.test-cases }}" + go run gotest.tools/gotestsum@v${{env.GOTESTSUM_VERSION}} \ + --debug \ + --rerun-fails \ + --rerun-fails-report=/tmp/gotestsum-rerun-fails \ + --jsonfile /tmp/jsonfile/go-test.log \ + --packages=./test/integration/connect/envoy \ + -- -timeout=30m -tags integration -run="TestEnvoy/(${{ matrix.test-cases }})" + + # See https://github.com/orgs/community/discussions/8945#discussioncomment-9897011 + # and overall topic discussion for why this is necessary. 
+ - name: Generate artifact ID + id: generate-artifact-id + if: ${{ failure() && steps.envoy-integration-tests.conclusion == 'failure' }} + run: | + ARTIFACT_ID=$(uuidgen) + echo "Artifact ID: $ARTIFACT_ID (search this in job summary for download link)" + echo "artifact_id=$ARTIFACT_ID" >> "$GITHUB_ENV" + + - name: Upload failure logs + if: ${{ failure() && steps.envoy-integration-tests.conclusion == 'failure' }} + uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4.3.3 + with: + name: envoy-${{ matrix.envoy-version }}-logs-${{ env.artifact_id }} + path: test/integration/connect/envoy/workdir/logs/ + + # NOTE: ENT specific step as we store secrets in Vault. + - name: Authenticate to Vault + if: ${{ !cancelled() && endsWith(github.repository, '-enterprise') }} + id: vault-auth + run: vault-auth + + # NOTE: ENT specific step as we store secrets in Vault. + - name: Fetch Secrets + if: ${{ !cancelled() && endsWith(github.repository, '-enterprise') }} + id: secrets + uses: hashicorp/vault-action@v3 + with: + url: ${{ steps.vault-auth.outputs.addr }} + caCertificate: ${{ steps.vault-auth.outputs.ca_certificate }} + token: ${{ steps.vault-auth.outputs.token }} + secrets: | + kv/data/github/${{ github.repository }}/datadog apikey | DATADOG_API_KEY; + + - name: prepare datadog-ci + if: ${{ !cancelled() && !endsWith(github.repository, '-enterprise') }} + run: | + curl -L --fail "https://github.com/DataDog/datadog-ci/releases/latest/download/datadog-ci_linux-x64" --output "/usr/local/bin/datadog-ci" + chmod +x /usr/local/bin/datadog-ci + + - name: upload coverage + # do not run on forks + if: ${{ !cancelled() && github.event.pull_request.head.repo.full_name == github.repository }} + env: + DATADOG_API_KEY: "${{ endsWith(github.repository, '-enterprise') && env.DATADOG_API_KEY || secrets.DATADOG_API_KEY }}" + DD_ENV: ci + run: datadog-ci junit upload --service "$GITHUB_REPOSITORY" $TEST_RESULTS_DIR/results.xml + + upgrade-integration-test: + runs-on: ${{ fromJSON(needs.setup.outputs.compute-large) }} + needs: + - setup + - get-go-version + - get-envoy-versions + - dev-build + permissions: + id-token: write # NOTE: this permission is explicitly required for Vault auth. + contents: read + strategy: + fail-fast: false + matrix: + consul-version: ["1.15", "1.18", "1.19", "1.20"] + env: + CONSUL_LATEST_VERSION: ${{ matrix.consul-version }} + # ENVOY_VERSION should be the latest version supported by _all_ Consul versions in the + # matrix.consul-version, since we are testing upgrade from an older Consul version. + # In practice, this should be the highest Envoy version supported by the lowest non-LTS + # Consul version in the matrix (LTS versions receive additional Envoy version support). + # + # This value should be kept current in new nightly test workflows, and updated any time + # a new major Envoy release is added to the set supported by Consul versions in + # matrix.consul-version (i.e. whenever the highest common Envoy version across active + # Consul versions changes). The minor Envoy version does not necessarily need to be + # kept current for the purpose of these tests, but the major (1.N) version should be. + ENVOY_VERSION: 1.28.7 + steps: + - name: Checkout code + uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b # v4.1.4 + with: + ref: ${{ env.BRANCH }} + # NOTE: This step is specifically needed for ENT. It allows us to access the required private HashiCorp repos. 
+ - name: Setup Git + if: ${{ endsWith(github.repository, '-enterprise') }} + run: git config --global url."https://${{ secrets.ELEVATED_GITHUB_TOKEN }}:@github.com".insteadOf "https://github.com" + - uses: actions/setup-go@0c52d547c9bc32b1aa3301fd7a9cb496313a4491 # v5.0.0 + with: + go-version: ${{ needs.get-go-version.outputs.go-version }} + - run: go env + + # Get go binary from workspace + - name: fetch binary + uses: actions/download-artifact@65a9edc5881444af0b9093a5e628f2fe47ea3b2e # v4.1.7 + with: + name: '${{ env.CONSUL_BINARY_UPLOAD_NAME }}' + path: . + - name: restore mode+x + run: chmod +x consul + - name: Build consul:local image + run: docker build -t ${{ env.CONSUL_LATEST_IMAGE_NAME }}:local -f ./build-support/docker/Consul-Dev.dockerfile . + - name: Build consul-envoy:latest-version image + id: buildConsulEnvoyLatestImage + run: | + if ${{ endsWith(github.repository, '-enterprise') }} == 'true' + then + docker build -t consul-envoy:latest-version --build-arg CONSUL_IMAGE=docker.mirror.hashicorp.services/${{ env.CONSUL_LATEST_IMAGE_NAME }}:${{ env.CONSUL_LATEST_VERSION }}-ent --build-arg ENVOY_VERSION=${{ env.ENVOY_VERSION }} -f ./test/integration/consul-container/assets/Dockerfile-consul-envoy ./test/integration/consul-container/assets + else + docker build -t consul-envoy:latest-version --build-arg CONSUL_IMAGE=docker.mirror.hashicorp.services/${{ env.CONSUL_LATEST_IMAGE_NAME }}:${{ env.CONSUL_LATEST_VERSION }} --build-arg ENVOY_VERSION=${{ env.ENVOY_VERSION }} -f ./test/integration/consul-container/assets/Dockerfile-consul-envoy ./test/integration/consul-container/assets + fi + - name: Build consul-envoy:target-version image + id: buildConsulEnvoyTargetImage + continue-on-error: true + run: docker build -t consul-envoy:target-version --build-arg CONSUL_IMAGE=${{ env.CONSUL_LATEST_IMAGE_NAME }}:local --build-arg ENVOY_VERSION=${{ env.ENVOY_VERSION }} -f ./test/integration/consul-container/assets/Dockerfile-consul-envoy ./test/integration/consul-container/assets + - name: Retry Build consul-envoy:target-version image + if: steps.buildConsulEnvoyTargetImage.outcome == 'failure' + run: docker build -t consul-envoy:target-version --build-arg CONSUL_IMAGE=${{ env.CONSUL_LATEST_IMAGE_NAME }}:local --build-arg ENVOY_VERSION=${{ env.ENVOY_VERSION }} -f ./test/integration/consul-container/assets/Dockerfile-consul-envoy ./test/integration/consul-container/assets + - name: Build sds image + run: docker build -t consul-sds-server ./test/integration/connect/envoy/test-sds-server/ + - name: Configure GH workaround for ipv6 loopback + if: ${{ !endsWith(github.repository, '-enterprise') }} + run: | + cat /etc/hosts && echo "-----------" + sudo sed -i 's/::1 *localhost ip6-localhost ip6-loopback/::1 ip6-localhost ip6-loopback/g' /etc/hosts + cat /etc/hosts + - name: Upgrade Integration Tests + run: | + mkdir -p "${{ env.TEST_RESULTS_DIR }}" + cd ./test/integration/consul-container/test/upgrade + docker run --rm ${{ env.CONSUL_LATEST_IMAGE_NAME }}:local consul version + go run gotest.tools/gotestsum@v${{env.GOTESTSUM_VERSION}} \ + --raw-command \ + --format=github-actions \ + --rerun-fails \ + --packages="./..." \ + -- \ + go test \ + -p=4 \ + -tags "${{ env.GOTAGS }}" \ + -timeout=30m \ + -json \ + ./... 
\ + --follow-log=false \ + --target-image ${{ env.CONSUL_LATEST_IMAGE_NAME }} \ + --target-version local \ + --latest-image docker.mirror.hashicorp.services/${{ env.CONSUL_LATEST_IMAGE_NAME }} \ + --latest-version "${{ env.CONSUL_LATEST_VERSION }}" + ls -lrt + env: + # this is needed because of incompatibility between RYUK container and GHA + GOTESTSUM_JUNITFILE: ${{ env.TEST_RESULTS_DIR }}/results.xml + GOTESTSUM_FORMAT: standard-verbose + COMPOSE_INTERACTIVE_NO_CLI: 1 + # tput complains if this isn't set to something. + TERM: ansi + # NOTE: ENT specific step as we store secrets in Vault. + - name: Authenticate to Vault + if: ${{ !cancelled() && endsWith(github.repository, '-enterprise') }} + id: vault-auth + run: vault-auth + + # NOTE: ENT specific step as we store secrets in Vault. + - name: Fetch Secrets + if: ${{ !cancelled() && endsWith(github.repository, '-enterprise') }} + id: secrets + uses: hashicorp/vault-action@v3 + with: + url: ${{ steps.vault-auth.outputs.addr }} + caCertificate: ${{ steps.vault-auth.outputs.ca_certificate }} + token: ${{ steps.vault-auth.outputs.token }} + secrets: | + kv/data/github/${{ github.repository }}/datadog apikey | DATADOG_API_KEY; + + - name: prepare datadog-ci + if: ${{ !cancelled() && !endsWith(github.repository, '-enterprise') }} + run: | + curl -L --fail "https://github.com/DataDog/datadog-ci/releases/latest/download/datadog-ci_linux-x64" --output "/usr/local/bin/datadog-ci" + chmod +x /usr/local/bin/datadog-ci + + - name: upload coverage + # do not run on forks + if: ${{ !cancelled() && github.event.pull_request.head.repo.full_name == github.repository }} + env: + DATADOG_API_KEY: "${{ endsWith(github.repository, '-enterprise') && env.DATADOG_API_KEY || secrets.DATADOG_API_KEY }}" + DD_ENV: ci + run: datadog-ci junit upload --service "$GITHUB_REPOSITORY" $TEST_RESULTS_DIR/results.xml + + upgrade-integration-test-deployer: + runs-on: ${{ fromJSON(needs.setup.outputs.compute-large ) }} + needs: + - setup + - get-go-version + - dev-build + permissions: + id-token: write # NOTE: this permission is explicitly required for Vault auth. + contents: read + strategy: + fail-fast: false + matrix: + consul-version: ["1.15", "1.18", "1.19", "1.20"] + env: + CONSUL_LATEST_VERSION: ${{ matrix.consul-version }} + steps: + - name: Checkout code + uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b # v4.1.4 + with: + ref: ${{ env.BRANCH }} + # NOTE: This step is specifically needed for ENT. It allows us to access the required private HashiCorp repos. + - name: Setup Git + if: ${{ endsWith(github.repository, '-enterprise') }} + run: git config --global url."https://${{ secrets.ELEVATED_GITHUB_TOKEN }}:@github.com".insteadOf "https://github.com" + - uses: actions/setup-go@0c52d547c9bc32b1aa3301fd7a9cb496313a4491 # v5.0.0 + with: + go-version: ${{ needs.get-go-version.outputs.go-version }} + - run: go env + - name: Build image + run: make test-deployer-setup + - name: Upgrade Integration Tests + run: | + mkdir -p "${{ env.TEST_RESULTS_DIR }}" + export NOLOGBUFFER=1 + cd ./test-integ/upgrade + docker run --rm ${{ env.CONSUL_LATEST_IMAGE_NAME }}:local consul version + go run gotest.tools/gotestsum@v${{env.GOTESTSUM_VERSION}} \ + --raw-command \ + --format=standard-verbose \ + --debug \ + --packages="./..." \ + -- \ + go test \ + -tags "${{ env.GOTAGS }}" \ + -timeout=60m \ + -parallel=2 \ + -json \ + ./... 
\ + --target-image ${{ env.CONSUL_LATEST_IMAGE_NAME }} \ + --target-version local \ + --latest-image docker.mirror.hashicorp.services/${{ env.CONSUL_LATEST_IMAGE_NAME }} \ + --latest-version "${{ env.CONSUL_LATEST_VERSION }}" + env: + # this is needed because of incompatibility between RYUK container and GHA + GOTESTSUM_JUNITFILE: ${{ env.TEST_RESULTS_DIR }}/results.xml + GOTESTSUM_FORMAT: standard-verbose + COMPOSE_INTERACTIVE_NO_CLI: 1 + # tput complains if this isn't set to something. + TERM: ansi + # NOTE: ENT specific step as we store secrets in Vault. + - name: Authenticate to Vault + if: ${{ !cancelled() && endsWith(github.repository, '-enterprise') }} + id: vault-auth + run: vault-auth + + # NOTE: ENT specific step as we store secrets in Vault. + - name: Fetch Secrets + if: ${{ !cancelled() && endsWith(github.repository, '-enterprise') }} + id: secrets + uses: hashicorp/vault-action@v3 + with: + url: ${{ steps.vault-auth.outputs.addr }} + caCertificate: ${{ steps.vault-auth.outputs.ca_certificate }} + token: ${{ steps.vault-auth.outputs.token }} + secrets: | + kv/data/github/${{ github.repository }}/datadog apikey | DATADOG_API_KEY; + + - name: prepare datadog-ci + if: ${{ !cancelled() && !endsWith(github.repository, '-enterprise') }} + run: | + curl -L --fail "https://github.com/DataDog/datadog-ci/releases/latest/download/datadog-ci_linux-x64" --output "/usr/local/bin/datadog-ci" + chmod +x /usr/local/bin/datadog-ci + + - name: upload coverage + # do not run on forks + if: ${{ !cancelled() && github.event.pull_request.head.repo.full_name == github.repository }} + env: + DATADOG_API_KEY: "${{ endsWith(github.repository, '-enterprise') && env.DATADOG_API_KEY || secrets.DATADOG_API_KEY }}" + DD_ENV: ci + run: datadog-ci junit upload --service "$GITHUB_REPOSITORY" $TEST_RESULTS_DIR/results.xml + + test-integrations-success: + needs: + - setup + - dev-build + - generate-envoy-job-matrices + - envoy-integration-test + - upgrade-integration-test + - upgrade-integration-test-deployer + runs-on: ${{ fromJSON(needs.setup.outputs.compute-small) }} + if: ${{ always() }} + steps: + - name: evaluate upstream job results + run: | + # exit 1 if failure or cancelled result for any upstream job + if printf '${{ toJSON(needs) }}' | grep -E -i '\"result\": \"(failure|cancelled)\"'; then + printf "Tests failed or workflow cancelled:\n\n${{ toJSON(needs) }}" + exit 1 + fi + - name: Notify Slack + if: ${{ failure() }} + id: slack + uses: slackapi/slack-github-action@70cd7be8e40a46e8b0eced40b0de447bdb42f68e # v1.26.0 + with: + payload: | + { + "message": "One or more nightly integration tests have failed on branch ${{ env.BRANCH }} for Consul. 
${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}" + } + env: + SLACK_WEBHOOK_URL: ${{ secrets.CONSUL_NIGHTLY_INTEG_TEST_SLACK_WEBHOOK }} diff --git a/.github/workflows/nightly-test-main.yaml b/.github/workflows/nightly-test-main.yaml index a3ce2edbb56b..dacd829e1701 100644 --- a/.github/workflows/nightly-test-main.yaml +++ b/.github/workflows/nightly-test-main.yaml @@ -24,7 +24,7 @@ jobs: # Not necessary to use yarn, but enables caching - uses: actions/setup-node@60edb5dd545a775178f52524783378180af0d1f8 # v4.0.2 with: - node-version: 18 + node-version: 20 cache: 'yarn' cache-dependency-path: ./ui/yarn.lock @@ -56,7 +56,7 @@ jobs: # Not necessary to use yarn, but enables caching - uses: actions/setup-node@60edb5dd545a775178f52524783378180af0d1f8 # v4.0.2 with: - node-version: 18 + node-version: 20 cache: 'yarn' cache-dependency-path: ./ui/yarn.lock @@ -95,7 +95,7 @@ jobs: # Not necessary to use yarn, but enables caching - uses: actions/setup-node@60edb5dd545a775178f52524783378180af0d1f8 # v4.0.2 with: - node-version: 18 + node-version: 20 cache: 'yarn' cache-dependency-path: ./ui/yarn.lock @@ -128,7 +128,7 @@ jobs: # Not necessary to use yarn, but enables caching - uses: actions/setup-node@60edb5dd545a775178f52524783378180af0d1f8 # v4.0.2 with: - node-version: 18 + node-version: 20 cache: 'yarn' cache-dependency-path: ./ui/yarn.lock @@ -167,7 +167,7 @@ jobs: # Not necessary to use yarn, but enables caching - uses: actions/setup-node@60edb5dd545a775178f52524783378180af0d1f8 # v4.0.2 with: - node-version: 18 + node-version: 20 cache: 'yarn' cache-dependency-path: ./ui/yarn.lock @@ -198,7 +198,7 @@ jobs: # Not necessary to use yarn, but enables caching - uses: actions/setup-node@60edb5dd545a775178f52524783378180af0d1f8 # v4.0.2 with: - node-version: 18 + node-version: 20 cache: 'yarn' cache-dependency-path: ./ui/yarn.lock diff --git a/.github/workflows/reusable-conditional-skip.yml b/.github/workflows/reusable-conditional-skip.yml index 36a5eaba60e8..4d735b6b693c 100644 --- a/.github/workflows/reusable-conditional-skip.yml +++ b/.github/workflows/reusable-conditional-skip.yml @@ -37,7 +37,7 @@ jobs: fetch-depth: 0 - name: Check for skippable file changes id: changed-files - uses: tj-actions/changed-files@e9772d140489982e0e3704fea5ee93d536f1e275 # v45.0.1 + uses: tj-actions/changed-files@2f7c5bfce28377bc069a65ba478de0a74aa0ca32 # v46.0.1 with: # This is a multi-line YAML string with one match pattern per line. # Do not use quotes around values, as it's not supported. 
@@ -51,6 +51,7 @@ jobs: website/** grafana/** .changelog/** + .github/CODEOWNERS - name: Print changed files env: SKIPPABLE_CHANGED_FILES: ${{ steps.changed-files.outputs.all_changed_files }} diff --git a/.github/workflows/test-integrations.yml b/.github/workflows/test-integrations.yml index ba4eecdf4210..aec3abbd13b3 100644 --- a/.github/workflows/test-integrations.yml +++ b/.github/workflows/test-integrations.yml @@ -5,14 +5,12 @@ name: test-integrations on: pull_request: - branches-ignore: - - stable-website - - 'docs/**' - - 'ui/**' - - 'mktg-**' # Digital Team Terraform-generated branch prefix - - 'backport/docs/**' - - 'backport/ui/**' - - 'backport/mktg-**' + types: [opened, synchronize, labeled] + # Runs on PRs to main and all release branches + branches: + - main + - "release/**" + push: branches: # Push events on the main branch @@ -83,7 +81,7 @@ jobs: contents: read strategy: matrix: - nomad-version: ['v1.8.3', 'v1.7.7', 'v1.6.10'] + nomad-version: ['v1.10.0', 'v1.9.7', 'v1.8.4'] steps: - name: Checkout Nomad uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b # v4.1.4 @@ -166,7 +164,7 @@ jobs: contents: read strategy: matrix: - vault-version: ["1.17.5", "1.16.3", "1.15.6"] + vault-version: ["1.16.3", "1.17.6", "1.18.5", "1.19.2"] env: VAULT_BINARY_VERSION: ${{ matrix.vault-version }} steps: @@ -303,9 +301,10 @@ jobs: fail-fast: false matrix: xds-target: ["server", "client"] + envoy-version: ${{ fromJSON(needs.get-envoy-versions.outputs.envoy-versions-json) }} test-cases: ${{ fromJSON(needs.generate-envoy-job-matrices.outputs.envoy-matrix) }} env: - ENVOY_VERSION: ${{ needs.get-envoy-versions.outputs.max-envoy-version }} + ENVOY_VERSION: ${{ matrix.envoy-version }} XDS_TARGET: ${{ matrix.xds-target }} AWS_LAMBDA_REGION: us-west-2 steps: @@ -328,7 +327,7 @@ jobs: - name: Docker build run: docker build -t consul:local -f ./build-support/docker/Consul-Dev.dockerfile ./bin - - name: Envoy Integration Tests + - name: Envoy Integration Tests-${{ matrix.envoy-version }}-${{ matrix.test-cases }} id: envoy-integration-tests env: GOTESTSUM_JUNITFILE: ${{ env.TEST_RESULTS_DIR }}/results.xml @@ -399,120 +398,123 @@ jobs: DD_ENV: ci run: datadog-ci junit upload --service "$GITHUB_REPOSITORY" $TEST_RESULTS_DIR/results.xml - compatibility-integration-test: - runs-on: ${{ fromJSON(needs.setup.outputs.compute-xl) }} # NOTE: do not change without tuning the -p and -parallel flags in go test. - needs: - - setup - - get-go-version - - get-envoy-versions - - dev-build - permissions: - id-token: write # NOTE: this permission is explicitly required for Vault auth. - contents: read - env: - ENVOY_VERSION: ${{ needs.get-envoy-versions.outputs.max-envoy-version }} - #TODO don't harcode this image name - CONSUL_DATAPLANE_IMAGE: "docker.mirror.hashicorp.services/hashicorppreview/consul-dataplane:1.6-dev-ubi" - steps: - - uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b # v4.1.4 - # NOTE: This step is specifically needed for ENT. It allows us to access the required private HashiCorp repos. 
- - name: Setup Git - if: ${{ endsWith(github.repository, '-enterprise') }} - run: git config --global url."https://${{ secrets.ELEVATED_GITHUB_TOKEN }}:@github.com".insteadOf "https://github.com" - - uses: actions/setup-go@0c52d547c9bc32b1aa3301fd7a9cb496313a4491 # v5.0.0 - with: - go-version: ${{ needs.get-go-version.outputs.go-version }} - - run: go env - - name: docker env - run: | - docker version - docker info - - name: fetch binary - uses: actions/download-artifact@65a9edc5881444af0b9093a5e628f2fe47ea3b2e # v4.1.7 - with: - name: '${{ env.CONSUL_BINARY_UPLOAD_NAME }}' - path: . - - name: restore mode+x - run: chmod +x consul - # Build the consul:local image from the already built binary - - name: Build consul:local image - run: docker build -t ${{ env.CONSUL_LATEST_IMAGE_NAME }}:local -f ./build-support/docker/Consul-Dev.dockerfile . - - name: Build consul-envoy:target-version image - id: buildConsulEnvoyImage - continue-on-error: true - run: docker build -t consul-envoy:target-version --build-arg CONSUL_IMAGE=${{ env.CONSUL_LATEST_IMAGE_NAME }}:local --build-arg ENVOY_VERSION=${{ env.ENVOY_VERSION }} -f ./test/integration/consul-container/assets/Dockerfile-consul-envoy ./test/integration/consul-container/assets - - name: Retry Build consul-envoy:target-version image - if: steps.buildConsulEnvoyImage.outcome == 'failure' - run: docker build -t consul-envoy:target-version --build-arg CONSUL_IMAGE=${{ env.CONSUL_LATEST_IMAGE_NAME }}:local --build-arg ENVOY_VERSION=${{ env.ENVOY_VERSION }} -f ./test/integration/consul-container/assets/Dockerfile-consul-envoy ./test/integration/consul-container/assets - - name: Build consul-dataplane:local image - run: docker build -t consul-dataplane:local --build-arg CONSUL_IMAGE=${{ env.CONSUL_LATEST_IMAGE_NAME }}:local --build-arg CONSUL_DATAPLANE_IMAGE=${{ env.CONSUL_DATAPLANE_IMAGE }} -f ./test/integration/consul-container/assets/Dockerfile-consul-dataplane ./test/integration/consul-container/assets - - name: Configure GH workaround for ipv6 loopback - if: ${{ !endsWith(github.repository, '-enterprise') }} - run: | - cat /etc/hosts && echo "-----------" - sudo sed -i 's/::1 *localhost ip6-localhost ip6-loopback/::1 ip6-localhost ip6-loopback/g' /etc/hosts - cat /etc/hosts - - name: Compatibility Integration Tests - run: | - mkdir -p "/tmp/test-results" - cd ./test/integration/consul-container - docker run --rm ${{ env.CONSUL_LATEST_IMAGE_NAME }}:local consul version - go run gotest.tools/gotestsum@v${{env.GOTESTSUM_VERSION}} \ - --raw-command \ - --format=github-actions \ - --rerun-fails \ - -- \ - go test \ - -p=6 \ - -parallel=4 \ - -tags "${{ env.GOTAGS }}" \ - -timeout=30m \ - -json \ - `go list -tags "${{ env.GOTAGS }}" ./... | grep -v upgrade | grep -v peering_commontopo` \ - --target-image ${{ env.CONSUL_LATEST_IMAGE_NAME }} \ - --target-version local \ - --latest-image docker.mirror.hashicorp.services/${{ env.CONSUL_LATEST_IMAGE_NAME }} \ - --latest-version latest - ls -lrt - env: - # this is needed because of incompatibility between RYUK container and GHA - GOTESTSUM_JUNITFILE: ${{ env.TEST_RESULTS_DIR }}/results.xml - GOTESTSUM_FORMAT: standard-verbose - COMPOSE_INTERACTIVE_NO_CLI: 1 - # tput complains if this isn't set to something. - TERM: ansi - - # NOTE: ENT specific step as we store secrets in Vault. - - name: Authenticate to Vault - if: ${{ !cancelled() && endsWith(github.repository, '-enterprise') }} - id: vault-auth - run: vault-auth - - # NOTE: ENT specific step as we store secrets in Vault. 
- - name: Fetch Secrets - if: ${{ !cancelled() && endsWith(github.repository, '-enterprise') }} - id: secrets - uses: hashicorp/vault-action@v3 - with: - url: ${{ steps.vault-auth.outputs.addr }} - caCertificate: ${{ steps.vault-auth.outputs.ca_certificate }} - token: ${{ steps.vault-auth.outputs.token }} - secrets: | - kv/data/github/${{ github.repository }}/datadog apikey | DATADOG_API_KEY; - - - name: prepare datadog-ci - if: ${{ !cancelled() && !endsWith(github.repository, '-enterprise') }} - run: | - curl -L --fail "https://github.com/DataDog/datadog-ci/releases/latest/download/datadog-ci_linux-x64" --output "/usr/local/bin/datadog-ci" - chmod +x /usr/local/bin/datadog-ci - - - name: upload coverage - # do not run on forks - if: ${{ !cancelled() && github.event.pull_request.head.repo.full_name == github.repository }} - env: - DATADOG_API_KEY: "${{ endsWith(github.repository, '-enterprise') && env.DATADOG_API_KEY || secrets.DATADOG_API_KEY }}" - DD_ENV: ci - run: datadog-ci junit upload --service "$GITHUB_REPOSITORY" $TEST_RESULTS_DIR/results.xml + # compatibility-integration-test: + # runs-on: ${{ fromJSON(needs.setup.outputs.compute-xl) }} # NOTE: do not change without tuning the -p and -parallel flags in go test. + # needs: + # - setup + # - get-go-version + # - get-envoy-versions + # - dev-build + # permissions: + # id-token: write # NOTE: this permission is explicitly required for Vault auth. + # contents: read + # env: + # ENVOY_VERSION: ${{ needs.get-envoy-versions.outputs.max-envoy-version }} + # #TODO don't harcode this image name + # CONSUL_DATAPLANE_IMAGE: "docker.mirror.hashicorp.services/hashicorppreview/consul-dataplane:1.6-dev-ubi" + # steps: + # - uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b # v4.1.4 + # # NOTE: This step is specifically needed for ENT. It allows us to access the required private HashiCorp repos. + # - name: Setup Git + # if: ${{ endsWith(github.repository, '-enterprise') }} + # run: git config --global url."https://${{ secrets.ELEVATED_GITHUB_TOKEN }}:@github.com".insteadOf "https://github.com" + # - uses: actions/setup-go@0c52d547c9bc32b1aa3301fd7a9cb496313a4491 # v5.0.0 + # with: + # go-version: ${{ needs.get-go-version.outputs.go-version }} + # - run: go env + # - name: docker env + # run: | + # docker version + # docker info + # - name: fetch binary + # uses: actions/download-artifact@65a9edc5881444af0b9093a5e628f2fe47ea3b2e # v4.1.7 + # with: + # name: '${{ env.CONSUL_BINARY_UPLOAD_NAME }}' + # path: . + # - name: restore mode+x + # run: chmod +x consul + # # Build the consul:local image from the already built binary + # - name: Build consul:local image + # run: docker build -t ${{ env.CONSUL_LATEST_IMAGE_NAME }}:local -f ./build-support/docker/Consul-Dev.dockerfile . 
+ # - name: Build consul-envoy:target-version image + # id: buildConsulEnvoyImage + # continue-on-error: true + # run: docker build -t consul-envoy:target-version --build-arg CONSUL_IMAGE=${{ env.CONSUL_LATEST_IMAGE_NAME }}:local --build-arg ENVOY_VERSION=${{ env.ENVOY_VERSION }} -f ./test/integration/consul-container/assets/Dockerfile-consul-envoy ./test/integration/consul-container/assets + # - name: Retry Build consul-envoy:target-version image + # if: steps.buildConsulEnvoyImage.outcome == 'failure' + # run: docker build -t consul-envoy:target-version --build-arg CONSUL_IMAGE=${{ env.CONSUL_LATEST_IMAGE_NAME }}:local --build-arg ENVOY_VERSION=${{ env.ENVOY_VERSION }} -f ./test/integration/consul-container/assets/Dockerfile-consul-envoy ./test/integration/consul-container/assets + # - name: Build consul-dataplane:local image + # run: docker build -t consul-dataplane:local --build-arg CONSUL_IMAGE=${{ env.CONSUL_LATEST_IMAGE_NAME }}:local --build-arg CONSUL_DATAPLANE_IMAGE=${{ env.CONSUL_DATAPLANE_IMAGE }} -f ./test/integration/consul-container/assets/Dockerfile-consul-dataplane ./test/integration/consul-container/assets + # - name: Configure GH workaround for ipv6 loopback + # if: ${{ !endsWith(github.repository, '-enterprise') }} + # run: | + # cat /etc/hosts && echo "-----------" + # sudo sed -i 's/::1 *localhost ip6-localhost ip6-loopback/::1 ip6-localhost ip6-loopback/g' /etc/hosts + # cat /etc/hosts + + # # TODO: Disabling the "Compatibility Integration Tests" test temporarily, need to enable again once dependent pipeline issues are fixed. + # # Please run this test locally and post results in PR description. + # # - name: Compatibility Integration Tests + # # run: | + # # mkdir -p "/tmp/test-results" + # # cd ./test/integration/consul-container + # # docker run --rm ${{ env.CONSUL_LATEST_IMAGE_NAME }}:local consul version + # # go run gotest.tools/gotestsum@v${{env.GOTESTSUM_VERSION}} \ + # # --raw-command \ + # # --format=github-actions \ + # # --rerun-fails \ + # # -- \ + # # go test \ + # # -p=6 \ + # # -parallel=4 \ + # # -tags "${{ env.GOTAGS }}" \ + # # -timeout=30m \ + # # -json \ + # # `go list -tags "${{ env.GOTAGS }}" ./... | grep -v upgrade | grep -v peering_commontopo` \ + # # --target-image ${{ env.CONSUL_LATEST_IMAGE_NAME }} \ + # # --target-version local \ + # # --latest-image docker.mirror.hashicorp.services/${{ env.CONSUL_LATEST_IMAGE_NAME }} \ + # # --latest-version latest + # # ls -lrt + # # env: + # # # this is needed because of incompatibility between RYUK container and GHA + # # GOTESTSUM_JUNITFILE: ${{ env.TEST_RESULTS_DIR }}/results.xml + # # GOTESTSUM_FORMAT: standard-verbose + # # COMPOSE_INTERACTIVE_NO_CLI: 1 + # # # tput complains if this isn't set to something. + # # TERM: ansi + + # # NOTE: ENT specific step as we store secrets in Vault. + # - name: Authenticate to Vault + # if: ${{ !cancelled() && endsWith(github.repository, '-enterprise') }} + # id: vault-auth + # run: vault-auth + + # # NOTE: ENT specific step as we store secrets in Vault. 
+ # - name: Fetch Secrets + # if: ${{ !cancelled() && endsWith(github.repository, '-enterprise') }} + # id: secrets + # uses: hashicorp/vault-action@v3 + # with: + # url: ${{ steps.vault-auth.outputs.addr }} + # caCertificate: ${{ steps.vault-auth.outputs.ca_certificate }} + # token: ${{ steps.vault-auth.outputs.token }} + # secrets: | + # kv/data/github/${{ github.repository }}/datadog apikey | DATADOG_API_KEY; + + # - name: prepare datadog-ci + # if: ${{ !cancelled() && !endsWith(github.repository, '-enterprise') }} + # run: | + # curl -L --fail "https://github.com/DataDog/datadog-ci/releases/latest/download/datadog-ci_linux-x64" --output "/usr/local/bin/datadog-ci" + # chmod +x /usr/local/bin/datadog-ci + + # - name: upload coverage + # # do not run on forks and dependabot PRs + # if: ${{ !cancelled() && github.event.pull_request.head.repo.full_name == github.repository && github.actor != 'dependabot[bot]' }} + # env: + # DATADOG_API_KEY: "${{ endsWith(github.repository, '-enterprise') && env.DATADOG_API_KEY || secrets.DATADOG_API_KEY }}" + # DD_ENV: ci + # run: datadog-ci junit upload --service "$GITHUB_REPOSITORY" $TEST_RESULTS_DIR/results.xml integration-test-with-deployer: runs-on: ${{ fromJSON(needs.setup.outputs.compute-large ) }} @@ -608,7 +610,6 @@ jobs: - vault-integration-test - generate-envoy-job-matrices - envoy-integration-test - - compatibility-integration-test - integration-test-with-deployer runs-on: ${{ fromJSON(needs.setup.outputs.compute-small) }} if: always() && needs.conditional-skip.outputs.skip-ci != 'true' diff --git a/.gitignore b/.gitignore index fcd852606c5e..431045cbd0b5 100644 --- a/.gitignore +++ b/.gitignore @@ -74,3 +74,4 @@ terraform.rc # Avoid accidental commits of consul-k8s submodule used by some dev environments consul-k8s/ +.vercel diff --git a/.go-version b/.go-version index 2560439f071b..7bdcec52d093 100644 --- a/.go-version +++ b/.go-version @@ -1 +1 @@ -1.22.12 +1.23.12 diff --git a/.golangci.yml b/.golangci.yml index e530ce70bdf1..e71d22c2d5c4 100644 --- a/.golangci.yml +++ b/.golangci.yml @@ -25,6 +25,9 @@ issues: - linters: [staticcheck] text: "SA9004:" + - linters: [staticcheck] + text: "SA1006:" + - linters: [staticcheck] text: 'SA1019: "io/ioutil" has been deprecated since Go 1.16' @@ -37,6 +40,9 @@ issues: - linters: [unparam] text: "always receives" + - linters: [ unparam ] + text: 'result \d+ \(bool\) is always false' + # Often functions will implement an interface that returns an error without # needing to return an error. Sometimes the error return value is unnecessary # but a linter can not tell the difference. @@ -62,15 +68,18 @@ issues: - linters: [unparam] path: "(_ce.go|_ce_test.go|_ent.go|_ent_test.go)" + - linters: [ staticcheck ] + text: 'SA1019:' + linters-settings: govet: - check-shadowing: true enable-all: true disable: - fieldalignment - nilness - shadow - unusedwrite + - printf gofmt: simplify: true forbidigo: @@ -112,4 +121,4 @@ linters-settings: run: timeout: 10m concurrency: 4 - skip-dirs-use-default: false + skip-dirs-use-default: false \ No newline at end of file diff --git a/.release/linux/package/etc/consul.d/consul.hcl b/.release/linux/package/etc/consul.d/consul.hcl index b25f18685856..20c8dae5b0eb 100644 --- a/.release/linux/package/etc/consul.d/consul.hcl +++ b/.release/linux/package/etc/consul.d/consul.hcl @@ -1,7 +1,7 @@ # Copyright (c) HashiCorp, Inc. 
# SPDX-License-Identifier: BUSL-1.1 -# Full configuration options can be found at https://www.consul.io/docs/agent/config +# Full configuration options can be found at https://developer.hashicorp.com/docs/agent/config # datacenter # This flag controls the datacenter in which the agent is running. If not provided, @@ -92,7 +92,7 @@ data_dir = "/opt/consul" #retry_join = ["[::1]:8301"] #retry_join = ["consul.domain.internal", "10.0.4.67"] # Cloud Auto-join examples: -# More details - https://www.consul.io/docs/agent/cloud-auto-join +# More details - https://developer.hashicorp.com/docs/agent/cloud-auto-join #retry_join = ["provider=aws tag_key=... tag_value=..."] #retry_join = ["provider=azure tag_name=... tag_value=... tenant_id=... client_id=... subscription_id=... secret_access_key=..."] #retry_join = ["provider=gce project_name=... tag_value=..."] diff --git a/.release/linux/package/usr/lib/systemd/system/consul.service b/.release/linux/package/usr/lib/systemd/system/consul.service index 65eca696e1a1..e96149e001ee 100644 --- a/.release/linux/package/usr/lib/systemd/system/consul.service +++ b/.release/linux/package/usr/lib/systemd/system/consul.service @@ -1,6 +1,6 @@ [Unit] Description="HashiCorp Consul - A service mesh solution" -Documentation=https://www.consul.io/ +Documentation=https://developer.hashicorp.com/ Requires=network-online.target After=network-online.target ConditionFileNotEmpty=/etc/consul.d/consul.hcl diff --git a/.release/release-metadata.hcl b/.release/release-metadata.hcl index 963192fc4b80..fc0cddc08d0e 100644 --- a/.release/release-metadata.hcl +++ b/.release/release-metadata.hcl @@ -4,6 +4,6 @@ url_docker_registry_dockerhub = "https://hub.docker.com/r/hashicorp/consul" url_docker_registry_ecr = "https://gallery.ecr.aws/hashicorp/consul" url_license = "https://github.com/hashicorp/consul/blob/main/LICENSE" -url_project_website = "https://www.consul.io" -url_release_notes = "https://www.consul.io/docs/release-notes" +url_project_website = "https://developer.hashicorp.com" +url_release_notes = "https://developer.hashicorp.com/docs/release-notes" url_source_repository = "https://github.com/hashicorp/consul" \ No newline at end of file diff --git a/.release/security-scan.hcl b/.release/security-scan.hcl index 2e7102fea7f9..98a8a5e1b65a 100644 --- a/.release/security-scan.hcl +++ b/.release/security-scan.hcl @@ -39,7 +39,17 @@ container { vulnerabilities = [ "CVE-2024-4067", # libsolv@0:0.7.24-3.el9 "CVE-2019-12900", # bzip2-libs@0:1.0.8-8.el9 - "CVE-2024-12797" # openssl-libs@1:3.2.2-6.el9_5 + "CVE-2024-12797", # openssl-libs@1:3.2.2-6.el9_5 + "CVE-2024-53427", # jq@1.7.1-r0 + "CVE-2025-31498", # c-ares@1.34.3-r0 + "CVE-2025-30258", # gnupg@2.4.7-r0 + "CVE-2025-31498", # c-ares@1.34.3-r0 + "CVE-2025-30258", # gnupg@2.4.7-r0 + "CVE-2024-53427", # jq@1.7.1-r0 + "CVE-2022-49043", # libxml2@0:2.9.13-6.el9_5.2 + "CVE-2025-46394", + "CVE-2024-58251", + "CVE-2025-47268" ] paths = [ "internal/tools/proto-gen-rpc-glue/e2e/consul/*", diff --git a/.release/versions.hcl b/.release/versions.hcl index 253430f3ccc1..76ddeb2a899a 100644 --- a/.release/versions.hcl +++ b/.release/versions.hcl @@ -6,14 +6,12 @@ schema = 1 active_versions { - version "1.20" { + version "1.21" { ce_active = true } + version "1.20" {} version "1.19" {} version "1.18" { lts = true } - version "1.15" { - lts = true - } } diff --git a/CHANGELOG.md b/CHANGELOG.md index 6e06f1b4cfef..0bee6ce10092 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,3 +1,117 @@ +## 1.21.4 (August 13, 2025) + +SECURITY: + +* 
security: Update Go to 1.23.12 to address CVE-2025-47906 [[GH-22547](https://github.com/hashicorp/consul/issues/22547)] + +IMPROVEMENTS: + +* ui: Replaced internal code editor with HDS (HashiCorp Design System) code editor and code block components for improved accessibility and maintainability across the Consul UI. [[GH-22513](https://github.com/hashicorp/consul/issues/22513)] + +BUG FIXES: + +* cli: capture pprof when ACL is enabled and a token with operator:read is used, even if enable_debug config is not explicitly set. [[GH-22552](https://github.com/hashicorp/consul/issues/22552)] + +## 1.21.3 (July 18, 2025) + +IMPROVEMENTS: + +* ui: Improved display and handling of IPv6 addresses for better readability and usability in the Consul web interface. [[GH-22468](https://github.com/hashicorp/consul/issues/22468)] + +BUG FIXES: + +* cli: validate IP address in service registration to prevent invalid IPs in service and tagged addresses. [[GH-22467](https://github.com/hashicorp/consul/issues/22467)] +* ui: display IPv6 addresses with proper bracketed formatting [[GH-22423](https://github.com/hashicorp/consul/issues/22423)] + + +## 1.21.2 (June 17, 2025) + +SECURITY: + +* security: Upgrade UBI base image version to address CVE +[CVE-2025-4802](https://access.redhat.com/security/cve/cve-2025-4802) +[CVE-2024-40896](https://access.redhat.com/security/cve/cve-2024-40896) +[CVE-2024-12243](https://nvd.nist.gov/vuln/detail/CVE-2024-12243) +[CVE-2025-24528](https://access.redhat.com/security/cve/cve-2025-24528) +[CVE-2025-3277](https://access.redhat.com/security/cve/cve-2025-3277) +[CVE-2024-12133](https://access.redhat.com/security/cve/cve-2024-12133) +[CVE-2024-57970](https://access.redhat.com/security/cve/cve-2024-57970) +[CVE-2025-31115](https://access.redhat.com/security/cve/cve-2025-31115) [[GH-22409](https://github.com/hashicorp/consul/issues/22409)] +* cli: update tls ca and cert create to reduce excessive file perms for generated public files [[GH-22286](https://github.com/hashicorp/consul/issues/22286)] +* connect: Added non default namespace and partition checks to ConnectCA CSR requests. [[GH-22376](https://github.com/hashicorp/consul/issues/22376)] +* security: Upgrade Go to 1.23.10. [[GH-22412](https://github.com/hashicorp/consul/issues/22412)] + +IMPROVEMENTS: + +* config: Warn about invalid characters in `datacenter` resulting in non-generation of X.509 certificates when using external CA for agent TLS communication. [[GH-22382](https://github.com/hashicorp/consul/issues/22382)] +* connect: Use net.JoinHostPort for host:port formatting to handle IPv6. [[GH-22359](https://github.com/hashicorp/consul/issues/22359)] + +BUG FIXES: + +* http: return a clear error when both Service.Service and Service.ID are missing during catalog registration [[GH-22381](https://github.com/hashicorp/consul/issues/22381)] +* license: (Enterprise only) Fixed issue where usage metrics are not written to the snapshot to export the license data. [[GH-10668](https://github.com/hashicorp/consul/issues/10668)] +* wan-federation: Fixed an issue where advertised IPv6 addresses were causing WAN federation to fail. 
[[GH-22226](https://github.com/hashicorp/consul/issues/22226)] + +## 1.21.1 (May 21, 2025) + +FEATURES: + +* xds: Extend LUA Script support for API Gateway [[GH-22321](https://github.com/hashicorp/consul/issues/22321)] +* xds: Added a configurable option to disable XDS session load balancing, intended for scenarios where an external load balancer is used in front of Consul servers, making internal load balancing unnecessary. + +IMPROVEMENTS: + +* http: Add peer query param on catalog service API [[GH-22189](https://github.com/hashicorp/consul/issues/22189)] + +## 1.21.0 (May 06, 2025) + +FEATURES: + +* config: add UseSNI flag in remote JSONWebKeySet +agent: send TLS SNI in remote JSONWebKeySet [[GH-22177](https://github.com/hashicorp/consul/issues/22177)] +* v2: remove HCP Link integration [[GH-21883](https://github.com/hashicorp/consul/issues/21883)] + +IMPROVEMENTS: + +* raft: add a configuration `raft_prevote_disabled` to allow disabling raft prevote [[GH-21758](https://github.com/hashicorp/consul/issues/21758)] +* raft: update raft library to 1.7.0 which include pre-vote extension [[GH-21758](https://github.com/hashicorp/consul/issues/21758)] +* SubMatView: Log level change from ERROR to INFO for subject materialized view as subscription creation is retryable on ACL change. [[GH-22141](https://github.com/hashicorp/consul/issues/22141)] +* ui: Adds a copyable token accessor/secret on the settings page when signed in [[GH-22105](https://github.com/hashicorp/consul/issues/22105)] +* xDS: Log level change from ERROR to INFO for xDS delta discovery request. Stream can be cancelled on server shutdown and other scenarios. It is retryable and error is a superfluous log. [[GH-22141](https://github.com/hashicorp/consul/issues/22141)] + +## 1.21.0-rc2 (April 17, 2025) +* Enhancement: Added support for Consul Session to update the state of a Health Check, allowing for more dynamic and responsive health monitoring within the Consul ecosystem. This feature enables sessions to directly influence health check statuses, improving the overall reliability and accuracy of service health assessments. + +## 1.21.0-rc1 (March 06, 2025) + +SECURITY: + +* Update `golang.org/x/crypto` to v0.35.0 to address [GO-2025-3487](https://pkg.go.dev/vuln/GO-2025-3487). +* Update `golang.org/x/oauth2` to v0.27.0 to address [GO-2025-3488](https://pkg.go.dev/vuln/GO-2025-3488). +* Update `github.com/go-jose/go-jose/v3` to v3.0.4 to address [GO-2025-3485](https://pkg.go.dev/vuln/GO-2025-3485). [[GH-22207](https://github.com/hashicorp/consul/issues/22207)] +* Upgrade Go to 1.23.6. [[GH-22204](https://github.com/hashicorp/consul/issues/22204)] + +FEATURES: + +* config: add UseSNI flag in remote JSONWebKeySet +* agent: send TLS SNI in remote JSONWebKeySet [[GH-22177](https://github.com/hashicorp/consul/issues/22177)] +* v2: remove HCP Link integration [[GH-21883](https://github.com/hashicorp/consul/issues/21883)] + +IMPROVEMENTS: + +* raft: add a configuration `raft_prevote_disabled` to allow disabling raft prevote [[GH-21758](https://github.com/hashicorp/consul/issues/21758)] +* raft: update raft library to 1.7.0 which include pre-vote extension [[GH-21758](https://github.com/hashicorp/consul/issues/21758)] +* SubMatView: Log level change from ERROR to INFO for subject materialized view as subscription creation is retryable on ACL change. 
[[GH-22141](https://github.com/hashicorp/consul/issues/22141)] +* ui: Adds a copyable token accessor/secret on the settings page when signed in [[GH-22105](https://github.com/hashicorp/consul/issues/22105)] +* xDS: Log level change from ERROR to INFO for xDS delta discovery request. Stream can be cancelled on server shutdown and other scenarios. It is retryable and error is a superfluous log. [[GH-22141](https://github.com/hashicorp/consul/issues/22141)] + +BUG FIXES: + +* logging: Fixed compilation error for OS NetBSD. [[GH-22184](https://github.com/hashicorp/consul/issues/22184)] + +## 1.21.0 (March 17, 2025) +* Enhancement: Added support for Consul Session to update the state of a Health Check, allowing for more dynamic and responsive health monitoring within the Consul ecosystem. This feature enables sessions to directly influence health check statuses, improving the overall reliability and accuracy of service health assessments. + ## 1.20.3 (February 13, 2025) SECURITY: @@ -2299,7 +2413,7 @@ FEATURES: * cli: Add `-consul-dns-port` flag to the `consul connect redirect-traffic` command to allow forwarding DNS traffic to a specific Consul DNS port. [[GH-15050](https://github.com/hashicorp/consul/issues/15050)] * connect: Add Envoy connection balancing configuration fields. [[GH-14616](https://github.com/hashicorp/consul/issues/14616)] * grpc: Added metrics for external gRPC server. Added `server_type=internal|external` label to gRPC metrics. [[GH-14922](https://github.com/hashicorp/consul/issues/14922)] -* http: Add new `get-or-empty` operation to the txn api. Refer to the [API docs](https://www.consul.io/api-docs/txn#kv-operations) for more information. [[GH-14474](https://github.com/hashicorp/consul/issues/14474)] +* http: Add new `get-or-empty` operation to the txn api. Refer to the [API docs](https://developer.hashicorp.com/api-docs/txn#kv-operations) for more information. [[GH-14474](https://github.com/hashicorp/consul/issues/14474)] * peering: Add mesh gateway local mode support for cluster peering. [[GH-14817](https://github.com/hashicorp/consul/issues/14817)] * peering: Add support for stale queries for trust bundle lookups [[GH-14724](https://github.com/hashicorp/consul/issues/14724)] * peering: Add support to failover to services running on cluster peers. [[GH-14396](https://github.com/hashicorp/consul/issues/14396)] @@ -2453,7 +2567,7 @@ BUG FIXES: BREAKING CHANGES: -* ca: If using Vault as the service mesh CA provider, the Vault policy used by Consul now requires the `update` capability on the intermediate PKI's tune mount configuration endpoint, such as `/sys/mounts/connect_inter/tune`. The breaking nature of this change is resolved in 1.13.3. Refer to [upgrade guidance](https://www.consul.io/docs/upgrading/upgrade-specific#modify-vault-policy-for-vault-ca-provider) for more information. +* ca: If using Vault as the service mesh CA provider, the Vault policy used by Consul now requires the `update` capability on the intermediate PKI's tune mount configuration endpoint, such as `/sys/mounts/connect_inter/tune`. The breaking nature of this change is resolved in 1.13.3. Refer to [upgrade guidance](https://developer.hashicorp.com/docs/upgrading/upgrade-specific#modify-vault-policy-for-vault-ca-provider) for more information. SECURITY: @@ -2462,7 +2576,7 @@ SECURITY: FEATURES: -* cli: Adds new subcommands for `peering` workflows. Refer to the [CLI docs](https://www.consul.io/commands/peering) for more information. 
[[GH-14423](https://github.com/hashicorp/consul/issues/14423)] +* cli: Adds new subcommands for `peering` workflows. Refer to the [CLI docs](https://developer.hashicorp.com/commands/peering) for more information. [[GH-14423](https://github.com/hashicorp/consul/issues/14423)] * connect: Server address changes are streamed to peers [[GH-14285](https://github.com/hashicorp/consul/issues/14285)] * service-defaults: Added support for `local_request_timeout_ms` and `local_connect_timeout_ms` in servicedefaults config entry [[GH-14395](https://github.com/hashicorp/consul/issues/14395)] @@ -2499,7 +2613,7 @@ BUG FIXES: BREAKING CHANGES: -* ca: If using Vault as the service mesh CA provider, the Vault policy used by Consul now requires the `update` capability on the intermediate PKI's tune mount configuration endpoint, such as `/sys/mounts/connect_inter/tune`. The breaking nature of this change is resolved in 1.12.6. Refer to [upgrade guidance](https://www.consul.io/docs/upgrading/upgrade-specific#modify-vault-policy-for-vault-ca-provider) for more information. +* ca: If using Vault as the service mesh CA provider, the Vault policy used by Consul now requires the `update` capability on the intermediate PKI's tune mount configuration endpoint, such as `/sys/mounts/connect_inter/tune`. The breaking nature of this change is resolved in 1.12.6. Refer to [upgrade guidance](https://developer.hashicorp.com/docs/upgrading/upgrade-specific#modify-vault-policy-for-vault-ca-provider) for more information. SECURITY: @@ -2527,7 +2641,7 @@ BUG FIXES: BREAKING CHANGES: -* ca: If using Vault as the service mesh CA provider, the Vault policy used by Consul now requires the `update` capability on the intermediate PKI's tune mount configuration endpoint, such as `/sys/mounts/connect_inter/tune`. The breaking nature of this change is resolved in 1.11.11. Refer to [upgrade guidance](https://www.consul.io/docs/upgrading/upgrade-specific#modify-vault-policy-for-vault-ca-provider) for more information. +* ca: If using Vault as the service mesh CA provider, the Vault policy used by Consul now requires the `update` capability on the intermediate PKI's tune mount configuration endpoint, such as `/sys/mounts/connect_inter/tune`. The breaking nature of this change is resolved in 1.11.11. Refer to [upgrade guidance](https://developer.hashicorp.com/docs/upgrading/upgrade-specific#modify-vault-policy-for-vault-ca-provider) for more information. SECURITY: @@ -2579,16 +2693,16 @@ connect: Terminating gateways with a wildcard service entry should no longer pic BREAKING CHANGES: * config-entry: Exporting a specific service name across all namespace is invalid. -* connect: contains an upgrade compatibility issue when restoring snapshots containing service mesh proxy registrations from pre-1.13 versions of Consul [[GH-14107](https://github.com/hashicorp/consul/issues/14107)]. Fixed in 1.13.1 [[GH-14149](https://github.com/hashicorp/consul/issues/14149)]. Refer to [1.13 upgrade guidance](https://www.consul.io/docs/upgrading/upgrade-specific#all-service-mesh-deployments) for more information. -* connect: if using auto-encrypt or auto-config, TLS is required for gRPC communication between Envoy and Consul as of 1.13.0; this TLS for gRPC requirement will be removed in a future 1.13 patch release. Refer to [1.13 upgrade guidance](https://www.consul.io/docs/upgrading/upgrade-specific#service-mesh-deployments-using-auto-encrypt-or-auto-config) for more information. 
-* connect: if a pre-1.13 Consul agent's HTTPS port was not enabled, upgrading to 1.13 may turn on TLS for gRPC communication for Envoy and Consul depending on the agent's TLS configuration. Refer to [1.13 upgrade guidance](https://www.consul.io/docs/upgrading/upgrade-specific#grpc-tls) for more information. +* connect: contains an upgrade compatibility issue when restoring snapshots containing service mesh proxy registrations from pre-1.13 versions of Consul [[GH-14107](https://github.com/hashicorp/consul/issues/14107)]. Fixed in 1.13.1 [[GH-14149](https://github.com/hashicorp/consul/issues/14149)]. Refer to [1.13 upgrade guidance](https://developer.hashicorp.com/docs/upgrading/upgrade-specific#all-service-mesh-deployments) for more information. +* connect: if using auto-encrypt or auto-config, TLS is required for gRPC communication between Envoy and Consul as of 1.13.0; this TLS for gRPC requirement will be removed in a future 1.13 patch release. Refer to [1.13 upgrade guidance](https://developer.hashicorp.com/docs/upgrading/upgrade-specific#service-mesh-deployments-using-auto-encrypt-or-auto-config) for more information. +* connect: if a pre-1.13 Consul agent's HTTPS port was not enabled, upgrading to 1.13 may turn on TLS for gRPC communication for Envoy and Consul depending on the agent's TLS configuration. Refer to [1.13 upgrade guidance](https://developer.hashicorp.com/docs/upgrading/upgrade-specific#grpc-tls) for more information. * connect: Removes support for Envoy 1.19 [[GH-13807](https://github.com/hashicorp/consul/issues/13807)] * telemetry: config flag `telemetry { disable_compat_1.9 = (true|false) }` has been removed. Before upgrading you should remove this flag from your config if the flag is being used. [[GH-13532](https://github.com/hashicorp/consul/issues/13532)] FEATURES: -* **Cluster Peering (Beta)** This version adds a new model to federate Consul clusters for both service mesh and traditional service discovery. Cluster peering allows for service interconnectivity with looser coupling than the existing WAN federation. For more information refer to the [cluster peering](https://www.consul.io/docs/connect/cluster-peering) documentation. -* **Transparent proxying through terminating gateways** This version adds egress traffic control to destinations outside of Consul's catalog, such as APIs on the public internet. Transparent proxies can dial [destinations defined in service-defaults](https://www.consul.io/docs/connect/config-entries/service-defaults#destination) and have the traffic routed through terminating gateways. For more information refer to the [terminating gateway](https://www.consul.io/docs/connect/gateways/terminating-gateway#terminating-gateway-configuration) documentation. +* **Cluster Peering (Beta)** This version adds a new model to federate Consul clusters for both service mesh and traditional service discovery. Cluster peering allows for service interconnectivity with looser coupling than the existing WAN federation. For more information refer to the [cluster peering](https://developer.hashicorp.com/docs/connect/cluster-peering) documentation. +* **Transparent proxying through terminating gateways** This version adds egress traffic control to destinations outside of Consul's catalog, such as APIs on the public internet. Transparent proxies can dial [destinations defined in service-defaults](https://developer.hashicorp.com/docs/connect/config-entries/service-defaults#destination) and have the traffic routed through terminating gateways. 
For more information refer to the [terminating gateway](https://developer.hashicorp.com/docs/connect/gateways/terminating-gateway#terminating-gateway-configuration) documentation. * acl: It is now possible to login and logout using the gRPC API [[GH-12935](https://github.com/hashicorp/consul/issues/12935)] * agent: Added information about build date alongside other version information for Consul. Extended /agent/self endpoint and `consul version` commands to report this. Agent also reports build date in log on startup. [[GH-13357](https://github.com/hashicorp/consul/issues/13357)] @@ -2953,7 +3067,7 @@ SECURITY: FEATURES: * Admin Partitions (Consul Enterprise only) This version adds admin partitions, a new entity defining administrative and networking boundaries within a Consul deployment. For more information refer to the - [Admin Partition](https://www.consul.io/docs/enterprise/admin-partitions) documentation. [[GH-11855](https://github.com/hashicorp/consul/issues/11855)] + [Admin Partition](https://developer.hashicorp.com/docs/enterprise/admin-partitions) documentation. [[GH-11855](https://github.com/hashicorp/consul/issues/11855)] * networking: **(Enterprise Only)** Make `segment_limit` configurable, cap at 256. BUG FIXES: @@ -2974,7 +3088,7 @@ SECURITY: FEATURES: -* Admin Partitions (Consul Enterprise only) This version adds admin partitions, a new entity defining administrative and networking boundaries within a Consul deployment. For more information refer to the [Admin Partition](https://www.consul.io/docs/enterprise/admin-partitions) documentation. +* Admin Partitions (Consul Enterprise only) This version adds admin partitions, a new entity defining administrative and networking boundaries within a Consul deployment. For more information refer to the [Admin Partition](https://developer.hashicorp.com/docs/enterprise/admin-partitions) documentation. * ca: Add a configurable TTL for Connect CA root certificates. The configuration is supported by the Vault and Consul providers. [[GH-11428](https://github.com/hashicorp/consul/issues/11428)] * ca: Add a configurable TTL to the AWS ACM Private CA provider root certificate. [[GH-11449](https://github.com/hashicorp/consul/issues/11449)] * health-checks: add support for h2c in http2 ping health checks [[GH-10690](https://github.com/hashicorp/consul/issues/10690)] @@ -3366,7 +3480,7 @@ token. [[GH-10795](https://github.com/hashicorp/consul/issues/10795)] KNOWN ISSUES: -* The change to enable streaming by default uncovered an incompatibility between streaming and WAN federation over mesh gateways causing traffic to fall back to attempting a direct WAN connection rather than transiting through the gateways. We currently suggest explicitly setting [`use_streaming_backend=false`](https://www.consul.io/docs/agent/config/config-files#use_streaming_backend) if using WAN federation over mesh gateways when upgrading to 1.10.1 and are working to address this issue in a future patch release. +* The change to enable streaming by default uncovered an incompatibility between streaming and WAN federation over mesh gateways causing traffic to fall back to attempting a direct WAN connection rather than transiting through the gateways. We currently suggest explicitly setting [`use_streaming_backend=false`](https://developer.hashicorp.com/docs/agent/config/config-files#use_streaming_backend) if using WAN federation over mesh gateways when upgrading to 1.10.1 and are working to address this issue in a future patch release. SECURITY: @@ -3468,7 +3582,7 @@ dir. 
[[GH-10089](https://github.com/hashicorp/consul/issues/10089)] * licensing: **(Enterprise Only)** Consul Enterprise has gained the ability update its license via a configuration reload. The same environment variables and configurations will be used to determine the new license. [[GH-10267](https://github.com/hashicorp/consul/issues/10267)] * monitoring: optimize the monitoring endpoint to avoid losing logs when under high load. [[GH-10368](https://github.com/hashicorp/consul/issues/10368)] * raft: allow reloading of raft trailing logs and snapshot timing to allow recovery from some [replication failure modes](https://github.com/hashicorp/consul/issues/9609). -telemetry: add metrics and documentation for [monitoring for replication issues](https://consul.io/docs/agent/telemetry#raft-replication-capacity-issues). [[GH-10129](https://github.com/hashicorp/consul/issues/10129)] +telemetry: add metrics and documentation for [monitoring for replication issues](https://developer.hashicorp.com/consul/docs/reference/agent/telemetry#raft-replication-capacity-issues). [[GH-10129](https://github.com/hashicorp/consul/issues/10129)] * streaming: change `use_streaming_backend` to default to true so that streaming is used by default when it is supported. [[GH-10149](https://github.com/hashicorp/consul/issues/10149)] * ui: Add 'optional route segments' and move namespaces to use them [[GH-10212](https://github.com/hashicorp/consul/issues/10212)] * ui: Adding a notice about how TransparentProxy mode affects the Upstreams list at the top of tab view [[GH-10136](https://github.com/hashicorp/consul/issues/10136)] @@ -4560,7 +4674,7 @@ BREAKING CHANGES: * http: The HTTP API no longer accepts JSON fields that are unknown to it. Instead errors will be returned with 400 status codes [[GH-6874](https://github.com/hashicorp/consul/pull/6874)] * dns: PTR record queries now return answers that contain the Consul datacenter as a label between `service` and the domain. [[GH-6909](https://github.com/hashicorp/consul/pull/6909)] -* agent: The ACL requirement for the [agent/force-leave endpoint](https://www.consul.io/api/agent.html#force-leave-and-shutdown) is now `operator:write` rather than `agent:write`. [[GH-7033](https://github.com/hashicorp/consul/pull/7033)] +* agent: The ACL requirement for the [agent/force-leave endpoint](https://developer.hashicorp.com/api/agent.html#force-leave-and-shutdown) is now `operator:write` rather than `agent:write`. [[GH-7033](https://github.com/hashicorp/consul/pull/7033)] * logging: Switch over to using go-hclog and allow emitting either structured or unstructured logs. This changes the log format quite a bit and could break any log parsing users may have in place. [[GH-1249](https://github.com/hashicorp/consul/issues/1249)][[GH-7130](https://github.com/hashicorp/consul/pull/7130)] * intentions: Change the ACL requirement and enforcement for wildcard rules. Previously this would look for an ACL rule that would grant access to the service/intention `*`. Now, in order to write a wildcard intention requires write access to all intentions and reading a wildcard intention requires read access to any intention that would match. Additionally intention listing and reading allow access if the requester can read either side of the intention whereas before it only allowed it for permissions on the destination side. [[GH-7028](https://github.com/hashicorp/consul/pull/7028)] * telemetry: `consul.rpc.query` has changed to only measure the _start_ of `srv.blockingQuery()` calls. 
In certain rare cases where there are lots of idempotent updates this will cause the metric to report lower than before. The counter should now provides more meaningful behavior that maps to the rate of client-initiated requests. [[GH-7224](https://github.com/hashicorp/consul/pull/7224)] @@ -4575,7 +4689,7 @@ FEATURES: * Connect * UI [[GH6639](https://github.com/hashicorp/consul/pull/6639)] * agent: Add Cloud Auto-join support for Tencent Cloud [[GH-6818](https://github.com/hashicorp/consul/pull/6818)] -* connect: Added a new CA provider allowing Connect certificates to be managed by AWS [ACM Private CA](https://www.consul.io/docs/connect/ca/aws.html). +* connect: Added a new CA provider allowing Connect certificates to be managed by AWS [ACM Private CA](https://developer.hashicorp.com/docs/connect/ca/aws.html). * connect: Allow configuration of upstream connection limits in Envoy [[GH-6829](https://github.com/hashicorp/consul/pull/6829)] * ui: Adds UI support for [Exposed Checks](https://github.com/hashicorp/consul/pull/6446) [[GH6575]](https://github.com/hashicorp/consul/pull/6575) * ui: Visualisation of the Discovery Chain [[GH6746]](https://github.com/hashicorp/consul/pull/6746) @@ -4800,8 +4914,8 @@ BREAKING CHANGES: FEATURES: -* **Connect Envoy Supports L7 Routing:** Additional configuration entry types `service-router`, `service-resolver`, and `service-splitter`, allow for configuring Envoy sidecars to enable reliability and deployment patterns at L7 such as HTTP path-based routing, traffic shifting, and advanced failover capabilities. For more information see the [L7 traffic management](https://www.consul.io/docs/connect/l7-traffic-management.html) docs. -* **Mesh Gateways:** Envoy can now be run as a gateway to route Connect traffic across datacenters using SNI headers, allowing connectivty across platforms and clouds and other complex network topologies. Read more in the [mesh gateway docs](https://www.consul.io/docs/connect/mesh_gateway.html). +* **Connect Envoy Supports L7 Routing:** Additional configuration entry types `service-router`, `service-resolver`, and `service-splitter`, allow for configuring Envoy sidecars to enable reliability and deployment patterns at L7 such as HTTP path-based routing, traffic shifting, and advanced failover capabilities. For more information see the [L7 traffic management](https://developer.hashicorp.com/docs/connect/l7-traffic-management.html) docs. +* **Mesh Gateways:** Envoy can now be run as a gateway to route Connect traffic across datacenters using SNI headers, allowing connectivty across platforms and clouds and other complex network topologies. Read more in the [mesh gateway docs](https://developer.hashicorp.com/docs/connect/mesh_gateway.html). * **Intention & CA Replication:** In order to enable connecitivty for services across datacenters, Connect intentions are now replicated and the Connect CA cross-signs from the [primary_datacenter](/docs/agent/config/config-files.html#primary_datacenter). This feature was previously part of Consul Enterprise. * agent: add `local-only` parameter to operator/keyring list requests to force queries to only hit local servers. [[GH-6279](https://github.com/hashicorp/consul/pull/6279)] * connect: expose an API endpoint to compile the discovery chain [[GH-6248](https://github.com/hashicorp/consul/issues/6248)] @@ -4935,8 +5049,8 @@ BREAKING CHANGES: * ui: Legacy UI has been removed. Setting the CONSUL_UI_LEGACY environment variable to 1 or true will no longer revert to serving the old UI. 
[[GH-5643](https://github.com/hashicorp/consul/pull/5643)] FEATURES: -* **Connect Envoy Supports L7 Observability:** We introduce features that allow configuring Envoy sidecars to emit metrics and tracing at L7 (http, http2, grpc supported). For more information see the [Envoy Integration](https://consul.io/docs/connect/proxies/envoy.html) docs. -* **Centralized Configuration:** Enables central configuration of some service and proxy defaults. For more information see the [Configuration Entries](https://consul.io/docs/agent/config_entries.html) docs +* **Connect Envoy Supports L7 Observability:** We introduce features that allow configuring Envoy sidecars to emit metrics and tracing at L7 (http, http2, grpc supported). For more information see the [Envoy Integration](https://developer.hashicorp.com/consul/docs/connect/proxies/envoy.html) docs. +* **Centralized Configuration:** Enables central configuration of some service and proxy defaults. For more information see the [Configuration Entries](https://developer.hashicorp.com/consul/docs/agent/config_entries.html) docs * api: Implement data filtering for some endpoints using a new filtering language. [[GH-5579](https://github.com/hashicorp/consul/pull/5579)] * snapshot agent (Consul Enterprise): Added support for saving snapshots to Azure Blob Storage. * acl: tokens can be created with an optional expiration time [[GH-5353](https://github.com/hashicorp/consul/issues/5353)] @@ -5039,7 +5153,7 @@ BUG FIXES: FEATURES: -* api: The transaction API now supports catalog operations for interacting with nodes, services and checks. See the [transacton API page](https://www.consul.io/api/txn.html#tables-of-operations) for more information. [[GH-4869](https://github.com/hashicorp/consul/pull/4869)] +* api: The transaction API now supports catalog operations for interacting with nodes, services and checks. See the [transacton API page](https://developer.hashicorp.com/api/txn.html#tables-of-operations) for more information. [[GH-4869](https://github.com/hashicorp/consul/pull/4869)] SECURITY: @@ -5141,22 +5255,22 @@ FEATURES: * **Connect Envoy Support**: This release includes support for using Envoy as a Proxy with Consul Connect (Beta). Read the [announcement blog post](https://www.hashicorp.com/blog/consul-1-3-envoy) or [reference - documentation](https://www.consul.io/docs/connect/proxies/envoy.html) + documentation](https://developer.hashicorp.com/docs/connect/proxies/envoy.html) for more detail. * **Sidecar Service Registration**: As part of the ongoing Connect Beta we add a new, more convenient way to [register sidecar - proxies](https://www.consul.io/docs/connect/proxies/sidecar-service.html) + proxies](https://developer.hashicorp.com/docs/connect/proxies/sidecar-service.html) from within a regular service definition. * **Deprecating Managed Proxies**: The Connect Beta launched with a feature named "managed proxies". These will no longer be supported in favour of the simpler sidecar service registration. Existing functionality will not be removed until a later major release but will not be supported with fixes. See the [deprecation - notice](https://www.consul.io/docs/connect/proxies/managed-deprecated.html) + notice](https://developer.hashicorp.com/docs/connect/proxies/managed-deprecated.html) for full details. * New command `consul services register` and `consul services deregister` for registering and deregistering services from the command line. 
[[GH-4732](https://github.com/hashicorp/consul/issues/4732)] -* api: Service discovery endpoints now support [caching results in the local agent](https://www.consul.io/api/index.html#agent-caching). [[GH-4541](https://github.com/hashicorp/consul/pull/4541)] +* api: Service discovery endpoints now support [caching results in the local agent](https://developer.hashicorp.com/api/index.html#agent-caching). [[GH-4541](https://github.com/hashicorp/consul/pull/4541)] * dns: Added SOA configuration for DNS settings. [[GH-4713](https://github.com/hashicorp/consul/issues/4713)] IMPROVEMENTS: @@ -5164,7 +5278,7 @@ IMPROVEMENTS: * ui: Improve layout of node 'cards' by restricting the grid layout to a maximum of 4 columns [[GH-4761]](https://github.com/hashicorp/consul/pull/4761) * ui: Load the TextEncoder/Decoder polyfill dynamically so it's not downloaded to browsers with native support [[GH-4767](https://github.com/hashicorp/consul/pull/4767)] * cli: `consul connect proxy` now supports a [`--sidecar-for` - option](https://www.consul.io/docs/commands/connect/proxy.html#sidecar-for) to + option](https://developer.hashicorp.com/docs/commands/connect/proxy.html#sidecar-for) to allow simple integration with new sidecar service registrations. * api: /health and /catalog endpoints now support filtering by multiple tags [[GH-1781](https://github.com/hashicorp/consul/issues/1781)] * agent: Only update service `ModifyIndex` when it's state actually changes. This makes service watches much more efficient on large clusters. [[GH-4720](https://github.com/hashicorp/consul/pull/4720)] @@ -5238,7 +5352,7 @@ FEATURES: IMPROVEMENTS: * proxy: With `-register` flag, heartbeat failures will only log once service registration succeeds. [[GH-4314](https://github.com/hashicorp/consul/pull/4314)] -* http: 1.0.3 introduced rejection of non-printable chars in HTTP URLs due to a security vulnerability. Some users who had keys written with an older version which are now dissallowed were unable to delete them. A new config option [disable_http_unprintable_char_filter](https://www.consul.io/docs/agent/config/config-files.html#disable_http_unprintable_char_filter) is added to allow those users to remove the offending keys. Leaving this new option set long term is strongly discouraged as it bypasses filtering necessary to prevent some known vulnerabilities. [[GH-4442](https://github.com/hashicorp/consul/pull/4442)] +* http: 1.0.3 introduced rejection of non-printable chars in HTTP URLs due to a security vulnerability. Some users who had keys written with an older version which are now dissallowed were unable to delete them. A new config option [disable_http_unprintable_char_filter](https://developer.hashicorp.com/docs/agent/config/config-files.html#disable_http_unprintable_char_filter) is added to allow those users to remove the offending keys. Leaving this new option set long term is strongly discouraged as it bypasses filtering necessary to prevent some known vulnerabilities. [[GH-4442](https://github.com/hashicorp/consul/pull/4442)] * agent: Allow for advanced configuration of some gossip related parameters. [[GH-4058](https://github.com/hashicorp/consul/issues/4058)] * agent: Make some Gossip tuneables configurable via the config file [[GH-4444](https://github.com/hashicorp/consul/pull/4444)] * ui: Included searching on `.Tags` when using the freetext search field. 
[[GH-4383](https://github.com/hashicorp/consul/pull/4383)] @@ -5315,7 +5429,7 @@ FEATURES: * UI: The web UI has been completely redesigned and rebuilt and is in an opt-in beta period. Setting the `CONSUL_UI_BETA` environment variable to `1` or `true` will replace the existing UI with the new one. The existing UI will be deprecated and removed in a future release. [[GH-4086](https://github.com/hashicorp/consul/pull/4086)] -* api: Added support for Prometheus client format in metrics endpoint with `?format=prometheus` (see [docs](https://www.consul.io/api/agent.html#view-metrics)) [[GH-4014](https://github.com/hashicorp/consul/issues/4014)] +* api: Added support for Prometheus client format in metrics endpoint with `?format=prometheus` (see [docs](https://developer.hashicorp.com/api/agent.html#view-metrics)) [[GH-4014](https://github.com/hashicorp/consul/issues/4014)] * agent: New Cloud Auto-join provider: Joyent Triton. [[GH-4108](https://github.com/hashicorp/consul/pull/4108)] * agent: (Consul Enterprise) Implemented license management with license propagation within a datacenter. @@ -5325,7 +5439,7 @@ BREAKING CHANGES: - `CheckID` has been removed from config file check definitions (use `id` instead). - `script` has been removed from config file check definitions (use `args` instead). - `enableTagOverride` is no longer valid in service definitions (use `enable_tag_override` instead). - - The [deprecated set of metric names](https://consul.io/docs/upgrade-specific.html#metric-names-updated) (beginning with `consul.consul.`) has been removed along with the `enable_deprecated_names` option from the metrics configuration. + - The [deprecated set of metric names](https://developer.hashicorp.com/consul/docs/upgrade-specific.html#metric-names-updated) (beginning with `consul.consul.`) has been removed along with the `enable_deprecated_names` option from the metrics configuration. IMPROVEMENTS: @@ -5490,13 +5604,13 @@ IMPROVEMENTS: * agent: (Consul Enterprise) Added [AWS KMS support](http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html) for S3 snapshots using the snapshot agent. * agent: Watches in the Consul agent can now be configured to invoke an HTTP endpoint instead of an executable. [[GH-3305](https://github.com/hashicorp/consul/issues/3305)] -* agent: Added a new [`-config-format`](https://www.consul.io/docs/agent/config/cli-flags#_config_format) command line option which can be set to `hcl` or `json` to specify the format of configuration files. This is useful for cases where the file name cannot be controlled in order to provide the required extension. [[GH-3620](https://github.com/hashicorp/consul/issues/3620)] +* agent: Added a new [`-config-format`](https://developer.hashicorp.com/docs/agent/config/cli-flags#_config_format) command line option which can be set to `hcl` or `json` to specify the format of configuration files. This is useful for cases where the file name cannot be controlled in order to provide the required extension. [[GH-3620](https://github.com/hashicorp/consul/issues/3620)] * agent: DNS recursors can now be specified as [go-sockaddr](https://godoc.org/github.com/hashicorp/go-sockaddr/template) templates. [[GH-2932](https://github.com/hashicorp/consul/issues/2932)] * agent: Serf snapshots no longer save network coordinate information. This enables recovery from errors upon agent restart. [[GH-489](https://github.com/hashicorp/serf/issues/489)] * agent: Added defensive code to prevent out of range ping times from infecting network coordinates. 
Updates to the coordinate system with negative round trip times or round trip times higher than 10 seconds will log an error but will be ignored. * agent: The agent now warns when there are extra unparsed command line arguments and refuses to start. [[GH-3397](https://github.com/hashicorp/consul/issues/3397)] * agent: Updated go-sockaddr library to get CoreOS route detection fixes and the new `mask` functionality. [[GH-3633](https://github.com/hashicorp/consul/issues/3633)] -* agent: Added a new [`enable_agent_tls_for_checks`](https://www.consul.io/docs/agent/config/config-files.html#enable_agent_tls_for_checks) configuration option that allows HTTP health checks for services requiring 2-way TLS to be checked using the agent's credentials. [[GH-3364](https://github.com/hashicorp/consul/issues/3364)] +* agent: Added a new [`enable_agent_tls_for_checks`](https://developer.hashicorp.com/docs/agent/config/config-files.html#enable_agent_tls_for_checks) configuration option that allows HTTP health checks for services requiring 2-way TLS to be checked using the agent's credentials. [[GH-3364](https://github.com/hashicorp/consul/issues/3364)] * agent: Made logging of health check status more uniform and moved log entries with full check output from DEBUG to TRACE level for less noise. [[GH-3683](https://github.com/hashicorp/consul/issues/3683)] * build: Consul is now built with Go 1.9.2. [[GH-3663](https://github.com/hashicorp/consul/issues/3663)] @@ -5521,8 +5635,8 @@ SECURITY: BREAKING CHANGES: -* **Raft Protocol Now Defaults to 3:** The [`-raft-protocol`](https://www.consul.io/docs/agent/config/cli-flags#_raft_protocol) default has been changed from 2 to 3, enabling all [Autopilot](https://www.consul.io/docs/guides/autopilot.html) features by default. Version 3 requires Consul running 0.8.0 or newer on all servers in order to work, so if you are upgrading with older servers in a cluster then you will need to set this back to 2 in order to upgrade. See [Raft Protocol Version Compatibility](https://www.consul.io/docs/upgrade-specific.html#raft-protocol-version-compatibility) for more details. Also the format of `peers.json` used for outage recovery is different when running with the lastest Raft protocol. See [Manual Recovery Using peers.json](https://www.consul.io/docs/guides/outage.html#manual-recovery-using-peers-json) for a description of the required format. [[GH-3477](https://github.com/hashicorp/consul/issues/3477)] -* **Config Files Require an Extension:** As part of supporting the [HCL](https://github.com/hashicorp/hcl#syntax) format for Consul's config files, an `.hcl` or `.json` extension is required for all config files loaded by Consul, even when using the [`-config-file`](https://www.consul.io/docs/agent/config/cli-flags#_config_file) argument to specify a file directly. [[GH-3480](https://github.com/hashicorp/consul/issues/3480)] +* **Raft Protocol Now Defaults to 3:** The [`-raft-protocol`](https://developer.hashicorp.com/docs/agent/config/cli-flags#_raft_protocol) default has been changed from 2 to 3, enabling all [Autopilot](https://developer.hashicorp.com/docs/guides/autopilot.html) features by default. Version 3 requires Consul running 0.8.0 or newer on all servers in order to work, so if you are upgrading with older servers in a cluster then you will need to set this back to 2 in order to upgrade. See [Raft Protocol Version Compatibility](https://developer.hashicorp.com/docs/upgrade-specific.html#raft-protocol-version-compatibility) for more details. 
Also the format of `peers.json` used for outage recovery is different when running with the latest Raft protocol. See [Manual Recovery Using peers.json](https://developer.hashicorp.com/docs/guides/outage.html#manual-recovery-using-peers-json) for a description of the required format. [[GH-3477](https://github.com/hashicorp/consul/issues/3477)] +* **Config Files Require an Extension:** As part of supporting the [HCL](https://github.com/hashicorp/hcl#syntax) format for Consul's config files, an `.hcl` or `.json` extension is required for all config files loaded by Consul, even when using the [`-config-file`](https://developer.hashicorp.com/docs/agent/config/cli-flags#_config_file) argument to specify a file directly. [[GH-3480](https://github.com/hashicorp/consul/issues/3480)] * **Deprecated Options Have Been Removed:** All of Consul's previously deprecated command line flags and config options have been removed, so these will need to be mapped to their equivalents before upgrading. [[GH-3480](https://github.com/hashicorp/consul/issues/3480)]
Detailed List of Removed Options and their Equivalents @@ -5533,45 +5647,45 @@ BREAKING CHANGES: | `-atlas-token`| None, Atlas is no longer supported. | | `-atlas-join` | None, Atlas is no longer supported. | | `-atlas-endpoint` | None, Atlas is no longer supported. | - | `-dc` | [`-datacenter`](https://www.consul.io/docs/agent/config/cli-flags#_datacenter) | - | `-retry-join-azure-tag-name` | [`-retry-join`](https://www.consul.io/docs/agent/config/cli-flags#_retry_join) | - | `-retry-join-azure-tag-value` | [`-retry-join`](https://www.consul.io/docs/agent/config/cli-flags#_retry_join) | - | `-retry-join-ec2-region` | [`-retry-join`](https://www.consul.io/docs/agent/config/cli-flags#_retry_join) | - | `-retry-join-ec2-tag-key` | [`-retry-join`](https://www.consul.io/docs/agent/config/cli-flags#_retry_join) | - | `-retry-join-ec2-tag-value` | [`-retry-join`](https://www.consul.io/docs/agent/config/cli-flags#_retry_join) | - | `-retry-join-gce-credentials-file` | [`-retry-join`](https://www.consul.io/docs/agent/config/cli-flags#_retry_join) | - | `-retry-join-gce-project-name` | [`-retry-join`](https://www.consul.io/docs/agent/config/cli-flags#_retry_join) | - | `-retry-join-gce-tag-name` | [`-retry-join`](https://www.consul.io/docs/agent/config/cli-flags#_retry_join) | - | `-retry-join-gce-zone-pattern` | [`-retry-join`](https://www.consul.io/docs/agent/config/cli-flags#_retry_join) | + | `-dc` | [`-datacenter`](https://developer.hashicorp.com/docs/agent/config/cli-flags#_datacenter) | + | `-retry-join-azure-tag-name` | [`-retry-join`](https://developer.hashicorp.com/docs/agent/config/cli-flags#_retry_join) | + | `-retry-join-azure-tag-value` | [`-retry-join`](https://developer.hashicorp.com/docs/agent/config/cli-flags#_retry_join) | + | `-retry-join-ec2-region` | [`-retry-join`](https://developer.hashicorp.com/docs/agent/config/cli-flags#_retry_join) | + | `-retry-join-ec2-tag-key` | [`-retry-join`](https://developer.hashicorp.com/docs/agent/config/cli-flags#_retry_join) | + | `-retry-join-ec2-tag-value` | [`-retry-join`](https://developer.hashicorp.com/docs/agent/config/cli-flags#_retry_join) | + | `-retry-join-gce-credentials-file` | [`-retry-join`](https://developer.hashicorp.com/docs/agent/config/cli-flags#_retry_join) | + | `-retry-join-gce-project-name` | [`-retry-join`](https://developer.hashicorp.com/docs/agent/config/cli-flags#_retry_join) | + | `-retry-join-gce-tag-name` | [`-retry-join`](https://developer.hashicorp.com/docs/agent/config/cli-flags#_retry_join) | + | `-retry-join-gce-zone-pattern` | [`-retry-join`](https://developer.hashicorp.com/docs/agent/config/cli-flags#_retry_join) | | `addresses.rpc` | None, the RPC server for CLI commands is no longer supported. | - | `advertise_addrs` | [`ports`](https://www.consul.io/docs/agent/config/config-files.html#ports) with [`advertise_addr`](https://www.consul/io/docs/agent/config/config-files.html#advertise_addr) and/or [`advertise_addr_wan`](https://www.consul.io/docs/agent/config/config-files.html#advertise_addr_wan) | + | `advertise_addrs` | [`ports`](https://developer.hashicorp.com/docs/agent/config/config-files.html#ports) with [`advertise_addr`](https://developer.hashicorp.com/docs/agent/config/config-files.html#advertise_addr) and/or [`advertise_addr_wan`](https://developer.hashicorp.com/docs/agent/config/config-files.html#advertise_addr_wan) | | `atlas_infrastructure` | None, Atlas is no longer supported. | | `atlas_token` | None, Atlas is no longer supported. | | `atlas_acl_token` | None, Atlas is no longer supported.
| | `atlas_join` | None, Atlas is no longer supported. | | `atlas_endpoint` | None, Atlas is no longer supported. | - | `dogstatsd_addr` | [`telemetry.dogstatsd_addr`](https://www.consul.io/docs/agent/config/config-files.html#telemetry-dogstatsd_addr) | - | `dogstatsd_tags` | [`telemetry.dogstatsd_tags`](https://www.consul.io/docs/agent/config/config-files.html#telemetry-dogstatsd_tags) | - | `http_api_response_headers` | [`http_config.response_headers`](https://www.consul.io/docs/agent/config/config-files.html#response_headers) | + | `dogstatsd_addr` | [`telemetry.dogstatsd_addr`](https://developer.hashicorp.com/docs/agent/config/config-files.html#telemetry-dogstatsd_addr) | + | `dogstatsd_tags` | [`telemetry.dogstatsd_tags`](https://developer.hashicorp.com/docs/agent/config/config-files.html#telemetry-dogstatsd_tags) | + | `http_api_response_headers` | [`http_config.response_headers`](https://developer.hashicorp.com/docs/agent/config/config-files.html#response_headers) | | `ports.rpc` | None, the RPC server for CLI commands is no longer supported. | | `recursor` | [`recursors`](https://github.com/hashicorp/consul/blob/main/website/source/docs/agent/config/config-files.html.md#recursors) | - | `retry_join_azure` | [`-retry-join`](https://www.consul.io/docs/agent/config/cli-flags#_retry_join) | - | `retry_join_ec2` | [`-retry-join`](https://www.consul.io/docs/agent/config/cli-flags#_retry_join) | - | `retry_join_gce` | [`-retry-join`](https://www.consul.io/docs/agent/config/cli-flags#_retry_join) | + | `retry_join_azure` | [`-retry-join`](https://developer.hashicorp.com/docs/agent/config/cli-flags#_retry_join) | + | `retry_join_ec2` | [`-retry-join`](https://developer.hashicorp.com/docs/agent/config/cli-flags#_retry_join) | + | `retry_join_gce` | [`-retry-join`](https://developer.hashicorp.com/docs/agent/config/cli-flags#_retry_join) | | `statsd_addr` | [`telemetry.statsd_address`](https://github.com/hashicorp/consul/blob/main/website/source/docs/agent/config/config-files.html.md#telemetry-statsd_address) | | `statsite_addr` | [`telemetry.statsite_address`](https://github.com/hashicorp/consul/blob/main/website/source/docs/agent/config/config-files.html.md#telemetry-statsite_address) | - | `statsite_prefix` | [`telemetry.metrics_prefix`](https://www.consul.io/docs/agent/config/config-files.html#telemetry-metrics_prefix) | - | `telemetry.statsite_prefix` | [`telemetry.metrics_prefix`](https://www.consul.io/docs/agent/config/config-files.html#telemetry-metrics_prefix) | - | (service definitions) `serviceid` | [`service_id`](https://www.consul.io/docs/agent/services.html) | - | (service definitions) `dockercontainerid` | [`docker_container_id`](https://www.consul.io/docs/agent/services.html) | - | (service definitions) `tlsskipverify` | [`tls_skip_verify`](https://www.consul.io/docs/agent/services.html) | - | (service definitions) `deregistercriticalserviceafter` | [`deregister_critical_service_after`](https://www.consul.io/docs/agent/services.html) | + | `statsite_prefix` | [`telemetry.metrics_prefix`](https://developer.hashicorp.com/docs/agent/config/config-files.html#telemetry-metrics_prefix) | + | `telemetry.statsite_prefix` | [`telemetry.metrics_prefix`](https://developer.hashicorp.com/docs/agent/config/config-files.html#telemetry-metrics_prefix) | + | (service definitions) `serviceid` | [`service_id`](https://developer.hashicorp.com/docs/agent/services.html) | + | (service definitions) `dockercontainerid` | [`docker_container_id`](https://developer.hashicorp.com/docs/agent/services.html) 
| + | (service definitions) `tlsskipverify` | [`tls_skip_verify`](https://developer.hashicorp.com/docs/agent/services.html) | + | (service definitions) `deregistercriticalserviceafter` | [`deregister_critical_service_after`](https://developer.hashicorp.com/docs/agent/services.html) |
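
For readers mapping the removed `-retry-join-{ec2,azure,gce}-*` flags in the table above onto the unified `-retry-join` provider syntax, here is a minimal Go sketch that resolves a go-discover provider string to addresses, roughly the way the agent does. It assumes the `Discover.Addrs` helper shown in the go-discover README, and the provider, region, and tag values are placeholders; quoting of values follows the escaping rules described in the next entry.

```go
package main

import (
	"fmt"
	"log"
	"os"

	discover "github.com/hashicorp/go-discover"
)

func main() {
	// The same provider string accepted by -retry-join / retry_join.
	// Values containing spaces, backslashes, or double quotes must be
	// double-quoted, e.g. tag_value="some value".
	cfg := `provider=aws region=us-east-1 tag_key=consul tag_value=server`

	l := log.New(os.Stderr, "", log.LstdFlags)
	d := discover.Discover{}

	addrs, err := d.Addrs(cfg, l)
	if err != nil {
		log.Fatalf("discover: %v", err)
	}
	fmt.Println("addresses to join:", addrs)
}
```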
-* **`statsite_prefix` Renamed to `metrics_prefix`:** Since the `statsite_prefix` configuration option applied to all telemetry providers, `statsite_prefix` was renamed to [`metrics_prefix`](https://www.consul.io/docs/agent/config/config-files.html#telemetry-metrics_prefix). Configuration files will need to be updated when upgrading to this version of Consul. [[GH-3498](https://github.com/hashicorp/consul/issues/3498)] +* **`statsite_prefix` Renamed to `metrics_prefix`:** Since the `statsite_prefix` configuration option applied to all telemetry providers, `statsite_prefix` was renamed to [`metrics_prefix`](https://developer.hashicorp.com/docs/agent/config/config-files.html#telemetry-metrics_prefix). Configuration files will need to be updated when upgrading to this version of Consul. [[GH-3498](https://github.com/hashicorp/consul/issues/3498)] * **`advertise_addrs` Removed:** This configuration option was removed since it was redundant with `advertise_addr` and `advertise_addr_wan` in combination with `ports` and also wrongly stated that you could configure both host and port. [[GH-3516](https://github.com/hashicorp/consul/issues/3516)] -* **Escaping Behavior Changed for go-discover Configs:** The format for [`-retry-join`](https://www.consul.io/docs/agent/config/cli-flags#_retry_join) and [`-retry-join-wan`](https://www.consul.io/docs/agent/config/cli-flags#_retry_join_wan) values that use [go-discover](https://github.com/hashicorp/go-discover) Cloud auto joining has changed. Values in `key=val` sequences must no longer be URL encoded and can be provided as literals as long as they do not contain spaces, backslashes `\` or double quotes `"`. If values contain these characters then use double quotes as in `"some key"="some value"`. Special characters within a double quoted string can be escaped with a backslash `\`. [[GH-3417](https://github.com/hashicorp/consul/issues/3417)] +* **Escaping Behavior Changed for go-discover Configs:** The format for [`-retry-join`](https://developer.hashicorp.com/docs/agent/config/cli-flags#_retry_join) and [`-retry-join-wan`](https://developer.hashicorp.com/docs/agent/config/cli-flags#_retry_join_wan) values that use [go-discover](https://github.com/hashicorp/go-discover) Cloud auto joining has changed. Values in `key=val` sequences must no longer be URL encoded and can be provided as literals as long as they do not contain spaces, backslashes `\` or double quotes `"`. If values contain these characters then use double quotes as in `"some key"="some value"`. Special characters within a double quoted string can be escaped with a backslash `\`. [[GH-3417](https://github.com/hashicorp/consul/issues/3417)] * **HTTP Verbs are Enforced in Many HTTP APIs:** Many endpoints in the HTTP API that previously took any HTTP verb now check for specific HTTP verbs and enforce them. This may break clients relying on the old behavior. [[GH-3405](https://github.com/hashicorp/consul/issues/3405)]
Detailed List of Updated Endpoints and Required HTTP Verbs @@ -5622,7 +5736,7 @@ BREAKING CHANGES:
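
A rough sketch of what the verb enforcement above means in practice: a GET sent to an endpoint that only accepts PUT is now rejected outright instead of being handled. The endpoint chosen and the expected 405 Method Not Allowed status are assumptions about a typical local agent at `127.0.0.1:8500`, not something spelled out in the entry itself.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// /v1/agent/service/register expects PUT; a GET should now be refused.
	resp, err := http.Get("http://127.0.0.1:8500/v1/agent/service/register")
	if err != nil {
		log.Fatalf("request failed: %v", err)
	}
	defer resp.Body.Close()

	fmt.Println("status for GET on a PUT-only endpoint:", resp.Status)
}
```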
* **Unauthorized KV Requests Return 403:** When ACLs are enabled, reading a key with an unauthorized token returns a 403. This previously returned a 404 response. -* **Config Section of Agent Self Endpoint has Changed:** The /v1/agent/self endpoint's `Config` section has often been in flux as it was directly returning one of Consul's internal data structures. This configuration structure has been moved under `DebugConfig`, and is documents as for debugging use and subject to change, and a small set of elements of `Config` have been maintained and documented. See [Read Configuration](https://www.consul.io/api/agent.html#read-configuration) endpoint documentation for details. [[GH-3532](https://github.com/hashicorp/consul/issues/3532)] +* **Config Section of Agent Self Endpoint has Changed:** The /v1/agent/self endpoint's `Config` section has often been in flux as it was directly returning one of Consul's internal data structures. This configuration structure has been moved under `DebugConfig`, and is documented as for debugging use and subject to change, and a small set of elements of `Config` have been maintained and documented. See [Read Configuration](https://developer.hashicorp.com/api/agent.html#read-configuration) endpoint documentation for details. [[GH-3532](https://github.com/hashicorp/consul/issues/3532)] * **Deprecated `configtest` Command Removed:** The `configtest` command was deprecated and has been superseded by the `validate` command. * **Undocumented Flags in `validate` Command Removed:** The `validate` command supported the `-config-file` and `-config-dir` command line flags but did not document them. This support has been removed since the flags are not required. * **Metric Names Updated:** Metric names no longer start with `consul.consul`. To help with transitioning dashboards and other metric consumers, the field `enable_deprecated_names` has been added to the telemetry section of the config, which will enable metrics with the old naming scheme to be sent alongside the new ones. [[GH-3535](https://github.com/hashicorp/consul/issues/3535)] @@ -5652,23 +5766,23 @@ BREAKING CHANGES: FEATURES: * **Support for HCL Config Files:** Consul now supports HashiCorp's [HCL](https://github.com/hashicorp/hcl#syntax) format for config files. This is easier to work with than JSON and supports comments. As part of this change, all config files will need to have either an `.hcl` or `.json` extension in order to specify their format. [[GH-3480](https://github.com/hashicorp/consul/issues/3480)] -* **Support for Binding to Multiple Addresses:** Consul now supports binding to multiple addresses for its HTTP, HTTPS, and DNS services. You can provide a space-separated list of addresses to [`-client`](https://www.consul.io/docs/agent/config/cli-flags#_client) and [`addresses`](https://www.consul.io/docs/agent/config/config-files.html#addresses) configurations, or specify a [go-sockaddr](https://godoc.org/github.com/hashicorp/go-sockaddr/template) template that resolves to multiple addresses. [[GH-3480](https://github.com/hashicorp/consul/issues/3480)] +* **Support for Binding to Multiple Addresses:** Consul now supports binding to multiple addresses for its HTTP, HTTPS, and DNS services.
You can provide a space-separated list of addresses to [`-client`](https://developer.hashicorp.com/docs/agent/config/cli-flags#_client) and [`addresses`](https://developer.hashicorp.com/docs/agent/config/config-files.html#addresses) configurations, or specify a [go-sockaddr](https://godoc.org/github.com/hashicorp/go-sockaddr/template) template that resolves to multiple addresses. [[GH-3480](https://github.com/hashicorp/consul/issues/3480)] * **Support for RFC1464 DNS TXT records:** Consul DNS responses now contain the node meta data encoded according to RFC1464 as TXT records. [[GH-3343](https://github.com/hashicorp/consul/issues/3343)] * **Support for Running Subproccesses Directly Without a Shell:** Consul agent checks and watches now support an `args` configuration which is a list of arguments to run for the subprocess, which runs the subprocess directly without a shell. The old `script` and `handler` configurations are now deprecated (specify a shell explicitly if you require one). A `-shell=false` option is also available on `consul lock`, `consul watch`, and `consul exec` to run the subprocesses associated with those without a shell. [[GH-3509](https://github.com/hashicorp/consul/issues/3509)] -* **Sentinel Integration:** (Consul Enterprise) Consul's ACL system integrates with [Sentinel](https://www.consul.io/docs/guides/sentinel.html) to enable code policies that apply to KV writes. +* **Sentinel Integration:** (Consul Enterprise) Consul's ACL system integrates with [Sentinel](https://developer.hashicorp.com/docs/guides/sentinel.html) to enable code policies that apply to KV writes. IMPROVEMENTS: * agent: Added support to detect public IPv4 and IPv6 addresses on AWS. [[GH-3471](https://github.com/hashicorp/consul/issues/3471)] * agent: Improved /v1/operator/raft/configuration endpoint which allows Consul to avoid an extra agent RPC call for the `consul operator raft list-peers` command. [[GH-3449](https://github.com/hashicorp/consul/issues/3449)] -* agent: Improved ACL system for the KV store to support list permissions. This behavior can be opted in. For more information, see the [ACL Guide](https://www.consul.io/docs/guides/acl.html#list-policy-for-keys). [[GH-3511](https://github.com/hashicorp/consul/issues/3511)] +* agent: Improved ACL system for the KV store to support list permissions. This behavior can be opted in. For more information, see the [ACL Guide](https://developer.hashicorp.com/docs/guides/acl.html#list-policy-for-keys). [[GH-3511](https://github.com/hashicorp/consul/issues/3511)] * agent: Updates miekg/dns library to later version to pick up bug fixes and improvements. [[GH-3547](https://github.com/hashicorp/consul/issues/3547)] -* agent: Added automatic retries to the RPC path, and a brief RPC drain time when servers leave. These changes make Consul more robust during graceful leaves of Consul servers, such as during upgrades, and help shield applications from "no leader" errors. These are configured with new [`performance`](https://www.consul.io/docs/agent/config/config-files.html#performance) options. [[GH-3514](https://github.com/hashicorp/consul/issues/3514)] +* agent: Added automatic retries to the RPC path, and a brief RPC drain time when servers leave. These changes make Consul more robust during graceful leaves of Consul servers, such as during upgrades, and help shield applications from "no leader" errors. These are configured with new [`performance`](https://developer.hashicorp.com/docs/agent/config/config-files.html#performance) options. 
[[GH-3514](https://github.com/hashicorp/consul/issues/3514)] * agent: Added a new `discard_check_output` agent-level configuration option that can be used to trade off write load to the Consul servers vs. visibility of health check output. This is reloadable so it can be toggled without fully restarting the agent. [[GH-3562](https://github.com/hashicorp/consul/issues/3562)] * api: Updated the API client to ride out network errors when monitoring locks and semaphores. [[GH-3553](https://github.com/hashicorp/consul/issues/3553)] * build: Updated Go toolchain to version 1.9.1. [[GH-3537](https://github.com/hashicorp/consul/issues/3537)] * cli: `consul lock` and `consul watch` commands will forward `TERM` and `KILL` signals to their child subprocess. [[GH-3509](https://github.com/hashicorp/consul/issues/3509)] -* cli: Added support for [autocompletion](https://www.consul.io/docs/commands/index.html#autocompletion). [[GH-3412](https://github.com/hashicorp/consul/issues/3412)] +* cli: Added support for [autocompletion](https://developer.hashicorp.com/docs/commands/index.html#autocompletion). [[GH-3412](https://github.com/hashicorp/consul/issues/3412)] * server: Updated BoltDB to final version 1.3.1. [[GH-3502](https://github.com/hashicorp/consul/issues/3502)] * server: Improved dead member reap algorithm to fix edge cases where servers could get left behind. [[GH-3452](https://github.com/hashicorp/consul/issues/3452)] @@ -5689,9 +5803,9 @@ SECURITY: ## 0.9.3 (September 8, 2017) FEATURES: -* **LAN Network Segments:** (Consul Enterprise) Added a new [Network Segments](https://www.consul.io/docs/guides/segments.html) capability which allows users to configure Consul to support segmented LAN topologies with multiple, distinct gossip pools. [[GH-3431](https://github.com/hashicorp/consul/issues/3431)] +* **LAN Network Segments:** (Consul Enterprise) Added a new [Network Segments](https://developer.hashicorp.com/docs/guides/segments.html) capability which allows users to configure Consul to support segmented LAN topologies with multiple, distinct gossip pools. [[GH-3431](https://github.com/hashicorp/consul/issues/3431)] * **WAN Join for Cloud Providers:** Added WAN support for retry join for Cloud providers via go-discover, including Amazon AWS, Microsoft Azure, Google Cloud, and SoftLayer. This uses the same "provider" syntax supported for `-retry-join` via the `-retry-join-wan` configuration. [[GH-3406](https://github.com/hashicorp/consul/issues/3406)] -* **RPC Rate Limiter:** Consul agents in client mode have a new [`limits`](https://www.consul.io/docs/agent/config/config-files.html#limits) configuration that enables a rate limit on RPC calls the agent makes to Consul servers. [[GH-3140](https://github.com/hashicorp/consul/issues/3140)] +* **RPC Rate Limiter:** Consul agents in client mode have a new [`limits`](https://developer.hashicorp.com/docs/agent/config/config-files.html#limits) configuration that enables a rate limit on RPC calls the agent makes to Consul servers. [[GH-3140](https://github.com/hashicorp/consul/issues/3140)] IMPROVEMENTS: @@ -5721,20 +5835,20 @@ BUG FIXES: FEATURES: * **Secure ACL Token Introduction:** It's now possible to manage Consul's ACL tokens without having to place any tokens inside configuration files. This supports introduction of tokens as well as rotating. 
This is enabled with two new APIs: - * A new [`/v1/agent/token`](https://www.consul.io/api/agent.html#update-acl-tokens) API allows an agent's ACL tokens to be introduced without placing them into config files, and to update them without restarting the agent. See the [ACL Guide](https://www.consul.io/docs/guides/acl.html#create-an-agent-token) for an example. This was extended to ACL replication as well, along with a new [`enable_acl_replication`](https://www.consul.io/docs/agent/config/config-files.html#enable_acl_replication) config option. [GH-3324,GH-3357] - * A new [`/v1/acl/bootstrap`](https://www.consul.io/api/acl.html#bootstrap-acls) allows a cluster's first management token to be created without using the `acl_master_token` configuration. See the [ACL Guide](https://www.consul.io/docs/guides/acl.html#bootstrapping-acls) for an example. [[GH-3349](https://github.com/hashicorp/consul/issues/3349)] -* **Metrics Viewing Endpoint:** A new [`/v1/agent/metrics`](https://www.consul.io/api/agent.html#view-metrics) API displays the current values of internally tracked metrics. [[GH-3369](https://github.com/hashicorp/consul/issues/3369)] + * A new [`/v1/agent/token`](https://developer.hashicorp.com/api/agent.html#update-acl-tokens) API allows an agent's ACL tokens to be introduced without placing them into config files, and to update them without restarting the agent. See the [ACL Guide](https://developer.hashicorp.com/docs/guides/acl.html#create-an-agent-token) for an example. This was extended to ACL replication as well, along with a new [`enable_acl_replication`](https://developer.hashicorp.com/docs/agent/config/config-files.html#enable_acl_replication) config option. [GH-3324,GH-3357] + * A new [`/v1/acl/bootstrap`](https://developer.hashicorp.com/api/acl.html#bootstrap-acls) allows a cluster's first management token to be created without using the `acl_master_token` configuration. See the [ACL Guide](https://developer.hashicorp.com/docs/guides/acl.html#bootstrapping-acls) for an example. [[GH-3349](https://github.com/hashicorp/consul/issues/3349)] +* **Metrics Viewing Endpoint:** A new [`/v1/agent/metrics`](https://developer.hashicorp.com/api/agent.html#view-metrics) API displays the current values of internally tracked metrics. [[GH-3369](https://github.com/hashicorp/consul/issues/3369)] IMPROVEMENTS: -* agent: Retry Join for Amazon AWS, Microsoft Azure, Google Cloud, and (new) SoftLayer is now handled through the https://github.com/hashicorp/go-discover library. With this all `-retry-join-{ec2,azure,gce}-*` parameters have been deprecated in favor of a unified configuration. See [`-retry-join`](https://www.consul.io/docs/agent/config/cli-flags#_retry_join) for details. [GH-3282,GH-3351] +* agent: Retry Join for Amazon AWS, Microsoft Azure, Google Cloud, and (new) SoftLayer is now handled through the https://github.com/hashicorp/go-discover library. With this all `-retry-join-{ec2,azure,gce}-*` parameters have been deprecated in favor of a unified configuration. See [`-retry-join`](https://developer.hashicorp.com/docs/agent/config/cli-flags#_retry_join) for details. [GH-3282,GH-3351] * agent: Reports a more detailed error message if the LAN or WAN Serf instance fails to bind to an address. [[GH-3312](https://github.com/hashicorp/consul/issues/3312)] * agent: Added NS records and corrected SOA records to allow Consul's DNS interface to work properly with zone delegation. 
[[GH-1301](https://github.com/hashicorp/consul/issues/1301)] * agent: Added support for sending metrics with labels/tags to supported backends. [[GH-3369](https://github.com/hashicorp/consul/issues/3369)] * agent: Added a new `prefix_filter` option in the `telemetry` config to allow fine-grained allowing/blocking the sending of certain metrics by prefix. [[GH-3369](https://github.com/hashicorp/consul/issues/3369)] * cli: Added a `-child-exit-code` option to `consul lock` so that it propagates an error code of 2 if the child process exits with an error. [[GH-947](https://github.com/hashicorp/consul/issues/947)] -* docs: Added a new [Geo Failover Guide](https://www.consul.io/docs/guides/geo-failover.html) showing how to use prepared queries to implement geo failover policies for services. [[GH-3328](https://github.com/hashicorp/consul/issues/3328)] -* docs: Added a new [Consul with Containers Guide](https://www.consul.io/docs/guides/consul-containers.html) showing critical aspects of operating a Consul cluster that's run inside containers. [[GH-3347](https://github.com/hashicorp/consul/issues/3347)] +* docs: Added a new [Geo Failover Guide](https://developer.hashicorp.com/docs/guides/geo-failover.html) showing how to use prepared queries to implement geo failover policies for services. [[GH-3328](https://github.com/hashicorp/consul/issues/3328)] +* docs: Added a new [Consul with Containers Guide](https://developer.hashicorp.com/docs/guides/consul-containers.html) showing critical aspects of operating a Consul cluster that's run inside containers. [[GH-3347](https://github.com/hashicorp/consul/issues/3347)] * server: Added a `RemoveEmptyTags` option to prepared query templates which will strip out any empty strings in the tags list before executing a query. This is useful when interpolating into tags in a way where the tag is optional, and where searching for an empty tag would yield no results from the query. [[GH-2151](https://github.com/hashicorp/consul/issues/2151)] * server: Implemented a much faster recursive delete algorithm for the KV store. It has been benchmarked to be up to 100X faster on recursive deletes that affect millions of keys. [GH-1278, GH-3313] @@ -5751,15 +5865,15 @@ BUG FIXES: BREAKING CHANGES: -* agent: Added a new [`enable_script_checks`](https://www.consul.io/docs/agent/config/cli-flags#_enable_script_checks) configuration option that defaults to `false`, meaning that in order to allow an agent to run health checks that execute scripts, this will need to be configured and set to `true`. This provides a safer out-of-the-box configuration for Consul where operators must opt-in to allow script-based health checks. [[GH-3087](https://github.com/hashicorp/consul/issues/3087)] +* agent: Added a new [`enable_script_checks`](https://developer.hashicorp.com/docs/agent/config/cli-flags#_enable_script_checks) configuration option that defaults to `false`, meaning that in order to allow an agent to run health checks that execute scripts, this will need to be configured and set to `true`. This provides a safer out-of-the-box configuration for Consul where operators must opt-in to allow script-based health checks. [[GH-3087](https://github.com/hashicorp/consul/issues/3087)] * api: Reworked `context` support in the API client to more closely match the Go standard library, and added context support to write requests in addition to read requests. [GH-3273, GH-2992] * ui: Since the UI is now bundled with the application we no longer provide a separate UI package for downloading. 
[[GH-3292](https://github.com/hashicorp/consul/issues/3292)] FEATURES: -* agent: Added a new [`block_endpoints`](https://www.consul.io/docs/agent/config/config-files.html#block_endpoints) configuration option that allows blocking HTTP API endpoints by prefix. This allows operators to completely disallow access to specific endpoints on a given agent. [[GH-3252](https://github.com/hashicorp/consul/issues/3252)] -* cli: Added a new [`consul catalog`](https://www.consul.io/docs/commands/catalog.html) command for reading datacenters, nodes, and services from the catalog. [[GH-3204](https://github.com/hashicorp/consul/issues/3204)] -* server: (Consul Enterprise) Added a new [`consul operator area update`](https://www.consul.io/docs/commands/operator/area.html#update) command and corresponding HTTP endpoint to allow for transitioning the TLS setting of network areas at runtime. [[GH-3075](https://github.com/hashicorp/consul/issues/3075)] +* agent: Added a new [`block_endpoints`](https://developer.hashicorp.com/docs/agent/config/config-files.html#block_endpoints) configuration option that allows blocking HTTP API endpoints by prefix. This allows operators to completely disallow access to specific endpoints on a given agent. [[GH-3252](https://github.com/hashicorp/consul/issues/3252)] +* cli: Added a new [`consul catalog`](https://developer.hashicorp.com/docs/commands/catalog.html) command for reading datacenters, nodes, and services from the catalog. [[GH-3204](https://github.com/hashicorp/consul/issues/3204)] +* server: (Consul Enterprise) Added a new [`consul operator area update`](https://developer.hashicorp.com/docs/commands/operator/area.html#update) command and corresponding HTTP endpoint to allow for transitioning the TLS setting of network areas at runtime. [[GH-3075](https://github.com/hashicorp/consul/issues/3075)] * server: (Consul Enterprise) Added a new `UpgradeVersionTag` field to the Autopilot config to allow for using the migration feature to roll out configuration or cluster changes, without having to upgrade Consul itself. IMPROVEMENTS: @@ -5771,7 +5885,7 @@ IMPROVEMENTS: * agent: Updated memberlist to get latest LAN gossip tuning based on the [Lifeguard paper published by Hashicorp Research](https://www.hashicorp.com/blog/making-gossip-more-robust-with-lifeguard/). [[GH-3287](https://github.com/hashicorp/consul/issues/3287)] * api: Added the ability to pass in a `context` as part of the `QueryOptions` during a request. This provides a way to cancel outstanding blocking queries. [[GH-3195](https://github.com/hashicorp/consul/issues/3195)] * api: Changed signature for "done" channels on `agent.Monitor()` and `session.RenewPeriodic` methods to make them more compatible with `context`. [[GH-3271](https://github.com/hashicorp/consul/issues/3271)] -* docs: Added a complete end-to-end example of ACL bootstrapping in the [ACL Guide](https://www.consul.io/docs/guides/acl.html#bootstrapping-acls). [[GH-3248](https://github.com/hashicorp/consul/issues/3248)] +* docs: Added a complete end-to-end example of ACL bootstrapping in the [ACL Guide](https://developer.hashicorp.com/docs/guides/acl.html#bootstrapping-acls). [[GH-3248](https://github.com/hashicorp/consul/issues/3248)] * vendor: Updated golang.org/x/sys/unix to support IBM s390 platforms. [[GH-3240](https://github.com/hashicorp/consul/issues/3240)] * agent: rewrote Docker health checks without using the Docker client and its dependencies. 
[[GH-3270](https://github.com/hashicorp/consul/issues/3270)] @@ -5785,7 +5899,7 @@ BUG FIXES: * agent: Fixed an issue in the Docker client where Docker checks would get EOF errors trying to connect to a volume-mounted Docker socket. [[GH-3254](https://github.com/hashicorp/consul/issues/3254)] * agent: Fixed a crash when using Azure auto discovery. [[GH-3193](https://github.com/hashicorp/consul/issues/3193)] * agent: Added `node` read privileges to the `acl_agent_master_token` by default so it can see all nodes, which enables it to be used with operations like `consul members`. [[GH-3113](https://github.com/hashicorp/consul/issues/3113)] -* agent: Fixed an issue where enabling [`-disable-keyring-file`](https://www.consul.io/docs/agent/config/cli-flags#_disable_keyring_file) would cause gossip encryption to be disabled. [[GH-3243](https://github.com/hashicorp/consul/issues/3243)] +* agent: Fixed an issue where enabling [`-disable-keyring-file`](https://developer.hashicorp.com/docs/agent/config/cli-flags#_disable_keyring_file) would cause gossip encryption to be disabled. [[GH-3243](https://github.com/hashicorp/consul/issues/3243)] * agent: Fixed a race condition where checks that are not associated with any existing services were allowed to persist. [[GH-3297](https://github.com/hashicorp/consul/issues/3297)] * agent: Stop docker checks on service deregistration and on shutdown. [GH-3265, GH-3295] * server: Updated the Raft library to pull in a fix where servers that are very far behind in replication can get stuck in a loop trying to install snapshots. [[GH-3201](https://github.com/hashicorp/consul/issues/3201)] @@ -5802,7 +5916,7 @@ BUG FIXES: BREAKING CHANGES: * agent: Parse values given to `?passing` for health endpoints. Previously Consul only checked for the existence of the querystring, not the value. That means using `?passing=false` would actually still include passing values. Consul now parses the value given to passing as a boolean. If no value is provided, the old behavior remains. This may be a breaking change for some users, but the old experience was incorrect and caused enough confusion to warrant changing it. [GH-2212, GH-3136] -* agent: The default value of [`-disable-host-node-id`](https://www.consul.io/docs/agent/config/cli-flags#_disable_host_node_id) has been changed from false to true. This means you need to opt-in to host-based node IDs and by default Consul will generate a random node ID. A high number of users struggled to deploy newer versions of Consul with host-based IDs because of various edge cases of how the host IDs work in Docker, on specially-provisioned machines, etc. so changing this from opt-out to opt-in will ease operations for many Consul users. [[GH-3171](https://github.com/hashicorp/consul/issues/3171)] +* agent: The default value of [`-disable-host-node-id`](https://developer.hashicorp.com/docs/agent/config/cli-flags#_disable_host_node_id) has been changed from false to true. This means you need to opt-in to host-based node IDs and by default Consul will generate a random node ID. A high number of users struggled to deploy newer versions of Consul with host-based IDs because of various edge cases of how the host IDs work in Docker, on specially-provisioned machines, etc. so changing this from opt-out to opt-in will ease operations for many Consul users. 
[[GH-3171](https://github.com/hashicorp/consul/issues/3171)] IMPROVEMENTS: @@ -5826,9 +5940,9 @@ BUG FIXES: FEATURES: -* agent: Added a method for [transitioning to gossip encryption on an existing cluster](https://www.consul.io/docs/agent/encryption.html#configuring-gossip-encryption-on-an-existing-cluster). [[GH-3079](https://github.com/hashicorp/consul/issues/3079)] -* agent: Added a method for [transitioning to TLS on an existing cluster](https://www.consul.io/docs/agent/encryption.html#configuring-tls-on-an-existing-cluster). [[GH-1705](https://github.com/hashicorp/consul/issues/1705)] -* agent: Added support for [RetryJoin on Azure](https://www.consul.io/docs/agent/options.html#retry_join_azure). [[GH-2978](https://github.com/hashicorp/consul/issues/2978)] +* agent: Added a method for [transitioning to gossip encryption on an existing cluster](https://developer.hashicorp.com/docs/agent/encryption.html#configuring-gossip-encryption-on-an-existing-cluster). [[GH-3079](https://github.com/hashicorp/consul/issues/3079)] +* agent: Added a method for [transitioning to TLS on an existing cluster](https://developer.hashicorp.com/docs/agent/encryption.html#configuring-tls-on-an-existing-cluster). [[GH-1705](https://github.com/hashicorp/consul/issues/1705)] +* agent: Added support for [RetryJoin on Azure](https://developer.hashicorp.com/docs/agent/options.html#retry_join_azure). [[GH-2978](https://github.com/hashicorp/consul/issues/2978)] * agent: (Consul Enterprise) Added [AWS server side encryption support](http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html) for S3 snapshots using the snapshot agent. IMPROVEMENTS: @@ -5882,7 +5996,7 @@ BUG FIXES: * server: Fixed a panic when the tombstone garbage collector was stopped. [[GH-2087](https://github.com/hashicorp/consul/issues/2087)] * server: Fixed a panic in Autopilot that could occur when a node is elected but cannot complete leader establishment and steps back down. [[GH-2980](https://github.com/hashicorp/consul/issues/2980)] -* server: Added a new peers.json format that allows outage recovery when using Raft protocol version 3 and higher. Previously, you'd have to set the Raft protocol version back to 2 in order to manually recover a cluster. See https://www.consul.io/docs/guides/outage.html#manual-recovery-using-peers-json for more details. [[GH-3003](https://github.com/hashicorp/consul/issues/3003)] +* server: Added a new peers.json format that allows outage recovery when using Raft protocol version 3 and higher. Previously, you'd have to set the Raft protocol version back to 2 in order to manually recover a cluster. See https://developer.hashicorp.com/docs/guides/outage.html#manual-recovery-using-peers-json for more details. [[GH-3003](https://github.com/hashicorp/consul/issues/3003)] * ui: Add and update favicons [[GH-2945](https://github.com/hashicorp/consul/issues/2945)] ## 0.8.1 (April 17, 2017) @@ -5910,22 +6024,22 @@ BREAKING CHANGES: * **Command-Line Interface RPC Deprecation:** The RPC client interface has been removed. All CLI commands that used RPC and the `-rpc-addr` flag to communicate with Consul have been converted to use the HTTP API and the appropriate flags for it, and the `rpc` field has been removed from the port and address binding configs. 
You will need to remove these fields from your config files and update any scripts that passed a custom `-rpc-addr` to the following commands: `force-leave`, `info`, `join`, `keyring`, `leave`, `members`, `monitor`, `reload` -* **Version 8 ACLs Are Now Opt-Out:** The [`acl_enforce_version_8`](https://www.consul.io/docs/agent/options.html#acl_enforce_version_8) configuration now defaults to `true` to enable [full version 8 ACL support](https://www.consul.io/docs/internals/acl.html#version_8_acls) by default. If you are upgrading an existing cluster with ACLs enabled, you will need to set this to `false` during the upgrade on **both Consul agents and Consul servers**. Version 8 ACLs were also changed so that [`acl_datacenter`](https://www.consul.io/docs/agent/options.html#acl_datacenter) must be set on agents in order to enable the agent-side enforcement of ACLs. This makes for a smoother experience in clusters where ACLs aren't enabled at all, but where the agents would have to wait to contact a Consul server before learning that. [[GH-2844](https://github.com/hashicorp/consul/issues/2844)] +* **Version 8 ACLs Are Now Opt-Out:** The [`acl_enforce_version_8`](https://developer.hashicorp.com/docs/agent/options.html#acl_enforce_version_8) configuration now defaults to `true` to enable [full version 8 ACL support](https://developer.hashicorp.com/docs/internals/acl.html#version_8_acls) by default. If you are upgrading an existing cluster with ACLs enabled, you will need to set this to `false` during the upgrade on **both Consul agents and Consul servers**. Version 8 ACLs were also changed so that [`acl_datacenter`](https://developer.hashicorp.com/docs/agent/options.html#acl_datacenter) must be set on agents in order to enable the agent-side enforcement of ACLs. This makes for a smoother experience in clusters where ACLs aren't enabled at all, but where the agents would have to wait to contact a Consul server before learning that. [[GH-2844](https://github.com/hashicorp/consul/issues/2844)] -* **Remote Exec Is Now Opt-In:** The default for [`disable_remote_exec`](https://www.consul.io/docs/agent/options.html#disable_remote_exec) was changed to "true", so now operators need to opt-in to having agents support running commands remotely via [`consul exec`](/docs/commands/exec.html). [[GH-2854](https://github.com/hashicorp/consul/issues/2854)] +* **Remote Exec Is Now Opt-In:** The default for [`disable_remote_exec`](https://developer.hashicorp.com/docs/agent/options.html#disable_remote_exec) was changed to "true", so now operators need to opt-in to having agents support running commands remotely via [`consul exec`](/docs/commands/exec.html). [[GH-2854](https://github.com/hashicorp/consul/issues/2854)] * **Raft Protocol Compatibility:** When upgrading to Consul 0.8.0 from a version lower than 0.7.0, users will need to -set the [`-raft-protocol`](https://www.consul.io/docs/agent/options.html#_raft_protocol) option to 1 in order to maintain backwards compatibility with the old servers during the upgrade. See [Upgrading Specific Versions](https://www.consul.io/docs/upgrade-specific.html) guide for more details. +set the [`-raft-protocol`](https://developer.hashicorp.com/docs/agent/options.html#_raft_protocol) option to 1 in order to maintain backwards compatibility with the old servers during the upgrade. See [Upgrading Specific Versions](https://developer.hashicorp.com/docs/upgrade-specific.html) guide for more details. 
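
To make the Raft protocol 3 `peers.json` format referenced a few entries above concrete, here is a minimal Go sketch that writes such a file. The `id`/`address`/`non_voter` field names follow the outage recovery guide, while the server IDs and addresses below are placeholders.

```go
package main

import (
	"encoding/json"
	"log"
	"os"
)

// raftPeer mirrors one entry of the Raft protocol 3 peers.json format:
// a server ID, its ip:port, and an optional non-voter flag.
type raftPeer struct {
	ID       string `json:"id"`
	Address  string `json:"address"`
	NonVoter bool   `json:"non_voter"`
}

func main() {
	peers := []raftPeer{
		{ID: "adf4238a-882b-9ddc-4a9d-5b6758e4159e", Address: "10.1.0.1:8300"},
		{ID: "8b6dda82-3103-11e7-93ae-92361f002671", Address: "10.1.0.2:8300"},
		{ID: "97e17742-3103-11e7-93ae-92361f002671", Address: "10.1.0.3:8300"},
	}

	out, err := json.MarshalIndent(peers, "", "  ")
	if err != nil {
		log.Fatal(err)
	}

	// Placed in each server's raft/ directory while all servers are stopped.
	if err := os.WriteFile("peers.json", append(out, '\n'), 0600); err != nil {
		log.Fatal(err)
	}
}
```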
FEATURES: -* **Autopilot:** A set of features has been added to allow for automatic operator-friendly management of Consul servers. For more information about Autopilot, see the [Autopilot Guide](https://www.consul.io/docs/guides/autopilot.html). +* **Autopilot:** A set of features has been added to allow for automatic operator-friendly management of Consul servers. For more information about Autopilot, see the [Autopilot Guide](https://developer.hashicorp.com/docs/guides/autopilot.html). - **Dead Server Cleanup:** Dead servers will periodically be cleaned up and removed from the Raft peer set, to prevent them from interfering with the quorum size and leader elections. - - **Server Health Checking:** An internal health check has been added to track the stability of servers. The thresholds of this health check are tunable as part of the [Autopilot configuration](https://www.consul.io/docs/agent/options.html#autopilot) and the status can be viewed through the [`/v1/operator/autopilot/health`](https://www.consul.io/docs/agent/http/operator.html#autopilot-health) HTTP endpoint. - - **New Server Stabilization:** When a new server is added to the cluster, there will be a waiting period where it must be healthy and stable for a certain amount of time before being promoted to a full, voting member. This threshold can be configured using the new [`server_stabilization_time`](https://www.consul.io/docs/agent/options.html#server_stabilization_time) setting. - - **Advanced Redundancy:** (Consul Enterprise) A new [`-non-voting-server`](https://www.consul.io/docs/agent/options.html#_non_voting_server) option flag has been added for Consul servers to configure a server that does not participate in the Raft quorum. This can be used to add read scalability to a cluster in cases where a high volume of reads to servers are needed, but non-voting servers can be lost without causing an outage. There's also a new [`redundancy_zone_tag`](https://www.consul.io/docs/agent/options.html#redundancy_zone_tag) configuration that allows Autopilot to manage separating servers into zones for redundancy. Only one server in each zone can be a voting member at one time. This helps when Consul servers are managed with automatic replacement with a system like a resource scheduler or auto-scaling group. Extra non-voting servers in each zone will be available as hot standbys (that help with read-scaling) that can be quickly promoted into service when the voting server in a zone fails. + - **Server Health Checking:** An internal health check has been added to track the stability of servers. The thresholds of this health check are tunable as part of the [Autopilot configuration](https://developer.hashicorp.com/docs/agent/options.html#autopilot) and the status can be viewed through the [`/v1/operator/autopilot/health`](https://developer.hashicorp.com/docs/agent/http/operator.html#autopilot-health) HTTP endpoint. + - **New Server Stabilization:** When a new server is added to the cluster, there will be a waiting period where it must be healthy and stable for a certain amount of time before being promoted to a full, voting member. This threshold can be configured using the new [`server_stabilization_time`](https://developer.hashicorp.com/docs/agent/options.html#server_stabilization_time) setting. 
+ - **Advanced Redundancy:** (Consul Enterprise) A new [`-non-voting-server`](https://developer.hashicorp.com/docs/agent/options.html#_non_voting_server) option flag has been added for Consul servers to configure a server that does not participate in the Raft quorum. This can be used to add read scalability to a cluster in cases where a high volume of reads to servers are needed, but non-voting servers can be lost without causing an outage. There's also a new [`redundancy_zone_tag`](https://developer.hashicorp.com/docs/agent/options.html#redundancy_zone_tag) configuration that allows Autopilot to manage separating servers into zones for redundancy. Only one server in each zone can be a voting member at one time. This helps when Consul servers are managed with automatic replacement with a system like a resource scheduler or auto-scaling group. Extra non-voting servers in each zone will be available as hot standbys (that help with read-scaling) that can be quickly promoted into service when the voting server in a zone fails. - **Upgrade Orchestration:** (Consul Enterprise) Autopilot will automatically orchestrate an upgrade strategy for Consul servers where it will initially add newer versions of Consul servers as non-voters, wait for a full set of newer versioned servers to be added, and then gradually swap into service as voters and swap out older versioned servers to non-voters. This allows operators to safely bring up new servers, wait for the upgrade to be complete, and then terminate the old servers. -* **Network Areas:** (Consul Enterprise) A new capability has been added which allows operators to define network areas that join together two Consul datacenters. Unlike Consul's WAN feature, network areas use just the server RPC port for communication, and pairwise relationships can be made between arbitrary datacenters, so not all servers need to be fully connected. This allows for complex topologies among Consul datacenters like hub/spoke and more general trees. See the [Network Areas Guide](https://www.consul.io/docs/guides/areas.html) for more details. +* **Network Areas:** (Consul Enterprise) A new capability has been added which allows operators to define network areas that join together two Consul datacenters. Unlike Consul's WAN feature, network areas use just the server RPC port for communication, and pairwise relationships can be made between arbitrary datacenters, so not all servers need to be fully connected. This allows for complex topologies among Consul datacenters like hub/spoke and more general trees. See the [Network Areas Guide](https://developer.hashicorp.com/docs/guides/areas.html) for more details. * **WAN Soft Fail:** Request routing between servers in the WAN is now more robust by treating Serf failures as advisory but not final. This means that if there are issues between some subset of the servers in the WAN, Consul will still be able to route RPC requests as long as RPCs are actually still working. Prior to WAN Soft Fail, any datacenters having connectivity problems on the WAN would mean that all DCs might potentially stop sending RPCs to those datacenters. [[GH-2801](https://github.com/hashicorp/consul/issues/2801)] * **WAN Join Flooding:** A new routine was added that looks for Consul servers in the LAN and makes sure that they are joined into the WAN as well. This catches up up newly-added servers onto the WAN as soon as they join the LAN, keeping them in sync automatically. 
[[GH-2801](https://github.com/hashicorp/consul/issues/2801)] * **Validate command:** To provide consistency across our products, the `configtest` command has been deprecated and replaced with the `validate` command (to match Nomad and Terraform). The `configtest` command will be removed in Consul 0.9. [[GH-2732](https://github.com/hashicorp/consul/issues/2732)] @@ -5939,7 +6053,7 @@ IMPROVEMENTS: * agent: Updated aws-sdk-go version (used for EC2 auto join) for Go 1.8 compatibility. [[GH-2755](https://github.com/hashicorp/consul/issues/2755)] * agent: User-supplied node IDs are now normalized to lower-case. [[GH-2798](https://github.com/hashicorp/consul/issues/2798)] * agent: Added checks to enforce uniqueness of agent node IDs at cluster join time and when registering with the catalog. [[GH-2832](https://github.com/hashicorp/consul/issues/2832)] -* cli: Standardized handling of CLI options for connecting to the Consul agent. This makes sure that the same set of flags and environment variables works in all CLI commands (see https://www.consul.io/docs/commands/index.html#environment-variables). [[GH-2717](https://github.com/hashicorp/consul/issues/2717)] +* cli: Standardized handling of CLI options for connecting to the Consul agent. This makes sure that the same set of flags and environment variables works in all CLI commands (see https://developer.hashicorp.com/docs/commands/index.html#environment-variables). [[GH-2717](https://github.com/hashicorp/consul/issues/2717)] * cli: Updated go-cleanhttp library for better HTTP connection handling between CLI commands and the Consul agent (tunes reuse settings). [[GH-2735](https://github.com/hashicorp/consul/issues/2735)] * cli: The `operator raft` subcommand has had its two modes split into the `list-peers` and `remove-peer` subcommands. The old flags for these will continue to work for backwards compatibility, but will be removed in Consul 0.9. * cli: Added an `-id` flag to the `operator raft remove-peer` command to allow removing a peer by ID. [[GH-2847](https://github.com/hashicorp/consul/issues/2847)] @@ -5983,9 +6097,9 @@ BUG FIXES: FEATURES: * **KV Import/Export CLI:** `consul kv export` and `consul kv import` can be used to move parts of the KV tree between disconnected consul clusters, using JSON as the intermediate representation. [[GH-2633](https://github.com/hashicorp/consul/issues/2633)] -* **Node Metadata:** Support for assigning user-defined metadata key/value pairs to nodes has been added. This can be viewed when looking up node info, and can be used to filter the results of various catalog and health endpoints. For more information, see the [Catalog](https://www.consul.io/docs/agent/http/catalog.html), [Health](https://www.consul.io/docs/agent/http/health.html), and [Prepared Query](https://www.consul.io/docs/agent/http/query.html) endpoint documentation, as well as the [Node Meta](https://www.consul.io/docs/agent/options.html#_node_meta) section of the agent configuration. [[GH-2654](https://github.com/hashicorp/consul/issues/2654)] +* **Node Metadata:** Support for assigning user-defined metadata key/value pairs to nodes has been added. This can be viewed when looking up node info, and can be used to filter the results of various catalog and health endpoints. 
For more information, see the [Catalog](https://developer.hashicorp.com/docs/agent/http/catalog.html), [Health](https://developer.hashicorp.com/docs/agent/http/health.html), and [Prepared Query](https://developer.hashicorp.com/docs/agent/http/query.html) endpoint documentation, as well as the [Node Meta](https://developer.hashicorp.com/docs/agent/options.html#_node_meta) section of the agent configuration. [[GH-2654](https://github.com/hashicorp/consul/issues/2654)] * **Node Identifiers:** Consul agents can now be configured with a unique identifier, or they will generate one at startup that will persist across agent restarts. This identifier is designed to represent a node across all time, even if the name or address of the node changes. Identifiers are currently only exposed in node-related endpoints, but they will be used in future versions of Consul to help manage Consul servers and the Raft quorum in a more robust manner, as the quorum is currently tracked via addresses, which can change. [[GH-2661](https://github.com/hashicorp/consul/issues/2661)] -* **Improved Blocking Queries:** Consul's [blocking query](https://www.consul.io/api/index.html#blocking-queries) implementation was improved to provide a much more fine-grained mechanism for detecting changes. For example, in previous versions of Consul blocking to wait on a change to a specific service would result in a wake up if any service changed. Now, wake ups are scoped to the specific service being watched, if possible. This support has been added to all endpoints that support blocking queries, nothing new is required to take advantage of this feature. [[GH-2671](https://github.com/hashicorp/consul/issues/2671)] +* **Improved Blocking Queries:** Consul's [blocking query](https://developer.hashicorp.com/api/index.html#blocking-queries) implementation was improved to provide a much more fine-grained mechanism for detecting changes. For example, in previous versions of Consul blocking to wait on a change to a specific service would result in a wake up if any service changed. Now, wake ups are scoped to the specific service being watched, if possible. This support has been added to all endpoints that support blocking queries, nothing new is required to take advantage of this feature. [[GH-2671](https://github.com/hashicorp/consul/issues/2671)] * **GCE auto-discovery:** New `-retry-join-gce` configuration options added to allow bootstrapping by automatically discovering Google Cloud instances with a given tag at startup. [[GH-2570](https://github.com/hashicorp/consul/issues/2570)] IMPROVEMENTS: @@ -6006,12 +6120,12 @@ BUG FIXES: FEATURES: -* **Keyring API:** A new `/v1/operator/keyring` HTTP endpoint was added that allows for performing operations such as list, install, use, and remove on the encryption keys in the gossip keyring. See the [Keyring Endpoint](https://www.consul.io/docs/agent/http/operator.html#keyring) for more details. [[GH-2509](https://github.com/hashicorp/consul/issues/2509)] -* **Monitor API:** A new `/v1/agent/monitor` HTTP endpoint was added to allow for viewing streaming log output from the agent, similar to the `consul monitor` command. See the [Monitor Endpoint](https://www.consul.io/docs/agent/http/agent.html#agent_monitor) for more details. [[GH-2511](https://github.com/hashicorp/consul/issues/2511)] -* **Reload API:** A new `/v1/agent/reload` HTTP endpoint was added for triggering a reload of the agent's configuration. 
See the [Reload Endpoint](https://www.consul.io/docs/agent/http/agent.html#agent_reload) for more details. [[GH-2516](https://github.com/hashicorp/consul/issues/2516)] -* **Leave API:** A new `/v1/agent/leave` HTTP endpoint was added for causing an agent to gracefully shutdown and leave the cluster (previously, only `force-leave` was present in the HTTP API). See the [Leave Endpoint](https://www.consul.io/docs/agent/http/agent.html#agent_leave) for more details. [[GH-2516](https://github.com/hashicorp/consul/issues/2516)] +* **Keyring API:** A new `/v1/operator/keyring` HTTP endpoint was added that allows for performing operations such as list, install, use, and remove on the encryption keys in the gossip keyring. See the [Keyring Endpoint](https://developer.hashicorp.com/docs/agent/http/operator.html#keyring) for more details. [[GH-2509](https://github.com/hashicorp/consul/issues/2509)] +* **Monitor API:** A new `/v1/agent/monitor` HTTP endpoint was added to allow for viewing streaming log output from the agent, similar to the `consul monitor` command. See the [Monitor Endpoint](https://developer.hashicorp.com/docs/agent/http/agent.html#agent_monitor) for more details. [[GH-2511](https://github.com/hashicorp/consul/issues/2511)] +* **Reload API:** A new `/v1/agent/reload` HTTP endpoint was added for triggering a reload of the agent's configuration. See the [Reload Endpoint](https://developer.hashicorp.com/docs/agent/http/agent.html#agent_reload) for more details. [[GH-2516](https://github.com/hashicorp/consul/issues/2516)] +* **Leave API:** A new `/v1/agent/leave` HTTP endpoint was added for causing an agent to gracefully shutdown and leave the cluster (previously, only `force-leave` was present in the HTTP API). See the [Leave Endpoint](https://developer.hashicorp.com/docs/agent/http/agent.html#agent_leave) for more details. [[GH-2516](https://github.com/hashicorp/consul/issues/2516)] * **Bind Address Templates (beta):** Consul agents now allow [go-sockaddr/template](https://godoc.org/github.com/hashicorp/go-sockaddr/template) syntax to be used for any bind address configuration (`advertise_addr`, `bind_addr`, `client_addr`, and others). This allows for easy creation of immutable images for Consul that can fetch their own address based on an interface name, network CIDR, address family from an actual RFC number, and many other possible schemes. This feature is in beta and we may tweak the template syntax before final release, but we encourage the community to try this and provide feedback. [[GH-2563](https://github.com/hashicorp/consul/issues/2563)] -* **Complete ACL Coverage (beta):** Consul 0.8 will feature complete ACL coverage for all of Consul. To ease the transition to the new policies, a beta version of complete ACL support was added to help with testing and migration to the new features. Please see the [ACLs Internals Guide](https://www.consul.io/docs/internals/acl.html#version_8_acls) for more details. [GH-2594, GH-2592, GH-2590] +* **Complete ACL Coverage (beta):** Consul 0.8 will feature complete ACL coverage for all of Consul. To ease the transition to the new policies, a beta version of complete ACL support was added to help with testing and migration to the new features. Please see the [ACLs Internals Guide](https://developer.hashicorp.com/docs/internals/acl.html#version_8_acls) for more details. 
[GH-2594, GH-2592, GH-2590] IMPROVEMENTS: @@ -6087,26 +6201,26 @@ BREAKING CHANGES: * Consul's Go API client will now send ACL tokens using HTTP headers instead of query parameters, requiring Consul 0.6.0 or later. [[GH-2233](https://github.com/hashicorp/consul/issues/2233)] * Removed support for protocol version 1, so Consul 0.7 is no longer compatible with Consul versions prior to 0.3. [[GH-2259](https://github.com/hashicorp/consul/issues/2259)] * The Raft peers information in `consul info` has changed format and includes information about the suffrage of a server, which will be used in future versions of Consul. [[GH-2222](https://github.com/hashicorp/consul/issues/2222)] -* New [`translate_wan_addrs`](https://www.consul.io/docs/agent/options.html#translate_wan_addrs) behavior from [[GH-2118](https://github.com/hashicorp/consul/issues/2118)] translates addresses in HTTP responses and could break clients that are expecting local addresses. A new `X-Consul-Translate-Addresses` header was added to allow clients to detect if translation is enabled for HTTP responses, and a "lan" tag was added to `TaggedAddresses` for clients that need the local address regardless of translation. [[GH-2280](https://github.com/hashicorp/consul/issues/2280)] -* The behavior of the `peers.json` file is different in this version of Consul. This file won't normally be present and is used only during outage recovery. Be sure to read the updated [Outage Recovery Guide](https://www.consul.io/docs/guides/outage.html) for details. [[GH-2222](https://github.com/hashicorp/consul/issues/2222)] -* Consul's default Raft timing is now set to work more reliably on lower-performance servers, which allows small clusters to use lower cost compute at the expense of reduced performance for failed leader detection and leader elections. You will need to configure Consul to get the same performance as before. See the new [Server Performance](https://www.consul.io/docs/guides/performance.html) guide for more details. [[GH-2303](https://github.com/hashicorp/consul/issues/2303)] +* New [`translate_wan_addrs`](https://developer.hashicorp.com/docs/agent/options.html#translate_wan_addrs) behavior from [[GH-2118](https://github.com/hashicorp/consul/issues/2118)] translates addresses in HTTP responses and could break clients that are expecting local addresses. A new `X-Consul-Translate-Addresses` header was added to allow clients to detect if translation is enabled for HTTP responses, and a "lan" tag was added to `TaggedAddresses` for clients that need the local address regardless of translation. [[GH-2280](https://github.com/hashicorp/consul/issues/2280)] +* The behavior of the `peers.json` file is different in this version of Consul. This file won't normally be present and is used only during outage recovery. Be sure to read the updated [Outage Recovery Guide](https://developer.hashicorp.com/docs/guides/outage.html) for details. [[GH-2222](https://github.com/hashicorp/consul/issues/2222)] +* Consul's default Raft timing is now set to work more reliably on lower-performance servers, which allows small clusters to use lower cost compute at the expense of reduced performance for failed leader detection and leader elections. You will need to configure Consul to get the same performance as before. See the new [Server Performance](https://developer.hashicorp.com/docs/guides/performance.html) guide for more details. 
[[GH-2303](https://github.com/hashicorp/consul/issues/2303)] FEATURES: -* **Transactional Key/Value API:** A new `/v1/txn` API was added that allows for atomic updates to and fetches from multiple entries in the key/value store inside of an atomic transaction. This includes conditional updates based on obtaining locks, and all other key/value store operations. See the [Key/Value Store Endpoint](https://www.consul.io/docs/agent/http/kv.html#txn) for more details. [[GH-2028](https://github.com/hashicorp/consul/issues/2028)] -* **Native ACL Replication:** Added a built-in full replication capability for ACLs. Non-ACL datacenters can now replicate the complete ACL set locally to their state store and fall back to that if there's an outage. Additionally, this provides a good way to make a backup ACL datacenter, or to migrate the ACL datacenter to a different one. See the [ACL Internals Guide](https://www.consul.io/docs/internals/acl.html#replication) for more details. [[GH-2237](https://github.com/hashicorp/consul/issues/2237)] +* **Transactional Key/Value API:** A new `/v1/txn` API was added that allows for atomic updates to and fetches from multiple entries in the key/value store inside of an atomic transaction. This includes conditional updates based on obtaining locks, and all other key/value store operations. See the [Key/Value Store Endpoint](https://developer.hashicorp.com/docs/agent/http/kv.html#txn) for more details. [[GH-2028](https://github.com/hashicorp/consul/issues/2028)] +* **Native ACL Replication:** Added a built-in full replication capability for ACLs. Non-ACL datacenters can now replicate the complete ACL set locally to their state store and fall back to that if there's an outage. Additionally, this provides a good way to make a backup ACL datacenter, or to migrate the ACL datacenter to a different one. See the [ACL Internals Guide](https://developer.hashicorp.com/docs/internals/acl.html#replication) for more details. [[GH-2237](https://github.com/hashicorp/consul/issues/2237)] * **Server Connection Rebalancing:** Consul agents will now periodically reconnect to available Consul servers in order to redistribute their RPC query load. Consul clients will, by default, attempt to establish a new connection every 120s to 180s unless the size of the cluster is sufficiently large. The rate at which agents begin to query new servers is proportional to the size of the Consul cluster (servers should never receive more than 64 new connections per second per Consul server as a result of rebalancing). Clusters in stable environments who use `allow_stale` should see a more even distribution of query load across all of their Consul servers. [[GH-1743](https://github.com/hashicorp/consul/issues/1743)] -* **Raft Updates and Consul Operator Interface:** This version of Consul upgrades to "stage one" of the v2 HashiCorp Raft library. This version offers improved handling of cluster membership changes and recovery after a loss of quorum. This version also provides a foundation for new features that will appear in future Consul versions once the remainder of the v2 library is complete. [[GH-2222](https://github.com/hashicorp/consul/issues/2222)]
Consul's default Raft timing is now set to work more reliably on lower-performance servers, which allows small clusters to use lower cost compute at the expense of reduced performance for failed leader detection and leader elections. You will need to configure Consul to get the same performance as before. See the new [Server Performance](https://www.consul.io/docs/guides/performance.html) guide for more details. [[GH-2303](https://github.com/hashicorp/consul/issues/2303)]
Servers will now abort bootstrapping if they detect an existing cluster with configured Raft peers. This will help prevent safe but spurious leader elections when introducing new nodes with `bootstrap_expect` enabled into an existing cluster. [[GH-2319](https://github.com/hashicorp/consul/issues/2319)]
Added new `consul operator` command, HTTP endpoint, and associated ACL to allow Consul operators to view and update the Raft configuration. This allows a stale server to be removed from the Raft peers without requiring downtime and peers.json recovery file use. See the new [Consul Operator Command](https://www.consul.io/docs/commands/operator.html) and the [Consul Operator Endpoint](https://www.consul.io/docs/agent/http/operator.html) for details, as well as the updated [Outage Recovery Guide](https://www.consul.io/docs/guides/outage.html). [[GH-2312](https://github.com/hashicorp/consul/issues/2312)] +* **Raft Updates and Consul Operator Interface:** This version of Consul upgrades to "stage one" of the v2 HashiCorp Raft library. This version offers improved handling of cluster membership changes and recovery after a loss of quorum. This version also provides a foundation for new features that will appear in future Consul versions once the remainder of the v2 library is complete. [[GH-2222](https://github.com/hashicorp/consul/issues/2222)]
Consul's default Raft timing is now set to work more reliably on lower-performance servers, which allows small clusters to use lower cost compute at the expense of reduced performance for failed leader detection and leader elections. You will need to configure Consul to get the same performance as before. See the new [Server Performance](https://developer.hashicorp.com/docs/guides/performance.html) guide for more details. [[GH-2303](https://github.com/hashicorp/consul/issues/2303)]
Servers will now abort bootstrapping if they detect an existing cluster with configured Raft peers. This will help prevent safe but spurious leader elections when introducing new nodes with `bootstrap_expect` enabled into an existing cluster. [[GH-2319](https://github.com/hashicorp/consul/issues/2319)]
Added new `consul operator` command, HTTP endpoint, and associated ACL to allow Consul operators to view and update the Raft configuration. This allows a stale server to be removed from the Raft peers without requiring downtime and peers.json recovery file use. See the new [Consul Operator Command](https://developer.hashicorp.com/docs/commands/operator.html) and the [Consul Operator Endpoint](https://developer.hashicorp.com/docs/agent/http/operator.html) for details, as well as the updated [Outage Recovery Guide](https://developer.hashicorp.com/docs/guides/outage.html). [[GH-2312](https://github.com/hashicorp/consul/issues/2312)] * **Serf Lifeguard Updates:** Implemented a new set of feedback controls for the gossip layer that help prevent degraded nodes that can't meet the soft real-time requirements from erroneously causing `serfHealth` flapping in other, healthy nodes. This feature tunes itself automatically and requires no configuration. [[GH-2101](https://github.com/hashicorp/consul/issues/2101)] * **Prepared Query Near Parameter:** Prepared queries support baking in a new `Near` sorting parameter. This allows results to be sorted by network round trip time based on a static node, or based on the round trip time from the Consul agent where the request originated. This can be used to find a co-located service instance is one is available, with a transparent fallback to the next best alternate instance otherwise. [[GH-2137](https://github.com/hashicorp/consul/issues/2137)] * **Automatic Service Deregistration:** Added a new `deregister_critical_service_after` timeout field for health checks which will cause the service associated with that check to get deregistered if the check is critical for longer than the timeout. This is useful for cleanup of health checks registered natively by applications, or in other situations where services may not always be cleanly shutdown. [[GH-679](https://github.com/hashicorp/consul/issues/679)] -* **WAN Address Translation Everywhere:** Extended the [`translate_wan_addrs`](https://www.consul.io/docs/agent/options.html#translate_wan_addrs) config option to also translate node addresses in HTTP responses, making it easy to use this feature from non-DNS clients. [[GH-2118](https://github.com/hashicorp/consul/issues/2118)] +* **WAN Address Translation Everywhere:** Extended the [`translate_wan_addrs`](https://developer.hashicorp.com/docs/agent/options.html#translate_wan_addrs) config option to also translate node addresses in HTTP responses, making it easy to use this feature from non-DNS clients. [[GH-2118](https://github.com/hashicorp/consul/issues/2118)] * **RPC Retries:** Consul will now retry RPC calls that result in "no leader" errors for up to 5 seconds. This allows agents to ride out leader elections with a delayed response vs. an error. [[GH-2175](https://github.com/hashicorp/consul/issues/2175)] * **Circonus Telemetry Support:** Added support for Circonus as a telemetry destination. [[GH-2193](https://github.com/hashicorp/consul/issues/2193)] IMPROVEMENTS: -* agent: Reap time for failed nodes is now configurable via new `reconnect_timeout` and `reconnect_timeout_wan` config options ([use with caution](https://www.consul.io/docs/agent/options.html#reconnect_timeout)). [[GH-1935](https://github.com/hashicorp/consul/issues/1935)] +* agent: Reap time for failed nodes is now configurable via new `reconnect_timeout` and `reconnect_timeout_wan` config options ([use with caution](https://developer.hashicorp.com/docs/agent/options.html#reconnect_timeout)). 
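As a rough illustration of the `deregister_critical_service_after` behavior described in the bullet above, the sketch below registers a service through the Go API client (`github.com/hashicorp/consul/api`). This is a minimal, hedged example: the service name, port, and health endpoint are placeholders, not anything defined in this changelog.

```go
// Minimal sketch (assumed service name, port, and URL): register a service
// whose HTTP check sets DeregisterCriticalServiceAfter, so the agent removes
// the service if the check stays critical past the timeout.
package main

import (
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	reg := &api.AgentServiceRegistration{
		Name: "web",
		Port: 8080,
		Check: &api.AgentServiceCheck{
			HTTP:     "http://127.0.0.1:8080/health",
			Interval: "10s",
			// If the check is critical for longer than this duration,
			// the agent deregisters the "web" service automatically.
			DeregisterCriticalServiceAfter: "90m",
		},
	}
	if err := client.Agent().ServiceRegister(reg); err != nil {
		log.Fatal(err)
	}
}
```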
[[GH-1935](https://github.com/hashicorp/consul/issues/1935)] * agent: Joins based on a DNS lookup will use TCP and attempt to join with the full list of returned addresses. [[GH-2101](https://github.com/hashicorp/consul/issues/2101)] * agent: Consul will now refuse to start with a helpful message if the same UNIX socket is used for more than one listening endpoint. [[GH-1910](https://github.com/hashicorp/consul/issues/1910)] * agent: Removed an obsolete warning message when Consul starts on Windows. [[GH-1920](https://github.com/hashicorp/consul/issues/1920)] @@ -6118,7 +6232,7 @@ IMPROVEMENTS: * checks: Script checks now support an optional `timeout` parameter. [[GH-1762](https://github.com/hashicorp/consul/issues/1762)] * checks: HTTP health checks limit saved output to 4K to avoid performance issues. [[GH-1952](https://github.com/hashicorp/consul/issues/1952)] * cli: Added a `-stale` mode for watchers to allow them to pull data from any Consul server, not just the leader. [[GH-2045](https://github.com/hashicorp/consul/issues/2045)] [[GH-917](https://github.com/hashicorp/consul/issues/917)] -* dns: Consul agents can now limit the number of UDP answers returned via the DNS interface. The default number of UDP answers is `3`, however by adjusting the `dns_config.udp_answer_limit` configuration parameter, it is now possible to limit the results down to `1`. This tunable provides environments where RFC3484 section 6, rule 9 is enforced with an important workaround in order to preserve the desired behavior of randomized DNS results. Most modern environments will not need to adjust this setting as this RFC was made obsolete by RFC 6724\. See the [agent options](https://www.consul.io/docs/agent/options.html#udp_answer_limit) documentation for additional details for when this should be used. [[GH-1712](https://github.com/hashicorp/consul/issues/1712)] +* dns: Consul agents can now limit the number of UDP answers returned via the DNS interface. The default number of UDP answers is `3`, however by adjusting the `dns_config.udp_answer_limit` configuration parameter, it is now possible to limit the results down to `1`. This tunable provides environments where RFC3484 section 6, rule 9 is enforced with an important workaround in order to preserve the desired behavior of randomized DNS results. Most modern environments will not need to adjust this setting as this RFC was made obsolete by RFC 6724\. See the [agent options](https://developer.hashicorp.com/docs/agent/options.html#udp_answer_limit) documentation for additional details for when this should be used. [[GH-1712](https://github.com/hashicorp/consul/issues/1712)] * dns: Consul now compresses all DNS responses by default. This prevents issues when recursing records that were originally compressed, where Consul would sometimes generate an invalid, uncompressed response that was too large. [[GH-2266](https://github.com/hashicorp/consul/issues/2266)] * dns: Added a new `recursor_timeout` configuration option to set the timeout for Consul's internal DNS client that's used for recursing queries to upstream DNS servers. [[GH-2321](https://github.com/hashicorp/consul/issues/2321)] * dns: Added a new `-dns-port` command line option so this can be set without a config file. [[GH-2263](https://github.com/hashicorp/consul/issues/2263)] @@ -6149,7 +6263,7 @@ BACKWARDS INCOMPATIBILITIES: queries and how they are executed, but this will affect how they are managed. Now management of prepared queries can be delegated within an organization. 
If you use prepared queries, you'll need to read the - [Consul 0.6.4 upgrade instructions](https://www.consul.io/docs/upgrade-specific.html) + [Consul 0.6.4 upgrade instructions](https://developer.hashicorp.com/docs/upgrade-specific.html) before upgrading to this version of Consul. [[GH-1748](https://github.com/hashicorp/consul/issues/1748)] * Consul's Go API client now pools connections by default, and requires you to manually opt-out of this behavior. Previously, idle connections were supported and their @@ -6168,7 +6282,7 @@ FEATURES: an empty prefix) to apply prepared query features like datacenter failover to multiple services with a single query definition. This makes it easy to apply a common policy to multiple services without having to manage many prepared queries. See - [Prepared Query Templates](https://www.consul.io/docs/agent/http/query.html#templates) + [Prepared Query Templates](https://developer.hashicorp.com/docs/agent/http/query.html#templates) for more details. [[GH-1764](https://github.com/hashicorp/consul/issues/1764)] * Added a new ability to translate address lookups when doing queries of nodes in remote datacenters via DNS using a new `translate_wan_addrs` configuration @@ -6373,7 +6487,7 @@ UPGRADE NOTES: if this is the case you will need to configure Consul's advertise or bind addresses before upgrading. -See https://www.consul.io/docs/upgrade-specific.html for detailed upgrade +See https://developer.hashicorp.com/docs/upgrade-specific.html for detailed upgrade instructions. ## 0.5.2 (May 18, 2015) diff --git a/Dockerfile b/Dockerfile index 037e437ae9e2..b284ca71cd99 100644 --- a/Dockerfile +++ b/Dockerfile @@ -16,14 +16,14 @@ # Official docker image that includes binaries from releases.hashicorp.com. This # downloads the release from releases.hashicorp.com and therefore requires that # the release is published before building the Docker image. -FROM docker.mirror.hashicorp.services/alpine:3.21 as official +FROM docker.mirror.hashicorp.services/alpine:3.22 as official # This is the release of Consul to pull in. ARG VERSION LABEL org.opencontainers.image.authors="Consul Team " \ org.opencontainers.image.url="https://www.consul.io/" \ - org.opencontainers.image.documentation="https://www.consul.io/docs" \ + org.opencontainers.image.documentation="https://developer.hashicorp.com/docs" \ org.opencontainers.image.source="https://github.com/hashicorp/consul" \ org.opencontainers.image.version=${VERSION} \ org.opencontainers.image.vendor="HashiCorp" \ @@ -119,7 +119,7 @@ CMD ["agent", "-dev", "-client", "0.0.0.0"] # Production docker image that uses CI built binaries. # Remember, this image cannot be built locally. -FROM docker.mirror.hashicorp.services/alpine:3.21 as default +FROM docker.mirror.hashicorp.services/alpine:3.22 as default ARG PRODUCT_VERSION ARG BIN_NAME @@ -137,7 +137,7 @@ ARG TARGETOS TARGETARCH LABEL org.opencontainers.image.authors="Consul Team " \ org.opencontainers.image.url="https://www.consul.io/" \ - org.opencontainers.image.documentation="https://www.consul.io/docs" \ + org.opencontainers.image.documentation="https://developer.hashicorp.com/docs" \ org.opencontainers.image.source="https://github.com/hashicorp/consul" \ org.opencontainers.image.version=${PRODUCT_VERSION} \ org.opencontainers.image.vendor="HashiCorp" \ @@ -217,7 +217,7 @@ CMD ["agent", "-dev", "-client", "0.0.0.0"] # Red Hat UBI-based image # This target is used to build a Consul image for use on OpenShift. 
-FROM registry.access.redhat.com/ubi9-minimal:9.5 as ubi +FROM registry.access.redhat.com/ubi9-minimal:9.6 as ubi ARG PRODUCT_VERSION ARG PRODUCT_REVISION @@ -234,7 +234,7 @@ ARG TARGETOS TARGETARCH LABEL org.opencontainers.image.authors="Consul Team " \ org.opencontainers.image.url="https://www.consul.io/" \ - org.opencontainers.image.documentation="https://www.consul.io/docs" \ + org.opencontainers.image.documentation="https://developer.hashicorp.com/consul/docs" \ org.opencontainers.image.source="https://github.com/hashicorp/consul" \ org.opencontainers.image.version=${PRODUCT_VERSION} \ org.opencontainers.image.vendor="HashiCorp" \ diff --git a/Dockerfile-windows b/Dockerfile-windows index 14582908db55..722033b78cd5 100644 --- a/Dockerfile-windows +++ b/Dockerfile-windows @@ -5,7 +5,7 @@ ENV chocolateyVersion=1.4.0 LABEL org.opencontainers.image.authors="Consul Team " \ org.opencontainers.image.url="https://www.consul.io/" \ - org.opencontainers.image.documentation="https://www.consul.io/docs" \ + org.opencontainers.image.documentation="https://developer.hashicorp.com/docs" \ org.opencontainers.image.source="https://github.com/hashicorp/consul" \ org.opencontainers.image.version=$VERSION \ org.opencontainers.image.vendor="HashiCorp" \ diff --git a/Makefile b/Makefile index 1d27b3d5feb4..66bc97cc1676 100644 --- a/Makefile +++ b/Makefile @@ -1,5 +1,5 @@ # For documentation on building consul from source, refer to: -# https://www.consul.io/docs/install#compiling-from-source +# https://developer.hashicorp.com/docs/install#compiling-from-source SHELL = bash @@ -10,12 +10,12 @@ GO_MODULES := $(shell find . -name go.mod -exec dirname {} \; | grep -v "proto-g # These version variables can either be a valid string for "go install @" # or the string @DEV to imply use what is currently installed locally. ### -GOLANGCI_LINT_VERSION='v1.56.1' -MOCKERY_VERSION='v2.41.0' +GOLANGCI_LINT_VERSION='v1.64.8' +MOCKERY_VERSION='v2.53.4' BUF_VERSION='v1.26.0' PROTOC_GEN_GO_GRPC_VERSION='v1.2.0' -MOG_VERSION='v0.4.2' +MOG_VERSION='74a24e5f2782c2421cc6335c478686f62e9a0688' PROTOC_GO_INJECT_TAG_VERSION='v1.3.0' PROTOC_GEN_GO_BINARY_VERSION='v0.1.0' DEEP_COPY_VERSION='e112476c0181d3d69067bac191f9b6bcda2ce812' diff --git a/README.md b/README.md index c23053f0e1f7..a491be1ad1a6 100644 --- a/README.md +++ b/README.md @@ -9,8 +9,7 @@ Consul is a distributed, highly available, and data center aware solution to connect and configure applications across dynamic, distributed infrastructure. -* Website: https://www.consul.io -* Tutorials: [HashiCorp Learn](https://learn.hashicorp.com/consul) +* Documentation and Tutorials: [https://developer.hashicorp.com/consul] * Forum: [Discuss](https://discuss.hashicorp.com/c/consul) Consul provides several key features: @@ -40,7 +39,7 @@ Consul provides several key features: Consul runs on Linux, macOS, FreeBSD, Solaris, and Windows and includes an optional [browser based UI](https://demo.consul.io). A commercial version -called [Consul Enterprise](https://www.consul.io/docs/enterprise) is also +called [Consul Enterprise](https://developer.hashicorp.com/docs/enterprise) is also available. **Please note**: We take Consul's security and our users' trust very seriously. 
If you @@ -59,7 +58,7 @@ A few quick start guides are available on the Consul website: ## Documentation -Full, comprehensive documentation is available on the Consul website: https://consul.io/docs +Full, comprehensive documentation is available on the Consul website: https://developer.hashicorp.com/consul/docs ## Contributing diff --git a/agent/agent.go b/agent/agent.go index c5fb3cc886c0..6dff34b53675 100644 --- a/agent/agent.go +++ b/agent/agent.go @@ -1601,7 +1601,7 @@ func newConsulConfig(runtimeCfg *config.RuntimeConfig, logger hclog.Logger) (*co cfg.Reporting.License.Enabled = runtimeCfg.Reporting.License.Enabled cfg.ServerRejoinAgeMax = runtimeCfg.ServerRejoinAgeMax - + cfg.EnableXDSLoadBalancing = runtimeCfg.EnableXDSLoadBalancing enterpriseConsulConfig(cfg, runtimeCfg) return cfg, nil diff --git a/agent/cacheshim/cache.go b/agent/cacheshim/cache.go index 64754da64486..12cb3ac09e6c 100644 --- a/agent/cacheshim/cache.go +++ b/agent/cacheshim/cache.go @@ -17,7 +17,7 @@ type ResultMeta struct { // Age identifies how "stale" the result is. It's semantics differ based on // whether or not the cache type performs background refresh or not as defined - // in https://www.consul.io/api/index.html#agent-caching. + // in https://developer.hashicorp.com/api/index.html#agent-caching. // // For background refresh types, Age is 0 unless the background blocking query // is currently in a failed state and so not keeping up with the server's diff --git a/agent/catalog_endpoint.go b/agent/catalog_endpoint.go index 8af4654b90f2..4847f052c8ab 100644 --- a/agent/catalog_endpoint.go +++ b/agent/catalog_endpoint.go @@ -354,6 +354,8 @@ func (s *HTTPHandlers) catalogServiceNodes(resp http.ResponseWriter, req *http.R return nil, nil } + s.parsePeerName(req, &args) + // Check for a tag params := req.URL.Query() if _, ok := params["tag"]; ok { diff --git a/agent/catalog_endpoint_test.go b/agent/catalog_endpoint_test.go index af1f2d36e30e..c5a61f3d64a0 100644 --- a/agent/catalog_endpoint_test.go +++ b/agent/catalog_endpoint_test.go @@ -964,6 +964,76 @@ func TestCatalogServiceNodes_Filter(t *testing.T) { require.Len(t, nodes, 1) } +func TestCatalogServiceNodes_PeerFilter(t *testing.T) { + if testing.Short() { + t.Skip("too slow for testing.Short") + } + + t.Parallel() + a := StartTestAgent(t, TestAgent{HCL: "", Overrides: `peering = { test_allow_peer_registrations = true }`}) + defer a.Shutdown() + + peerName := "test" + queryPath := "/v1/catalog/service/api?filter=" + url.QueryEscape("ServiceMeta.somekey == somevalue") + peerQuerySuffix(peerName) + + // Make sure an empty list is returned, not a nil + { + req, _ := http.NewRequest("GET", queryPath, nil) + resp := httptest.NewRecorder() + obj, err := a.srv.CatalogServiceNodes(resp, req) + require.NoError(t, err) + + assertIndex(t, resp) + + nodes := obj.(structs.ServiceNodes) + require.Empty(t, nodes) + } + + // Register node + args := &structs.RegisterRequest{ + Datacenter: "dc1", + Node: "foo", + Address: "127.0.0.1", + PeerName: peerName, + Service: &structs.NodeService{ + Service: "api", + Meta: map[string]string{ + "somekey": "somevalue", + }, + }, + } + + var out struct{} + require.NoError(t, a.RPC(context.Background(), "Catalog.Register", args, &out)) + + // Register a second service for the node + args = &structs.RegisterRequest{ + Datacenter: "dc1", + Node: "foo", + Address: "127.0.0.1", + PeerName: peerName, + Service: &structs.NodeService{ + ID: "api2", + Service: "api", + Meta: map[string]string{ + "somekey": "notvalue", + }, + }, + SkipNodeUpdate: 
true, + } + + require.NoError(t, a.RPC(context.Background(), "Catalog.Register", args, &out)) + + req, _ := http.NewRequest("GET", queryPath, nil) + resp := httptest.NewRecorder() + obj, err := a.srv.CatalogServiceNodes(resp, req) + require.NoError(t, err) + assertIndex(t, resp) + + nodes := obj.(structs.ServiceNodes) + require.Len(t, nodes, 1) +} + func TestCatalogServiceNodes_WanTranslation(t *testing.T) { if testing.Short() { t.Skip("too slow for testing.Short") diff --git a/agent/config/builder.go b/agent/config/builder.go index 5e8b5c215f6d..ccdb2454f481 100644 --- a/agent/config/builder.go +++ b/agent/config/builder.go @@ -1033,6 +1033,7 @@ func (b *builder) build() (rt RuntimeConfig, err error) { KVMaxValueSize: uint64Val(c.Limits.KVMaxValueSize), LeaveDrainTime: b.durationVal("performance.leave_drain_time", c.Performance.LeaveDrainTime), LeaveOnTerm: leaveOnTerm, + EnableXDSLoadBalancing: boolVal(c.Performance.EnableXDSLoadBalancing), StaticRuntimeConfig: StaticRuntimeConfig{ EncryptVerifyIncoming: boolVal(c.EncryptVerifyIncoming), EncryptVerifyOutgoing: boolVal(c.EncryptVerifyOutgoing), @@ -1212,6 +1213,8 @@ func (b *builder) validate(rt RuntimeConfig) error { // validContentPath defines a regexp for a valid content path name. validContentPath := regexp.MustCompile(`^[A-Za-z0-9/_-]+$`) hasVersion := regexp.MustCompile(`^/v\d+/$`) + // reDNSCompatible ensures that the name is capable to be part of a DNS name. + reDNSCompatible := regexp.MustCompile(`^[a-z0-9](?:[a-z0-9-]{0,61}[a-z0-9])?$`) // ---------------------------------------------------------------- // check required params we cannot recover from first // @@ -1223,6 +1226,9 @@ func (b *builder) validate(rt RuntimeConfig) error { if err := validateBasicName("datacenter", rt.Datacenter, false); err != nil { return err } + if !reDNSCompatible.MatchString(rt.Datacenter) { + b.warn("Datacenter : %q will not be PKI X.509 compatible due to invalid characters. Valid characters include lowercase alphanumeric characters with dashes in between.", rt.Datacenter) + } if rt.DataDir == "" && !rt.DevMode { return fmt.Errorf("data_dir cannot be empty") } @@ -1410,7 +1416,7 @@ func (b *builder) validate(rt RuntimeConfig) error { } return fmt.Errorf("CRITICAL: Deprecated data folder found at %q!\n"+ "Consul will refuse to boot with this directory present.\n"+ - "See https://www.consul.io/docs/upgrade-specific.html for more information.", mdbPath) + "See https://developer.hashicorp.com/docs/upgrade-specific.html for more information.", mdbPath) } // Raft LogStore validation @@ -1510,11 +1516,11 @@ func (b *builder) validate(rt RuntimeConfig) error { // if rt.ServerMode && !rt.DevMode && !rt.Bootstrap && rt.BootstrapExpect == 2 { - b.warn(`bootstrap_expect = 2: A cluster with 2 servers will provide no failure tolerance. See https://www.consul.io/docs/internals/consensus.html#deployment-table`) + b.warn(`bootstrap_expect = 2: A cluster with 2 servers will provide no failure tolerance. See https://developer.hashicorp.com/docs/internals/consensus.html#deployment-table`) } if rt.ServerMode && !rt.Bootstrap && rt.BootstrapExpect > 2 && rt.BootstrapExpect%2 == 0 { - b.warn(`bootstrap_expect is even number: A cluster with an even number of servers does not achieve optimum fault tolerance. See https://www.consul.io/docs/internals/consensus.html#deployment-table`) + b.warn(`bootstrap_expect is even number: A cluster with an even number of servers does not achieve optimum fault tolerance. 
See https://developer.hashicorp.com/docs/internals/consensus.html#deployment-table`) } if rt.ServerMode && rt.Bootstrap && rt.BootstrapExpect == 0 { diff --git a/agent/config/builder_test.go b/agent/config/builder_test.go index 26d20bdfbacc..fac47373aad0 100644 --- a/agent/config/builder_test.go +++ b/agent/config/builder_test.go @@ -731,3 +731,117 @@ func TestBuilder_CloudConfigWithEnvironmentVars(t *testing.T) { }) } } + +func TestBuilder_DatacenterDNSCompatibleWarning(t *testing.T) { + type testCase struct { + name string + datacenter string + expectError bool + expectWarning bool + } + + fn := func(t *testing.T, tc testCase) { + opts := LoadOpts{ + FlagValues: FlagValuesTarget{ + Config: Config{ + Datacenter: pString(tc.datacenter), + DataDir: pString("dir"), + }, + }, + } + patchLoadOptsShims(&opts) + result, err := Load(opts) + + if tc.expectError { + require.Error(t, err) + require.Contains(t, err.Error(), "datacenter can only contain lowercase alphanumeric, - or _ characters") + return + } + + require.NoError(t, err) + + warningFound := false + expectedWarningSubstr := "will not be PKI X.509 compatible due to invalid characters" + for _, warning := range result.Warnings { + if strings.Contains(warning, expectedWarningSubstr) { + warningFound = true + break + } + } + + if tc.expectWarning { + require.True(t, warningFound, "Expected warning about datacenter DNS compatibility but got none") + } else { + require.False(t, warningFound, "Got unexpected warning about datacenter DNS compatibility") + } + } + + var testCases = []testCase{ + { + name: "valid lowercase datacenter", + datacenter: "deecee1", + expectError: false, + expectWarning: false, + }, + { + name: "invalid excess characters", + datacenter: "asdfaasdfasdfasdfsdfasdfasdfasdfasdfasdfasdfasdfasdfasdfasdfasdfasdfasdfadd1", + expectError: false, + expectWarning: true, + }, + { + name: "valid lowercase datacenter", + datacenter: "deecee1", + expectError: false, + expectWarning: false, + }, + { + name: "valid with dash", + datacenter: "dc-west-1", + expectError: false, + expectWarning: false, + }, + { + name: "valid uppercase letters", + datacenter: "DEECEE1", + expectError: false, // Will not fail because config is lowercased before validation + expectWarning: false, // Won't get to warning check + }, + { + name: "invalid underscore", + datacenter: "dc_1", + expectError: false, // Passes basic validation since underscore is allowed + expectWarning: true, // But fails DNS compatibility check + }, + { + name: "invalid starts with dash", + datacenter: "-dc1", + expectError: false, // Basic validation allows leading dash + expectWarning: true, // But DNS compatibility doesn't + }, + { + name: "invalid ends with dash", + datacenter: "dc1-", + expectError: false, // Basic validation allows trailing dash + expectWarning: true, // But DNS compatibility doesn't + }, + { + name: "invalid special characters", + datacenter: "dc@1", + expectError: true, // Will fail basic validation + expectWarning: false, // Won't get to warning check + }, + { + name: "invalid space", + datacenter: "dc 1", + expectError: true, // Will fail basic validation + expectWarning: false, // Won't get to warning check + }, + } + + for _, tc := range testCases { + t.Run(tc.name, func(t *testing.T) { + fn(t, tc) + }) + } +} diff --git a/agent/config/config.deepcopy.go b/agent/config/config.deepcopy.go index bd9b85fca4ac..25713f820f1a 100644 --- a/agent/config/config.deepcopy.go +++ b/agent/config/config.deepcopy.go @@ -728,6 +728,10 @@ func (o *RuntimeConfig) 
DeepCopy() *RuntimeConfig { cp.Cloud.TLSConfig.CurvePreferences = make([]tls.CurveID, len(o.Cloud.TLSConfig.CurvePreferences)) copy(cp.Cloud.TLSConfig.CurvePreferences, o.Cloud.TLSConfig.CurvePreferences) } + if o.Cloud.TLSConfig.EncryptedClientHelloConfigList != nil { + cp.Cloud.TLSConfig.EncryptedClientHelloConfigList = make([]byte, len(o.Cloud.TLSConfig.EncryptedClientHelloConfigList)) + copy(cp.Cloud.TLSConfig.EncryptedClientHelloConfigList, o.Cloud.TLSConfig.EncryptedClientHelloConfigList) + } } if o.DNSServiceTTL != nil { cp.DNSServiceTTL = make(map[string]time.Duration, len(o.DNSServiceTTL)) diff --git a/agent/config/config.go b/agent/config/config.go index 4a93c2702021..43aa115f106d 100644 --- a/agent/config/config.go +++ b/agent/config/config.go @@ -677,11 +677,12 @@ type HTTPConfig struct { } type Performance struct { - LeaveDrainTime *string `mapstructure:"leave_drain_time"` - RaftMultiplier *int `mapstructure:"raft_multiplier"` // todo(fs): validate as uint - RPCHoldTimeout *string `mapstructure:"rpc_hold_timeout"` - GRPCKeepaliveInterval *string `mapstructure:"grpc_keepalive_interval"` - GRPCKeepaliveTimeout *string `mapstructure:"grpc_keepalive_timeout"` + LeaveDrainTime *string `mapstructure:"leave_drain_time"` + RaftMultiplier *int `mapstructure:"raft_multiplier"` // todo(fs): validate as uint + RPCHoldTimeout *string `mapstructure:"rpc_hold_timeout"` + GRPCKeepaliveInterval *string `mapstructure:"grpc_keepalive_interval"` + GRPCKeepaliveTimeout *string `mapstructure:"grpc_keepalive_timeout"` + EnableXDSLoadBalancing *bool `mapstructure:"enable_xds_load_balancing"` } type Telemetry struct { diff --git a/agent/config/default.go b/agent/config/default.go index 1b98056aa110..ceb6ce50fdd3 100644 --- a/agent/config/default.go +++ b/agent/config/default.go @@ -120,6 +120,7 @@ func DefaultSource() Source { rpc_hold_timeout = "7s" grpc_keepalive_interval = "30s" grpc_keepalive_timeout = "20s" + enable_xds_load_balancing = true } ports = { dns = 8600 diff --git a/agent/config/runtime.go b/agent/config/runtime.go index e31047354dda..bd2d0a0c0d73 100644 --- a/agent/config/runtime.go +++ b/agent/config/runtime.go @@ -90,7 +90,7 @@ type RuntimeConfig struct { // ACLEnableKeyListPolicy is used to opt-in to the "list" policy added to // KV ACLs in Consul 1.0. // - // See https://www.consul.io/docs/guides/acl.html#list-policy-for-keys for + // See https://developer.hashicorp.com/docs/guides/acl.html#list-policy-for-keys for // more details. // // hcl: acl.enable_key_list_policy = (true|false) @@ -739,6 +739,13 @@ type RuntimeConfig struct { // time, the connection will be closed by the server. GRPCKeepaliveTimeout time.Duration + // EnableXDSLoadBalancing controls xDS load balancing between the servers. Enabled by default. + // When enabled, Consul balances loads from xDS clients across available servers equally with an error margin of 0.1. + // + // When disabled, Consul does not restrict on the number of xDS connections on a server. + // In this scenario, you should deploy an external load balancer in front of the consul servers and distribute the load accordingly. + EnableXDSLoadBalancing bool + // HTTPAddrs contains the list of TCP addresses and UNIX sockets the HTTP // server will bind to. If the HTTP endpoint is disabled (ports.http <= 0) // the list is empty. @@ -885,7 +892,7 @@ type RuntimeConfig struct { // PrimaryGateways is a list of addresses and/or go-discover expressions to // discovery the mesh gateways in the primary datacenter. 
See - // https://www.consul.io/docs/agent/config/cli-flags#cloud-auto-joining for + // https://developer.hashicorp.com/docs/agent/config/cli-flags#cloud-auto-joining for // details. // // hcl: primary_gateways = []string @@ -1087,7 +1094,7 @@ type RuntimeConfig struct { // RetryJoinLAN is a list of addresses and/or go-discover expressions to // join with retry enabled. See - // https://www.consul.io/docs/agent/config/cli-flags#cloud-auto-joining for + // https://developer.hashicorp.com/docs/agent/config/cli-flags#cloud-auto-joining for // details. // // hcl: retry_join = []string @@ -1112,7 +1119,7 @@ type RuntimeConfig struct { // RetryJoinWAN is a list of addresses and/or go-discover expressions to // join -wan with retry enabled. See - // https://www.consul.io/docs/agent/config/cli-flags#cloud-auto-joining for + // https://developer.hashicorp.com/docs/agent/config/cli-flags#cloud-auto-joining for // details. // // hcl: retry_join_wan = []string @@ -1495,7 +1502,7 @@ type RuntimeConfig struct { // handler to act appropriately. These are managed entirely in the // agent layer using the standard APIs. // - // See https://www.consul.io/docs/agent/watches.html for details. + // See https://developer.hashicorp.com/docs/agent/watches.html for details. // // hcl: watches = [ // { type=string ... }, diff --git a/agent/config/runtime_test.go b/agent/config/runtime_test.go index 549429bf8682..395efb51c3d0 100644 --- a/agent/config/runtime_test.go +++ b/agent/config/runtime_test.go @@ -1968,7 +1968,7 @@ func TestLoad_IntegrationWithFlags(t *testing.T) { rt.GRPCTLSAddrs = []net.Addr{defaultGrpcTlsAddr} }, expectedWarnings: []string{ - `bootstrap_expect = 2: A cluster with 2 servers will provide no failure tolerance. See https://www.consul.io/docs/internals/consensus.html#deployment-table`, + `bootstrap_expect = 2: A cluster with 2 servers will provide no failure tolerance. See https://developer.hashicorp.com/docs/internals/consensus.html#deployment-table`, `bootstrap_expect > 0: expecting 2 servers`, }, }) @@ -1991,7 +1991,7 @@ func TestLoad_IntegrationWithFlags(t *testing.T) { rt.GRPCTLSAddrs = []net.Addr{defaultGrpcTlsAddr} }, expectedWarnings: []string{ - `bootstrap_expect is even number: A cluster with an even number of servers does not achieve optimum fault tolerance. See https://www.consul.io/docs/internals/consensus.html#deployment-table`, + `bootstrap_expect is even number: A cluster with an even number of servers does not achieve optimum fault tolerance. See https://developer.hashicorp.com/docs/internals/consensus.html#deployment-table`, `bootstrap_expect > 0: expecting 4 servers`, }, }) @@ -2124,6 +2124,16 @@ func TestLoad_IntegrationWithFlags(t *testing.T) { hcl: []string{`performance = { raft_multiplier = 20 }`}, expectedErr: `performance.raft_multiplier cannot be 20. 
Must be between 1 and 10`, }) + run(t, testCase{ + desc: "disable XDS Load balancing", + args: []string{`-data-dir=` + dataDir}, + json: []string{`{ "performance": { "enable_xds_load_balancing": false} }`}, + hcl: []string{`performance = { enable_xds_load_balancing=false }`}, + expected: func(rt *RuntimeConfig) { + rt.EnableXDSLoadBalancing = false + rt.DataDir = dataDir + }, + }) run(t, testCase{ desc: "node_name invalid", args: []string{ @@ -7055,6 +7065,7 @@ func TestLoad_FullConfig(t *testing.T) { WAL: consul.WALConfig{SegmentSize: 15 * 1024 * 1024}, }, AutoReloadConfigCoalesceInterval: 1 * time.Second, + EnableXDSLoadBalancing: false, } entFullRuntimeConfig(expected) @@ -7371,8 +7382,9 @@ func TestRuntimeConfig_Sanitize(t *testing.T) { }, }, }, - Locality: &Locality{Region: strPtr("us-west-1"), Zone: strPtr("us-west-1a")}, - ServerRejoinAgeMax: 24 * 7 * time.Hour, + Locality: &Locality{Region: strPtr("us-west-1"), Zone: strPtr("us-west-1a")}, + ServerRejoinAgeMax: 24 * 7 * time.Hour, + EnableXDSLoadBalancing: true, } b, err := json.MarshalIndent(rt.Sanitized(), "", " ") diff --git a/agent/config/testdata/TestRuntimeConfig_Sanitize.golden b/agent/config/testdata/TestRuntimeConfig_Sanitize.golden index 53e533b24620..de03cc1eba0a 100644 --- a/agent/config/testdata/TestRuntimeConfig_Sanitize.golden +++ b/agent/config/testdata/TestRuntimeConfig_Sanitize.golden @@ -522,5 +522,6 @@ "VersionMetadata": "", "VersionPrerelease": "", "Watches": [], - "XDSUpdateRateLimit": 0 + "XDSUpdateRateLimit": 0, + "EnableXDSLoadBalancing":true } \ No newline at end of file diff --git a/agent/config/testdata/full-config.hcl b/agent/config/testdata/full-config.hcl index bf3187962748..6180efb60af2 100644 --- a/agent/config/testdata/full-config.hcl +++ b/agent/config/testdata/full-config.hcl @@ -341,6 +341,7 @@ performance { rpc_hold_timeout = "15707s" grpc_keepalive_interval = "33s" grpc_keepalive_timeout = "22s" + enable_xds_load_balancing = false } pid_file = "43xN80Km" ports { diff --git a/agent/config/testdata/full-config.json b/agent/config/testdata/full-config.json index 5816a6713f8a..3f33cfa3cd4d 100644 --- a/agent/config/testdata/full-config.json +++ b/agent/config/testdata/full-config.json @@ -389,7 +389,8 @@ "raft_multiplier": 5, "rpc_hold_timeout": "15707s", "grpc_keepalive_interval": "33s", - "grpc_keepalive_timeout": "22s" + "grpc_keepalive_timeout": "22s", + "enable_xds_load_balancing": false }, "pid_file": "43xN80Km", "ports": { diff --git a/agent/connect/testing_ca.go b/agent/connect/testing_ca.go index a852d9130c87..2dae37e3edf9 100644 --- a/agent/connect/testing_ca.go +++ b/agent/connect/testing_ca.go @@ -334,7 +334,7 @@ func TestServerLeaf(t testing.T, dc string, root *structs.CARoot) (string, strin // TestCSR returns a CSR to sign the given service along with the PEM-encoded // private key for this certificate. -func TestCSR(t testing.T, uri CertURI) (string, string) { +func TestCSR(t Fatalfer, uri CertURI) (string, string) { template := &x509.CertificateRequest{ URIs: []*url.URL{uri.URI()}, SignatureAlgorithm: x509.ECDSAWithSHA256, @@ -387,7 +387,7 @@ func testKeyID(t testing.T, raw interface{}) []byte { // which will be the same for multiple CAs/Leafs. Also note that our UUID // generator also reads from crypto rand and is called far more often during // tests than this will be. 
-func testPrivateKey(t testing.T, keyType string, keyBits int) (crypto.Signer, string) { +func testPrivateKey(t Fatalfer, keyType string, keyBits int) (crypto.Signer, string) { pk, pkPEM, err := GeneratePrivateKeyWithConfig(keyType, keyBits) if err != nil { t.Fatalf("error generating private key: %s", err) @@ -396,6 +396,13 @@ func testPrivateKey(t testing.T, keyType string, keyBits int) (crypto.Signer, st return pk, pkPEM } +// Fatalfer is a subset of testing.T that only has the Fatalf method. This is +// used to avoid import cycles in the connect package, since we use this in +// places that need to call Fatalf but don't need the full testing.T interface. +type Fatalfer interface { + Fatalf(format string, args ...any) +} + // testSerialNumber generates a serial number suitable for a certificate. For // testing, this just sets it to a random number, but one that can fit in a // uint64 since we use that in our datastructures and assume cert serials will diff --git a/agent/consul/catalog_endpoint.go b/agent/consul/catalog_endpoint.go index 09c23d90c55e..d13b8ff20647 100644 --- a/agent/consul/catalog_endpoint.go +++ b/agent/consul/catalog_endpoint.go @@ -200,6 +200,10 @@ func servicePreApply(service *structs.NodeService, authz resolver.Result, authzC if err := service.Validate(); err != nil { return err } + // Check if service name and service ID are empty. + if service.ID == "" && service.Service == "" { + return fmt.Errorf("Must provide service name (Service.Service)") + } // If no service id, but service name, use default if service.ID == "" && service.Service != "" { diff --git a/agent/consul/catalog_endpoint_test.go b/agent/consul/catalog_endpoint_test.go index 1f3311b0cd98..40341e02fcce 100644 --- a/agent/consul/catalog_endpoint_test.go +++ b/agent/consul/catalog_endpoint_test.go @@ -312,6 +312,30 @@ func createTokenWithPolicyName(t *testing.T, cc rpc.ClientCodec, policyName stri return createTokenWithPolicyNameFull(t, cc, policyName, policyRules, token).SecretID } +func TestCatalog_Register_RejectsMissingServiceName(t *testing.T) { + t.Parallel() + + _, s1 := testServer(t) + defer s1.Shutdown() + codec := rpcClient(t, s1) + defer codec.Close() + + args := structs.RegisterRequest{ + Datacenter: "dc1", + Node: "node1", + Address: "127.0.0.1", + Service: &structs.NodeService{ + ID: "", + Service: "", // Missing required field + Port: 8080, + }, + } + + var out struct{} + err := msgpackrpc.CallWithCodec(codec, "Catalog.Register", &args, &out) + require.Error(t, err) + require.Contains(t, err.Error(), "Must provide service name (Service.Service)") +} func TestCatalog_Register_ForwardLeader(t *testing.T) { if testing.Short() { t.Skip("too slow for testing.Short") diff --git a/agent/consul/config.go b/agent/consul/config.go index 1cba0f4ef169..a2916968c6b4 100644 --- a/agent/consul/config.go +++ b/agent/consul/config.go @@ -463,6 +463,13 @@ type Config struct { // ServerRejoinAgeMax is used to specify the duration of time a server // is allowed to be down/offline before a startup operation is refused. ServerRejoinAgeMax time.Duration + + // EnableXDSLoadBalancing controls xDS load balancing between the servers. Enabled by default. + // When enabled, Consul balances loads from xDS clients across available servers equally with an error margin of 0.1. + // + // When disabled, Consul does not restrict on the number of xDS connections on a server. + // In this scenario, you should deploy an external load balancer in front of the consul servers and distribute the load accordingly. 
+ EnableXDSLoadBalancing bool } func (c *Config) InPrimaryDatacenter() bool { diff --git a/agent/consul/leader_connect_ca.go b/agent/consul/leader_connect_ca.go index ee6562912fe5..8e5cae3def58 100644 --- a/agent/consul/leader_connect_ca.go +++ b/agent/consul/leader_connect_ca.go @@ -1439,6 +1439,10 @@ func (c *CAManager) AuthorizeAndSignCertificate(csr *x509.CertificateRequest, au } c.logger.Trace("authorizing and signing cert", "spiffeID", spiffeID) + if err := c.validateSupportedIdentityScopesInCertificate(spiffeID); err != nil { + return nil, err + } + // Perform authorization. var authzContext acl.AuthorizerContext allow := authz.ToAllowAuthorizer() diff --git a/agent/consul/leader_connect_ca_ce.go b/agent/consul/leader_connect_ca_ce.go new file mode 100644 index 000000000000..147d891c8155 --- /dev/null +++ b/agent/consul/leader_connect_ca_ce.go @@ -0,0 +1,31 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: BUSL-1.1 + +//go:build !consulent + +package consul + +import ( + "github.com/hashicorp/consul/agent/connect" +) + +func (c *CAManager) validateSupportedIdentityScopesInCertificate(spiffeID connect.CertURI) error { + switch v := spiffeID.(type) { + case *connect.SpiffeIDService: + if v.Namespace != "default" || v.Partition != "default" { + return connect.InvalidCSRError("Non default partition or namespace is supported in Enterprise only."+ + "Provided namespace is %s and partition is %s", v.Namespace, v.Partition) + } + case *connect.SpiffeIDMeshGateway: + if v.Partition != "default" { + return connect.InvalidCSRError("Non default partition is supported in Enterprise only."+ + "Provided partition is %s", v.Partition) + } + case *connect.SpiffeIDAgent, *connect.SpiffeIDServer: + return nil + default: + c.logger.Trace("spiffe ID type is not expected", "spiffeID", spiffeID, "spiffeIDType", v) + return connect.InvalidCSRError("SPIFFE ID in CSR must be a service, mesh-gateway, or agent ID") + } + return nil +} diff --git a/agent/consul/leader_connect_ca_ce_test.go b/agent/consul/leader_connect_ca_ce_test.go new file mode 100644 index 000000000000..06f777d69a51 --- /dev/null +++ b/agent/consul/leader_connect_ca_ce_test.go @@ -0,0 +1,136 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: MPL-2.0 + +//go:build !consulent + +package consul + +import ( + "github.com/hashicorp/consul/agent/connect" + "github.com/hashicorp/consul/sdk/testutil" + "github.com/stretchr/testify/require" + "testing" +) + +func TestValidateSupportedIdentityScopesForServiceInCertificate(t *testing.T) { + config := DefaultConfig() + manager := NewCAManager(nil, nil, testutil.Logger(t), config) + + tests := []struct { + name string + expectErr string + datacenter string + namespace string + service string + partition string + }{ + { + name: "err_unsupported_partition_and_namespace_for_service", + expectErr: "Non default partition or namespace is supported in Enterprise only." + + "Provided namespace is test-namespace and partition is test-partition", + namespace: "test-namespace", + service: "test-service", + partition: "test-partition", + }, + { + name: "err_unsupported_namespace_for_service", + expectErr: "Non default partition or namespace is supported in Enterprise only." + + "Provided namespace is test-namespace and partition is default", + namespace: "test-namespace", + service: "test-service", + partition: "default", + }, + { + name: "err_unsupported_partition_for_service", + expectErr: "Non default partition or namespace is supported in Enterprise only." 
+ + "Provided namespace is default and partition is test-partition", + namespace: "default", + service: "test-service", + partition: "test-partition", + }, + { + name: "default_partition_and_namespace_supported_for_service", + expectErr: "", + namespace: "default", + service: "test-service", + partition: "default", + }, + } + + for _, tc := range tests { + t.Run(tc.name, func(t *testing.T) { + spiffeIDService := &connect.SpiffeIDService{ + Host: config.NodeName, + Datacenter: tc.datacenter, + Namespace: tc.namespace, + Service: tc.service, + Partition: tc.partition, + } + err := manager.validateSupportedIdentityScopesInCertificate(spiffeIDService) + if tc.expectErr != "" { + require.Error(t, err) + require.Contains(t, err.Error(), tc.expectErr) + } else { + require.NoError(t, err) + } + }) + } +} + +func TestValidateSupportedIdentityScopesForMeshGatewayInCertificate(t *testing.T) { + config := DefaultConfig() + manager := NewCAManager(nil, nil, testutil.Logger(t), config) + + tests := []struct { + name string + expectErr string + partition string + }{ + { + name: "err_unsupported_partition_for_mesh_gateway", + expectErr: "Non default partition is supported in Enterprise only." + + "Provided partition is test-partition", + partition: "test-partition", + }, + { + name: "default_partition_supported_for_mesh_gateway", + expectErr: "", + partition: "default", + }, + } + + for _, tc := range tests { + t.Run(tc.name, func(t *testing.T) { + spiffeIDMeshGateway := &connect.SpiffeIDMeshGateway{ + Host: config.NodeName, + Datacenter: config.Datacenter, + Partition: tc.partition, + } + err := manager.validateSupportedIdentityScopesInCertificate(spiffeIDMeshGateway) + if tc.expectErr != "" { + require.Error(t, err) + require.Contains(t, err.Error(), tc.expectErr) + } else { + require.NoError(t, err) + } + }) + } +} + +func TestValidateSupportedIdentityScopesForServerInCertificate(t *testing.T) { + config := DefaultConfig() + manager := NewCAManager(nil, nil, testutil.Logger(t), config) + spiffeIDServer := &connect.SpiffeIDServer{ + Host: config.NodeName, Datacenter: config.Datacenter} + err := manager.validateSupportedIdentityScopesInCertificate(spiffeIDServer) + require.NoError(t, err) +} + +func TestValidateSupportedIdentityScopesForAgentInCertificate(t *testing.T) { + config := DefaultConfig() + manager := NewCAManager(nil, nil, testutil.Logger(t), config) + spiffeIDAgent := &connect.SpiffeIDAgent{ + Host: config.NodeName, Datacenter: config.Datacenter, Partition: "test-partition", Agent: "test-agent"} + err := manager.validateSupportedIdentityScopesInCertificate(spiffeIDAgent) + require.NoError(t, err) +} diff --git a/agent/consul/server.go b/agent/consul/server.go index e33fd493ae45..252f2ff71d96 100644 --- a/agent/consul/server.go +++ b/agent/consul/server.go @@ -781,7 +781,7 @@ func NewServer(config *Config, flat Deps, externalGRPCServer *grpc.Server, if err != nil { return "", err } - return fmt.Sprintf("%s:%d", addr, s.WanJoinPort), nil + return net.JoinHostPort(addr, strconv.Itoa(s.WanJoinPort)), nil } go s.Flood(addrFn, s.serfWAN) } @@ -857,9 +857,10 @@ func NewServer(config *Config, flat Deps, externalGRPCServer *grpc.Server, go s.trackLeaderChanges() s.xdsCapacityController = xdscapacity.NewController(xdscapacity.Config{ - Logger: s.logger.Named(logging.XDSCapacityController), - GetStore: func() xdscapacity.Store { return s.fsm.State() }, - SessionLimiter: flat.XDSStreamLimiter, + Logger: s.logger.Named(logging.XDSCapacityController), + GetStore: func() xdscapacity.Store { return 
s.fsm.State() }, + SessionLimiter: flat.XDSStreamLimiter, + EnableXDSLoadBalancing: s.config.EnableXDSLoadBalancing, }) go s.xdsCapacityController.Run(&lib.StopChannelContext{StopCh: s.shutdownCh}) @@ -2117,7 +2118,7 @@ const peersInfoContent = ` As of Consul 0.7.0, the peers.json file is only used for recovery after an outage. The format of this file depends on what the server has configured for its Raft protocol version. Please see the agent configuration -page at https://www.consul.io/docs/agent/config/cli-flags#_raft_protocol for more +page at https://developer.hashicorp.com/docs/agent/config/cli-flags#_raft_protocol for more details about this parameter. For Raft protocol version 2 and earlier, this should be formatted as a JSON @@ -2160,7 +2161,7 @@ The "address" field is the address and port of the server. The "non_voter" field controls whether the server is a non-voter, which is used in some advanced Autopilot configurations, please see -https://www.consul.io/docs/guides/autopilot.html for more information. If +https://developer.hashicorp.com/docs/guides/autopilot.html for more information. If "non_voter" is omitted it will default to false, which is typical for most clusters. @@ -2177,5 +2178,5 @@ creating the peers.json file, and that all servers receive the same configuration. Once the peers.json file is successfully ingested and applied, it will be deleted. -Please see https://www.consul.io/docs/guides/outage.html for more information. +Please see https://developer.hashicorp.com/docs/guides/outage.html for more information. ` diff --git a/agent/consul/state/session.go b/agent/consul/state/session.go index d57b05947d39..ea37285a462b 100644 --- a/agent/consul/state/session.go +++ b/agent/consul/state/session.go @@ -5,6 +5,7 @@ package state import ( "fmt" + "github.com/hashicorp/consul/api" "reflect" "strings" "time" @@ -219,7 +220,7 @@ func (s *Store) SessionCreate(idx uint64, sess *structs.Session) error { // future. // Call the session creation - if err := sessionCreateTxn(tx, idx, sess); err != nil { + if err := s.sessionCreateTxn(tx, idx, sess); err != nil { return err } @@ -229,7 +230,7 @@ func (s *Store) SessionCreate(idx uint64, sess *structs.Session) error { // sessionCreateTxn is the inner method used for creating session entries in // an open transaction. Any health checks registered with the session will be // checked for failing status. Returns any error encountered. -func sessionCreateTxn(tx WriteTxn, idx uint64, sess *structs.Session) error { +func (s *Store) sessionCreateTxn(tx WriteTxn, idx uint64, sess *structs.Session) error { // Check that we have a session ID if sess.ID == "" { return ErrMissingSessionID @@ -270,7 +271,7 @@ func sessionCreateTxn(tx WriteTxn, idx uint64, sess *structs.Session) error { return fmt.Errorf("failed inserting session: %s", err) } - return nil + return s.updateSessionCheck(tx, idx, sess, api.HealthPassing) } // SessionGet is used to retrieve an active session from the state store. 
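The state-store changes above tie the session lifecycle to health checks of type `session`: creating a session moves matching checks to passing, while destroying or invalidating the session moves them back to critical. The sketch below illustrates only that selection and status-update logic; it uses small local stand-in types rather than the real `structs` definitions, and it omits the transaction and the `ensureCheckTxn` persistence step performed by the actual helper.

```go
// Simplified sketch of the session/health-check interaction, not the real
// state-store code. Types here are local stand-ins for structs.HealthCheck
// and structs.Session.
package main

import "fmt"

type healthCheck struct {
	Node        string
	Type        string
	SessionName string
	Status      string
	Output      string
}

type session struct {
	ID   string
	Node string
	Name string
}

// updateSessionChecks mirrors the idea of updateSessionCheck above: every
// check of type "session" on the session's node whose definition names this
// session is moved to the supplied status.
func updateSessionChecks(checks []*healthCheck, sess *session, status string) {
	for _, hc := range checks {
		if hc.Node == sess.Node && hc.Type == "session" && hc.SessionName == sess.Name {
			hc.Status = status
			if status == "passing" {
				hc.Output = fmt.Sprintf("Session '%s' in force", sess.ID)
			} else {
				hc.Output = fmt.Sprintf("Session '%s' is invalid", sess.ID)
			}
		}
	}
}

func main() {
	checks := []*healthCheck{{Node: "foo-node", Type: "session", SessionName: "test-session", Status: "critical"}}
	sess := &session{ID: "abc", Node: "foo-node", Name: "test-session"}

	updateSessionChecks(checks, sess, "passing")  // session created
	updateSessionChecks(checks, sess, "critical") // session destroyed or invalidated
	fmt.Println(checks[0].Status, checks[0].Output)
}
```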
@@ -448,5 +449,33 @@ func (s *Store) deleteSessionTxn(tx WriteTxn, idx uint64, sessionID string, entM } } + // session invalidating the health-checks + return s.updateSessionCheck(tx, idx, session, api.HealthCritical) +} + +// updateSessionCheck The method updates the health-checks associated with the session +func (s *Store) updateSessionCheck(tx WriteTxn, idx uint64, session *structs.Session, checkState string) error { + // Find all checks for the given Node + iter, err := tx.Get(tableChecks, indexNode, Query{Value: session.Node, EnterpriseMeta: session.EnterpriseMeta}) + if err != nil { + return fmt.Errorf("failed check lookup: %s", err) + } + + for check := iter.Next(); check != nil; check = iter.Next() { + if hc := check.(*structs.HealthCheck); hc.Type == "session" && hc.Definition.SessionName == session.Name { + updatedCheck := hc.Clone() + updatedCheck.Status = checkState + switch { + case checkState == api.HealthPassing: + updatedCheck.Output = fmt.Sprintf("Session '%s' in force", session.ID) + default: + updatedCheck.Output = fmt.Sprintf("Session '%s' is invalid", session.ID) + } + + if err := s.ensureCheckTxn(tx, idx, true, updatedCheck); err != nil { + return err + } + } + } return nil } diff --git a/agent/consul/state/session_ce.go b/agent/consul/state/session_ce.go index 2fa24878541c..b5d09c766483 100644 --- a/agent/consul/state/session_ce.go +++ b/agent/consul/state/session_ce.go @@ -151,9 +151,10 @@ func validateSessionChecksTxn(tx ReadTxn, session *structs.Session) error { } // Verify that the check is not in critical state - status := check.(*structs.HealthCheck).Status - if status == api.HealthCritical { - return fmt.Errorf("Check '%s' is in %s state", checkID, status) + healthCheck := check.(*structs.HealthCheck) + // we are discounting the health check for session checks since they are expected to be in critical state without session and this flow is expected to be used for session checks + if healthCheck.Status == api.HealthCritical && healthCheck.Type != "session" { + return fmt.Errorf("Check '%s' is in %s state", checkID, healthCheck.Status) } } return nil diff --git a/agent/consul/state/session_test.go b/agent/consul/state/session_test.go index 08f7ad09d0c1..e647c1f9c9c8 100644 --- a/agent/consul/state/session_test.go +++ b/agent/consul/state/session_test.go @@ -961,3 +961,147 @@ func TestStateStore_Session_Invalidate_PreparedQuery_Delete(t *testing.T) { t.Fatalf("bad: %v", q2) } } + +// the goal of this test is to verify if the system is blocking the session registration when a check is in critical state. +func TestHealthCheck_SessionRegistrationFail(t *testing.T) { + s := testStateStore(t) + + var check *structs.HealthCheck + // setup node + testRegisterNode(t, s, 1, "foo-node") + testRegisterCheckCustom(t, s, 1, "foo", func(chk *structs.HealthCheck) { + chk.Node = "foo-node" + chk.Type = "tll" + chk.Status = api.HealthCritical + chk.Definition = structs.HealthCheckDefinition{ + SessionName: "test-session", + } + check = chk + }) + + // Ensure the index was not updated if nothing was destroyed. 
+ if idx := s.maxIndex("sessions"); idx != 0 { + t.Fatalf("bad index: %d", idx) + } + + // Register a new session + sess := &structs.Session{ + ID: testUUID(), + Node: "foo-node", + Name: "test-session", + Checks: make([]types.CheckID, 0), + } + + sess.Checks = append(sess.Checks, check.CheckID) + // assert the check is critical initially + assertHealthCheckStatus(t, s, sess, check.CheckID, api.HealthCritical) + + if err := s.SessionCreate(2, sess); err == nil { + // expecting error: Check 'foo' is in critical state + t.Fatalf("expected error, got nil") + } +} + +// Allow the session to be created even if the check is critical. +// This is mainly to discount the health check of type `session` +func TestHealthCheck_SessionRegistrationAllow(t *testing.T) { + s := testStateStore(t) + + var check *structs.HealthCheck + // setup node + testRegisterNode(t, s, 1, "foo-node") + testRegisterCheckCustom(t, s, 1, "foo", func(chk *structs.HealthCheck) { + chk.Node = "foo-node" + chk.Type = "session" + chk.Status = api.HealthCritical + chk.Definition = structs.HealthCheckDefinition{ + SessionName: "test-session", + } + check = chk + }) + + // Ensure the index was not updated if nothing was destroyed. + if idx := s.maxIndex("sessions"); idx != 0 { + t.Fatalf("bad index: %d", idx) + } + + // Register a new session + sess := &structs.Session{ + ID: testUUID(), + Node: "foo-node", + Name: "test-session", + Checks: make([]types.CheckID, 0), + } + + sess.Checks = append(sess.Checks, check.CheckID) + // assert the check is critical initially + assertHealthCheckStatus(t, s, sess, check.CheckID, api.HealthCritical) + + if err := s.SessionCreate(2, sess); err != nil { + t.Fatalf("The system shall allow session to be created ignoring the session check is critical. err: %s", err) + } +} + +// test the session health check when session status is changed +func TestHealthCheck_Session(t *testing.T) { + s := testStateStore(t) + + var check *structs.HealthCheck + // setup node + testRegisterNode(t, s, 1, "foo-node") + testRegisterCheckCustom(t, s, 1, "foo", func(chk *structs.HealthCheck) { + chk.Node = "foo-node" + chk.Type = "session" + chk.Status = api.HealthCritical + chk.Definition = structs.HealthCheckDefinition{ + SessionName: "test-session", + } + check = chk + }) + + // Ensure the index was not updated if nothing was destroyed. + if idx := s.maxIndex("sessions"); idx != 0 { + t.Fatalf("bad index: %d", idx) + } + + // Register a new session + sess := &structs.Session{ + ID: testUUID(), + Node: "foo-node", + Name: "test-session", + } + // assert the check is critical initially + assertHealthCheckStatus(t, s, sess, check.CheckID, api.HealthCritical) + + if err := s.SessionCreate(2, sess); err != nil { + t.Fatalf("The system shall allow session to be created ignoring the session check is critical. err: %s", err) + } + // assert the check is critical after session creation + assertHealthCheckStatus(t, s, sess, check.CheckID, api.HealthPassing) + + // Destroy the session. 
+ if err := s.SessionDestroy(3, sess.ID, nil); err != nil { + t.Fatalf("err: %s", err) + } + // assert the check is critical after the session is destroyed + assertHealthCheckStatus(t, s, sess, check.CheckID, api.HealthCritical) +} + +func assertHealthCheckStatus(t *testing.T, s *Store, session *structs.Session, checkID types.CheckID, expectedStatus string) { + _, hc, err := s.NodeChecks(nil, session.Node, structs.DefaultEnterpriseMetaInPartition(""), structs.DefaultPeerKeyword) + if err != nil { + t.Fatalf("err: %s", err) + } + // assert the check has the expected status + for _, c := range hc { + if c.CheckID == checkID { + if c.Status != expectedStatus { + t.Fatalf("expected check status %s, got %s", expectedStatus, c.Status) + } else { + return + } + } + } + + t.Fatalf("check %s not found", string(checkID)) +} diff --git a/agent/consul/state/state_store_test.go b/agent/consul/state/state_store_test.go index 751ecee779de..eb4d120031ed 100644 --- a/agent/consul/state/state_store_test.go +++ b/agent/consul/state/state_store_test.go @@ -308,6 +308,29 @@ func testRegisterCheckWithPartition(t *testing.T, s *Store, idx uint64, } } +func testRegisterCheckCustom(t *testing.T, s *Store, idx uint64, checkID types.CheckID, update func(chk *structs.HealthCheck)) { + chk := &structs.HealthCheck{ + CheckID: checkID, + EnterpriseMeta: *structs.DefaultEnterpriseMetaInPartition(""), + } + + update(chk) + + if err := s.EnsureCheck(idx, chk); err != nil { + t.Fatalf("err: %s", err) + } + + tx := s.db.Txn(false) + defer tx.Abort() + c, err := tx.First(tableChecks, indexID, NodeCheckQuery{Node: chk.Node, CheckID: string(checkID), EnterpriseMeta: chk.EnterpriseMeta}) + if err != nil { + t.Fatalf("err: %s", err) + } + if result, ok := c.(*structs.HealthCheck); !ok || result.CheckID != checkID { + t.Fatalf("bad check: %#v", result) + } +} + func testRegisterSidecarProxy(t *testing.T, s *Store, idx uint64, nodeID string, targetServiceID string) { testRegisterSidecarProxyOpts(t, s, idx, nodeID, targetServiceID) } diff --git a/agent/consul/xdscapacity/capacity.go b/agent/consul/xdscapacity/capacity.go index b3c6df2978e2..a6a492fd1c8e 100644 --- a/agent/consul/xdscapacity/capacity.go +++ b/agent/consul/xdscapacity/capacity.go @@ -55,9 +55,10 @@ type Controller struct { // Config contains the dependencies for Controller. type Config struct { - Logger hclog.Logger - GetStore func() Store - SessionLimiter SessionLimiter + Logger hclog.Logger + GetStore func() Store + SessionLimiter SessionLimiter + EnableXDSLoadBalancing bool } // SessionLimiter is used to enforce the session limit to achieve the ideal @@ -150,12 +151,24 @@ func calcRateLimit(numProxies uint32) rate.Limit { return rate.Limit(perSecond) } +// updateMaxSessions calculates and updates the maximum number of sessions +// allowed per server based on the number of healthy servers and proxies. +// +// When xDS load balancing is enabled, the maximum number of sessions is calculated with the following formula: +// (number of proxies / number of healthy servers) * (1 + error margin). +// This formula ensures a roughly even distribution of xDS streams across servers. +// +// If xDS load balancing is disabled, maxSessions is set to 0, +// which translates to unlimited sessions per server. +// Disable xDS load balancing (leaving maxSessions at 0) when there is an external load balancer in front of the Consul servers.
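+// +// Illustrative example (hypothetical numbers; errorMargin is a constant defined elsewhere in this package): with 3 healthy servers and 90 connected proxies, maxSessions = ceil((90 / 3) * (1 + errorMargin)); assuming an errorMargin of 0.1 this works out to ceil(33.0) = 33 concurrent xDS streams per server.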
func (c *Controller) updateMaxSessions(numServers, numProxies uint32) { if numServers == 0 || numProxies == 0 { return } - - maxSessions := uint32(math.Ceil((float64(numProxies) / float64(numServers)) * (1 + errorMargin))) + maxSessions := uint32(0) + if c.cfg.EnableXDSLoadBalancing { + maxSessions = uint32(math.Ceil((float64(numProxies) / float64(numServers)) * (1 + errorMargin))) + } if maxSessions == c.prevMaxSessions { return } diff --git a/agent/consul/xdscapacity/capacity_test.go b/agent/consul/xdscapacity/capacity_test.go index bb6f4cc0d4a7..4dc66f3263f5 100644 --- a/agent/consul/xdscapacity/capacity_test.go +++ b/agent/consul/xdscapacity/capacity_test.go @@ -68,9 +68,10 @@ func TestController(t *testing.T) { }) adj := NewController(Config{ - Logger: testutil.Logger(t), - GetStore: func() Store { return store }, - SessionLimiter: limiter, + Logger: testutil.Logger(t), + GetStore: func() Store { return store }, + SessionLimiter: limiter, + EnableXDSLoadBalancing: true, }) ctx, cancel := context.WithCancel(context.Background()) diff --git a/agent/envoyextensions/builtin/lua/lua.go b/agent/envoyextensions/builtin/lua/lua.go index 91e912842ef4..9e59074fba72 100644 --- a/agent/envoyextensions/builtin/lua/lua.go +++ b/agent/envoyextensions/builtin/lua/lua.go @@ -6,16 +6,18 @@ package lua import ( "errors" "fmt" + envoy_core_v3 "github.com/envoyproxy/go-control-plane/envoy/config/core/v3" envoy_listener_v3 "github.com/envoyproxy/go-control-plane/envoy/config/listener/v3" envoy_lua_v3 "github.com/envoyproxy/go-control-plane/envoy/extensions/filters/http/lua/v3" envoy_http_v3 "github.com/envoyproxy/go-control-plane/envoy/extensions/filters/network/http_connection_manager/v3" envoy_resource_v3 "github.com/envoyproxy/go-control-plane/pkg/resource/v3" - "github.com/hashicorp/consul/api" - "github.com/hashicorp/consul/envoyextensions/extensioncommon" "github.com/hashicorp/go-multierror" "github.com/mitchellh/mapstructure" + + "github.com/hashicorp/consul/api" + "github.com/hashicorp/consul/envoyextensions/extensioncommon" ) var _ extensioncommon.BasicExtension = (*lua)(nil) @@ -57,7 +59,7 @@ func (l *lua) validate() error { if l.Script == "" { resultErr = multierror.Append(resultErr, fmt.Errorf("missing Script value")) } - if l.ProxyType != string(api.ServiceKindConnectProxy) { + if l.ProxyType != string(api.ServiceKindConnectProxy) && l.ProxyType != string(api.ServiceKindAPIGateway) { resultErr = multierror.Append(resultErr, fmt.Errorf("unexpected ProxyType %q", l.ProxyType)) } if l.Listener != "inbound" && l.Listener != "outbound" { diff --git a/agent/envoyextensions/builtin/lua/lua_test.go b/agent/envoyextensions/builtin/lua/lua_test.go index afe65d067f43..dbb4c082e37e 100644 --- a/agent/envoyextensions/builtin/lua/lua_test.go +++ b/agent/envoyextensions/builtin/lua/lua_test.go @@ -6,7 +6,13 @@ package lua import ( "testing" + envoy_core_v3 "github.com/envoyproxy/go-control-plane/envoy/config/core/v3" + envoy_listener_v3 "github.com/envoyproxy/go-control-plane/envoy/config/listener/v3" + envoy_lua_v3 "github.com/envoyproxy/go-control-plane/envoy/extensions/filters/http/lua/v3" + envoy_http_v3 "github.com/envoyproxy/go-control-plane/envoy/extensions/filters/network/http_connection_manager/v3" "github.com/stretchr/testify/require" + "google.golang.org/protobuf/proto" + "google.golang.org/protobuf/types/known/anypb" "github.com/hashicorp/consul/api" "github.com/hashicorp/consul/envoyextensions/extensioncommon" @@ -17,17 +23,15 @@ func TestConstructor(t *testing.T) { m := map[string]interface{}{ 
"ProxyType": "connect-proxy", "Listener": "inbound", - "Script": "lua-script", + "Script": "function envoy_on_request(request_handle) request_handle:headers():add('test', 'test') end", } - for k, v := range overrides { m[k] = v } - return m } - cases := map[string]struct { + tests := map[string]struct { extensionName string arguments map[string]interface{} expected lua @@ -59,7 +63,16 @@ func TestConstructor(t *testing.T) { expected: lua{ ProxyType: "connect-proxy", Listener: "inbound", - Script: "lua-script", + Script: "function envoy_on_request(request_handle) request_handle:headers():add('test', 'test') end", + }, + ok: true, + }, + "api gateway proxy type": { + arguments: makeArguments(map[string]interface{}{"ProxyType": "api-gateway"}), + expected: lua{ + ProxyType: "api-gateway", + Listener: "inbound", + Script: "function envoy_on_request(request_handle) request_handle:headers():add('test', 'test') end", }, ok: true, }, @@ -68,23 +81,21 @@ func TestConstructor(t *testing.T) { expected: lua{ ProxyType: "connect-proxy", Listener: "inbound", - Script: "lua-script", + Script: "function envoy_on_request(request_handle) request_handle:headers():add('test', 'test') end", }, ok: true, }, } - for n, tc := range cases { - t.Run(n, func(t *testing.T) { - + for name, tc := range tests { + t.Run(name, func(t *testing.T) { extensionName := api.BuiltinLuaExtension if tc.extensionName != "" { extensionName = tc.extensionName } - svc := api.CompoundServiceName{Name: "svc"} ext := extensioncommon.RuntimeConfig{ - ServiceName: svc, + ServiceName: api.CompoundServiceName{Name: "svc"}, EnvoyExtension: api.EnvoyExtension{ Name: extensionName, Arguments: tc.arguments, @@ -102,3 +113,161 @@ func TestConstructor(t *testing.T) { }) } } + +func TestLuaExtension_PatchFilter(t *testing.T) { + makeFilter := func(filters []*envoy_http_v3.HttpFilter) *envoy_listener_v3.Filter { + hcm := &envoy_http_v3.HttpConnectionManager{ + HttpFilters: filters, + } + any, err := anypb.New(hcm) + require.NoError(t, err) + return &envoy_listener_v3.Filter{ + Name: "envoy.filters.network.http_connection_manager", + ConfigType: &envoy_listener_v3.Filter_TypedConfig{ + TypedConfig: any, + }, + } + } + + makeLuaFilter := func(script string) *envoy_http_v3.HttpFilter { + luaConfig := &envoy_lua_v3.Lua{ + DefaultSourceCode: &envoy_core_v3.DataSource{ + Specifier: &envoy_core_v3.DataSource_InlineString{ + InlineString: script, + }, + }, + } + return &envoy_http_v3.HttpFilter{ + Name: "envoy.filters.http.lua", + ConfigType: &envoy_http_v3.HttpFilter_TypedConfig{ + TypedConfig: mustMarshalAny(luaConfig), + }, + } + } + + tests := map[string]struct { + extension *lua + filter *envoy_listener_v3.Filter + isInbound bool + expectedFilter *envoy_listener_v3.Filter + expectPatched bool + expectError string + }{ + "non-http filter is ignored": { + extension: &lua{ + ProxyType: "connect-proxy", + Listener: "inbound", + Script: "function envoy_on_request(request_handle) end", + }, + filter: &envoy_listener_v3.Filter{ + Name: "envoy.filters.network.tcp_proxy", + }, + expectedFilter: &envoy_listener_v3.Filter{ + Name: "envoy.filters.network.tcp_proxy", + }, + expectPatched: false, + }, + "listener direction mismatch": { + extension: &lua{ + ProxyType: "connect-proxy", + Listener: "inbound", + Script: "function envoy_on_request(request_handle) end", + }, + filter: makeFilter([]*envoy_http_v3.HttpFilter{ + {Name: "envoy.filters.http.router"}, + }), + isInbound: false, + expectedFilter: makeFilter([]*envoy_http_v3.HttpFilter{{Name: 
"envoy.filters.http.router"}}), + expectPatched: false, + }, + "successful patch with router filter": { + extension: &lua{ + ProxyType: "connect-proxy", + Listener: "inbound", + Script: "function envoy_on_request(request_handle) end", + }, + filter: makeFilter([]*envoy_http_v3.HttpFilter{ + {Name: "envoy.filters.http.router"}, + }), + isInbound: true, + expectedFilter: makeFilter([]*envoy_http_v3.HttpFilter{ + makeLuaFilter("function envoy_on_request(request_handle) end"), + {Name: "envoy.filters.http.router"}, + }), + expectPatched: true, + }, + "successful patch with multiple filters": { + extension: &lua{ + ProxyType: "connect-proxy", + Listener: "inbound", + Script: "function envoy_on_request(request_handle) end", + }, + filter: makeFilter([]*envoy_http_v3.HttpFilter{ + {Name: "envoy.filters.http.other1"}, + {Name: "envoy.filters.http.router"}, + {Name: "envoy.filters.http.other2"}, + }), + isInbound: true, + expectedFilter: makeFilter([]*envoy_http_v3.HttpFilter{ + {Name: "envoy.filters.http.other1"}, + makeLuaFilter("function envoy_on_request(request_handle) end"), + {Name: "envoy.filters.http.router"}, + {Name: "envoy.filters.http.other2"}, + }), + expectPatched: true, + }, + "invalid filter config": { + extension: &lua{ + ProxyType: "connect-proxy", + Listener: "inbound", + Script: "function envoy_on_request(request_handle) end", + }, + filter: &envoy_listener_v3.Filter{ + Name: "envoy.filters.network.http_connection_manager", + ConfigType: &envoy_listener_v3.Filter_TypedConfig{ + TypedConfig: &anypb.Any{}, + }, + }, + isInbound: true, + expectedFilter: nil, + expectPatched: false, + expectError: "error unmarshalling filter", + }, + } + + for name, tc := range tests { + t.Run(name, func(t *testing.T) { + direction := extensioncommon.TrafficDirectionOutbound + if tc.isInbound { + direction = extensioncommon.TrafficDirectionInbound + } + + payload := extensioncommon.FilterPayload{ + Message: tc.filter, + TrafficDirection: direction, + } + + filter, patched, err := tc.extension.PatchFilter(payload) + + if tc.expectError != "" { + require.Error(t, err) + require.Contains(t, err.Error(), tc.expectError) + return + } + + require.NoError(t, err) + require.Equal(t, tc.expectPatched, patched) + if tc.expectedFilter != nil { + require.Equal(t, tc.expectedFilter, filter) + } + }) + } +} + +func mustMarshalAny(m proto.Message) *anypb.Any { + a, err := anypb.New(m) + if err != nil { + panic(err) + } + return a +} diff --git a/agent/grpc-middleware/testutil/testservice/simple.pb.go b/agent/grpc-middleware/testutil/testservice/simple.pb.go index f727e0e39a50..054c5d054246 100644 --- a/agent/grpc-middleware/testutil/testservice/simple.pb.go +++ b/agent/grpc-middleware/testutil/testservice/simple.pb.go @@ -3,7 +3,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.33.0 +// protoc-gen-go v1.34.2 // protoc (unknown) // source: simple.proto @@ -171,7 +171,7 @@ func file_simple_proto_rawDescGZIP() []byte { } var file_simple_proto_msgTypes = make([]protoimpl.MessageInfo, 2) -var file_simple_proto_goTypes = []interface{}{ +var file_simple_proto_goTypes = []any{ (*Req)(nil), // 0: testservice.Req (*Resp)(nil), // 1: testservice.Resp } @@ -193,7 +193,7 @@ func file_simple_proto_init() { return } if !protoimpl.UnsafeEnabled { - file_simple_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} { + file_simple_proto_msgTypes[0].Exporter = func(v any, i int) any { switch v := v.(*Req); i { case 0: return &v.state @@ -205,7 +205,7 @@ func file_simple_proto_init() { return nil } } - file_simple_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} { + file_simple_proto_msgTypes[1].Exporter = func(v any, i int) any { switch v := v.(*Resp); i { case 0: return &v.state diff --git a/agent/http_test.go b/agent/http_test.go index 73a599546a5f..b04432922c52 100644 --- a/agent/http_test.go +++ b/agent/http_test.go @@ -638,7 +638,7 @@ func TestHTTPAPIResponseHeaders(t *testing.T) { `) defer a.Shutdown() - requireHasHeadersSet(t, a, "/v1/agent/self", "application/json") + requireHasHeadersSet(t, a, "/v1/agent/self", "application/json; charset=utf-8") // Check the Index page that just renders a simple message with UI disabled // also gets the right headers. @@ -784,7 +784,7 @@ func TestErrorContentTypeHeaderSet(t *testing.T) { `) defer a.Shutdown() - requireHasHeadersSet(t, a, "/fake-path-doesn't-exist", "application/json") + requireHasHeadersSet(t, a, "/fake-path-doesn't-exist", "application/json; charset=utf-8") } func TestAcceptEncodingGzip(t *testing.T) { diff --git a/agent/kvs_endpoint.go b/agent/kvs_endpoint.go index 65ec44307802..1c24dffa27de 100644 --- a/agent/kvs_endpoint.go +++ b/agent/kvs_endpoint.go @@ -211,7 +211,7 @@ func (s *HTTPHandlers) KVSPut(resp http.ResponseWriter, req *http.Request, args return nil, HTTPError{ StatusCode: http.StatusRequestEntityTooLarge, Reason: fmt.Sprintf("Request body(%d bytes) too large, max size: %d bytes. 
See %s.", - req.ContentLength, s.agent.config.KVMaxValueSize, "https://www.consul.io/docs/agent/config/config-files#kv_max_value_size"), + req.ContentLength, s.agent.config.KVMaxValueSize, "https://developer.hashicorp.com/docs/agent/config/config-files#kv_max_value_size"), } } diff --git a/agent/kvs_endpoint_test.go b/agent/kvs_endpoint_test.go index 2b3563000815..eea3bef8a702 100644 --- a/agent/kvs_endpoint_test.go +++ b/agent/kvs_endpoint_test.go @@ -11,9 +11,8 @@ import ( "reflect" "testing" - "github.com/hashicorp/consul/testrpc" - "github.com/hashicorp/consul/agent/structs" + "github.com/hashicorp/consul/testrpc" ) func TestKVSEndpoint_PUT_GET_DELETE(t *testing.T) { @@ -458,7 +457,7 @@ func TestKVSEndpoint_GET_Raw(t *testing.T) { if len(contentTypeHdr) != 1 { t.Fatalf("expected 1 value for Content-Type header, got %d: %+v", len(contentTypeHdr), contentTypeHdr) } - if contentTypeHdr[0] != "text/plain" { + if contentTypeHdr[0] != "text/plain; charset=utf-8" { t.Fatalf("expected Content-Type header to be \"text/plain\", got %q", contentTypeHdr[0]) } diff --git a/agent/metrics_test.go b/agent/metrics_test.go index c65fcc802d29..07ceaa8f8033 100644 --- a/agent/metrics_test.go +++ b/agent/metrics_test.go @@ -309,7 +309,7 @@ func TestHTTPHandlers_AgentMetrics_LeaderShipMetrics(t *testing.T) { } // TestHTTPHandlers_AgentMetrics_ConsulAutopilot_Prometheus adds testing around -// the published autopilot metrics on https://www.consul.io/docs/agent/telemetry#autopilot +// the published autopilot metrics on https://developer.hashicorp.com/docs/reference/agent/telemetry#autopilot func TestHTTPHandlers_AgentMetrics_ConsulAutopilot_Prometheus(t *testing.T) { skipIfShortTesting(t) // This test cannot use t.Parallel() since we modify global state, ie the global metrics instance diff --git a/agent/proxycfg-sources/catalog/mock_ConfigManager.go b/agent/proxycfg-sources/catalog/mock_ConfigManager.go index dbd82e702b7d..877e30638f34 100644 --- a/agent/proxycfg-sources/catalog/mock_ConfigManager.go +++ b/agent/proxycfg-sources/catalog/mock_ConfigManager.go @@ -3,10 +3,9 @@ package catalog import ( + "context" proxycfg "github.com/hashicorp/consul/agent/proxycfg" mock "github.com/stretchr/testify/mock" - "context" - structs "github.com/hashicorp/consul/agent/structs" ) diff --git a/agent/proxycfg-sources/catalog/mock_Watcher.go b/agent/proxycfg-sources/catalog/mock_Watcher.go index 1fc6ba7c6ea3..abf13a23f212 100644 --- a/agent/proxycfg-sources/catalog/mock_Watcher.go +++ b/agent/proxycfg-sources/catalog/mock_Watcher.go @@ -3,11 +3,11 @@ package catalog import ( + "context" limiter "github.com/hashicorp/consul/agent/grpc-external/limiter" - mock "github.com/stretchr/testify/mock" - "github.com/hashicorp/consul/agent/structs" proxycfg "github.com/hashicorp/consul/agent/proxycfg" - "context" + "github.com/hashicorp/consul/agent/structs" + mock "github.com/stretchr/testify/mock" ) // MockWatcher is an autogenerated mock type for the Watcher type diff --git a/agent/proxycfg-sources/local/mock_ConfigManager.go b/agent/proxycfg-sources/local/mock_ConfigManager.go index 66b204d1312f..920d8392feb6 100644 --- a/agent/proxycfg-sources/local/mock_ConfigManager.go +++ b/agent/proxycfg-sources/local/mock_ConfigManager.go @@ -3,10 +3,9 @@ package local import ( + "context" proxycfg "github.com/hashicorp/consul/agent/proxycfg" mock "github.com/stretchr/testify/mock" - "context" - structs "github.com/hashicorp/consul/agent/structs" ) diff --git a/agent/proxycfg/state_test.go b/agent/proxycfg/state_test.go index 
43743eb40a03..307b81c65c25 100644 --- a/agent/proxycfg/state_test.go +++ b/agent/proxycfg/state_test.go @@ -1365,7 +1365,7 @@ func TestState_WatchesAndUpdates(t *testing.T) { Address: "10.30.1.1", Datacenter: "dc4", }, - Service: structs.TestNodeServiceMeshGatewayWithAddrs(t, + Service: structs.TestNodeServiceMeshGatewayWithAddrs( "10.30.1.1", 8443, structs.ServiceAddress{Address: "10.0.1.1", Port: 8443}, structs.ServiceAddress{Address: "123.us-west-2.elb.notaws.com", Port: 443}), @@ -1377,7 +1377,7 @@ func TestState_WatchesAndUpdates(t *testing.T) { Address: "10.30.1.2", Datacenter: "dc4", }, - Service: structs.TestNodeServiceMeshGatewayWithAddrs(t, + Service: structs.TestNodeServiceMeshGatewayWithAddrs( "10.30.1.2", 8443, structs.ServiceAddress{Address: "10.30.1.2", Port: 8443}, structs.ServiceAddress{Address: "456.us-west-2.elb.notaws.com", Port: 443}), @@ -1411,7 +1411,7 @@ func TestState_WatchesAndUpdates(t *testing.T) { Address: "10.30.1.1", Datacenter: "dc5", }, - Service: structs.TestNodeServiceMeshGatewayWithAddrs(t, + Service: structs.TestNodeServiceMeshGatewayWithAddrs( "10.30.1.1", 8443, structs.ServiceAddress{Address: "10.0.1.1", Port: 8443}, structs.ServiceAddress{Address: "123.us-west-2.elb.notaws.com", Port: 443}), @@ -1423,7 +1423,7 @@ func TestState_WatchesAndUpdates(t *testing.T) { Address: "10.30.1.2", Datacenter: "dc5", }, - Service: structs.TestNodeServiceMeshGatewayWithAddrs(t, + Service: structs.TestNodeServiceMeshGatewayWithAddrs( "10.30.1.2", 8443, structs.ServiceAddress{Address: "10.30.1.2", Port: 8443}, structs.ServiceAddress{Address: "456.us-west-2.elb.notaws.com", Port: 443}), @@ -4163,7 +4163,7 @@ func Test_hostnameEndpoints(t *testing.T) { Node: "mesh-gateway", Datacenter: "dc1", }, - Service: structs.TestNodeServiceMeshGatewayWithAddrs(t, + Service: structs.TestNodeServiceMeshGatewayWithAddrs( "10.0.1.1", 8443, structs.ServiceAddress{}, structs.ServiceAddress{Address: "123.us-west-1.elb.notaws.com", Port: 443}), @@ -4173,7 +4173,7 @@ func Test_hostnameEndpoints(t *testing.T) { Node: "mesh-gateway", Datacenter: "dc1", }, - Service: structs.TestNodeServiceMeshGatewayWithAddrs(t, + Service: structs.TestNodeServiceMeshGatewayWithAddrs( "10.0.2.2", 8443, structs.ServiceAddress{}, structs.ServiceAddress{Address: "123.us-west-2.elb.notaws.com", Port: 443}), @@ -4190,7 +4190,7 @@ func Test_hostnameEndpoints(t *testing.T) { Node: "mesh-gateway", Datacenter: "dc1", }, - Service: structs.TestNodeServiceMeshGatewayWithAddrs(t, + Service: structs.TestNodeServiceMeshGatewayWithAddrs( "gateway.mydomain", 8443, structs.ServiceAddress{}, structs.ServiceAddress{Address: "123.us-west-1.elb.notaws.com", Port: 443}), @@ -4200,7 +4200,7 @@ func Test_hostnameEndpoints(t *testing.T) { Node: "mesh-gateway", Datacenter: "dc1", }, - Service: structs.TestNodeServiceMeshGatewayWithAddrs(t, + Service: structs.TestNodeServiceMeshGatewayWithAddrs( "10.0.2.2", 8443, structs.ServiceAddress{}, structs.ServiceAddress{Address: "123.us-west-2.elb.notaws.com", Port: 443}), @@ -4212,7 +4212,7 @@ func Test_hostnameEndpoints(t *testing.T) { Node: "mesh-gateway", Datacenter: "dc1", }, - Service: structs.TestNodeServiceMeshGatewayWithAddrs(t, + Service: structs.TestNodeServiceMeshGatewayWithAddrs( "gateway.mydomain", 8443, structs.ServiceAddress{}, structs.ServiceAddress{Address: "123.us-west-1.elb.notaws.com", Port: 443}), @@ -4228,7 +4228,7 @@ func Test_hostnameEndpoints(t *testing.T) { Node: "mesh-gateway", Datacenter: "dc1", }, - Service: structs.TestNodeServiceMeshGatewayWithAddrs(t, + Service: 
structs.TestNodeServiceMeshGatewayWithAddrs( "gateway.mydomain", 8443, structs.ServiceAddress{}, structs.ServiceAddress{Address: "8.8.8.8", Port: 443}), @@ -4238,7 +4238,7 @@ func Test_hostnameEndpoints(t *testing.T) { Node: "mesh-gateway", Datacenter: "dc1", }, - Service: structs.TestNodeServiceMeshGatewayWithAddrs(t, + Service: structs.TestNodeServiceMeshGatewayWithAddrs( "10.0.2.2", 8443, structs.ServiceAddress{}, structs.ServiceAddress{Address: "123.us-west-2.elb.notaws.com", Port: 443}), @@ -4250,7 +4250,7 @@ func Test_hostnameEndpoints(t *testing.T) { Node: "mesh-gateway", Datacenter: "dc1", }, - Service: structs.TestNodeServiceMeshGatewayWithAddrs(t, + Service: structs.TestNodeServiceMeshGatewayWithAddrs( "10.0.2.2", 8443, structs.ServiceAddress{}, structs.ServiceAddress{Address: "123.us-west-2.elb.notaws.com", Port: 443}), diff --git a/agent/proxycfg/testing.go b/agent/proxycfg/testing.go index bdd565ec0454..f029fd246777 100644 --- a/agent/proxycfg/testing.go +++ b/agent/proxycfg/testing.go @@ -395,7 +395,7 @@ func TestGatewayNodesDC1(t testing.T) structs.CheckServiceNodes { Address: "10.10.1.1", Datacenter: "dc1", }, - Service: structs.TestNodeServiceMeshGatewayWithAddrs(t, + Service: structs.TestNodeServiceMeshGatewayWithAddrs( "10.10.1.1", 8443, structs.ServiceAddress{Address: "10.10.1.1", Port: 8443}, structs.ServiceAddress{Address: "198.118.1.1", Port: 443}), @@ -407,7 +407,7 @@ func TestGatewayNodesDC1(t testing.T) structs.CheckServiceNodes { Address: "10.10.1.2", Datacenter: "dc1", }, - Service: structs.TestNodeServiceMeshGatewayWithAddrs(t, + Service: structs.TestNodeServiceMeshGatewayWithAddrs( "10.10.1.2", 8443, structs.ServiceAddress{Address: "10.0.1.2", Port: 8443}, structs.ServiceAddress{Address: "198.118.1.2", Port: 443}), @@ -424,7 +424,7 @@ func TestGatewayNodesDC2(t testing.T) structs.CheckServiceNodes { Address: "10.0.1.1", Datacenter: "dc2", }, - Service: structs.TestNodeServiceMeshGatewayWithAddrs(t, + Service: structs.TestNodeServiceMeshGatewayWithAddrs( "10.0.1.1", 8443, structs.ServiceAddress{Address: "10.0.1.1", Port: 8443}, structs.ServiceAddress{Address: "198.18.1.1", Port: 443}), @@ -436,7 +436,7 @@ func TestGatewayNodesDC2(t testing.T) structs.CheckServiceNodes { Address: "10.0.1.2", Datacenter: "dc2", }, - Service: structs.TestNodeServiceMeshGatewayWithAddrs(t, + Service: structs.TestNodeServiceMeshGatewayWithAddrs( "10.0.1.2", 8443, structs.ServiceAddress{Address: "10.0.1.2", Port: 8443}, structs.ServiceAddress{Address: "198.18.1.2", Port: 443}), @@ -453,7 +453,7 @@ func TestGatewayNodesDC3(t testing.T) structs.CheckServiceNodes { Address: "10.30.1.1", Datacenter: "dc3", }, - Service: structs.TestNodeServiceMeshGatewayWithAddrs(t, + Service: structs.TestNodeServiceMeshGatewayWithAddrs( "10.30.1.1", 8443, structs.ServiceAddress{Address: "10.0.1.1", Port: 8443}, structs.ServiceAddress{Address: "198.38.1.1", Port: 443}), @@ -465,7 +465,7 @@ func TestGatewayNodesDC3(t testing.T) structs.CheckServiceNodes { Address: "10.30.1.2", Datacenter: "dc3", }, - Service: structs.TestNodeServiceMeshGatewayWithAddrs(t, + Service: structs.TestNodeServiceMeshGatewayWithAddrs( "10.30.1.2", 8443, structs.ServiceAddress{Address: "10.30.1.2", Port: 8443}, structs.ServiceAddress{Address: "198.38.1.2", Port: 443}), @@ -482,7 +482,7 @@ func TestGatewayNodesDC4Hostname(t testing.T) structs.CheckServiceNodes { Address: "10.30.1.1", Datacenter: "dc4", }, - Service: structs.TestNodeServiceMeshGatewayWithAddrs(t, + Service: structs.TestNodeServiceMeshGatewayWithAddrs( "10.30.1.1", 
8443, structs.ServiceAddress{Address: "10.0.1.1", Port: 8443}, structs.ServiceAddress{Address: "123.us-west-2.elb.notaws.com", Port: 443}), @@ -494,7 +494,7 @@ func TestGatewayNodesDC4Hostname(t testing.T) structs.CheckServiceNodes { Address: "10.30.1.2", Datacenter: "dc4", }, - Service: structs.TestNodeServiceMeshGatewayWithAddrs(t, + Service: structs.TestNodeServiceMeshGatewayWithAddrs( "10.30.1.2", 8443, structs.ServiceAddress{Address: "10.30.1.2", Port: 8443}, structs.ServiceAddress{Address: "456.us-west-2.elb.notaws.com", Port: 443}), @@ -506,7 +506,7 @@ func TestGatewayNodesDC4Hostname(t testing.T) structs.CheckServiceNodes { Address: "10.30.1.3", Datacenter: "dc4", }, - Service: structs.TestNodeServiceMeshGatewayWithAddrs(t, + Service: structs.TestNodeServiceMeshGatewayWithAddrs( "10.30.1.3", 8443, structs.ServiceAddress{Address: "10.30.1.3", Port: 8443}, structs.ServiceAddress{Address: "198.38.1.1", Port: 443}), @@ -523,7 +523,7 @@ func TestGatewayNodesDC5Hostname(t testing.T) structs.CheckServiceNodes { Address: "10.30.1.1", Datacenter: "dc5", }, - Service: structs.TestNodeServiceMeshGatewayWithAddrs(t, + Service: structs.TestNodeServiceMeshGatewayWithAddrs( "10.30.1.1", 8443, structs.ServiceAddress{Address: "10.0.1.1", Port: 8443}, structs.ServiceAddress{Address: "123.us-west-2.elb.notaws.com", Port: 443}), @@ -535,7 +535,7 @@ func TestGatewayNodesDC5Hostname(t testing.T) structs.CheckServiceNodes { Address: "10.30.1.2", Datacenter: "dc5", }, - Service: structs.TestNodeServiceMeshGatewayWithAddrs(t, + Service: structs.TestNodeServiceMeshGatewayWithAddrs( "10.30.1.2", 8443, structs.ServiceAddress{Address: "10.30.1.2", Port: 8443}, structs.ServiceAddress{Address: "456.us-west-2.elb.notaws.com", Port: 443}), @@ -547,7 +547,7 @@ func TestGatewayNodesDC5Hostname(t testing.T) structs.CheckServiceNodes { Address: "10.30.1.3", Datacenter: "dc5", }, - Service: structs.TestNodeServiceMeshGatewayWithAddrs(t, + Service: structs.TestNodeServiceMeshGatewayWithAddrs( "10.30.1.3", 8443, structs.ServiceAddress{Address: "10.30.1.3", Port: 8443}, structs.ServiceAddress{Address: "198.38.1.1", Port: 443}), @@ -564,7 +564,7 @@ func TestGatewayNodesDC6Hostname(t testing.T) structs.CheckServiceNodes { Address: "10.30.1.1", Datacenter: "dc6", }, - Service: structs.TestNodeServiceMeshGatewayWithAddrs(t, + Service: structs.TestNodeServiceMeshGatewayWithAddrs( "10.30.1.1", 8443, structs.ServiceAddress{Address: "10.0.1.1", Port: 8443}, structs.ServiceAddress{Address: "123.us-east-1.elb.notaws.com", Port: 443}), diff --git a/agent/proxycfg/testing_mesh_gateway.go b/agent/proxycfg/testing_mesh_gateway.go index 80d37220ccd8..32720a601ea0 100644 --- a/agent/proxycfg/testing_mesh_gateway.go +++ b/agent/proxycfg/testing_mesh_gateway.go @@ -420,7 +420,7 @@ func TestConfigSnapshotMeshGateway(t testing.T, variant string, nsFn func(ns *st // Create a duplicate entry in FedStateGateways, with a high ModifyIndex, to // verify that fresh data in the federation state is preferred over stale data // in GatewayGroups. 
- svc := structs.TestNodeServiceMeshGatewayWithAddrs(t, + svc := structs.TestNodeServiceMeshGatewayWithAddrs( "10.0.1.3", 8443, structs.ServiceAddress{Address: "10.0.1.3", Port: 8443}, structs.ServiceAddress{Address: "198.18.1.3", Port: 443}, @@ -437,7 +437,7 @@ func TestConfigSnapshotMeshGateway(t testing.T, variant string, nsFn func(ns *st // Create a duplicate entry in FedStateGateways, with a low ModifyIndex, to // verify that stale data in the federation state is ignored in favor of the // fresher data in GatewayGroups. - svc := structs.TestNodeServiceMeshGatewayWithAddrs(t, + svc := structs.TestNodeServiceMeshGatewayWithAddrs( "10.0.1.3", 8443, structs.ServiceAddress{Address: "10.0.1.3", Port: 8443}, structs.ServiceAddress{Address: "198.18.1.3", Port: 443}, diff --git a/agent/rpc/peering/testing.go b/agent/rpc/peering/testing.go index 8989950ee29b..56a796664425 100644 --- a/agent/rpc/peering/testing.go +++ b/agent/rpc/peering/testing.go @@ -36,6 +36,9 @@ not valid var validAddress = "1.2.3.4:80" var validHostnameAddress = "foo.bar.baz:80" +var validIPv6Address = "[2001:db8::1]:80" +var invalidIPv6Address = "[2001:db8::1:80]" +var ipv6AddressWithoutPort = "2001:db8::1" var validServerName = "server.consul" diff --git a/agent/rpc/peering/validate_test.go b/agent/rpc/peering/validate_test.go index 669baf41702f..e18703b3370c 100644 --- a/agent/rpc/peering/validate_test.go +++ b/agent/rpc/peering/validate_test.go @@ -66,6 +66,36 @@ func TestValidatePeeringToken(t *testing.T) { "1.2.3.4", }, }, + { + name: "invalid ipv6 without port", + token: &structs.PeeringToken{ + CA: []string{validCA}, + ServerAddresses: []string{ipv6AddressWithoutPort}, + }, + wantErr: &errPeeringInvalidServerAddress{ + ipv6AddressWithoutPort, + }, + }, + { + name: "invalid ipv6 without port - manual", + token: &structs.PeeringToken{ + CA: []string{validCA}, + ManualServerAddresses: []string{ipv6AddressWithoutPort}, + }, + wantErr: &errPeeringInvalidServerAddress{ + ipv6AddressWithoutPort, + }, + }, + { + name: "invalid ipv6 missing brackets - manual", + token: &structs.PeeringToken{ + CA: []string{validCA}, + ManualServerAddresses: []string{invalidIPv6Address}, + }, + wantErr: &errPeeringInvalidServerAddress{ + invalidIPv6Address, + }, + }, { name: "invalid server name", token: &structs.PeeringToken{ @@ -101,6 +131,24 @@ func TestValidatePeeringToken(t *testing.T) { PeerID: validPeerID, }, }, + { + name: "valid token with ipv6 address - manual", + token: &structs.PeeringToken{ + CA: []string{validCA}, + ManualServerAddresses: []string{validIPv6Address}, + ServerName: validServerName, + PeerID: validPeerID, + }, + }, + { + name: "valid mixed ipv4 and ipv6 addresses - manual", + token: &structs.PeeringToken{ + CA: []string{validCA}, + ManualServerAddresses: []string{validAddress, validIPv6Address}, + ServerName: validServerName, + PeerID: validPeerID, + }, + }, } for _, tc := range tt { diff --git a/agent/structs/config_entry_gateways.go b/agent/structs/config_entry_gateways.go index c130073dbb66..1e538d79c973 100644 --- a/agent/structs/config_entry_gateways.go +++ b/agent/structs/config_entry_gateways.go @@ -6,8 +6,10 @@ package structs import ( "encoding/json" "fmt" + "net" "regexp" "sort" + "strconv" "strings" "github.com/miekg/dns" @@ -691,7 +693,8 @@ func (g *GatewayService) Addresses(defaultHosts []string) []string { // ensuring we trim any trailing DNS . 
characters from the domain name as we // go for _, h := range hosts { - addresses = append(addresses, fmt.Sprintf("%s:%d", strings.TrimRight(h, "."), g.Port)) + host := strings.TrimRight(h, ".") + addresses = append(addresses, net.JoinHostPort(host, strconv.Itoa(g.Port))) } return addresses } diff --git a/agent/structs/structs.go b/agent/structs/structs.go index cae14af52b01..d2c7e0eaa29c 100644 --- a/agent/structs/structs.go +++ b/agent/structs/structs.go @@ -303,7 +303,7 @@ type QueryOptions struct { // returned. Clients that wish to allow for stale results on error can set // StaleIfError to a longer duration to change this behavior. It is ignored // if the endpoint supports background refresh caching. See - // https://www.consul.io/api/index.html#agent-caching for more details. + // https://developer.hashicorp.com/api/index.html#agent-caching for more details. MaxAge time.Duration `mapstructure:"max-age,omitempty"` // MustRevalidate forces the agent to fetch a fresh version of a cached @@ -317,7 +317,7 @@ type QueryOptions struct { // if the servers are unavailable to fetch a fresh one. Only makes sense when // UseCache is true and MaxAge is set to a lower, non-zero value. It is // ignored if the endpoint supports background refresh caching. See - // https://www.consul.io/api/index.html#agent-caching for more details. + // https://developer.hashicorp.com/api/index.html#agent-caching for more details. StaleIfError time.Duration `mapstructure:"stale-if-error,omitempty"` // Filter specifies the go-bexpr filter expression to be used for @@ -1912,6 +1912,7 @@ type HealthCheckDefinition struct { GRPCUseTLS bool `json:",omitempty"` AliasNode string `json:",omitempty"` AliasService string `json:",omitempty"` + SessionName string `json:",omitempty"` TTL time.Duration `json:",omitempty"` } diff --git a/agent/structs/testing_catalog.go b/agent/structs/testing_catalog.go index 8047695bba56..c90af67d0617 100644 --- a/agent/structs/testing_catalog.go +++ b/agent/structs/testing_catalog.go @@ -168,7 +168,7 @@ func TestNodeServiceExpose(t testing.T) *NodeService { // TestNodeServiceMeshGateway returns a *NodeService representing a valid Mesh Gateway func TestNodeServiceMeshGateway(t testing.T) *NodeService { - return TestNodeServiceMeshGatewayWithAddrs(t, + return TestNodeServiceMeshGatewayWithAddrs( "10.1.2.3", 8443, ServiceAddress{Address: "10.1.2.3", Port: 8443}, @@ -192,7 +192,8 @@ func TestNodeServiceTerminatingGateway(t testing.T, address string) *NodeService } } -func TestNodeServiceMeshGatewayWithAddrs(t testing.T, address string, port int, lanAddr, wanAddr ServiceAddress) *NodeService { +// TestNodeServiceMeshGatewayWithAddrs returns a *NodeService representing a valid Mesh Gateway with custom addresses. +func TestNodeServiceMeshGatewayWithAddrs(address string, port int, lanAddr, wanAddr ServiceAddress) *NodeService { return &NodeService{ Kind: ServiceKindMeshGateway, Service: "mesh-gateway", diff --git a/agent/txn_endpoint.go b/agent/txn_endpoint.go index 04dcd7fa3902..4542bf251545 100644 --- a/agent/txn_endpoint.go +++ b/agent/txn_endpoint.go @@ -94,7 +94,7 @@ func (s *HTTPHandlers) convertOps(resp http.ResponseWriter, req *http.Request) ( return nil, 0, HTTPError{ StatusCode: http.StatusRequestEntityTooLarge, Reason: fmt.Sprintf("Request body(%d bytes) too large, max size: %d bytes. 
See %s.", - req.ContentLength, maxTxnLen, "https://www.consul.io/docs/agent/config/config-files#txn_max_req_len"), + req.ContentLength, maxTxnLen, "https://developer.hashicorp.com/docs/agent/config/config-files#txn_max_req_len"), } } @@ -107,7 +107,7 @@ func (s *HTTPHandlers) convertOps(resp http.ResponseWriter, req *http.Request) ( return nil, 0, HTTPError{ StatusCode: http.StatusRequestEntityTooLarge, Reason: fmt.Sprintf("Request body too large, max size: %d bytes. See %s.", - maxTxnLen, "https://www.consul.io/docs/agent/config/config-files#txn_max_req_len"), + maxTxnLen, "https://developer.hashicorp.com/docs/agent/config/config-files#txn_max_req_len"), } } else { // Note the body is in API format, and not the RPC format. If we can't @@ -231,6 +231,16 @@ func (s *HTTPHandlers) convertOps(resp http.ResponseWriter, req *http.Request) ( }, }, } + if len(in.Service.Service.TaggedAddresses) > 0 { + taggedAddresses := make(map[string]structs.ServiceAddress) + for name, addr := range in.Service.Service.TaggedAddresses { + taggedAddresses[name] = structs.ServiceAddress{ + Address: addr.Address, + Port: addr.Port, + } + } + out.Service.Service.TaggedAddresses = taggedAddresses + } if svc.Proxy != nil { out.Service.Service.Proxy = structs.ConnectProxyConfig{} @@ -300,6 +310,7 @@ func (s *HTTPHandlers) convertOps(resp http.ResponseWriter, req *http.Request) ( Status: check.Status, Notes: check.Notes, Output: check.Output, + Type: check.Type, ServiceID: check.ServiceID, ServiceName: check.ServiceName, ServiceTags: check.ServiceTags, diff --git a/agent/txn_endpoint_test.go b/agent/txn_endpoint_test.go index fe19ed825301..6f3ba65b9603 100644 --- a/agent/txn_endpoint_test.go +++ b/agent/txn_endpoint_test.go @@ -542,6 +542,7 @@ func TestTxnEndpoint_UpdateCheck(t *testing.T) { "Status": "critical", "Notes": "Http based health check", "Output": "", + "Type": "http", "ServiceID": "", "ServiceName": "", "Definition": { @@ -564,6 +565,7 @@ func TestTxnEndpoint_UpdateCheck(t *testing.T) { "Status": "passing", "Notes": "Http based health check", "Output": "success", + "Type": "http", "ServiceID": "", "ServiceName": "", "Definition": { @@ -586,6 +588,7 @@ func TestTxnEndpoint_UpdateCheck(t *testing.T) { "Status": "passing", "Notes": "Http based health check", "Output": "success", + "Type": "http", "ServiceID": "", "ServiceName": "", "ExposedPort": 5678, @@ -624,6 +627,7 @@ func TestTxnEndpoint_UpdateCheck(t *testing.T) { Name: "Node http check", Status: api.HealthCritical, Notes: "Http based health check", + Type: "http", Definition: structs.HealthCheckDefinition{ Interval: 6 * time.Second, Timeout: 6 * time.Second, @@ -646,6 +650,7 @@ func TestTxnEndpoint_UpdateCheck(t *testing.T) { Status: api.HealthPassing, Notes: "Http based health check", Output: "success", + Type: "http", Definition: structs.HealthCheckDefinition{ Interval: 10 * time.Second, Timeout: 10 * time.Second, @@ -668,6 +673,7 @@ func TestTxnEndpoint_UpdateCheck(t *testing.T) { Status: api.HealthPassing, Notes: "Http based health check", Output: "success", + Type: "http", ExposedPort: 5678, Definition: structs.HealthCheckDefinition{ Interval: 15 * time.Second, @@ -712,6 +718,23 @@ func TestTxnEndpoint_NodeService(t *testing.T) { } } }, + { + "Service": { + "Verb": "set", + "Node": "%s", + "Service": { + "Service": "test2", + "Address": "192.168.0.10", + "Port" : 8080, + "TaggedAddresses": { + "lan": { + "Address": "192.168.0.10", + "Port": 8080 + } + } + } + } + }, { "Service": { "Verb": "set", @@ -736,7 +759,7 @@ func 
TestTxnEndpoint_NodeService(t *testing.T) { } } ] -`, a.config.NodeName, a.config.NodeName))) +`, a.config.NodeName, a.config.NodeName, a.config.NodeName))) req, _ := http.NewRequest("PUT", "/v1/txn", buf) resp := httptest.NewRecorder() obj, err := a.srv.Txn(resp, req) @@ -747,7 +770,7 @@ func TestTxnEndpoint_NodeService(t *testing.T) { if !ok { t.Fatalf("bad type: %T", obj) } - require.Equal(t, 2, len(txnResp.Results)) + require.Equal(t, 3, len(txnResp.Results)) index := txnResp.Results[0].Service.ModifyIndex expected := structs.TxnResponse{ @@ -768,6 +791,29 @@ func TestTxnEndpoint_NodeService(t *testing.T) { EnterpriseMeta: *structs.DefaultEnterpriseMetaInDefaultPartition(), }, }, + &structs.TxnResult{ + Service: &structs.NodeService{ + Service: "test2", + ID: "test2", + Address: "192.168.0.10", + Port: 8080, + TaggedAddresses: map[string]structs.ServiceAddress{ + "lan": { + Address: "192.168.0.10", + Port: 8080, + }, + }, + Weights: &structs.Weights{ + Passing: 1, + Warning: 1, + }, + RaftIndex: structs.RaftIndex{ + CreateIndex: index, + ModifyIndex: index, + }, + EnterpriseMeta: *structs.DefaultEnterpriseMetaInDefaultPartition(), + }, + }, &structs.TxnResult{ Service: &structs.NodeService{ Service: "test-sidecar-proxy", diff --git a/agent/watch_handler.go b/agent/watch_handler.go index b926b7167505..45478194cc2d 100644 --- a/agent/watch_handler.go +++ b/agent/watch_handler.go @@ -199,7 +199,7 @@ func makeWatchPlan(logger hclog.Logger, params map[string]interface{}) (*watch.P handler, hasHandler := wp.Exempt["handler"] if hasHandler { logger.Warn("The 'handler' field in watches has been deprecated " + - "and replaced with the 'args' field. See https://www.consul.io/docs/agent/watches.html") + "and replaced with the 'args' field. See https://developer.hashicorp.com/docs/agent/watches") } if _, ok := handler.(string); hasHandler && !ok { return nil, fmt.Errorf("Watch handler must be a string") diff --git a/agent/xds/extensionruntime/runtime_config.go b/agent/xds/extensionruntime/runtime_config.go index 924368ebdf2a..16f02c4ab910 100644 --- a/agent/xds/extensionruntime/runtime_config.go +++ b/agent/xds/extensionruntime/runtime_config.go @@ -128,7 +128,29 @@ func GetRuntimeConfigurations(cfgSnap *proxycfg.ConfigSnapshot) map[api.Compound EnvoyID: envoyID.EnvoyID(), OutgoingProxyKind: api.ServiceKindTerminatingGateway, } + } + case structs.ServiceKindAPIGateway: + kind = api.ServiceKindAPIGateway + // For API Gateway, we need to handle the gateway itself + localSvc := api.CompoundServiceName{ + Name: cfgSnap.Service, + Namespace: cfgSnap.ProxyID.NamespaceOrDefault(), + Partition: cfgSnap.ProxyID.PartitionOrEmpty(), + } + // Handle extensions for the API Gateway itself + extensionConfigurationsMap[localSvc] = []extensioncommon.RuntimeConfig{} + cfgSnapExts := convertEnvoyExtensions(cfgSnap.Proxy.EnvoyExtensions) + for _, ext := range cfgSnapExts { + extCfg := extensioncommon.RuntimeConfig{ + EnvoyExtension: ext, + ServiceName: localSvc, + IsSourcedFromUpstream: false, + Upstreams: upstreamMap, + Kind: kind, + Protocol: proxyConfigProtocol(cfgSnap.Proxy.Config), + } + extensionConfigurationsMap[localSvc] = append(extensionConfigurationsMap[localSvc], extCfg) } } diff --git a/agent/xds/extensionruntime/runtime_config_test.go b/agent/xds/extensionruntime/runtime_config_test.go new file mode 100644 index 000000000000..a15213cb4258 --- /dev/null +++ b/agent/xds/extensionruntime/runtime_config_test.go @@ -0,0 +1,94 @@ +// Copyright (c) HashiCorp, Inc. 
+// SPDX-License-Identifier: BUSL-1.1 + +package extensionruntime + +import ( + "testing" + + "github.com/stretchr/testify/require" + + "github.com/hashicorp/consul/agent/proxycfg" + "github.com/hashicorp/consul/agent/structs" + "github.com/hashicorp/consul/api" + "github.com/hashicorp/consul/envoyextensions/extensioncommon" +) + +func TestGetRuntimeConfigurations_APIGateway(t *testing.T) { + tests := []struct { + name string + cfgSnap *proxycfg.ConfigSnapshot + expectedConfig map[api.CompoundServiceName][]extensioncommon.RuntimeConfig + }{ + { + name: "API Gateway with no extensions", + cfgSnap: &proxycfg.ConfigSnapshot{ + Kind: structs.ServiceKindAPIGateway, + Proxy: structs.ConnectProxyConfig{}, + Service: "api-gateway", + ProxyID: proxycfg.ProxyID{ + ServiceID: structs.ServiceID{ + ID: "api-gateway", + }, + }, + }, + expectedConfig: map[api.CompoundServiceName][]extensioncommon.RuntimeConfig{ + { + Name: "api-gateway", + Namespace: "default", + }: {}, + }, + }, + { + name: "API Gateway with extensions", + cfgSnap: &proxycfg.ConfigSnapshot{ + Kind: structs.ServiceKindAPIGateway, + Proxy: structs.ConnectProxyConfig{ + EnvoyExtensions: []structs.EnvoyExtension{ + { + Name: "builtin/lua", + Arguments: map[string]interface{}{ + "Script": "function envoy_on_response(response_handle) response_handle:headers():add('x-test', 'test') end", + }, + }, + }, + }, + Service: "api-gateway", + ProxyID: proxycfg.ProxyID{ + ServiceID: structs.ServiceID{ + ID: "api-gateway", + }, + }, + }, + expectedConfig: map[api.CompoundServiceName][]extensioncommon.RuntimeConfig{ + { + Name: "api-gateway", + Namespace: "default", + }: { + { + EnvoyExtension: api.EnvoyExtension{ + Name: "builtin/lua", + Arguments: map[string]interface{}{ + "Script": "function envoy_on_response(response_handle) response_handle:headers():add('x-test', 'test') end", + }, + }, + ServiceName: api.CompoundServiceName{ + Name: "api-gateway", + Namespace: "default", + }, + Upstreams: map[api.CompoundServiceName]*extensioncommon.UpstreamData{}, + IsSourcedFromUpstream: false, + Kind: api.ServiceKindAPIGateway, + }, + }, + }, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + config := GetRuntimeConfigurations(tt.cfgSnap) + require.Equal(t, tt.expectedConfig, config) + }) + } +} diff --git a/agent/xds/routes.go b/agent/xds/routes.go index a2ad845eb32c..0c48cd31ace4 100644 --- a/agent/xds/routes.go +++ b/agent/xds/routes.go @@ -8,6 +8,7 @@ import ( "fmt" "net" "sort" + "strconv" "strings" "time" @@ -615,7 +616,7 @@ func generateUpstreamIngressDomains(listenerKey proxycfg.IngressListenerKey, u s continue } - domainWithPort := fmt.Sprintf("%s:%d", h, listenerKey.Port) + domainWithPort := net.JoinHostPort(h, strconv.Itoa(listenerKey.Port)) // Do not add a duplicate domain if a hostname with port is already in the // set diff --git a/api/api.go b/api/api.go index 27af1ea5697a..3ce25cb9ac47 100644 --- a/api/api.go +++ b/api/api.go @@ -142,7 +142,7 @@ type QueryOptions struct { RequireConsistent bool // UseCache requests that the agent cache results locally. See - // https://www.consul.io/api/features/caching.html for more details on the + // https://developer.hashicorp.com/api/features/caching.html for more details on the // semantics. UseCache bool @@ -152,14 +152,14 @@ type QueryOptions struct { // returned. Clients that wish to allow for stale results on error can set // StaleIfError to a longer duration to change this behavior. It is ignored // if the endpoint supports background refresh caching. 
See - // https://www.consul.io/api/features/caching.html for more details. + // https://developer.hashicorp.com/api/features/caching.html for more details. MaxAge time.Duration // StaleIfError specifies how stale the client will accept a cached response // if the servers are unavailable to fetch a fresh one. Only makes sense when // UseCache is true and MaxAge is set to a lower, non-zero value. It is // ignored if the endpoint supports background refresh caching. See - // https://www.consul.io/api/features/caching.html for more details. + // https://developer.hashicorp.com/api/features/caching.html for more details. StaleIfError time.Duration // WaitIndex is used to enable a blocking query. Waits diff --git a/api/api_test.go b/api/api_test.go index 9a3ed7374c90..e1d361f32824 100644 --- a/api/api_test.go +++ b/api/api_test.go @@ -935,11 +935,11 @@ func TestAPI_Headers(t *testing.T) { _, _, err = kv.Get("test-headers", nil) require.NoError(t, err) - require.Equal(t, "application/json", request.Header.Get("Content-Type")) + require.Equal(t, "application/json; charset=utf-8", request.Header.Get("Content-Type")) _, err = kv.Delete("test-headers", nil) require.NoError(t, err) - require.Equal(t, "application/json", request.Header.Get("Content-Type")) + require.Equal(t, "application/json; charset=utf-8", request.Header.Get("Content-Type")) err = c.Snapshot().Restore(nil, strings.NewReader("foo")) require.Error(t, err) diff --git a/api/go.mod b/api/go.mod index 3ade8b5c6c21..3d7747018b88 100644 --- a/api/go.mod +++ b/api/go.mod @@ -1,6 +1,6 @@ module github.com/hashicorp/consul/api -go 1.22.12 +go 1.23.12 replace github.com/hashicorp/consul/sdk => ../sdk @@ -23,7 +23,7 @@ require ( github.com/hashicorp/serf v0.10.1 github.com/mitchellh/mapstructure v1.5.0 github.com/stretchr/testify v1.8.4 - golang.org/x/exp v0.0.0-20250106191152-7588d65b2ba8 + golang.org/x/exp v0.0.0-20250808145144-a408d31f581a ) require ( @@ -46,7 +46,7 @@ require ( github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529 // indirect github.com/stretchr/objx v0.5.0 // indirect - golang.org/x/net v0.34.0 // indirect - golang.org/x/sys v0.29.0 // indirect + golang.org/x/net v0.43.0 // indirect + golang.org/x/sys v0.35.0 // indirect gopkg.in/yaml.v3 v3.0.1 // indirect ) diff --git a/api/go.sum b/api/go.sum index a34505df4171..fad19fca857c 100644 --- a/api/go.sum +++ b/api/go.sum @@ -173,8 +173,8 @@ github.com/tv42/httpunix v0.0.0-20150427012821-b75d8614f926/go.mod h1:9ESjWnEqri golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= golang.org/x/crypto v0.0.0-20190923035154-9ee001bba392/go.mod h1:/lpIB1dKB+9EgE3H3cr1v9wB50oz8l4C4h62xy7jSTY= -golang.org/x/exp v0.0.0-20250106191152-7588d65b2ba8 h1:yqrTHse8TCMW1M1ZCP+VAR/l0kKxwaAIqN/il7x4voA= -golang.org/x/exp v0.0.0-20250106191152-7588d65b2ba8/go.mod h1:tujkw807nyEEAamNbDrEGzRav+ilXA7PCRAd6xsmwiU= +golang.org/x/exp v0.0.0-20250808145144-a408d31f581a h1:Y+7uR/b1Mw2iSXZ3G//1haIiSElDQZ8KWh0h+sZPG90= +golang.org/x/exp v0.0.0-20250808145144-a408d31f581a/go.mod h1:rT6SFzZ7oxADUDx58pcaKFTcZ+inxAa9fTrYx/uVYwg= golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= golang.org/x/net 
v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= @@ -182,15 +182,15 @@ golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLL golang.org/x/net v0.0.0-20190923162816-aa69164e4478/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= golang.org/x/net v0.0.0-20210410081132-afb366fc7cd1/go.mod h1:9tjilg8BloeKEkVJvy7fQ90B1CfIiPueXVOjqfkSzI8= -golang.org/x/net v0.34.0 h1:Mb7Mrk043xzHgnRM88suvJFwzVrRfHEHJEl5/71CKw0= -golang.org/x/net v0.34.0/go.mod h1:di0qlW3YNM5oh6GqDGQr92MyTozJPmybPK4Ev/Gm31k= +golang.org/x/net v0.43.0 h1:lat02VYK2j4aLzMzecihNvTlJNQUq316m2Mr9rnM6YE= +golang.org/x/net v0.43.0/go.mod h1:vhO1fvI4dGsIjh73sWfUVjj3N7CA9WkKJNQm2svM6Jg= golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.10.0 h1:3NQrjDixjgGwUOCaF8w2+VYHv0Ve/vGYSbdkTa98gmQ= -golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk= +golang.org/x/sync v0.16.0 h1:ycBJEhp9p4vXvUZNszeOq0kGTPghopOL8q0fq3vstxw= +golang.org/x/sync v0.16.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA= golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= @@ -212,8 +212,8 @@ golang.org/x/sys v0.0.0-20220503163025-988cb79eb6c6/go.mod h1:oPkhp1MJrh7nUepCBc golang.org/x/sys v0.0.0-20220728004956-3c1f35247d10/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.29.0 h1:TPYlXGxvx1MGTn2GiZDhnjPA9wZzZeGKHHmKhHYvgaU= -golang.org/x/sys v0.29.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= +golang.org/x/sys v0.35.0 h1:vz1N37gP5bs89s7He8XuIYXpyY0+QlsKmzipCbUtyxI= +golang.org/x/sys v0.35.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= diff --git a/api/health.go b/api/health.go index a0230020460d..60a5b3dee8a0 100644 --- a/api/health.go +++ b/api/health.go @@ -75,6 +75,8 @@ type HealthCheckDefinition struct { IntervalDuration time.Duration `json:"-"` TimeoutDuration time.Duration `json:"-"` DeregisterCriticalServiceAfterDuration time.Duration `json:"-"` + // when parent Type is `session`, and if this session is destroyed, the check will be marked as critical + SessionName string `json:",omitempty"` // DEPRECATED in Consul 1.4.1. Use the above time.Duration fields instead. 
Interval ReadableDuration diff --git a/api/lock.go b/api/lock.go index e9529f7bde64..a3885a76d51d 100644 --- a/api/lock.go +++ b/api/lock.go @@ -59,7 +59,7 @@ var ( ) // Lock is used to implement client-side leader election. It is follows the -// algorithm as described here: https://www.consul.io/docs/guides/leader-election.html. +// algorithm as described here: https://developer.hashicorp.com/docs/guides/leader-election.html. type Lock struct { c *Client opts *LockOptions diff --git a/api/operator_autopilot.go b/api/operator_autopilot.go index 7628bf6f2fff..20fc93448012 100644 --- a/api/operator_autopilot.go +++ b/api/operator_autopilot.go @@ -62,7 +62,7 @@ type AutopilotConfiguration struct { } // Defines default values for the AutopilotConfiguration type, consistent with -// https://www.consul.io/api-docs/operator/autopilot#parameters-1 +// https://developer.hashicorp.com/api-docs/operator/autopilot#parameters-1 func NewAutopilotConfiguration() AutopilotConfiguration { cfg := AutopilotConfiguration{ CleanupDeadServers: true, diff --git a/command/connect/envoy/envoy.go b/command/connect/envoy/envoy.go index b974a525e9cd..b7520cbacde4 100644 --- a/command/connect/envoy/envoy.go +++ b/command/connect/envoy/envoy.go @@ -762,7 +762,7 @@ func (c *cmd) generateConfig() ([]byte, error) { } if c.envoyReadyBindAddress != "" && c.envoyReadyBindPort != 0 { - bsCfg.ReadyBindAddr = fmt.Sprintf("%s:%d", c.envoyReadyBindAddress, c.envoyReadyBindPort) + bsCfg.ReadyBindAddr = net.JoinHostPort(c.envoyReadyBindAddress, strconv.Itoa(c.envoyReadyBindPort)) } if !c.disableCentralConfig { diff --git a/command/connect/envoy/flags.go b/command/connect/envoy/flags.go index 4d7994ccd45d..9e5f1629b7e9 100644 --- a/command/connect/envoy/flags.go +++ b/command/connect/envoy/flags.go @@ -24,9 +24,9 @@ type ServiceAddressValue struct { func (s *ServiceAddressValue) String() string { if s == nil || (s.value.Port == 0 && s.value.Address == "") { - return fmt.Sprintf(":%d", defaultGatewayPort) + return net.JoinHostPort("", strconv.Itoa(defaultGatewayPort)) } - return fmt.Sprintf("%v:%d", s.value.Address, s.value.Port) + return net.JoinHostPort(s.value.Address, strconv.Itoa(s.value.Port)) } func (s *ServiceAddressValue) Value() api.ServiceAddress { @@ -85,7 +85,8 @@ type ServiceAddressMapValue struct { func (s *ServiceAddressMapValue) String() string { buf := new(strings.Builder) for k, v := range s.value { - buf.WriteString(fmt.Sprintf("%v=%v:%d,", k, v.Address, v.Port)) + addr := net.JoinHostPort(v.Address, strconv.Itoa(v.Port)) + buf.WriteString(fmt.Sprintf("%v=%v,", k, addr)) } return buf.String() } diff --git a/command/connect/proxy/proxy.go b/command/connect/proxy/proxy.go index 9b352dab5d0c..840a27fc6a7c 100644 --- a/command/connect/proxy/proxy.go +++ b/command/connect/proxy/proxy.go @@ -323,9 +323,7 @@ func (c *cmd) configWatcher(client *api.Client) (proxyImpl.ConfigWatcher, error) addr := config.LocalBindSocketPath if addr == "" { - addr = fmt.Sprintf( - "%s:%d", - config.LocalBindAddress, config.LocalBindPort) + addr = net.JoinHostPort(config.LocalBindAddress, strconv.Itoa(config.LocalBindPort)) } c.UI.Info(fmt.Sprintf( diff --git a/command/debug/debug.go b/command/debug/debug.go index 9fe18280b820..30384017f773 100644 --- a/command/debug/debug.go +++ b/command/debug/debug.go @@ -28,6 +28,7 @@ import ( "github.com/hashicorp/consul/command/flags" "github.com/hashicorp/hcdiag/command" + "github.com/ryanuber/columnize" ) const ( @@ -240,6 +241,12 @@ func (c *cmd) Run(args []string) int { return 1 } + // Capture Raft 
configuration + err = c.captureRaft() + if err != nil { + c.UI.Warn(fmt.Sprintf("Raft list peers capture failed: %v", err)) + } + // Archive the data if configured to if c.archive { err = c.createArchive() @@ -290,18 +297,32 @@ func (c *cmd) prepare() (version string, err error) { copy(c.capture, defaultTargets) } - // If EnableDebug is not true, skip collecting pprof + // If EnableDebug is not true, check if ACLs are enabled and we have operator:read permission enableDebug, ok := self["DebugConfig"]["EnableDebug"].(bool) if !ok { return "", fmt.Errorf("agent response did not contain EnableDebug key") } - if !enableDebug { + + // Check if ACLs are enabled - if so, try to validate we have operator:read permission + aclEnabled, ok := self["DebugConfig"]["ACLsEnabled"].(bool) + if !ok { + return "", fmt.Errorf("agent response did not contain EnableDebug key") + } + + allowPprof := enableDebug || aclEnabled + + if !allowPprof { + // Remove pprof from capture cs := c.capture for i := 0; i < len(cs); i++ { if cs[i] == "pprof" { c.capture = append(cs[:i], cs[i+1:]...) i-- - c.UI.Warn("[WARN] Unable to capture pprof. Set enable_debug to true on target agent to enable profiling.") + if aclEnabled { + c.UI.Warn("[WARN] Unable to capture pprof. Either set enable_debug to true on target agent or provide an ACL token with operator:read permissions.") + } else { + c.UI.Warn("[WARN] Unable to capture pprof. Set enable_debug to true on target agent to enable profiling.") + } } } } @@ -564,6 +585,72 @@ func (c *cmd) captureMetrics(ctx context.Context) error { return nil } +func (c *cmd) captureRaft() error { + reply, err := c.client.Operator().RaftGetConfiguration(nil) + if err != nil { + return fmt.Errorf("Failed to retrieve raft configuration: %v", err) + } + + leaderLastCommitIndex := uint64(0) + serverIdLastIndexMap := make(map[string]uint64) + + for _, raftServer := range reply.Servers { + serverIdLastIndexMap[raftServer.ID] = raftServer.LastIndex + } + + for _, s := range reply.Servers { + if s.Leader { + lastIndex, ok := serverIdLastIndexMap[s.ID] + if ok { + leaderLastCommitIndex = lastIndex + } + } + } + + // Format it as a nice table. 
+ result := []string{"Node\x1fID\x1fAddress\x1fState\x1fVoter\x1fRaftProtocol\x1fCommit Index\x1fTrails Leader By"} + for _, s := range reply.Servers { + raftProtocol := s.ProtocolVersion + + if raftProtocol == "" { + raftProtocol = "<=1" + } + state := "follower" + if s.Leader { + state = "leader" + } + + trailsLeaderByText := "-" + serverLastIndex, ok := serverIdLastIndexMap[s.ID] + if ok { + trailsLeaderBy := leaderLastCommitIndex - serverLastIndex + trailsLeaderByText = fmt.Sprintf("%d commits", trailsLeaderBy) + if s.Leader { + trailsLeaderByText = "-" + } else if trailsLeaderBy == 1 { + trailsLeaderByText = fmt.Sprintf("%d commit", trailsLeaderBy) + } + } + result = append(result, fmt.Sprintf("%s\x1f%s\x1f%s\x1f%s\x1f%v\x1f%s\x1f%v\x1f%s", + s.Node, s.ID, s.Address, state, s.Voter, raftProtocol, serverLastIndex, trailsLeaderByText)) + } + + listpeers, err := columnize.Format(result, &columnize.Config{Delim: string([]byte{0x1f})}), nil + + fh, err := os.Create(filepath.Join(c.output, "listpeers.json")) + if err != nil { + return fmt.Errorf("failed to create listpeers file: %w", err) + } + defer fh.Close() + + b := strings.NewReader(listpeers) + _, err = b.WriteTo(fh) + if err != nil && !errors.Is(err, context.DeadlineExceeded) { + return fmt.Errorf("failed to copy listpeers to file: %w", err) + } + return nil +} + // allowedTarget returns true if the target is a recognized name of a capture // target. func allowedTarget(target string) bool { diff --git a/command/debug/debug_test.go b/command/debug/debug_test.go index c05d115873f0..c9833828571d 100644 --- a/command/debug/debug_test.go +++ b/command/debug/debug_test.go @@ -79,6 +79,7 @@ func TestDebugCommand(t *testing.T) { fs.WithFile("host.json", "", fs.MatchFileContent(validJSON)), fs.WithFile("members.json", "", fs.MatchFileContent(validJSON)), fs.WithFile("metrics.json", "", fs.MatchAnyFileContent), + fs.WithFile("listpeers.json", "", fs.MatchAnyFileContent), fs.WithFile("consul.log", "", fs.MatchFileContent(validLogFile)), fs.WithFile("profile.prof", "", fs.MatchFileContent(validProfileData)), fs.WithFile("trace.out", "", fs.MatchAnyFileContent), @@ -224,7 +225,7 @@ func TestDebugCommand_Archive(t *testing.T) { } // should only contain this one capture target - if h.Name != "debug/agent.json" && h.Name != "debug/index.json" { + if h.Name != "debug/agent.json" && h.Name != "debug/index.json" && h.Name != "debug/listpeers.json" { t.Fatalf("archive contents do not match: %s", h.Name) } } @@ -551,3 +552,128 @@ func TestDebugCommand_DebugDisabled(t *testing.T) { errOutput := ui.ErrorWriter.String() require.Contains(t, errOutput, "Unable to capture pprof") } + +// TestDebugCommand_PprofScenarios tests the four specific pprof capture scenarios +// based on enable_debug and ACL settings +func TestDebugCommand_PprofScenarios(t *testing.T) { + if testing.Short() { + t.Skip("too slow for testing.Short") + } + + testCases := []struct { + name string + enableDebug bool + aclEnabled bool + expectPprofFiles bool + expectWarning bool + warningContains string + }{ + { + name: "enable_debug=true, acl=disabled - pprof captured", + enableDebug: true, + aclEnabled: false, + expectPprofFiles: true, + expectWarning: false, + }, + { + name: "enable_debug=true, acl=enabled - pprof captured", + enableDebug: true, + aclEnabled: true, + expectPprofFiles: true, + expectWarning: false, + }, + { + name: "enable_debug=false, acl=enabled - pprof captured", + enableDebug: false, + aclEnabled: true, + expectPprofFiles: true, + expectWarning: false, + }, + { + name: 
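The new `captureRaft` helper renders the peer table with `github.com/ryanuber/columnize`, joining columns on the 0x1f unit separator so values containing spaces (such as "2 commits") stay in one column. A self-contained sketch of the same technique, with sample server data invented for illustration:

```go
package main

import (
	"fmt"

	"github.com/ryanuber/columnize"
)

func main() {
	// Header row plus one row per Raft server, columns joined by 0x1f.
	rows := []string{
		"Node\x1fAddress\x1fState\x1fCommit Index\x1fTrails Leader By",
		"server-1\x1f10.0.0.1:8300\x1fleader\x1f120\x1f-",
		"server-2\x1f10.0.0.2:8300\x1ffollower\x1f118\x1f2 commits",
	}

	table := columnize.Format(rows, &columnize.Config{Delim: string([]byte{0x1f})})
	fmt.Println(table)
}
```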
"enable_debug=false, acl=disabled - pprof NOT captured", + enableDebug: false, + aclEnabled: false, + expectPprofFiles: false, + expectWarning: true, + warningContains: "Unable to capture pprof", + }, + } + + for _, tc := range testCases { + t.Run(tc.name, func(t *testing.T) { + // Note: Not using t.Parallel() because concurrent pprof generation + // can cause conflicts and test failures + + testDir := testutil.TempDir(t, "debug") + + // Configure agent based on test case + config := fmt.Sprintf(` + enable_debug = %t + acl = { + enabled = %t + default_policy = "allow" + } + `, tc.enableDebug, tc.aclEnabled) + + a := agent.NewTestAgent(t, config) + defer a.Shutdown() + testrpc.WaitForLeader(t, a.RPC, "dc1") + + ui := cli.NewMockUi() + cmd := New(ui) + cmd.validateTiming = false + + outputPath := fmt.Sprintf("%s/debug", testDir) + args := []string{ + "-http-addr=" + a.HTTPAddr(), + "-output=" + outputPath, + "-archive=false", + "-duration=1s", + "-interval=1s", + } + + code := cmd.Run(args) + require.Equal(t, 0, code, "debug command should succeed") + + // Check if pprof files were created + // Check files in root directory (profile.prof, trace.out) + rootProfiles := []string{"profile.prof", "trace.out"} + for _, profile := range rootProfiles { + profilePath := filepath.Join(outputPath, profile) + _, err := os.Stat(profilePath) + + if tc.expectPprofFiles { + require.NoError(t, err, + "Expected pprof file %s to be created for scenario: %s", profile, tc.name) + } else { + require.True(t, os.IsNotExist(err), + "Expected pprof file %s NOT to be created for scenario: %s", profile, tc.name) + } + } + + // Check files in timestamped subdirectories (heap.prof, goroutine.prof) + subdirProfiles := []string{"heap.prof", "goroutine.prof"} + for _, profile := range subdirProfiles { + profileFiles, _ := filepath.Glob(filepath.Join(outputPath, "*", profile)) + + if tc.expectPprofFiles { + require.True(t, len(profileFiles) > 0, + "Expected pprof file %s to be created in subdirectories for scenario: %s", profile, tc.name) + } else { + require.True(t, len(profileFiles) == 0, + "Expected pprof file %s NOT to be created in subdirectories for scenario: %s", profile, tc.name) + } + } + + // Check warning messages + errOutput := ui.ErrorWriter.String() + if tc.expectWarning { + require.Contains(t, errOutput, tc.warningContains, + "Expected warning message for scenario: %s", tc.name) + } else { + require.NotContains(t, errOutput, "Unable to capture pprof", + "Did not expect pprof warning for scenario: %s", tc.name) + } + }) + } +} diff --git a/command/resource/client/client.go b/command/resource/client/client.go index 7113e4e58733..6edb22c422e4 100644 --- a/command/resource/client/client.go +++ b/command/resource/client/client.go @@ -119,7 +119,7 @@ type QueryOptions struct { RequireConsistent bool // UseCache requests that the agent cache results locally. See - // https://www.consul.io/api/features/caching.html for more details on the + // https://developer.hashicorp.com/api/features/caching.html for more details on the // semantics. UseCache bool @@ -129,14 +129,14 @@ type QueryOptions struct { // returned. Clients that wish to allow for stale results on error can set // StaleIfError to a longer duration to change this behavior. It is ignored // if the endpoint supports background refresh caching. See - // https://www.consul.io/api/features/caching.html for more details. + // https://developer.hashicorp.com/api/features/caching.html for more details. 
MaxAge time.Duration // StaleIfError specifies how stale the client will accept a cached response // if the servers are unavailable to fetch a fresh one. Only makes sense when // UseCache is true and MaxAge is set to a lower, non-zero value. It is // ignored if the endpoint supports background refresh caching. See - // https://www.consul.io/api/features/caching.html for more details. + // https://developer.hashicorp.com/api/features/caching.html for more details. StaleIfError time.Duration // WaitIndex is used to enable a blocking query. Waits diff --git a/command/services/register/register.go b/command/services/register/register.go index 33599dda0132..ac330d9725e9 100644 --- a/command/services/register/register.go +++ b/command/services/register/register.go @@ -6,6 +6,8 @@ package register import ( "flag" "fmt" + "net" + "strings" "github.com/hashicorp/consul/api" "github.com/hashicorp/consul/command/flags" @@ -73,13 +75,26 @@ func (c *cmd) Run(args []string) int { return 1 } + // Validate service address if provided + if c.flagAddress != "" { + if err := validateServiceAddressWithPortCheck(c.flagAddress, false); err != nil { + c.UI.Error(fmt.Sprintf("Invalid Service address when using CLI flags. Use -port flag instead: %v", err)) + return 1 + } + } + var taggedAddrs map[string]api.ServiceAddress if len(c.flagTaggedAddresses) > 0 { taggedAddrs = make(map[string]api.ServiceAddress) for k, v := range c.flagTaggedAddresses { addr, err := api.ParseServiceAddr(v) if err != nil { - c.UI.Error(fmt.Sprintf("Invalid Tagged Address: %v", err)) + c.UI.Error(fmt.Sprintf("Invalid Tagged address: %v", err)) + return 1 + } + // Validate the address part of the tagged address + if err := validateServiceAddressWithPortCheck(addr.Address, true); err != nil { + c.UI.Error(fmt.Sprintf("Invalid Tagged address for tagged address '%s': %v", k, err)) return 1 } taggedAddrs[k] = addr @@ -115,6 +130,22 @@ func (c *cmd) Run(args []string) int { c.UI.Error(fmt.Sprintf("Error: %s", err)) return 1 } + // Validate addresses in services loaded from files + for _, svc := range svcs { + if svc.Address != "" { + if err := validateServiceAddressWithPortCheck(svc.Address, false); err != nil { + c.UI.Error(fmt.Sprintf("Invalid Service address for service '%s'. Use port field instead: %v", svc.Name, err)) + return 1 + } + } + // Validate tagged addresses + for tag, addr := range svc.TaggedAddresses { + if err := validateServiceAddressWithPortCheck(addr.Address, true); err != nil { + c.UI.Error(fmt.Sprintf("Invalid Tagged address for tagged address '%s' in service '%s': %v", tag, svc.Name, err)) + return 1 + } + } + } } // Create and test the HTTP client @@ -162,3 +193,99 @@ Usage: consul services register [options] [FILE...] Additional flags and more advanced use cases are detailed below. 
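The `UseCache`, `MaxAge`, and `StaleIfError` knobs documented above also exist on the public `github.com/hashicorp/consul/api` client's `QueryOptions`. A sketch of a cached, stale-tolerant read, assuming a local agent and a service named "web" (both illustrative):

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Ask the agent to serve from its cache, accept results up to 30s old,
	// and tolerate results up to 5m old if the servers are unreachable.
	opts := &api.QueryOptions{
		UseCache:     true,
		MaxAge:       30 * time.Second,
		StaleIfError: 5 * time.Minute,
	}

	entries, meta, err := client.Health().Service("web", "", true, opts)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("cache hit: %v, age: %s, instances: %d\n", meta.CacheHit, meta.CacheAge, len(entries))
}
```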
` ) + +// This function validates that a service address is properly formatted +// and catches common malformed IP patterns +func validateServiceAddress(addr string) error { + if addr == "" { + return nil // Empty addresses are allowed + } + + // Parse the address to separate host and port if present + host, _, err := net.SplitHostPort(addr) + if err != nil { + // If SplitHostPort fails, treat the whole string as host + host = addr + } + + // Check if it's a valid IP address + if ip := net.ParseIP(host); ip != nil { + // Valid IP - allow all valid IPs including ANY addresses + return nil + } + + // If not an IP, it might be a hostname or malformed IP + // Check for common malformed IP patterns + if looksLikeIP(host) { + return fmt.Errorf("malformed IP address: %s", host) + } + + // If not an IP, assume it's a hostname - validate it's not empty + if strings.TrimSpace(host) == "" { + return fmt.Errorf("address cannot be empty") + } + + return nil +} + +// This function validates a service address and optionally checks for port presence +func validateServiceAddressWithPortCheck(addr string, allowPort bool) error { + + // Validate the basic address format + if err := validateServiceAddress(addr); err != nil { + return err + } + + // Check for port presence if not allowed + if !allowPort { + if _, port, err := net.SplitHostPort(addr); err == nil && port != "" { + return fmt.Errorf("address should not contain port") + } + } + + return nil +} + +// This function returns true if the string appears to be an IP address +// but fails to parse correctly (indicating it's malformed) +func looksLikeIP(addr string) bool { + // Check for obviously malformed IP patterns + if strings.Contains(addr, "..") || strings.Contains(addr, ":::") { + return true + } + + // Check for multiple :: sequences (IPv6 can have at most one ::) + if strings.Count(addr, "::") > 1 { + return true + } + + // Check for too many colons (IPv6 can have at most 7) + if strings.Count(addr, ":") > 7 { + return true + } + + // Check for IPv4-like patterns with too many dots + if strings.Count(addr, ".") > 3 { + // Check if most segments are numeric, which may indicate a malformed IP + parts := strings.Split(addr, ".") + numericParts := 0 + for _, part := range parts { + if part != "" { + isNumeric := true + for _, r := range part { + if r < '0' || r > '9' { + isNumeric = false + break + } + } + if isNumeric { + numericParts++ + } + } + } + // If most parts are numeric, it's likely a malformed IP + return numericParts > 2 + } + + return false +} diff --git a/command/services/register/register_test.go b/command/services/register/register_test.go index 8b4d94328a12..aacb67790dac 100644 --- a/command/services/register/register_test.go +++ b/command/services/register/register_test.go @@ -228,3 +228,129 @@ func testFile(t *testing.T, suffix string) *os.File { return f } + +func TestValidateServiceAddress(t *testing.T) { + tests := []struct { + name string + addr string + wantErr bool + }{ + { + name: "empty address", + addr: "", + wantErr: false, + }, + { + name: "valid IPv4", + addr: "192.168.1.1", + wantErr: false, + }, + { + name: "valid IPv4 with port", + addr: "192.168.1.1:8080", + wantErr: false, + }, + { + name: "valid IPv6", + addr: "::1", + wantErr: false, + }, + { + name: "valid IPv6 with brackets and port", + addr: "[::1]:8080", + wantErr: false, + }, + { + name: "IPv4 ANY address", + addr: "0.0.0.0", + wantErr: false, // Allow ANY addresses + }, + { + name: "IPv6 ANY address", + addr: "::", + wantErr: false, // Allow ANY addresses + 
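The validation helpers above lean on two standard-library behaviours: `net.SplitHostPort` only succeeds when a port is present, and `net.ParseIP` accepts ANY addresses such as `0.0.0.0` and `::` while rejecting malformed literals. A standalone sketch (inputs chosen purely for illustration) of how those calls behave:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	inputs := []string{
		"192.168.1.1",          // bare IPv4: SplitHostPort fails, ParseIP succeeds
		"192.168.1.1:8080",     // IPv4 with port: SplitHostPort strips the port
		"[::1]:8080",           // bracketed IPv6 with port
		"::",                   // IPv6 ANY address, valid
		"192.168..1",           // malformed, ParseIP rejects it
		"db.internal.example",  // hostname: ParseIP rejects it, but it is not IP-like
	}

	for _, in := range inputs {
		host, port, err := net.SplitHostPort(in)
		if err != nil {
			// No port present; treat the whole string as the host, as the
			// validation helpers above do.
			host, port = in, ""
		}
		fmt.Printf("%-22s host=%-20s port=%-5s ip=%v\n", in, host, port, net.ParseIP(host) != nil)
	}
}
```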
}, + { + name: "IPv6 ANY address with brackets", + addr: "[::]", + wantErr: false, // Allow ANY addresses + }, + { + name: "malformed IPv6 - too many colons", + addr: ":::8500", + wantErr: true, + }, + { + name: "malformed IPv4 - too many octets", + addr: "192.168.1.1.1", + wantErr: true, // This should be caught as malformed IP + }, + { + name: "malformed IP with consecutive dots", + addr: "192.168..1", + wantErr: true, + }, + { + name: "valid full IPv6 address", + addr: "2001:0db8:85a3:0000:0000:8a2e:0370:7334", + wantErr: false, + }, + { + name: "invalid IPv6 address with port", + addr: "2001:0db8:85a3:0000:0000:8a2e:0370:7334:8080", + wantErr: true, + }, + { + name: "valid IPv6 address with port", + addr: "[2001:0db8:85a3:0000:0000:8a2e:0370:7334]:8080", + wantErr: false, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + err := validateServiceAddress(tt.addr) + if (err != nil) != tt.wantErr { + t.Errorf("validateServiceAddress(%q) error = %v, wantErr %v", tt.addr, err, tt.wantErr) + } + }) + } +} + +func TestLooksLikeIP(t *testing.T) { + tests := []struct { + name string + addr string + want bool + }{ + { + name: "malformed with consecutive colons", + addr: ":::8500", + want: true, + }, + { + name: "malformed with consecutive dots", + addr: "192.168..1", + want: true, + }, + { + name: "valid IPv4", + addr: "192.168.1.1", + want: false, + }, + { + name: "too many colons", + addr: ":::::::::", + want: true, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + got := looksLikeIP(tt.addr) + if got != tt.want { + t.Errorf("looksLikeIP(%q) = %v, want %v", tt.addr, got, tt.want) + } + }) + } +} diff --git a/command/tls/ca/create/tls_ca_create.go b/command/tls/ca/create/tls_ca_create.go index 43a9c249f154..5dc1bc33970c 100644 --- a/command/tls/ca/create/tls_ca_create.go +++ b/command/tls/ca/create/tls_ca_create.go @@ -21,6 +21,15 @@ func New(ui cli.Ui) *cmd { return c } +const ( + // DirectoryPerms represents read+write+execute for owner, read+execute for group and others (0755) + DirectoryPerms = 0755 + // PublicFilePerms represents read+write for owner, read-only for group and others (0644) + PublicFilePerms = 0644 + // PrivateFilePerms represents read+write for owner only (0600) + PrivateFilePerms = 0600 +) + type cmd struct { UI cli.Ui flags *flag.FlagSet @@ -82,13 +91,15 @@ func (c *cmd) Run(args []string) int { return 1 } - if err := file.WriteAtomicWithPerms(certFileName, []byte(ca), 0755, 0666); err != nil { + // public CA cert file + if err := file.WriteAtomicWithPerms(certFileName, []byte(ca), DirectoryPerms, PublicFilePerms); err != nil { c.UI.Error(err.Error()) return 1 } c.UI.Output("==> Saved " + certFileName) - if err := file.WriteAtomicWithPerms(pkFileName, []byte(pk), 0755, 0600); err != nil { + // CA private key + if err := file.WriteAtomicWithPerms(pkFileName, []byte(pk), DirectoryPerms, PrivateFilePerms); err != nil { c.UI.Error(err.Error()) return 1 } diff --git a/command/tls/cert/create/tls_cert_create.go b/command/tls/cert/create/tls_cert_create.go index 9e0f92173bd1..cfaff836cd0c 100644 --- a/command/tls/cert/create/tls_cert_create.go +++ b/command/tls/cert/create/tls_cert_create.go @@ -24,6 +24,15 @@ func New(ui cli.Ui) *cmd { return c } +const ( + // DirectoryPerms represents read+write+execute for owner, read+execute for group and others (0755) + DirectoryPerms = 0755 + // PublicFilePerms represents read+write for owner, read-only for group and others (0644) + PublicFilePerms = 0644 + // PrivateFilePerms 
represents read+write for owner only (0600) + PrivateFilePerms = 0600 +) + type cmd struct { UI cli.Ui flags *flag.FlagSet @@ -193,13 +202,15 @@ func (c *cmd) Run(args []string) int { return 1 } - if err := file.WriteAtomicWithPerms(certFileName, []byte(pub), 0755, 0666); err != nil { + // public cert + if err := file.WriteAtomicWithPerms(certFileName, []byte(pub), DirectoryPerms, PublicFilePerms); err != nil { c.UI.Error(err.Error()) return 1 } c.UI.Output("==> Saved " + certFileName) - if err := file.WriteAtomicWithPerms(pkFileName, []byte(priv), 0755, 0600); err != nil { + // private key + if err := file.WriteAtomicWithPerms(pkFileName, []byte(priv), DirectoryPerms, PrivateFilePerms); err != nil { c.UI.Error(err.Error()) return 1 } diff --git a/connect/proxy/config.go b/connect/proxy/config.go index 19476d48d49f..5cac5edefa68 100644 --- a/connect/proxy/config.go +++ b/connect/proxy/config.go @@ -5,6 +5,8 @@ package proxy import ( "fmt" + "net" + "strconv" "time" "github.com/mitchellh/mapstructure" @@ -122,9 +124,7 @@ func (uc *UpstreamConfig) applyDefaults() { func (uc *UpstreamConfig) String() string { addr := uc.LocalBindSocketPath if addr == "" { - addr = fmt.Sprintf( - "%s:%d", - uc.LocalBindAddress, uc.LocalBindPort) + addr = net.JoinHostPort(uc.LocalBindAddress, strconv.Itoa(uc.LocalBindPort)) } return fmt.Sprintf("%s->%s:%s/%s/%s", addr, uc.DestinationType, uc.DestinationPartition, uc.DestinationNamespace, uc.DestinationName) diff --git a/docs/README.md b/docs/README.md index 156dc946a495..623d296a58cd 100644 --- a/docs/README.md +++ b/docs/README.md @@ -12,7 +12,7 @@ either a significant architectural layer, or major functional area of Consul. These documents assume a basic understanding of Consul's feature set, which can be found in the public [user documentation]. -[user documentation]: https://www.consul.io/docs +[user documentation]: https://developer.hashicorp.com/docs ![Overview](./overview.svg) @@ -49,7 +49,7 @@ Most top level directories contain Go source code. The directories listed below contain other important source related to Consul. * [ui] contains the source code for the Consul UI. -* [website] contains the source for [consul.io](https://www.consul.io/). A pull requests +* [website] contains the source for [consul.io](https://developer.hashicorp.com/). A pull requests can update the source code and Consul's documentation at the same time. * [.github] contains the source for our CI and GitHub repository automation. diff --git a/docs/cli/README.md b/docs/cli/README.md index aaa1a8e55219..ad99a01aa0d0 100644 --- a/docs/cli/README.md +++ b/docs/cli/README.md @@ -10,7 +10,7 @@ The [cli reference] in Consul user documentation has a full reference to all ava commands. [HTTP API]: ../http-api -[cli reference]: https://www.consul.io/commands +[cli reference]: https://developer.hashicorp.com/commands ## Code diff --git a/docs/client-agent/README.md b/docs/client-agent/README.md index 4d91044d2c3f..2376aabd0236 100644 --- a/docs/client-agent/README.md +++ b/docs/client-agent/README.md @@ -2,6 +2,6 @@ - agent/cache - [agent/local](https://github.com/hashicorp/consul/tree/main/agent/local) -- anti-entropy sync in [agent/ae](https://github.com/hashicorp/consul/tree/main/agent/ae) powering the [Anti-Entropy Sync Back](https://www.consul.io/docs/internals/anti-entropy.html) process to the Consul servers. 
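The permission tightening above replaces the literal 0666 with named constants so generated certificates are world-readable but not world-writable (0644), while private keys stay owner-only (0600). A minimal sketch of the same scheme; `file.WriteAtomicWithPerms` is Consul's internal helper, so `os.WriteFile` stands in for it here and the output paths are illustrative:

```go
package main

import (
	"log"
	"os"
)

const (
	dirPerms         = 0755 // rwxr-xr-x for the containing directory
	publicFilePerms  = 0644 // rw-r--r-- for certificates that must be shared
	privateFilePerms = 0600 // rw------- for private keys
)

func main() {
	if err := os.MkdirAll("tls-out", dirPerms); err != nil {
		log.Fatal(err)
	}
	// Public material: readable by other users on the host, never writable by them.
	if err := os.WriteFile("tls-out/consul-agent-ca.pem", []byte("...PEM..."), publicFilePerms); err != nil {
		log.Fatal(err)
	}
	// Private key: owner-only.
	if err := os.WriteFile("tls-out/consul-agent-ca-key.pem", []byte("...PEM..."), privateFilePerms); err != nil {
		log.Fatal(err)
	}
}
```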
+- anti-entropy sync in [agent/ae](https://github.com/hashicorp/consul/tree/main/agent/ae) powering the [Anti-Entropy Sync Back](https://developer.hashicorp.com/docs/internals/anti-entropy.html) process to the Consul servers. -Applications on client nodes use their local agent in client mode to [register services](https://www.consul.io/api/agent.html) and to discover other services or interact with the key/value store. +Applications on client nodes use their local agent in client mode to [register services](https://developer.hashicorp.com/api/agent.html) and to discover other services or interact with the key/value store. diff --git a/docs/cluster-federation/network-areas/README.md b/docs/cluster-federation/network-areas/README.md index 08a2014d53c0..87c0871d09fb 100644 --- a/docs/cluster-federation/network-areas/README.md +++ b/docs/cluster-federation/network-areas/README.md @@ -5,7 +5,7 @@ ### Description and Background Network areas define pairwise gossip pools over the WAN between Consul datacenters. Gossip traffic between Consul servers in an area is done over the server RPC port, and uses TCP connections. This is unlike Consul's primary WAN and LAN pools, which primarily rely on UDP-based probes. -TCP was used because it allows configuration with TLS certificates. The TLS encryption will be used if [configured](https://www.consul.io/docs/security/encryption#rpc-encryption-with-tls) and [enabled](https://www.consul.io/api-docs/operator/area#usetls). Because gossip connects every node in that pool, there is also a connection between servers from the same datacenter. Connections between servers from the same datacenter default to using TLS if it is configured. The overhead of the TCP protocol is limited since there is a small number of servers in an area. Note that the symmetric keys used by Consul's WAN and LAN pools are not used to encrypt traffic from network areas. +TCP was used because it allows configuration with TLS certificates. The TLS encryption will be used if [configured](https://developer.hashicorp.com/docs/security/encryption#rpc-encryption-with-tls) and [enabled](https://developer.hashicorp.com/api-docs/operator/area#usetls). Because gossip connects every node in that pool, there is also a connection between servers from the same datacenter. Connections between servers from the same datacenter default to using TLS if it is configured. The overhead of the TCP protocol is limited since there is a small number of servers in an area. Note that the symmetric keys used by Consul's WAN and LAN pools are not used to encrypt traffic from network areas. In versions of Consul prior to v1.8, network areas would establish a new TCP connection for every network area message. This was then substituted by connection pooling, where each server will maintain a TCP connection to every server in the network area. However, note that when a server in a version > `v1.8.0` dials a server on an older version, it will fall-back to the old connection-per-message behavior. @@ -35,7 +35,7 @@ Every Consul Enterprise server maintains a reconciliation routine where every 30 Joining a network area pool involves: 1. Setting memberlist and Serf configuration. - * Prior to Consul `v1.8.11` and `v1.9.5`, network areas were configured with memberlist's [DefaultWANConfig](https://github.com/hashicorp/memberlist/blob/838073fef1a4e1f6cb702a57a8075304098b1c31/config.go#L315). 
This was then updated to instead use the server's [gossip_wan](https://www.consul.io/docs/agent/config/config-files#gossip_wan) configuration, which falls back to the DefaultWANConfig if it was not specified. + * Prior to Consul `v1.8.11` and `v1.9.5`, network areas were configured with memberlist's [DefaultWANConfig](https://github.com/hashicorp/memberlist/blob/838073fef1a4e1f6cb702a57a8075304098b1c31/config.go#L315). This was then updated to instead use the server's [gossip_wan](https://developer.hashicorp.com/docs/agent/config/config-files#gossip_wan) configuration, which falls back to the DefaultWANConfig if it was not specified. * As of Consul `v1.8.11`/`v1.9.5` it is not possible to tune gossip communication on a per-area basis. 2. Update the server's gossip network, which keeps track of network areas that the server is a part of. This gossip network is also used to dispatch incoming **gossip** connections to handlers for the appropriate area. diff --git a/docs/cluster-membership/README.md b/docs/cluster-membership/README.md index ce378c81dc17..3edb5846eb94 100644 --- a/docs/cluster-membership/README.md +++ b/docs/cluster-membership/README.md @@ -9,4 +9,4 @@ This section is a work in progress. It will contain topics like the following: - consul exec -Both client and server mode agents participate in a [Gossip Protocol](https://www.consul.io/docs/internals/gossip.html) which provides two important mechanisms. First, it allows for agents to learn about all the other agents in the cluster, just by joining initially with a single existing member of the cluster. This allows clients to discover new Consul servers. Second, the gossip protocol provides a distributed failure detector, whereby the agents in the cluster randomly probe each other at regular intervals. Because of this failure detector, Consul can run health checks locally on each agent and just sent edge-triggered updates when the state of a health check changes, confident that if the agent dies altogether then the cluster will detect that. This makes Consul's health checking design very scaleable compared to centralized systems with a central polling type of design. +Both client and server mode agents participate in a [Gossip Protocol](https://developer.hashicorp.com/docs/internals/gossip.html) which provides two important mechanisms. First, it allows for agents to learn about all the other agents in the cluster, just by joining initially with a single existing member of the cluster. This allows clients to discover new Consul servers. Second, the gossip protocol provides a distributed failure detector, whereby the agents in the cluster randomly probe each other at regular intervals. Because of this failure detector, Consul can run health checks locally on each agent and just sent edge-triggered updates when the state of a health check changes, confident that if the agent dies altogether then the cluster will detect that. This makes Consul's health checking design very scaleable compared to centralized systems with a central polling type of design. diff --git a/docs/config/README.md b/docs/config/README.md index 98cd35ee8376..1ec0474c5cf2 100644 --- a/docs/config/README.md +++ b/docs/config/README.md @@ -10,12 +10,12 @@ specified using command line flags, and some can be loaded with [Auto-Config]. See also the [checklist for adding a new field] to the configuration. 
[hcl]: https://github.com/hashicorp/hcl/tree/hcl1 -[Agent Configuration]: https://www.consul.io/docs/agent/config +[Agent Configuration]: https://developer.hashicorp.com/docs/agent/config [checklist for adding a new field]: ./checklist-adding-config-fields.md [Auto-Config]: #auto-config -[Config Entries]: https://www.consul.io/docs/agent/config/config-files#config_entries -[Services]: https://www.consul.io/docs/discovery/services -[Checks]: https://www.consul.io/docs/discovery/checks +[Config Entries]: https://developer.hashicorp.com/docs/agent/config/config-files#config_entries +[Services]: https://developer.hashicorp.com/docs/discovery/services +[Checks]: https://developer.hashicorp.com/docs/discovery/checks ## Code @@ -53,6 +53,6 @@ implemented in a couple packages. * the server RPC endpoint is in [agent/consul/auto_config_endpoint.go] * the client that receives and applies the config is implemented in [agent/auto-config] -[auto_config]: https://www.consul.io/docs/agent/config/config-files#auto_config +[auto_config]: https://developer.hashicorp.com/docs/agent/config/config-files#auto_config [agent/consul/auto_config_endpoint.go]: https://github.com/hashicorp/consul/blob/main/agent/consul/auto_config_endpoint.go [agent/auto-config]: https://github.com/hashicorp/consul/tree/main/agent/auto-config diff --git a/docs/config/checklist-adding-config-fields.md b/docs/config/checklist-adding-config-fields.md index 1f6b251e9899..4bfb83ffe008 100644 --- a/docs/config/checklist-adding-config-fields.md +++ b/docs/config/checklist-adding-config-fields.md @@ -121,7 +121,7 @@ You can now access that field from `s.srv.config.` inside an RPC handler. ## Adding a New Field to Service Definition -The [Service Definition](https://www.consul.io/docs/agent/services.html) syntax +The [Service Definition](https://developer.hashicorp.com/docs/agent/services.html) syntax appears both in Consul config files but also in the `/v1/agent/service/register` API. diff --git a/docs/faq.md b/docs/faq.md index e791cff19bcb..890acefcfdd3 100644 --- a/docs/faq.md +++ b/docs/faq.md @@ -4,18 +4,18 @@ This section addresses some frequently asked questions about Consul's architectu ### How does eventually-consistent gossip relate to the Raft consensus protocol? -When you query Consul for information about a service, such as via the [DNS interface](https://www.consul.io/docs/discovery/dns), the agent will always make an internal RPC request to a Consul server that will query the consistent state store. Even though an agent might learn that another agent is down via gossip, that won't be reflected in service discovery until the current Raft leader server perceives that through gossip and updates the catalog using Raft. You can see an example of where these layers are plumbed together here - https://github.com/hashicorp/consul/blob/v1.0.5/agent/consul/leader.go#L559-L602. +When you query Consul for information about a service, such as via the [DNS interface](https://developer.hashicorp.com/docs/discovery/dns), the agent will always make an internal RPC request to a Consul server that will query the consistent state store. Even though an agent might learn that another agent is down via gossip, that won't be reflected in service discovery until the current Raft leader server perceives that through gossip and updates the catalog using Raft. You can see an example of where these layers are plumbed together here - https://github.com/hashicorp/consul/blob/v1.0.5/agent/consul/leader.go#L559-L602. 
## Why does a blocking query sometimes return with identical results? -Consul's [blocking queries](https://www.consul.io/api/index.html#blocking-queries) make a best-effort attempt to wait for new information, but they may return the same results as the initial query under some circumstances. First, queries are limited to 10 minutes max, so if they time out they will return. Second, due to Consul's prefix-based internal immutable radix tree indexing, there may be modifications to higher-level nodes in the radix tree that cause spurious wakeups. In particular, waiting on things that do not exist is not very efficient, but not very expensive for Consul to serve, so we opted to keep the code complexity low and not try to optimize for that case. You can see the common handler that implements the blocking query logic here - https://github.com/hashicorp/consul/blob/v1.0.5/agent/consul/rpc.go#L361-L439. For more on the immutable radix tree implementation, see https://github.com/hashicorp/go-immutable-radix/ and https://github.com/hashicorp/go-memdb, and the general support for "watches". +Consul's [blocking queries](https://developer.hashicorp.com/api/index.html#blocking-queries) make a best-effort attempt to wait for new information, but they may return the same results as the initial query under some circumstances. First, queries are limited to 10 minutes max, so if they time out they will return. Second, due to Consul's prefix-based internal immutable radix tree indexing, there may be modifications to higher-level nodes in the radix tree that cause spurious wakeups. In particular, waiting on things that do not exist is not very efficient, but not very expensive for Consul to serve, so we opted to keep the code complexity low and not try to optimize for that case. You can see the common handler that implements the blocking query logic here - https://github.com/hashicorp/consul/blob/v1.0.5/agent/consul/rpc.go#L361-L439. For more on the immutable radix tree implementation, see https://github.com/hashicorp/go-immutable-radix/ and https://github.com/hashicorp/go-memdb, and the general support for "watches". ### Do the client agents store any key/value entries? -No. These are always fetched via an internal RPC request to a Consul server. The agent doesn't do any caching, and if you want to be able to fetch these values even if there's no cluster leader, then you can use a more relaxed [consistency mode](https://www.consul.io/api/index.html#consistency-modes). You can see an example where the `/v1/kv/` HTTP endpoint on the agent makes an internal RPC call here - https://github.com/hashicorp/consul/blob/v1.0.5/agent/kvs_endpoint.go#L56-L90. +No. These are always fetched via an internal RPC request to a Consul server. The agent doesn't do any caching, and if you want to be able to fetch these values even if there's no cluster leader, then you can use a more relaxed [consistency mode](https://developer.hashicorp.com/api/index.html#consistency-modes). You can see an example where the `/v1/kv/` HTTP endpoint on the agent makes an internal RPC call here - https://github.com/hashicorp/consul/blob/v1.0.5/agent/kvs_endpoint.go#L56-L90. ### I don't want to run a Consul agent on every node, can I just run servers with a load balancer in front? -We strongly recommend running the Consul agent on each node in a cluster. Even the key/value store benefits from having agents on each node. 
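The blocking-query behaviour described in the FAQ above can be exercised with the public Go client via `WaitIndex` and `WaitTime`. A minimal sketch, assuming a local agent; the key name is illustrative, and identical results on wake-up are expected and harmless:

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	kv := client.KV()

	var lastIndex uint64
	for {
		// Block until the index advances past lastIndex or the wait expires.
		// Spurious wake-ups with an unchanged index are possible because of
		// the radix-tree prefix indexing described above.
		pair, meta, err := kv.Get("service/config", &api.QueryOptions{
			WaitIndex: lastIndex,
			WaitTime:  5 * time.Minute,
		})
		if err != nil {
			log.Fatal(err)
		}
		if meta.LastIndex == lastIndex {
			fmt.Println("wait ended with no change")
			continue
		}
		lastIndex = meta.LastIndex
		fmt.Printf("index %d, value: %v\n", lastIndex, pair)
	}
}
```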
For example, when you lock a key it's done through a [session](https://www.consul.io/docs/internals/sessions.html), which has a lifetime that's by default tied to the health of the agent as determined by Consul's gossip-based distributed failure detector. If the agent dies, the session will be released automatically, allowing some other process to quickly see that and obtain the lock without having to wait for an open-ended TTL to expire. If you are using Consul's service discovery features, the local agent runs the health checks for each service registered on that node and only needs to send edge-triggered updates to the Consul servers (because gossip will determine if the agent itself dies). Most attempts to avoid running an agent on each node will face solving issues that are already solved by Consul's design if the agent is deployed as intended. +We strongly recommend running the Consul agent on each node in a cluster. Even the key/value store benefits from having agents on each node. For example, when you lock a key it's done through a [session](https://developer.hashicorp.com/docs/internals/sessions.html), which has a lifetime that's by default tied to the health of the agent as determined by Consul's gossip-based distributed failure detector. If the agent dies, the session will be released automatically, allowing some other process to quickly see that and obtain the lock without having to wait for an open-ended TTL to expire. If you are using Consul's service discovery features, the local agent runs the health checks for each service registered on that node and only needs to send edge-triggered updates to the Consul servers (because gossip will determine if the agent itself dies). Most attempts to avoid running an agent on each node will face solving issues that are already solved by Consul's design if the agent is deployed as intended. -For cases where you really cannot run an agent alongside a service, such as for monitoring an [external service](https://www.consul.io/docs/guides/external.html), there's a companion project called the [Consul External Service Monitor](https://github.com/hashicorp/consul-esm) that may help. +For cases where you really cannot run an agent alongside a service, such as for monitoring an [external service](https://developer.hashicorp.com/docs/guides/external.html), there's a companion project called the [Consul External Service Monitor](https://github.com/hashicorp/consul-esm) that may help. diff --git a/docs/http-api/README.md b/docs/http-api/README.md index 262b764c5deb..223c984b646b 100644 --- a/docs/http-api/README.md +++ b/docs/http-api/README.md @@ -6,4 +6,4 @@ Work in progress. This section will eventually contain docs about: * HTTP endpoints * the [api](https://github.com/hashicorp/consul/tree/main/api) client library - the `api` package provides an official Go API client for Consul, which is also used by Consul's - [CLI](https://www.consul.io/docs/commands/index.html) commands to communicate with the local Consul agent. + [CLI](https://developer.hashicorp.com/docs/commands/index.html) commands to communicate with the local Consul agent. diff --git a/docs/persistence/README.md b/docs/persistence/README.md index 52d48725986d..6ed808b95d87 100644 --- a/docs/persistence/README.md +++ b/docs/persistence/README.md @@ -11,8 +11,8 @@ introduction to the Consul deployment architecture and the [Consensus Protocol] the cluster persistence subsystem. 
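The session-backed locking described above is exposed through the API client's lock helper (the same `api/lock.go` touched earlier in this diff). A short sketch, assuming a local agent; the key name is illustrative:

```go
package main

import (
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// LockKey creates a lock backed by a session; if this agent dies, the
	// gossip failure detector invalidates the session and the lock is
	// released for other contenders without waiting on a long TTL.
	lock, err := client.LockKey("service/my-app/leader")
	if err != nil {
		log.Fatal(err)
	}

	stopCh := make(chan struct{})
	lostCh, err := lock.Lock(stopCh) // blocks until the lock is acquired
	if err != nil {
		log.Fatal(err)
	}
	defer lock.Unlock()

	<-lostCh
	log.Println("leadership lost: session invalidated or key deleted")
}
```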
[RPC]: ../rpc -[Consul Architecture Guide]: https://www.consul.io/docs/architecture -[Consensus Protocol]: https://www.consul.io/docs/architecture/consensus +[Consul Architecture Guide]: https://developer.hashicorp.com/docs/architecture +[Consensus Protocol]: https://developer.hashicorp.com/docs/architecture/consensus ![Overview](./overview.svg) @@ -97,14 +97,14 @@ facing operations. [CLI]: ../cli [HTTP API]: ../http-api -[commands/snapshot]: https://www.consul.io/commands/snapshot +[commands/snapshot]: https://developer.hashicorp.com/commands/snapshot [consul/snapshot]: https://github.com/hashicorp/consul/tree/main/snapshot Finally, there is also a [snapshot agent] (enterprise only) that uses the snapshot API endpoints to periodically capture a snapshot, and optionally send it somewhere for storage. -[snapshot agent]: https://www.consul.io/commands/snapshot/agent +[snapshot agent]: https://developer.hashicorp.com/commands/snapshot/agent ## Raft Autopilot diff --git a/docs/rpc/README.md b/docs/rpc/README.md index adfa19459fc6..647b2a90c73d 100644 --- a/docs/rpc/README.md +++ b/docs/rpc/README.md @@ -22,7 +22,7 @@ isn't practical (e.g. cross-DC traffic over mesh gateways). The diagram below shows all the possible routing flows: -[server port]: https://www.consul.io/docs/agent/config/config-files#server_rpc_port +[server port]: https://developer.hashicorp.com/docs/reference/agent/configuration-files/general#server_rpc_port ![RPC Routing](./routing.svg) diff --git a/docs/service-discovery/health-checks.md b/docs/service-discovery/health-checks.md index dc6c42950db3..98c8b32b216b 100644 --- a/docs/service-discovery/health-checks.md +++ b/docs/service-discovery/health-checks.md @@ -3,7 +3,7 @@ This section is still a work in progress. [agent/checks](https://github.com/hashicorp/consul/tree/main/agent/checks) contains the logic for -performing active [health checking](https://www.consul.io/docs/agent/checks.html). +performing active [health checking](https://developer.hashicorp.com/docs/agent/checks.html). ## Check Registration flows @@ -12,23 +12,23 @@ There are many paths to register a check. Many of these use different struct types, so to properly validate and convert a check, all of these paths must be reviewed and tested. -1. API [/v1/catalog/register](https://www.consul.io/api-docs/catalog#register-entity) - the `Checks` +1. API [/v1/catalog/register](https://developer.hashicorp.com/api-docs/catalog#register-entity) - the `Checks` field on `structs.RegisterRequest`. The entrypoint is `CatalogRegister` in [agent/catalog_endpoint.go]. -2. API [/v1/agent/check/register](https://www.consul.io/api-docs/agent/check#register-check) - the entrypoint +2. API [/v1/agent/check/register](https://developer.hashicorp.com/api-docs/agent/check#register-check) - the entrypoint is `AgentRegisterCheck` in [agent/agent_endpoint.go] -3. API [/v1/agent/service/register](https://www.consul.io/api-docs/agent/service#register-service) - +3. API [/v1/agent/service/register](https://developer.hashicorp.com/api-docs/agent/service#register-service) - the `Check` or `Checks` fields on `ServiceDefinition`. The entrypoint is `AgentRegisterService` in [agent/agent_endpoint.go]. -4. Config [Checks](https://www.consul.io/docs/discovery/checks) - the `Checks` and `Check` fields +4. Config [Checks](https://developer.hashicorp.com/docs/discovery/checks) - the `Checks` and `Check` fields on `config.Config` in [agent/config/config.go]. -5. Config [Service.Checks](https://www.consul.io/docs/discovery/services) - the +5. 
Config [Service.Checks](https://developer.hashicorp.com/docs/discovery/services) - the `Checks` and `Check` fields on `ServiceDefinition` in [agent/config/config.go]. 6. The returned fields of `ServiceDefinition` in [agent/config/builder.go]. -7. CLI [consul services register](https://www.consul.io/commands/services/register) - the +7. CLI [consul services register](https://developer.hashicorp.com/commands/services/register) - the `Checks` and `Check` fields on `api.AgentServiceRegistration`. The entrypoint is `ServicesFromFiles` in [command/services/config.go]. -8. API [/v1/txn](https://www.consul.io/api-docs/txn) - the `Transaction` API allows for registering a check. +8. API [/v1/txn](https://developer.hashicorp.com/api-docs/txn) - the `Transaction` API allows for registering a check. [agent/catalog_endpoint.go]: https://github.com/hashicorp/consul/blob/main/agent/catalog_endpoint.go diff --git a/docs/service-mesh/ca/README.md b/docs/service-mesh/ca/README.md index cd63244ee276..1b72f30b49d0 100644 --- a/docs/service-mesh/ca/README.md +++ b/docs/service-mesh/ca/README.md @@ -27,8 +27,8 @@ Those certificates will be used to authenticate/encrypt communication between se [source](./hl-ca-overview.mmd) The features that benefit from Consul CA management are: -- [service Mesh/Connect](https://www.consul.io/docs/connect) -- [auto encrypt](https://www.consul.io/docs/agent/options#auto_encrypt) +- [service Mesh/Connect](https://developer.hashicorp.com/docs/connect) +- [auto encrypt](https://developer.hashicorp.com/docs/agent/options#auto_encrypt) ### CA and Certificate relationship diff --git a/docs/v2-architecture/controller-architecture/README.md b/docs/v2-architecture/controller-architecture/README.md index d781a63cc098..18eb9d1102aa 100644 --- a/docs/v2-architecture/controller-architecture/README.md +++ b/docs/v2-architecture/controller-architecture/README.md @@ -86,7 +86,7 @@ forwards the operation to the leader via a gRPC [forwarding service] listening on the multiplexed RPC port ([`ports.server`]). [forwarding service]: ../../../proto/private/pbstorage/raft.proto -[`ports.server`]: https://developer.hashicorp.com/consul/docs/agent/config/config-files#server_rpc_port +[`ports.server`]: https://developer.hashicorp.com/consul/docs/reference/agent/configuration-files/general#server_rpc_port #### Step 5 diff --git a/envoyextensions/extensioncommon/basic_envoy_extender.go b/envoyextensions/extensioncommon/basic_envoy_extender.go index eb126c3e38b6..4b6e718bc19c 100644 --- a/envoyextensions/extensioncommon/basic_envoy_extender.go +++ b/envoyextensions/extensioncommon/basic_envoy_extender.go @@ -122,7 +122,7 @@ func (b *BasicEnvoyExtender) Extend(resources *xdscommon.IndexedResources, confi switch config.Kind { // Currently we only support extensions for terminating gateways and connect proxies. 
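Registration path 2 in the list above (`/v1/agent/check/register`) is also reachable through the Go client. A hedged sketch, assuming a local agent; the check name, service ID, and health URL are illustrative:

```go
package main

import (
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Register an HTTP check against the local agent, attached to an
	// already-registered service instance.
	reg := &api.AgentCheckRegistration{
		Name:      "web-http",
		ServiceID: "web",
		AgentServiceCheck: api.AgentServiceCheck{
			HTTP:     "http://localhost:8080/health",
			Interval: "10s",
			Timeout:  "2s",
		},
	}
	if err := client.Agent().CheckRegister(reg); err != nil {
		log.Fatal(err)
	}
}
```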
- case api.ServiceKindTerminatingGateway, api.ServiceKindConnectProxy: + case api.ServiceKindTerminatingGateway, api.ServiceKindConnectProxy, api.ServiceKindAPIGateway: default: return resources, nil } @@ -287,7 +287,7 @@ func (b *BasicEnvoyExtender) patchListeners(config *RuntimeConfig, listeners Lis func (b *BasicEnvoyExtender) patchSupportedListenerFilterChains(config *RuntimeConfig, l *envoy_listener_v3.Listener, nameOrSNI string) (*envoy_listener_v3.Listener, error) { switch config.Kind { - case api.ServiceKindTerminatingGateway, api.ServiceKindConnectProxy: + case api.ServiceKindTerminatingGateway, api.ServiceKindConnectProxy, api.ServiceKindAPIGateway: return b.patchListenerFilterChains(config, l, nameOrSNI) } return l, nil diff --git a/envoyextensions/extensioncommon/basic_envoy_extender_test.go b/envoyextensions/extensioncommon/basic_envoy_extender_test.go new file mode 100644 index 000000000000..91568e9cffd5 --- /dev/null +++ b/envoyextensions/extensioncommon/basic_envoy_extender_test.go @@ -0,0 +1,268 @@ +// Copyright (c) HashiCorp, Inc. +// SPDX-License-Identifier: BUSL-1.1 + +package extensioncommon + +import ( + "fmt" + "testing" + + envoy_cluster_v3 "github.com/envoyproxy/go-control-plane/envoy/config/cluster/v3" + envoy_core_v3 "github.com/envoyproxy/go-control-plane/envoy/config/core/v3" + envoy_endpoint_v3 "github.com/envoyproxy/go-control-plane/envoy/config/endpoint/v3" + envoy_listener_v3 "github.com/envoyproxy/go-control-plane/envoy/config/listener/v3" + envoy_route_v3 "github.com/envoyproxy/go-control-plane/envoy/config/route/v3" + "github.com/stretchr/testify/require" + "google.golang.org/protobuf/proto" + "google.golang.org/protobuf/types/known/anypb" + + "github.com/hashicorp/consul/api" +) + +// TestBasicEnvoyExtender_CanApply tests the CanApply functionality +func TestBasicEnvoyExtender_CanApply(t *testing.T) { + tests := []struct { + name string + extender *BasicEnvoyExtender + config *RuntimeConfig + expected bool + }{ + { + name: "API Gateway with matching proxy type", + extender: &BasicEnvoyExtender{ + Extension: &mockExtension{ + proxyType: "api-gateway", + }, + }, + config: &RuntimeConfig{ + Kind: api.ServiceKindAPIGateway, + }, + expected: true, + }, + { + name: "API Gateway with non-matching proxy type", + extender: &BasicEnvoyExtender{ + Extension: &mockExtension{ + proxyType: "connect-proxy", + }, + }, + config: &RuntimeConfig{ + Kind: api.ServiceKindAPIGateway, + }, + expected: false, + }, + { + name: "Connect Proxy with matching proxy type", + extender: &BasicEnvoyExtender{ + Extension: &mockExtension{ + proxyType: "connect-proxy", + }, + }, + config: &RuntimeConfig{ + Kind: api.ServiceKindConnectProxy, + }, + expected: true, + }, + { + name: "Connect Proxy with non-matching proxy type", + extender: &BasicEnvoyExtender{ + Extension: &mockExtension{ + proxyType: "api-gateway", + }, + }, + config: &RuntimeConfig{ + Kind: api.ServiceKindConnectProxy, + }, + expected: false, + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + actual := tt.extender.CanApply(tt.config) + require.Equal(t, tt.expected, actual) + }) + } +} + +// TestBasicEnvoyExtender_PatchFilter tests the PatchFilter functionality +func TestBasicEnvoyExtender_PatchFilter(t *testing.T) { + tests := []struct { + name string + extender *BasicEnvoyExtender + payload FilterPayload + expectedConfig *envoy_listener_v3.Filter + expectedError string + }{ + { + name: "API Gateway successful filter patch", + extender: &BasicEnvoyExtender{ + Extension: &mockExtension{ + 
proxyType: "api-gateway", + patchFunc: func(payload FilterPayload) (*envoy_listener_v3.Filter, bool, error) { + return &envoy_listener_v3.Filter{ + Name: "test-filter", + ConfigType: &envoy_listener_v3.Filter_TypedConfig{ + TypedConfig: mustMarshalAny(&envoy_core_v3.DataSource{ + Specifier: &envoy_core_v3.DataSource_InlineString{ + InlineString: "test-data", + }, + }), + }, + }, true, nil + }, + }, + }, + payload: FilterPayload{ + Message: &envoy_listener_v3.Filter{ + Name: "test-filter", + }, + RuntimeConfig: &RuntimeConfig{ + Kind: api.ServiceKindAPIGateway, + }, + }, + expectedConfig: &envoy_listener_v3.Filter{ + Name: "test-filter", + ConfigType: &envoy_listener_v3.Filter_TypedConfig{ + TypedConfig: mustMarshalAny(&envoy_core_v3.DataSource{ + Specifier: &envoy_core_v3.DataSource_InlineString{ + InlineString: "test-data", + }, + }), + }, + }, + }, + { + name: "Connect Proxy successful filter patch", + extender: &BasicEnvoyExtender{ + Extension: &mockExtension{ + proxyType: "connect-proxy", + patchFunc: func(payload FilterPayload) (*envoy_listener_v3.Filter, bool, error) { + return &envoy_listener_v3.Filter{ + Name: "connect-proxy-filter", + ConfigType: &envoy_listener_v3.Filter_TypedConfig{ + TypedConfig: mustMarshalAny(&envoy_core_v3.DataSource{ + Specifier: &envoy_core_v3.DataSource_InlineString{ + InlineString: "connect-proxy-data", + }, + }), + }, + }, true, nil + }, + }, + }, + payload: FilterPayload{ + Message: &envoy_listener_v3.Filter{ + Name: "connect-proxy-filter", + }, + RuntimeConfig: &RuntimeConfig{ + Kind: api.ServiceKindConnectProxy, + }, + }, + expectedConfig: &envoy_listener_v3.Filter{ + Name: "connect-proxy-filter", + ConfigType: &envoy_listener_v3.Filter_TypedConfig{ + TypedConfig: mustMarshalAny(&envoy_core_v3.DataSource{ + Specifier: &envoy_core_v3.DataSource_InlineString{ + InlineString: "connect-proxy-data", + }, + }), + }, + }, + }, + { + name: "Connect Proxy filter patch with error", + extender: &BasicEnvoyExtender{ + Extension: &mockExtension{ + proxyType: "connect-proxy", + patchFunc: func(payload FilterPayload) (*envoy_listener_v3.Filter, bool, error) { + return nil, false, fmt.Errorf("test error") + }, + }, + }, + payload: FilterPayload{ + Message: &envoy_listener_v3.Filter{ + Name: "connect-proxy-filter", + }, + RuntimeConfig: &RuntimeConfig{ + Kind: api.ServiceKindConnectProxy, + }, + }, + expectedError: "test error", + }, + } + + for _, tt := range tests { + t.Run(tt.name, func(t *testing.T) { + actual, _, err := tt.extender.Extension.PatchFilter(tt.payload) + if tt.expectedError != "" { + require.Error(t, err) + require.Contains(t, err.Error(), tt.expectedError) + return + } + require.NoError(t, err) + require.Equal(t, tt.expectedConfig, actual) + }) + } +} + +// mockExtension implements the Extension interface for testing +type mockExtension struct { + proxyType string + patchFunc func(FilterPayload) (*envoy_listener_v3.Filter, bool, error) +} + +func (m *mockExtension) CanApply(config *RuntimeConfig) bool { + return config.Kind == api.ServiceKind(m.proxyType) +} + +func (m *mockExtension) Validate(config *RuntimeConfig) error { + return nil +} + +func (m *mockExtension) PatchFilter(payload FilterPayload) (*envoy_listener_v3.Filter, bool, error) { + if m.patchFunc != nil { + return m.patchFunc(payload) + } + return payload.Message, false, nil +} + +func (m *mockExtension) PatchCluster(payload ClusterPayload) (*envoy_cluster_v3.Cluster, bool, error) { + return payload.Message, false, nil +} + +func (m *mockExtension) PatchClusterLoadAssignment(payload 
ClusterLoadAssignmentPayload) (*envoy_endpoint_v3.ClusterLoadAssignment, bool, error) { + return payload.Message, false, nil +} + +func (m *mockExtension) PatchClusters(config *RuntimeConfig, clusters ClusterMap) (ClusterMap, error) { + return clusters, nil +} + +func (m *mockExtension) PatchRoutes(config *RuntimeConfig, routes RouteMap) (RouteMap, error) { + return routes, nil +} + +func (m *mockExtension) PatchListeners(config *RuntimeConfig, listeners ListenerMap) (ListenerMap, error) { + return listeners, nil +} + +func (m *mockExtension) PatchFilters(config *RuntimeConfig, filters []*envoy_listener_v3.Filter, isInboundListener bool) ([]*envoy_listener_v3.Filter, error) { + return filters, nil +} + +func (m *mockExtension) PatchRoute(payload RoutePayload) (*envoy_route_v3.RouteConfiguration, bool, error) { + return payload.Message, false, nil +} + +func (m *mockExtension) PatchListener(payload ListenerPayload) (*envoy_listener_v3.Listener, bool, error) { + return payload.Message, false, nil +} + +func mustMarshalAny(pb proto.Message) *anypb.Any { + a, err := anypb.New(pb) + if err != nil { + panic(err) + } + return a +} diff --git a/envoyextensions/go.mod b/envoyextensions/go.mod index 88aeb3a794c6..4d7ce0057c09 100644 --- a/envoyextensions/go.mod +++ b/envoyextensions/go.mod @@ -1,6 +1,6 @@ module github.com/hashicorp/consul/envoyextensions -go 1.22.12 +go 1.23.12 replace ( github.com/hashicorp/consul/api => ../api @@ -43,8 +43,8 @@ require ( github.com/mitchellh/mapstructure v1.5.0 // indirect github.com/pkg/errors v0.9.1 // indirect github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect - golang.org/x/exp v0.0.0-20250106191152-7588d65b2ba8 // indirect - golang.org/x/sys v0.29.0 // indirect + golang.org/x/exp v0.0.0-20250808145144-a408d31f581a // indirect + golang.org/x/sys v0.35.0 // indirect google.golang.org/genproto/googleapis/api v0.0.0-20230711160842-782d3b101e98 // indirect google.golang.org/genproto/googleapis/rpc v0.0.0-20230711160842-782d3b101e98 // indirect gopkg.in/yaml.v3 v3.0.1 // indirect diff --git a/envoyextensions/go.sum b/envoyextensions/go.sum index bb8e849f18bb..b328da425b17 100644 --- a/envoyextensions/go.sum +++ b/envoyextensions/go.sum @@ -188,8 +188,8 @@ golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnf golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= golang.org/x/crypto v0.0.0-20190923035154-9ee001bba392/go.mod h1:/lpIB1dKB+9EgE3H3cr1v9wB50oz8l4C4h62xy7jSTY= golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= -golang.org/x/exp v0.0.0-20250106191152-7588d65b2ba8 h1:yqrTHse8TCMW1M1ZCP+VAR/l0kKxwaAIqN/il7x4voA= -golang.org/x/exp v0.0.0-20250106191152-7588d65b2ba8/go.mod h1:tujkw807nyEEAamNbDrEGzRav+ilXA7PCRAd6xsmwiU= +golang.org/x/exp v0.0.0-20250808145144-a408d31f581a h1:Y+7uR/b1Mw2iSXZ3G//1haIiSElDQZ8KWh0h+sZPG90= +golang.org/x/exp v0.0.0-20250808145144-a408d31f581a/go.mod h1:rT6SFzZ7oxADUDx58pcaKFTcZ+inxAa9fTrYx/uVYwg= golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE= golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU= golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= @@ -204,8 +204,8 @@ golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLL golang.org/x/net 
v0.0.0-20190923162816-aa69164e4478/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= golang.org/x/net v0.0.0-20210410081132-afb366fc7cd1/go.mod h1:9tjilg8BloeKEkVJvy7fQ90B1CfIiPueXVOjqfkSzI8= -golang.org/x/net v0.34.0 h1:Mb7Mrk043xzHgnRM88suvJFwzVrRfHEHJEl5/71CKw0= -golang.org/x/net v0.34.0/go.mod h1:di0qlW3YNM5oh6GqDGQr92MyTozJPmybPK4Ev/Gm31k= +golang.org/x/net v0.43.0 h1:lat02VYK2j4aLzMzecihNvTlJNQUq316m2Mr9rnM6YE= +golang.org/x/net v0.43.0/go.mod h1:vhO1fvI4dGsIjh73sWfUVjj3N7CA9WkKJNQm2svM6Jg= golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= @@ -235,8 +235,8 @@ golang.org/x/sys v0.0.0-20220503163025-988cb79eb6c6/go.mod h1:oPkhp1MJrh7nUepCBc golang.org/x/sys v0.0.0-20220728004956-3c1f35247d10/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.29.0 h1:TPYlXGxvx1MGTn2GiZDhnjPA9wZzZeGKHHmKhHYvgaU= -golang.org/x/sys v0.29.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= +golang.org/x/sys v0.35.0 h1:vz1N37gP5bs89s7He8XuIYXpyY0+QlsKmzipCbUtyxI= +golang.org/x/sys v0.35.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= diff --git a/envoyextensions/xdscommon/ENVOY_VERSIONS b/envoyextensions/xdscommon/ENVOY_VERSIONS index 8e98108e654b..6002a273b17b 100644 --- a/envoyextensions/xdscommon/ENVOY_VERSIONS +++ b/envoyextensions/xdscommon/ENVOY_VERSIONS @@ -6,11 +6,9 @@ # Every prior "minor" version for a given "major" (x.y) version is implicitly supported unless excluded by # `xdscommon.UnsupportedEnvoyVersions`. For example, 1.28.3 implies support for 1.28.0, 1.28.1, and 1.28.2. # -# See https://www.consul.io/docs/connect/proxies/envoy#supported-versions for more information on Consul's Envoy +# See https://developer.hashicorp.com/docs/connect/proxies/envoy#supported-versions for more information on Consul's Envoy # version support. -1.33.0 -1.32.3 -1.31.5 -1.30.9 -1.29.12 -1.28.7 +1.34.1 +1.33.2 +1.32.5 +1.31.8 \ No newline at end of file diff --git a/envoyextensions/xdscommon/proxysupport.go b/envoyextensions/xdscommon/proxysupport.go index d0afcc7390fe..5645a1747e1d 100644 --- a/envoyextensions/xdscommon/proxysupport.go +++ b/envoyextensions/xdscommon/proxysupport.go @@ -67,7 +67,7 @@ func parseEnvoyVersions(raw string) ([]string, error) { // This list must be sorted by semver descending. Only one point release for // each major release should be present. 
// -// see: https://www.consul.io/docs/connect/proxies/envoy#supported-versions +// see: https://developer.hashicorp.com/docs/connect/proxies/envoy#supported-versions var EnvoyVersions = initEnvoyVersions() // UnsupportedEnvoyVersions lists any unsupported Envoy versions (mainly minor versions) that fall @@ -76,7 +76,7 @@ var EnvoyVersions = initEnvoyVersions() // even though 1.21 is a supported major release, you would then add 1.21.3 to this list. // This list will be empty in most cases. // -// see: https://www.consul.io/docs/connect/proxies/envoy#supported-versions +// see: https://developer.hashicorp.com/docs/connect/proxies/envoy#supported-versions var UnsupportedEnvoyVersions = []string{} // GetMaxEnvoyMajorVersion grabs the first value in EnvoyVersions and strips the last number off in order diff --git a/go.mod b/go.mod index 415ffac9f823..445f1597d3ef 100644 --- a/go.mod +++ b/go.mod @@ -1,6 +1,6 @@ module github.com/hashicorp/consul -go 1.22.12 +go 1.23.12 replace ( github.com/hashicorp/consul/api => ./api @@ -21,23 +21,23 @@ require ( github.com/armon/circbuf v0.0.0-20150827004946-bbbad097214e github.com/armon/go-metrics v0.4.1 github.com/armon/go-radix v1.0.0 - github.com/aws/aws-sdk-go v1.55.5 + github.com/aws/aws-sdk-go v1.55.7 github.com/coreos/go-oidc/v3 v3.9.0 github.com/deckarep/golang-set/v2 v2.3.1 github.com/docker/go-connections v0.4.0 github.com/envoyproxy/go-control-plane v0.12.0 github.com/envoyproxy/go-control-plane/xdsmatcher v0.0.0-20230524161521-aaaacbfbe53e - github.com/fatih/color v1.16.0 + github.com/fatih/color v1.18.0 github.com/fsnotify/fsnotify v1.6.0 github.com/fullstorydev/grpchan v1.1.1 - github.com/go-jose/go-jose/v3 v3.0.3 + github.com/go-jose/go-jose/v3 v3.0.4 github.com/go-openapi/runtime v0.26.2 - github.com/go-openapi/strfmt v0.21.10 + github.com/go-openapi/strfmt v0.23.0 github.com/google/go-cmp v0.6.0 github.com/google/gofuzz v1.2.0 github.com/google/pprof v0.0.0-20210720184732-4bb14d4b1be1 github.com/google/tcpproxy v0.0.0-20180808230851-dfa16c61dad2 - github.com/grpc-ecosystem/go-grpc-middleware v1.3.0 + github.com/grpc-ecosystem/go-grpc-middleware v1.4.0 github.com/hashi-derek/grpc-proxy v0.0.0-20231207191910-191266484d75 github.com/hashicorp/consul-awsauth v0.0.0-20250130185352-0a5f57fe920a github.com/hashicorp/consul-net-rpc v0.0.0-20221205195236-156cfab66a69 @@ -50,7 +50,7 @@ require ( github.com/hashicorp/go-checkpoint v0.5.0 github.com/hashicorp/go-cleanhttp v0.5.2 github.com/hashicorp/go-connlimit v0.3.0 - github.com/hashicorp/go-discover v0.0.0-20230724184603-e89ebd1b2f65 + github.com/hashicorp/go-discover v1.0.0 github.com/hashicorp/go-hclog v1.6.3 github.com/hashicorp/go-immutable-radix v1.3.1 github.com/hashicorp/go-immutable-radix/v2 v2.1.0 @@ -68,11 +68,11 @@ require ( github.com/hashicorp/hcdiag v0.5.1 github.com/hashicorp/hcl v1.0.1-vault-7 github.com/hashicorp/hcl/v2 v2.14.1 - github.com/hashicorp/hcp-scada-provider v0.2.4 + github.com/hashicorp/hcp-scada-provider v0.2.6 github.com/hashicorp/hcp-sdk-go v0.80.0 github.com/hashicorp/hil v0.0.0-20200423225030-a18a1cd20038 github.com/hashicorp/memberlist v0.5.0 - github.com/hashicorp/raft v1.7.0 + github.com/hashicorp/raft v1.7.3 github.com/hashicorp/raft-autopilot v0.1.6 github.com/hashicorp/raft-boltdb/v2 v2.2.2 github.com/hashicorp/raft-wal v0.4.1 @@ -87,7 +87,7 @@ require ( github.com/miekg/dns v1.1.50 github.com/mitchellh/cli v1.1.4 github.com/mitchellh/copystructure v1.2.0 - github.com/mitchellh/go-testing-interface v1.14.0 + github.com/mitchellh/go-testing-interface v1.14.1 
github.com/mitchellh/hashstructure v0.0.0-20170609045927-2bca23e0e452 github.com/mitchellh/hashstructure/v2 v2.0.2 github.com/mitchellh/mapstructure v1.5.0 @@ -102,26 +102,26 @@ require ( github.com/rboyer/safeio v0.2.3 github.com/ryanuber/columnize v2.1.2+incompatible github.com/shirou/gopsutil/v3 v3.22.9 - github.com/stretchr/testify v1.8.4 + github.com/stretchr/testify v1.9.0 github.com/xeipuuv/gojsonschema v1.2.0 github.com/zclconf/go-cty v1.11.1 go.etcd.io/bbolt v1.3.7 - go.opentelemetry.io/otel v1.17.0 - go.opentelemetry.io/otel/metric v1.17.0 - go.opentelemetry.io/otel/sdk v1.17.0 + go.opentelemetry.io/otel v1.24.0 + go.opentelemetry.io/otel/metric v1.24.0 + go.opentelemetry.io/otel/sdk v1.24.0 go.opentelemetry.io/otel/sdk/metric v0.39.0 go.opentelemetry.io/proto/otlp v1.0.0 go.uber.org/goleak v1.1.10 - golang.org/x/crypto v0.32.0 - golang.org/x/exp v0.0.0-20250106191152-7588d65b2ba8 - golang.org/x/net v0.34.0 - golang.org/x/oauth2 v0.25.0 - golang.org/x/sync v0.10.0 - golang.org/x/sys v0.29.0 - golang.org/x/time v0.9.0 - google.golang.org/genproto/googleapis/rpc v0.0.0-20230711160842-782d3b101e98 - google.golang.org/grpc v1.58.3 - google.golang.org/protobuf v1.33.0 + golang.org/x/crypto v0.41.0 + golang.org/x/exp v0.0.0-20250808145144-a408d31f581a + golang.org/x/net v0.43.0 + golang.org/x/oauth2 v0.30.0 + golang.org/x/sync v0.16.0 + golang.org/x/sys v0.35.0 + golang.org/x/time v0.12.0 + google.golang.org/genproto/googleapis/rpc v0.0.0-20240823204242-4ba0660f739c + google.golang.org/grpc v1.65.0 + google.golang.org/protobuf v1.34.2 gotest.tools/v3 v3.4.0 k8s.io/api v0.26.2 k8s.io/apimachinery v0.26.2 @@ -129,8 +129,11 @@ require ( ) require ( - cloud.google.com/go/compute/metadata v0.3.0 // indirect - cloud.google.com/go/iam v1.1.1 // indirect + cel.dev/expr v0.15.0 // indirect + cloud.google.com/go/auth v0.9.1 // indirect + cloud.google.com/go/auth/oauth2adapt v0.2.4 // indirect + cloud.google.com/go/compute/metadata v0.5.0 // indirect + cloud.google.com/go/iam v1.1.13 // indirect github.com/Azure/azure-sdk-for-go v68.0.0+incompatible // indirect github.com/Azure/go-autorest v14.2.0+incompatible // indirect github.com/Azure/go-autorest/autorest v0.11.28 // indirect @@ -150,16 +153,31 @@ require ( github.com/agext/levenshtein v1.2.3 // indirect github.com/apparentlymart/go-textseg/v13 v13.0.0 // indirect github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2 // indirect + github.com/aws/aws-sdk-go-v2 v1.33.0 // indirect + github.com/aws/aws-sdk-go-v2/config v1.29.1 // indirect + github.com/aws/aws-sdk-go-v2/credentials v1.17.54 // indirect + github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.24 // indirect + github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.28 // indirect + github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.28 // indirect + github.com/aws/aws-sdk-go-v2/internal/ini v1.8.1 // indirect + github.com/aws/aws-sdk-go-v2/service/ec2 v1.200.0 // indirect + github.com/aws/aws-sdk-go-v2/service/ecs v1.53.8 // indirect + github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.1 // indirect + github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.9 // indirect + github.com/aws/aws-sdk-go-v2/service/sso v1.24.11 // indirect + github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.10 // indirect + github.com/aws/aws-sdk-go-v2/service/sts v1.33.9 // indirect + github.com/aws/smithy-go v1.22.1 // indirect github.com/benbjohnson/immutable v0.4.0 // indirect github.com/beorn7/perks v1.0.1 // indirect github.com/bgentry/speakeasy v0.1.0 // 
indirect github.com/boltdb/bolt v1.3.1 // indirect github.com/cenkalti/backoff/v3 v3.0.0 // indirect github.com/census-instrumentation/opencensus-proto v0.4.1 // indirect - github.com/cespare/xxhash/v2 v2.2.0 // indirect + github.com/cespare/xxhash/v2 v2.3.0 // indirect github.com/circonus-labs/circonus-gometrics v2.3.1+incompatible // indirect github.com/circonus-labs/circonusllhist v0.1.3 // indirect - github.com/cncf/xds/go v0.0.0-20230607035331-e9ce68804cb4 // indirect + github.com/cncf/xds/go v0.0.0-20240423153145-555b57ec207b // indirect github.com/coreos/etcd v3.3.27+incompatible // indirect github.com/coreos/go-systemd v0.0.0-20191104093116-d3cd4ed1dbcf // indirect github.com/coreos/pkg v0.0.0-20220810130054-c7d1c02cb6cf // indirect @@ -169,12 +187,13 @@ require ( github.com/digitalocean/godo v1.10.0 // indirect github.com/dimchansky/utfbom v1.1.1 // indirect github.com/emicklei/go-restful/v3 v3.10.1 // indirect - github.com/envoyproxy/protoc-gen-validate v1.0.2 // indirect - github.com/go-logr/logr v1.3.0 // indirect + github.com/envoyproxy/protoc-gen-validate v1.0.4 // indirect + github.com/felixge/httpsnoop v1.0.4 // indirect + github.com/go-logr/logr v1.4.2 // indirect github.com/go-logr/stdr v1.2.2 // indirect github.com/go-ole/go-ole v1.2.6 // indirect github.com/go-openapi/analysis v0.21.5 // indirect - github.com/go-openapi/errors v0.21.0 // indirect + github.com/go-openapi/errors v0.22.0 // indirect github.com/go-openapi/jsonpointer v0.20.1 // indirect github.com/go-openapi/jsonreference v0.20.3 // indirect github.com/go-openapi/loads v0.21.3 // indirect @@ -183,22 +202,24 @@ require ( github.com/go-openapi/validate v0.22.4 // indirect github.com/go-ozzo/ozzo-validation v3.6.0+incompatible // indirect github.com/gogo/protobuf v1.3.2 // indirect - github.com/golang-jwt/jwt/v4 v4.5.1 // indirect + github.com/golang-jwt/jwt/v4 v4.5.2 // indirect github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect github.com/golang/protobuf v1.5.4 // indirect github.com/golang/snappy v0.0.4 // indirect github.com/google/btree v1.0.1 // indirect github.com/google/gnostic v0.5.7-v3refs // indirect github.com/google/go-querystring v1.0.0 // indirect - github.com/google/s2a-go v0.1.4 // indirect - github.com/google/uuid v1.4.0 // indirect - github.com/googleapis/enterprise-certificate-proxy v0.2.3 // indirect - github.com/googleapis/gax-go/v2 v2.11.0 // indirect + github.com/google/s2a-go v0.1.8 // indirect + github.com/google/uuid v1.6.0 // indirect + github.com/googleapis/enterprise-certificate-proxy v0.3.2 // indirect + github.com/googleapis/gax-go/v2 v2.13.0 // indirect github.com/gophercloud/gophercloud v0.3.0 // indirect github.com/grpc-ecosystem/grpc-gateway/v2 v2.16.0 // indirect github.com/hashicorp/errwrap v1.1.0 // indirect + github.com/hashicorp/go-discover/provider/gce v0.0.0-20241120163552-5eb1507d16b4 // indirect + github.com/hashicorp/go-metrics v0.5.4 // indirect github.com/hashicorp/go-msgpack v0.5.5 // indirect - github.com/hashicorp/go-msgpack/v2 v2.1.1 // indirect + github.com/hashicorp/go-msgpack/v2 v2.1.2 // indirect github.com/hashicorp/go-secure-stdlib/parseutil v0.1.6 // indirect github.com/hashicorp/go-secure-stdlib/strutil v0.1.2 // indirect github.com/hashicorp/golang-lru/v2 v2.0.0 // indirect @@ -247,7 +268,7 @@ require ( github.com/softlayer/softlayer-go v0.0.0-20180806151055-260589d94c7d // indirect github.com/spf13/cast v1.5.0 // indirect github.com/spf13/pflag v1.0.5 // indirect - github.com/stretchr/objx v0.5.0 // indirect + 
github.com/stretchr/objx v0.5.2 // indirect github.com/tencentcloud/tencentcloud-sdk-go v1.0.162 // indirect github.com/tklauser/go-sysconf v0.3.10 // indirect github.com/tklauser/numcpus v0.5.0 // indirect @@ -256,25 +277,26 @@ require ( github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb // indirect github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 // indirect github.com/yusufpapurcu/wmi v1.2.2 // indirect - go.mongodb.org/mongo-driver v1.13.1 // indirect + go.mongodb.org/mongo-driver v1.14.0 // indirect go.opencensus.io v0.24.0 // indirect - go.opentelemetry.io/otel/trace v1.17.0 // indirect + go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.49.0 // indirect + go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0 // indirect + go.opentelemetry.io/otel/trace v1.24.0 // indirect golang.org/x/lint v0.0.0-20241112194109-818c5a804067 // indirect - golang.org/x/mod v0.22.0 // indirect - golang.org/x/term v0.28.0 // indirect - golang.org/x/text v0.21.0 // indirect - golang.org/x/tools v0.29.0 // indirect - google.golang.org/api v0.126.0 // indirect - google.golang.org/appengine v1.6.8 // indirect - google.golang.org/genproto v0.0.0-20230711160842-782d3b101e98 // indirect - google.golang.org/genproto/googleapis/api v0.0.0-20230711160842-782d3b101e98 // indirect + golang.org/x/mod v0.27.0 // indirect + golang.org/x/term v0.34.0 // indirect + golang.org/x/text v0.28.0 // indirect + golang.org/x/tools v0.36.0 // indirect + google.golang.org/api v0.195.0 // indirect + google.golang.org/genproto v0.0.0-20240823204242-4ba0660f739c // indirect + google.golang.org/genproto/googleapis/api v0.0.0-20240814211410-ddb44dafa142 // indirect gopkg.in/inf.v0 v0.9.1 // indirect gopkg.in/ini.v1 v1.66.2 // indirect gopkg.in/natefinch/npipe.v2 v2.0.0-20160621034901-c1b8fa8bdcce // indirect gopkg.in/resty.v1 v1.12.0 // indirect gopkg.in/yaml.v2 v2.4.0 // indirect gopkg.in/yaml.v3 v3.0.1 // indirect - k8s.io/klog/v2 v2.100.1 // indirect + k8s.io/klog/v2 v2.130.1 // indirect k8s.io/kube-openapi v0.0.0-20221012153701-172d655c2280 // indirect k8s.io/utils v0.0.0-20230406110748-d93618cff8a2 // indirect sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect diff --git a/go.sum b/go.sum index c15b91655833..133861b7fa66 100644 --- a/go.sum +++ b/go.sum @@ -1,3 +1,5 @@ +cel.dev/expr v0.15.0 h1:O1jzfJCQBfL5BFoYktaxwIhuttaQPsVWerH9/EEKx0w= +cel.dev/expr v0.15.0/go.mod h1:TRSuuV7DlVCE/uwv5QbAiW/v8l5O8C4eEPHeu7gf7Sg= cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw= cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU= @@ -25,18 +27,22 @@ cloud.google.com/go v0.90.0/go.mod h1:kRX0mNRHe0e2rC6oNakvwQqzyDmg57xJ+SZU1eT2aD cloud.google.com/go v0.93.3/go.mod h1:8utlLll2EF5XMAV15woO4lSbWQlk8rer9aLOfLh7+YI= cloud.google.com/go v0.94.1/go.mod h1:qAlAugsXlC+JWO+Bke5vCtc9ONxjQT3drlTTnAplMW4= cloud.google.com/go v0.97.0/go.mod h1:GF7l59pYBVlXQIBLx3a761cZ41F9bBH3JUlihCt2Udc= +cloud.google.com/go/auth v0.9.1 h1:+pMtLEV2k0AXKvs/tGZojuj6QaioxfUjOpMsG5Gtx+w= +cloud.google.com/go/auth v0.9.1/go.mod h1:Sw8ocT5mhhXxFklyhT12Eiy0ed6tTrPMCJjSI8KhYLk= +cloud.google.com/go/auth/oauth2adapt v0.2.4 h1:0GWE/FUsXhf6C+jAkWgYm7X9tK8cuEIfy19DBn6B6bY= +cloud.google.com/go/auth/oauth2adapt v0.2.4/go.mod h1:jC/jOpwFP6JBxhB3P5Rr0a9HLMC/Pe3eaL4NmdvqPtc= cloud.google.com/go/bigquery v1.0.1/go.mod 
h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o= cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE= cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvftPBK2Dvzc= cloud.google.com/go/bigquery v1.5.0/go.mod h1:snEHRnqQbz117VIFhE8bmtwIDY80NLUZUMb4Nv6dBIg= cloud.google.com/go/bigquery v1.7.0/go.mod h1://okPTzCYNXSlb24MZs83e2Do+h+VXtc4gLoIoXIAPc= cloud.google.com/go/bigquery v1.8.0/go.mod h1:J5hqkt3O0uAFnINi6JXValWIb1v0goeZM77hZzJN/fQ= -cloud.google.com/go/compute/metadata v0.3.0 h1:Tz+eQXMEqDIKRsmY3cHTL6FVaynIjX2QxYC4trgAKZc= -cloud.google.com/go/compute/metadata v0.3.0/go.mod h1:zFmK7XCadkQkj6TtorcaGlCW1hT1fIilQDwofLpJ20k= +cloud.google.com/go/compute/metadata v0.5.0 h1:Zr0eK8JbFv6+Wi4ilXAR8FJ3wyNdpxHKJNPos6LTZOY= +cloud.google.com/go/compute/metadata v0.5.0/go.mod h1:aHnloV2TPI38yx4s9+wAZhHykWvVCfu7hQbF+9CWoiY= cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE= cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk= -cloud.google.com/go/iam v1.1.1 h1:lW7fzj15aVIXYHREOqjRBV9PsH0Z6u8Y46a1YGvQP4Y= -cloud.google.com/go/iam v1.1.1/go.mod h1:A5avdyVL2tCppe4unb0951eI9jreack+RJ0/d+KUZOU= +cloud.google.com/go/iam v1.1.13 h1:7zWBXG9ERbMLrzQBRhFliAV+kjcRToDTgQT3CTwYyv4= +cloud.google.com/go/iam v1.1.13/go.mod h1:K8mY0uSXwEXS30KrnVb+j54LB/ntfZu1dr+4zFMNbus= cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I= cloud.google.com/go/pubsub v1.1.0/go.mod h1:EwwdRX2sKPjnvnqCa270oGRyludottCI76h+R3AArQw= cloud.google.com/go/pubsub v1.2.0/go.mod h1:jhfEVHT8odbXTkndysNHCcx0awwzvfOlguIAii9o8iA= @@ -100,6 +106,7 @@ github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuy github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc= github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0= github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0= +github.com/alecthomas/units v0.0.0-20190924025748-f65c72e2690d/go.mod h1:rBZYJk541a8SKzHPHnH3zbiI+7dagKZ0cgpgrD7Fyho= github.com/aliyun/alibaba-cloud-sdk-go v1.62.156 h1:K4N91T1+RlSlx+t2dujeDviy4ehSGVjEltluDgmeHS4= github.com/aliyun/alibaba-cloud-sdk-go v1.62.156/go.mod h1:Api2AkmMgGaSUAhmk76oaFObkoeCPc/bKAqcyplPODs= github.com/antihax/optional v1.0.0/go.mod h1:uupD/76wgC+ih3iEmQUL+0Ugr19nfwCT1kdvxnR2qWY= @@ -119,8 +126,39 @@ github.com/armon/go-radix v1.0.0/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgI github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2 h1:DklsrG3dyBCFEj5IhUbnKptjxatkF07cF2ak3yi77so= github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2/go.mod h1:WaHUgvxTVq04UNunO+XhnAqY/wQc+bxr74GqbsZ/Jqw= github.com/aws/aws-sdk-go v1.30.27/go.mod h1:5zCpMtNQVjRREroY7sYe8lOMRSxkhG6MZveU8YkpAk0= -github.com/aws/aws-sdk-go v1.55.5 h1:KKUZBfBoyqy5d3swXyiC7Q76ic40rYcbqH7qjh59kzU= -github.com/aws/aws-sdk-go v1.55.5/go.mod h1:eRwEWoyTWFMVYVQzKMNHWP5/RV4xIUGMQfXQHfHkpNU= +github.com/aws/aws-sdk-go v1.55.7 h1:UJrkFq7es5CShfBwlWAC8DA077vp8PyVbQd3lqLiztE= +github.com/aws/aws-sdk-go v1.55.7/go.mod h1:eRwEWoyTWFMVYVQzKMNHWP5/RV4xIUGMQfXQHfHkpNU= +github.com/aws/aws-sdk-go-v2 v1.33.0 h1:Evgm4DI9imD81V0WwD+TN4DCwjUMdc94TrduMLbgZJs= +github.com/aws/aws-sdk-go-v2 v1.33.0/go.mod h1:P5WJBrYqqbWVaOxgH0X/FYYD47/nooaPOZPlQdmiN2U= +github.com/aws/aws-sdk-go-v2/config v1.29.1 
h1:JZhGawAyZ/EuJeBtbQYnaoftczcb2drR2Iq36Wgz4sQ= +github.com/aws/aws-sdk-go-v2/config v1.29.1/go.mod h1:7bR2YD5euaxBhzt2y/oDkt3uNRb6tjFp98GlTFueRwk= +github.com/aws/aws-sdk-go-v2/credentials v1.17.54 h1:4UmqeOqJPvdvASZWrKlhzpRahAulBfyTJQUaYy4+hEI= +github.com/aws/aws-sdk-go-v2/credentials v1.17.54/go.mod h1:RTdfo0P0hbbTxIhmQrOsC/PquBZGabEPnCaxxKRPSnI= +github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.24 h1:5grmdTdMsovn9kPZPI23Hhvp0ZyNm5cRO+IZFIYiAfw= +github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.24/go.mod h1:zqi7TVKTswH3Ozq28PkmBmgzG1tona7mo9G2IJg4Cis= +github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.28 h1:igORFSiH3bfq4lxKFkTSYDhJEUCYo6C8VKiWJjYwQuQ= +github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.28/go.mod h1:3So8EA/aAYm36L7XIvCVwLa0s5N0P7o2b1oqnx/2R4g= +github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.28 h1:1mOW9zAUMhTSrMDssEHS/ajx8JcAj/IcftzcmNlmVLI= +github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.28/go.mod h1:kGlXVIWDfvt2Ox5zEaNglmq0hXPHgQFNMix33Tw22jA= +github.com/aws/aws-sdk-go-v2/internal/ini v1.8.1 h1:VaRN3TlFdd6KxX1x3ILT5ynH6HvKgqdiXoTxAF4HQcQ= +github.com/aws/aws-sdk-go-v2/internal/ini v1.8.1/go.mod h1:FbtygfRFze9usAadmnGJNc8KsP346kEe+y2/oyhGAGc= +github.com/aws/aws-sdk-go-v2/service/ec2 v1.200.0 h1:3hH6o7Z2WeE1twvz44Aitn6Qz8DZN3Dh5IB4Eh2xq7s= +github.com/aws/aws-sdk-go-v2/service/ec2 v1.200.0/go.mod h1:I76S7jN0nfsYTBtuTgTsJtK2Q8yJVDgrLr5eLN64wMA= +github.com/aws/aws-sdk-go-v2/service/ecs v1.53.8 h1:v1OectQdV/L+KSFSiqK00fXGN8FbaljRfNFysmWB8D0= +github.com/aws/aws-sdk-go-v2/service/ecs v1.53.8/go.mod h1:F0DbgxpvuSvtYun5poG67EHLvci4SgzsMVO6SsPUqKk= +github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.1 h1:iXtILhvDxB6kPvEXgsDhGaZCSC6LQET5ZHSdJozeI0Y= +github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.1/go.mod h1:9nu0fVANtYiAePIBh2/pFUSwtJ402hLnp854CNoDOeE= +github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.9 h1:TQmKDyETFGiXVhZfQ/I0cCFziqqX58pi4tKJGYGFSz0= +github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.9/go.mod h1:HVLPK2iHQBUx7HfZeOQSEu3v2ubZaAY2YPbAm5/WUyY= +github.com/aws/aws-sdk-go-v2/service/sso v1.24.11 h1:kuIyu4fTT38Kj7YCC7ouNbVZSSpqkZ+LzIfhCr6Dg+I= +github.com/aws/aws-sdk-go-v2/service/sso v1.24.11/go.mod h1:Ro744S4fKiCCuZECXgOi760TiYylUM8ZBf6OGiZzJtY= +github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.10 h1:l+dgv/64iVlQ3WsBbnn+JSbkj01jIi+SM0wYsj3y/hY= +github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.10/go.mod h1:Fzsj6lZEb8AkTE5S68OhcbBqeWPsR8RnGuKPr8Todl8= +github.com/aws/aws-sdk-go-v2/service/sts v1.33.9 h1:BRVDbewN6VZcwr+FBOszDKvYeXY1kJ+GGMCcpghlw0U= +github.com/aws/aws-sdk-go-v2/service/sts v1.33.9/go.mod h1:f6vjfZER1M17Fokn0IzssOTMT2N8ZSq+7jnNF0tArvw= +github.com/aws/smithy-go v1.22.1 h1:/HPHZQ0g7f4eUeK6HKglFz8uwVfZKgoI25rb/J+dnro= +github.com/aws/smithy-go v1.22.1/go.mod h1:irrKGvNn1InZwb2d7fkIRNucdfwR8R+Ts3wxYa/cJHg= +github.com/benbjohnson/clock v1.1.0/go.mod h1:J11/hYXuz8f4ySSvYwY0FKfm+ezbsZBKZxNJlLklBHA= github.com/benbjohnson/immutable v0.4.0 h1:CTqXbEerYso8YzVPxmWxh2gnoRQbbB9X1quUC8+vGZA= github.com/benbjohnson/immutable v0.4.0/go.mod h1:iAr8OjJGLnLmVUr9MZ/rz4PWUy6Ouc2JLYuMArmvAJM= github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q= @@ -138,8 +176,8 @@ github.com/census-instrumentation/opencensus-proto v0.4.1 h1:iKLQ0xPNFxR/2hzXZMr github.com/census-instrumentation/opencensus-proto v0.4.1/go.mod h1:4T9NM4+4Vw91VeyqjLS6ao50K5bOcLKN6Q42XnYaRYw= github.com/cespare/xxhash v1.1.0/go.mod 
h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc= github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= -github.com/cespare/xxhash/v2 v2.2.0 h1:DC2CZ1Ep5Y4k3ZQ899DldepgrayRUGE6BBZ/cd9Cj44= -github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= +github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs= +github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI= github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI= github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU= @@ -151,13 +189,10 @@ github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDk github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc= github.com/cncf/udpa/go v0.0.0-20200629203442-efcf912fb354/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk= github.com/cncf/udpa/go v0.0.0-20201120205902-5459f2c99403/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk= -github.com/cncf/udpa/go v0.0.0-20210930031921-04548b0d99d4/go.mod h1:6pvJx4me5XPnfI9Z40ddWsdw2W/uZgQLFXToKeRcDiI= github.com/cncf/xds/go v0.0.0-20210312221358-fbca930ec8ed/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs= github.com/cncf/xds/go v0.0.0-20210805033703-aa0b78936158/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs= -github.com/cncf/xds/go v0.0.0-20210922020428-25de7278fc84/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs= -github.com/cncf/xds/go v0.0.0-20211011173535-cb28da3451f1/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs= -github.com/cncf/xds/go v0.0.0-20230607035331-e9ce68804cb4 h1:/inchEIKaYC1Akx+H+gqO04wryn5h75LSazbRlnya1k= -github.com/cncf/xds/go v0.0.0-20230607035331-e9ce68804cb4/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs= +github.com/cncf/xds/go v0.0.0-20240423153145-555b57ec207b h1:ga8SEFjZ60pxLcmhnThWgvH2wg8376yUJmPhEH4H3kw= +github.com/cncf/xds/go v0.0.0-20240423153145-555b57ec207b/go.mod h1:W+zGtBO5Y1IgJhy4+A9GOqVhqLpfZi+vwmdNXUehLA8= github.com/cockroachdb/apd v1.1.0/go.mod h1:8Sl8LxpKi29FqWXR16WEFZRNSz3SoPzUzeMeY4+DwBQ= github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk= github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE= @@ -213,15 +248,17 @@ github.com/envoyproxy/go-control-plane v0.12.0/go.mod h1:ZBTaoJ23lqITozF0M6G4/Ir github.com/envoyproxy/go-control-plane/xdsmatcher v0.0.0-20230524161521-aaaacbfbe53e h1:g8euodkL4GdSpVAjfzhssb07KgVmOUqyF4QOmwFumTs= github.com/envoyproxy/go-control-plane/xdsmatcher v0.0.0-20230524161521-aaaacbfbe53e/go.mod h1:/NGEcKqwNq3HAS2vCqHfsPx9sJZbkiNQ6dGx9gTE/NA= github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c= -github.com/envoyproxy/protoc-gen-validate v1.0.2 h1:QkIBuU5k+x7/QXPvPPnWXWlCdaBFApVqftFV6k087DA= -github.com/envoyproxy/protoc-gen-validate v1.0.2/go.mod h1:GpiZQP3dDbg4JouG/NNS7QWXpgx6x8QiMKdmN72jogE= +github.com/envoyproxy/protoc-gen-validate v1.0.4 h1:gVPz/FMfvh57HdSJQyvBtF00j8JU4zdyUgIUNhlgg0A= +github.com/envoyproxy/protoc-gen-validate v1.0.4/go.mod h1:qys6tmnRsYrQqIhm2bvKZH4Blx/1gTIZ2UKVY1M+Yew= github.com/evanphx/json-patch/v5 v5.5.0/go.mod h1:G79N1coSVB93tBe7j6PhzjmR3/2VvlbKOFpnXhI9Bw4= github.com/fatih/color 
v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4= github.com/fatih/color v1.9.0/go.mod h1:eQcE1qtQxscV5RaZvpXrrb8Drkc3/DdQ+uUYCNjL+zU= github.com/fatih/color v1.13.0/go.mod h1:kLAiJbzzSOZDVNGyDpeOxJ47H46qBXwg5ILebYFFOfk= -github.com/fatih/color v1.16.0 h1:zmkK9Ngbjj+K0yRhTVONQh1p/HknKYSlNT+vZCzyokM= -github.com/fatih/color v1.16.0/go.mod h1:fL2Sau1YI5c0pdGEVCbKQbLXB6edEj1ZgiY4NijnWvE= +github.com/fatih/color v1.18.0 h1:S8gINlzdQ840/4pfAwic/ZE0djQEH3wM94VfqLTZcOM= +github.com/fatih/color v1.18.0/go.mod h1:4FelSpRwEGDpQ12mAdzqdOukCy4u8WUtOY6lkT/6HfU= github.com/fatih/structs v1.1.0/go.mod h1:9NiDSp5zOcgEDl+j00MP/WkGVPOlPRLejGD8Ga6PJ7M= +github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg= +github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U= github.com/frankban/quicktest v1.10.0/go.mod h1:ui7WezCLWMWxVWr1GETZY3smRy0G4KWq9vcPtJmFl7Y= github.com/frankban/quicktest v1.13.0/go.mod h1:qLE0fzW0VuyUAJgPU19zByoIr0HtCHN/r/VLSOOIySU= github.com/frankban/quicktest v1.14.3 h1:FJKSZTDHjyhriyC81FLQ0LY93eSai0ZyR/ZIkd3ZUKE= @@ -236,25 +273,26 @@ github.com/go-asn1-ber/asn1-ber v1.3.1/go.mod h1:hEBeB/ic+5LoWskz+yKT7vGhhPYkPro github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU= github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8= github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8= -github.com/go-jose/go-jose/v3 v3.0.3 h1:fFKWeig/irsp7XD2zBxvnmA/XaRWp5V3CBsZXJF7G7k= -github.com/go-jose/go-jose/v3 v3.0.3/go.mod h1:5b+7YgP7ZICgJDBdfjZaIt+H/9L9T/YQrVfLAMboGkQ= +github.com/go-jose/go-jose/v3 v3.0.4 h1:Wp5HA7bLQcKnf6YYao/4kpRpVMp/yf6+pJKV8WFSaNY= +github.com/go-jose/go-jose/v3 v3.0.4/go.mod h1:5b+7YgP7ZICgJDBdfjZaIt+H/9L9T/YQrVfLAMboGkQ= github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= +github.com/go-kit/log v0.1.0/go.mod h1:zbhenjAZHb184qTLMA9ZjW7ThYL0H2mk7Q6pNt4vbaY= github.com/go-ldap/ldap/v3 v3.1.10/go.mod h1:5Zun81jBTabRaI8lzN7E1JjyEl1g6zI6u9pd8luAK4Q= github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE= github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk= -github.com/go-logr/logr v1.2.0/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A= +github.com/go-logfmt/logfmt v0.5.0/go.mod h1:wCYkCAKZfumFQihp8CzCvQ3paCTfi41vtzG1KdI/P7A= github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A= -github.com/go-logr/logr v1.3.0 h1:2y3SDp0ZXuc6/cjLSZ+Q3ir+QB9T/iG5yYRXqsagWSY= -github.com/go-logr/logr v1.3.0/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= +github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY= +github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag= github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE= github.com/go-ole/go-ole v1.2.6 h1:/Fpf6oFPoeFik9ty7siob0G6Ke8QvQEuVcuChpwXzpY= github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0= github.com/go-openapi/analysis v0.21.5 h1:3tHfEBh6Ia8eKc4M7khOGjPOAlWKJ10d877Cr9teujI= github.com/go-openapi/analysis v0.21.5/go.mod h1:25YcZosX9Lwz2wBsrFrrsL8bmjjXdlyP6zsr2AMy29M= 
-github.com/go-openapi/errors v0.21.0 h1:FhChC/duCnfoLj1gZ0BgaBmzhJC2SL/sJr8a2vAobSY= -github.com/go-openapi/errors v0.21.0/go.mod h1:jxNTMUxRCKj65yb/okJGEtahVd7uvWnuWfj53bse4ho= +github.com/go-openapi/errors v0.22.0 h1:c4xY/OLxUBSTiepAg3j/MHuAv5mJhnf53LLMWFB+u/w= +github.com/go-openapi/errors v0.22.0/go.mod h1:J3DmZScxCDufmIMsdOuDHxJbdOGC0xtUynjIx092vXE= github.com/go-openapi/jsonpointer v0.20.1 h1:MkK4VEIEZMj4wT9PmjaUmGflVBr9nvud4Q4UVFbDoBE= github.com/go-openapi/jsonpointer v0.20.1/go.mod h1:bHen+N0u1KEO3YlmqOjTT9Adn1RfD91Ar825/PuiRVs= github.com/go-openapi/jsonreference v0.20.3 h1:EjGcjTW8pD1mRis6+w/gmoBdqv5+RbE9B85D1NgDOVQ= @@ -265,8 +303,8 @@ github.com/go-openapi/runtime v0.26.2 h1:elWyB9MacRzvIVgAZCBJmqTi7hBzU0hlKD4IvfX github.com/go-openapi/runtime v0.26.2/go.mod h1:O034jyRZ557uJKzngbMDJXkcKJVzXJiymdSfgejrcRw= github.com/go-openapi/spec v0.20.12 h1:cgSLbrsmziAP2iais+Vz7kSazwZ8rsUZd6TUzdDgkVI= github.com/go-openapi/spec v0.20.12/go.mod h1:iSCgnBcwbMW9SfzJb8iYynXvcY6C/QFrI7otzF7xGM4= -github.com/go-openapi/strfmt v0.21.10 h1:JIsly3KXZB/Qf4UzvzJpg4OELH/0ASDQsyk//TTBDDk= -github.com/go-openapi/strfmt v0.21.10/go.mod h1:vNDMwbilnl7xKiO/Ve/8H8Bb2JIInBnH+lqiw6QWgis= +github.com/go-openapi/strfmt v0.23.0 h1:nlUS6BCqcnAk0pyhi9Y+kdDVZdZMHfEKQiS4HaMgO/c= +github.com/go-openapi/strfmt v0.23.0/go.mod h1:NrtIpfKtWIygRkKVsxh7XQMDQW5HKQl6S5ik2elW+K4= github.com/go-openapi/swag v0.22.5 h1:fVS63IE3M0lsuWRzuom3RLwUMVI2peDH01s6M70ugys= github.com/go-openapi/swag v0.22.5/go.mod h1:Gl91UqO+btAM0plGGxHqJcQZ1ZTy6jbmridBTsDy8A0= github.com/go-openapi/validate v0.22.4 h1:5v3jmMyIPKTR8Lv9syBAIRxG6lY0RqeBPB1LKEijzk8= @@ -285,11 +323,11 @@ github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69 github.com/goji/httpauth v0.0.0-20160601135302-2da839ab0f4d/go.mod h1:nnjvkQ9ptGaCkuDUx6wNykzzlUixGxvkme+H/lnzb+A= github.com/golang-jwt/jwt/v4 v4.0.0/go.mod h1:/xlHOz8bRuivTWchD4jCa+NbatV+wEUSzwAxVc6locg= github.com/golang-jwt/jwt/v4 v4.2.0/go.mod h1:/xlHOz8bRuivTWchD4jCa+NbatV+wEUSzwAxVc6locg= -github.com/golang-jwt/jwt/v4 v4.5.1 h1:JdqV9zKUdtaa9gdPlywC3aeoEsR681PlKC+4F5gQgeo= -github.com/golang-jwt/jwt/v4 v4.5.1/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0= +github.com/golang-jwt/jwt/v4 v4.5.2 h1:YtQM7lnr8iZ+j5q71MGKkNw9Mn7AjHM68uc9g5fXeUI= +github.com/golang-jwt/jwt/v4 v4.5.2/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0= github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q= -github.com/golang/glog v1.1.0 h1:/d3pCKDPWNnvIWe0vVUpNP32qc8U3PDVxySP/y360qE= -github.com/golang/glog v1.1.0/go.mod h1:pfYeQZ3JWZoXTV5sFc986z3HTpwQs9At6P4ImfuP3NQ= +github.com/golang/glog v1.2.1 h1:OptwRhECazUx5ix5TTWC3EZhsZEHWcYWY4FQHTIubm4= +github.com/golang/glog v1.2.1/go.mod h1:6AhwSGph0fcJtXVM/PEHPqZlFeoLxhs7/t5UDAwmO+w= github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= @@ -324,7 +362,6 @@ github.com/golang/protobuf v1.5.1/go.mod h1:DopwsBzvsk0Fs44TXzsVbJyPhcCPeIwnvohx github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY= github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek= github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps= 
-github.com/golang/snappy v0.0.1/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q= github.com/golang/snappy v0.0.3/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q= github.com/golang/snappy v0.0.4 h1:yAGX7huGHXlcLOEtBnF4w7FQwA26wojNCwOYAEhLjQM= github.com/golang/snappy v0.0.4/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q= @@ -374,27 +411,27 @@ github.com/google/pprof v0.0.0-20210609004039-a478d1d731e9/go.mod h1:kpwsk12EmLe github.com/google/pprof v0.0.0-20210720184732-4bb14d4b1be1 h1:K6RDEckDVWvDI9JAJYCmNdQXq6neHJOYx3V6jnqNEec= github.com/google/pprof v0.0.0-20210720184732-4bb14d4b1be1/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE= github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI= -github.com/google/s2a-go v0.1.4 h1:1kZ/sQM3srePvKs3tXAvQzo66XfcReoqFpIpIccE7Oc= -github.com/google/s2a-go v0.1.4/go.mod h1:Ej+mSEMGRnqRzjc7VtF+jdBwYG5fuJfiZ8ELkjEwM0A= +github.com/google/s2a-go v0.1.8 h1:zZDs9gcbt9ZPLV0ndSyQk6Kacx2g/X+SKYovpnz3SMM= +github.com/google/s2a-go v0.1.8/go.mod h1:6iNWHTpQ+nfNRN5E00MSdfDwVesa8hhS32PhPO8deJA= github.com/google/tcpproxy v0.0.0-20180808230851-dfa16c61dad2 h1:AtvtonGEH/fZK0XPNNBdB6swgy7Iudfx88wzyIpwqJ8= github.com/google/tcpproxy v0.0.0-20180808230851-dfa16c61dad2/go.mod h1:DavVbd41y+b7ukKDmlnPR4nGYmkWXR6vHUkjQNiHPBs= github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= -github.com/google/uuid v1.4.0 h1:MtMxsa51/r9yyhkyLsVeVt0B+BGQZzpQiTQ4eHZ8bc4= -github.com/google/uuid v1.4.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= -github.com/googleapis/enterprise-certificate-proxy v0.2.3 h1:yk9/cqRKtT9wXZSsRH9aurXEpJX+U6FLtpYTdC3R06k= -github.com/googleapis/enterprise-certificate-proxy v0.2.3/go.mod h1:AwSRAtLfXpU5Nm3pW+v7rGDHp09LsPtGY9MduiEsR9k= +github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0= +github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= +github.com/googleapis/enterprise-certificate-proxy v0.3.2 h1:Vie5ybvEvT75RniqhfFxPRy3Bf7vr3h0cechB90XaQs= +github.com/googleapis/enterprise-certificate-proxy v0.3.2/go.mod h1:VLSiSSBs/ksPL8kq3OBOQ6WRI2QnaFynd1DCjZ62+V0= github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg= github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk= github.com/googleapis/gax-go/v2 v2.1.0/go.mod h1:Q3nei7sK6ybPYH7twZdmQpAd1MKb7pfu6SK+H1/DsU0= -github.com/googleapis/gax-go/v2 v2.11.0 h1:9V9PWXEsWnPpQhu/PeQIkS4eGzMlTLGgt80cUUI8Ki4= -github.com/googleapis/gax-go/v2 v2.11.0/go.mod h1:DxmR61SGKkGLa2xigwuZIQpkCI2S5iydzRfb3peWZJI= +github.com/googleapis/gax-go/v2 v2.13.0 h1:yitjD5f7jQHhyDsnhKEBU52NdvvdSeGzlAnDPT0hH1s= +github.com/googleapis/gax-go/v2 v2.13.0/go.mod h1:Z/fvTZXF8/uw7Xu5GuslPw+bplx6SS338j1Is2S+B7A= github.com/gophercloud/gophercloud v0.3.0 h1:6sjpKIpVwRIIwmcEGp+WwNovNsem+c+2vm6oxshRpL8= github.com/gophercloud/gophercloud v0.3.0/go.mod h1:vxM41WHh5uqHVBMZHzuwNOHh8XEoIEcSTewFxm1c5g8= github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ= github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs= -github.com/grpc-ecosystem/go-grpc-middleware v1.3.0 h1:+9834+KizmvFV7pXQGSXQTsaWhq2GjuNUt0aUU0YBYw= -github.com/grpc-ecosystem/go-grpc-middleware v1.3.0/go.mod h1:z0ButlSOZa5vEBq9m2m2hlwIgKw+rp3sdCBRoJY+30Y= +github.com/grpc-ecosystem/go-grpc-middleware 
v1.4.0 h1:UH//fgunKIs4JdUbpDl1VZCDaL56wXCB/5+wF6uHfaI= +github.com/grpc-ecosystem/go-grpc-middleware v1.4.0/go.mod h1:g5qyo/la0ALbONm6Vbp88Yd8NsDy6rZz+RcrMPxvld8= github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk= github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY= github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw= @@ -419,8 +456,10 @@ github.com/hashicorp/go-cleanhttp v0.5.2 h1:035FKYIWjmULyFRBKPs8TBQoi0x6d9G4xc9n github.com/hashicorp/go-cleanhttp v0.5.2/go.mod h1:kO/YDlP8L1346E6Sodw+PrpBSV4/SoxCXGY6BqNFT48= github.com/hashicorp/go-connlimit v0.3.0 h1:oAojHGjFxUTTTA8c5XXnDqWJ2HLuWbDiBPTpWvNzvqM= github.com/hashicorp/go-connlimit v0.3.0/go.mod h1:OUj9FGL1tPIhl/2RCfzYHrIiWj+VVPGNyVPnUX8AqS0= -github.com/hashicorp/go-discover v0.0.0-20230724184603-e89ebd1b2f65 h1:+ZwaKkFuVWS7FZoiltT+7XW/MEFckY9Wxe+xIEErLcM= -github.com/hashicorp/go-discover v0.0.0-20230724184603-e89ebd1b2f65/go.mod h1:RH2Jr1/cCsZ1nRLmAOC65hp/gRehf55SsUIYV2+NAxI= +github.com/hashicorp/go-discover v1.0.0 h1:yNkCyetOdCDtuZLyMGmYW7oC/mlRmeQou23wcgmRetM= +github.com/hashicorp/go-discover v1.0.0/go.mod h1:jqvs0vDZPpnKlN21oG80bwkiIKPGCrmKChV6qItAjI0= +github.com/hashicorp/go-discover/provider/gce v0.0.0-20241120163552-5eb1507d16b4 h1:ywaDsVo7n5ko12YD8uXjuQ8G2mQhC2mxAc4Kj3WW3GE= +github.com/hashicorp/go-discover/provider/gce v0.0.0-20241120163552-5eb1507d16b4/go.mod h1:yxikfLXA8Y5JA3FcFTR720PfqVEFd0dZY9FBpmcsO54= github.com/hashicorp/go-hclog v0.9.1/go.mod h1:5CU+agLiy3J7N7QjHK5d05KxGsuXiQLrjA0H7acj2lQ= github.com/hashicorp/go-hclog v0.9.2/go.mod h1:5CU+agLiy3J7N7QjHK5d05KxGsuXiQLrjA0H7acj2lQ= github.com/hashicorp/go-hclog v0.14.1/go.mod h1:whpDNt7SSdeAju8AWKIWsul05p54N/39EeqMAyrmvFQ= @@ -436,11 +475,13 @@ github.com/hashicorp/go-immutable-radix/v2 v2.1.0/go.mod h1:hgdqLXA4f6NIjRVisM1T github.com/hashicorp/go-kms-wrapping/entropy/v2 v2.0.0/go.mod h1:xvb32K2keAc+R8DSFG2IwDcydK9DBQE+fGA5fsw6hSk= github.com/hashicorp/go-memdb v1.3.4 h1:XSL3NR682X/cVk2IeV0d70N4DZ9ljI885xAEU8IoK3c= github.com/hashicorp/go-memdb v1.3.4/go.mod h1:uBTr1oQbtuMgd1SSGoR8YV27eT3sBHbYiNm53bMpgSg= +github.com/hashicorp/go-metrics v0.5.4 h1:8mmPiIJkTPPEbAiV97IxdAGNdRdaWwVap1BU6elejKY= +github.com/hashicorp/go-metrics v0.5.4/go.mod h1:CG5yz4NZ/AI/aQt9Ucm/vdBnbh7fvmv4lxZ350i+QQI= github.com/hashicorp/go-msgpack v0.5.3/go.mod h1:ahLV/dePpqEmjfWmKiqvPkv/twdG7iPBM1vqhUKIvfM= github.com/hashicorp/go-msgpack v0.5.5 h1:i9R9JSrqIz0QVLz3sz+i3YJdT7TTSLcfLLzJi9aZTuI= github.com/hashicorp/go-msgpack v0.5.5/go.mod h1:ahLV/dePpqEmjfWmKiqvPkv/twdG7iPBM1vqhUKIvfM= -github.com/hashicorp/go-msgpack/v2 v2.1.1 h1:xQEY9yB2wnHitoSzk/B9UjXWRQ67QKu5AOm8aFp8N3I= -github.com/hashicorp/go-msgpack/v2 v2.1.1/go.mod h1:upybraOAblm4S7rx0+jeNy+CWWhzywQsSRV5033mMu4= +github.com/hashicorp/go-msgpack/v2 v2.1.2 h1:4Ee8FTp834e+ewB71RDrQ0VKpyFdrKOjvYtnQ/ltVj0= +github.com/hashicorp/go-msgpack/v2 v2.1.2/go.mod h1:upybraOAblm4S7rx0+jeNy+CWWhzywQsSRV5033mMu4= github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk= github.com/hashicorp/go-multierror v1.1.0/go.mod h1:spPvp8C1qA32ftKqdAHm4hHTbPw+vmowP0z+KUhOZdA= github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo= @@ -492,8 +533,8 @@ github.com/hashicorp/hcl v1.0.1-vault-7 h1:ag5OxFVy3QYTFTJODRzTKVZ6xvdfLLCA1cy/Y github.com/hashicorp/hcl v1.0.1-vault-7/go.mod h1:XYhtn6ijBSAj6n4YqAaf7RBPS4I06AItNorpy+MoQNM= 
github.com/hashicorp/hcl/v2 v2.14.1 h1:x0BpjfZ+CYdbiz+8yZTQ+gdLO7IXvOut7Da+XJayx34= github.com/hashicorp/hcl/v2 v2.14.1/go.mod h1:e4z5nxYlWNPdDSNYX+ph14EvWYMFm3eP0zIUqPc2jr0= -github.com/hashicorp/hcp-scada-provider v0.2.4 h1:XvctVEd4VqWVlqN1VA4vIhJANstZrc4gd2oCfrFLWZc= -github.com/hashicorp/hcp-scada-provider v0.2.4/go.mod h1:ZFTgGwkzNv99PLQjTsulzaCplCzOTBh0IUQsPKzrQFo= +github.com/hashicorp/hcp-scada-provider v0.2.6 h1:nGBrrPpL9B+6jOrvYkOUULuEB+HpAjySYa5rrL4eaaE= +github.com/hashicorp/hcp-scada-provider v0.2.6/go.mod h1:ZFTgGwkzNv99PLQjTsulzaCplCzOTBh0IUQsPKzrQFo= github.com/hashicorp/hcp-sdk-go v0.80.0 h1:oKGx7+X0llBN5NEpkWg0Qe3x9DIAH6cc3MrxZptDB7Y= github.com/hashicorp/hcp-sdk-go v0.80.0/go.mod h1:vQ4fzdL1AmhIAbCw+4zmFe5Hbpajj3NvRWkJoVuxmAk= github.com/hashicorp/hil v0.0.0-20200423225030-a18a1cd20038 h1:n9J0rwVWXDpNd5iZnwY7w4WZyq53/rROeI7OVvLW8Ok= @@ -508,8 +549,8 @@ github.com/hashicorp/net-rpc-msgpackrpc/v2 v2.0.0/go.mod h1:6pdNz0vo0mF0GvhwDG56 github.com/hashicorp/raft v1.1.0/go.mod h1:4Ak7FSPnuvmb0GV6vgIAJ4vYT4bek9bb6Q+7HVbyzqM= github.com/hashicorp/raft v1.2.0/go.mod h1:vPAJM8Asw6u8LxC3eJCUZmRP/E4QmUGE1R7g7k8sG/8= github.com/hashicorp/raft v1.3.11/go.mod h1:J8naEwc6XaaCfts7+28whSeRvCqTd6e20BlCU3LtEO4= -github.com/hashicorp/raft v1.7.0 h1:4u24Qn6lQ6uwziM++UgsyiT64Q8GyRn43CV41qPiz1o= -github.com/hashicorp/raft v1.7.0/go.mod h1:N1sKh6Vn47mrWvEArQgILTyng8GoDRNYlgKyK7PMjs0= +github.com/hashicorp/raft v1.7.3 h1:DxpEqZJysHN0wK+fviai5mFcSYsCkNpFUl1xpAW8Rbo= +github.com/hashicorp/raft v1.7.3/go.mod h1:DfvCGFxpAUPE0L4Uc8JLlTPtc3GzSbdH0MTJCLgnmJQ= github.com/hashicorp/raft-autopilot v0.1.6 h1:C1q3RNF2FfXNZfHWbvVAu0QixaQK8K5pX4O5lh+9z4I= github.com/hashicorp/raft-autopilot v0.1.6/go.mod h1:Af4jZBwaNOI+tXfIqIdbcAnh/UyyqIMj/pOISIfhArw= github.com/hashicorp/raft-boltdb v0.0.0-20171010151810-6e5ba93211ea/go.mod h1:pNv7Wc3ycL6F5oOWn+tPGo2gWD4a5X+yp/ntwdKLjRk= @@ -569,19 +610,23 @@ github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8Hm github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y= github.com/joyent/triton-go v1.7.1-0.20200416154420-6801d15b779f h1:ENpDacvnr8faw5ugQmEF1QYk+f/Y9lXFvuYmRxykago= github.com/joyent/triton-go v1.7.1-0.20200416154420-6801d15b779f/go.mod h1:KDSfL7qe5ZfQqvlDMkVjCztbmcpp/c8M77vhQP8ZPvk= +github.com/jpillora/backoff v1.0.0/go.mod h1:J/6gKK9jxlEcS3zixgDgUAsiuZ7yrSoa/FX5e0EB2j4= github.com/json-iterator/go v1.1.5/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU= github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU= github.com/json-iterator/go v1.1.9/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4= +github.com/json-iterator/go v1.1.10/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4= +github.com/json-iterator/go v1.1.11/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4= github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM= github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo= github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU= github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk= github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w= +github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM= github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q= 
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8= github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck= -github.com/klauspost/compress v1.13.6/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk= github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= +github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc= github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI= @@ -646,8 +691,8 @@ github.com/mitchellh/go-ps v1.0.0 h1:i6ampVEEF4wQFF+bkYfwYgY+F/uYJDktmvLPf7qIgjc github.com/mitchellh/go-ps v1.0.0/go.mod h1:J4lOc8z8yJs6vUwklHw2XEIiT4z4C40KtWVN3nvg8Pg= github.com/mitchellh/go-testing-interface v0.0.0-20171004221916-a61a99592b77/go.mod h1:kRemZodwjscx+RGhAo8eIhFbs2+BFgRtFPeD/KE+zxI= github.com/mitchellh/go-testing-interface v1.0.0/go.mod h1:kRemZodwjscx+RGhAo8eIhFbs2+BFgRtFPeD/KE+zxI= -github.com/mitchellh/go-testing-interface v1.14.0 h1:/x0XQ6h+3U3nAyk1yx+bHPURrKa9sVVvYbuqZ7pIAtI= -github.com/mitchellh/go-testing-interface v1.14.0/go.mod h1:gfgS7OtZj6MA4U1UrDRp04twqAjfvlZyCfX3sDjEym8= +github.com/mitchellh/go-testing-interface v1.14.1 h1:jrgshOhYAUVNMAJiKbEu7EqAwgJJ2JqpQmpLJOu07cU= +github.com/mitchellh/go-testing-interface v1.14.1/go.mod h1:gfgS7OtZj6MA4U1UrDRp04twqAjfvlZyCfX3sDjEym8= github.com/mitchellh/go-wordwrap v1.0.0/go.mod h1:ZXFpozHsX6DPmq2I0TCekCxypsnAUbP2oI0UX1GXzOo= github.com/mitchellh/go-wordwrap v1.0.1 h1:TLuKupo69TCn6TQSyGxwI1EblZZEsQ0vMlAFQflz0v0= github.com/mitchellh/go-wordwrap v1.0.1/go.mod h1:R62XHJLzvMFRBbcrT7m7WgmE1eOyTSsCt+hzestvNj0= @@ -672,10 +717,10 @@ github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lN github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0= github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M= github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk= -github.com/montanaflynn/stats v0.0.0-20171201202039-1bf9dbcd8cbe/go.mod h1:wL8QJuTMNUDYhXwkmfOly8iTdp5TEcJFWZD2D7SIkUc= github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA= github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ= github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U= +github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U= github.com/natefinch/npipe v0.0.0-20160621034901-c1b8fa8bdcce h1:TqjP/BTDrwN7zP9xyXVuLsMBXYMt6LLYi55PlrIcq8U= github.com/natefinch/npipe v0.0.0-20160621034901-c1b8fa8bdcce/go.mod h1:ifHPsLndGGzvgzcaXUvzmt6LxKT4pJ+uzEhtnMt+f7A= github.com/nicolai86/scaleway-sdk v1.10.2-0.20180628010248-798f60e20bb2 h1:BQ1HW7hr4IVovMwWg0E0PYcyW8CzqDcVmaew9cujU4s= @@ -725,6 +770,8 @@ github.com/prometheus/client_golang v0.9.2/go.mod h1:OsXs2jCmiKlQ1lTBmv21f2mNfw4 github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso= github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo= github.com/prometheus/client_golang v1.4.0/go.mod 
h1:e9GMxYsXl05ICDXkRhurwBS4Q3OK1iX/F2sw+iXX5zU= +github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M= +github.com/prometheus/client_golang v1.11.1/go.mod h1:Z6t4BnS23TR94PD6BsDNk8yVqroYurpAkEiz0P2BEV0= github.com/prometheus/client_golang v1.14.0 h1:nJdhIvne2eSX/XRAFV9PcvFFRbrjbcTUj0VP62TMhnw= github.com/prometheus/client_golang v1.14.0/go.mod h1:8vpkKitgIVNcqrRBWh1C4TIUQgYNtG/XQE4E/Zae36Y= github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo= @@ -738,6 +785,8 @@ github.com/prometheus/common v0.0.0-20181126121408-4724e9255275/go.mod h1:daVV7q github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4= github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4= github.com/prometheus/common v0.9.1/go.mod h1:yhUN8i9wzaXS3w1O07YhxHEBxD+W35wd8bs7vj7HSQ4= +github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo= +github.com/prometheus/common v0.26.0/go.mod h1:M7rCNAaPfAosfx8veZJCuw84e35h3Cfd9VFqTh1DIvc= github.com/prometheus/common v0.39.0 h1:oOyhkDq05hPZKItWVBkJ6g6AtGxi+fy7F4JvUV8uhsI= github.com/prometheus/common v0.39.0/go.mod h1:6XBZ7lYdLCbkAVhwRsWTZn+IN5AB9F/NXd5w0BbEX0Y= github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk= @@ -745,6 +794,8 @@ github.com/prometheus/procfs v0.0.0-20181204211112-1dc9a6cbc91a/go.mod h1:c3At6R github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA= github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA= github.com/prometheus/procfs v0.0.8/go.mod h1:7Qr8sr6344vo1JqZ6HhLceV9o3AJ1Ff+GxbHq6oeK9A= +github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU= +github.com/prometheus/procfs v0.6.0/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA= github.com/prometheus/procfs v0.8.0 h1:ODq8ZFEaYeCaZOJlZZdJA2AbQR98dSHSM1KW/You5mo= github.com/prometheus/procfs v0.8.0/go.mod h1:z7EfXMXOkbkqb9IINtpCn86r/to3BnA0uaxHdg830/4= github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU= @@ -785,6 +836,7 @@ github.com/shopspring/decimal v1.3.1 h1:2Usl1nmF/WZucqkFZhnfFYxxxu8LG21F6nPQBE5g github.com/shopspring/decimal v1.3.1/go.mod h1:DKyhrW/HYNuLGql+MJL6WCR6knT2jwCFRcu2hWCYk4o= github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo= github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE= +github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88= github.com/sirupsen/logrus v1.9.0 h1:trlNQbNUG3OdDrDil03MCb1H2o9nJ1x4/5LYw7byDE0= github.com/sirupsen/logrus v1.9.0/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ= github.com/skratchdot/open-golang v0.0.0-20200116055534-eef842397966 h1:JIAuq3EEf9cgbU6AtGPK4CTG3Zf6CKMNqf0MHTggAUA= @@ -810,8 +862,9 @@ github.com/stoewer/go-strcase v1.2.0/go.mod h1:IBiWB2sKIp3wVVQ3Y035++gc+knqhUQag github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw= -github.com/stretchr/objx v0.5.0 h1:1zr/of2m5FGMsad5YfcqgdqdWrIhu+EBEJRhR1U7z/c= github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo= 
+github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY= +github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA= github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs= github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= @@ -822,8 +875,8 @@ github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/ github.com/stretchr/testify v1.7.2/go.mod h1:R6va5+xMeoiuVRoj+gSkQ7d3FALtqAAGI1FQKckRals= github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU= github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4= -github.com/stretchr/testify v1.8.4 h1:CcVxjf3Q8PM0mHUKJCdn+eZZtm5yQwehR5yeSVQQcUk= -github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo= +github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg= +github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY= github.com/tencentcloud/tencentcloud-sdk-go v1.0.162 h1:8fDzz4GuVg4skjY2B0nMN7h6uN61EDVkuLyI2+qGHhI= github.com/tencentcloud/tencentcloud-sdk-go v1.0.162/go.mod h1:asUz5BPXxgoPGaRgZaVm1iGcUAuHyYUo1nXqKa83cvI= github.com/tklauser/go-sysconf v0.3.10 h1:IJ1AZGZRWbY8T5Vfk04D9WOA5WSejdflXxP03OUqALw= @@ -842,9 +895,6 @@ github.com/ugorji/go v1.1.4/go.mod h1:uQMGLiO92mf5W77hV/PUCpI3pbzQx3CRekS0kk+RGr github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0= github.com/vmware/govmomi v0.18.0 h1:f7QxSmP7meCtoAmiKZogvVbLInT+CZx6Px6K5rYsJZo= github.com/vmware/govmomi v0.18.0/go.mod h1:URlwyTFZX72RmxtxuaFL2Uj3fD1JTvZdx59bHWk6aFU= -github.com/xdg-go/pbkdf2 v1.0.0/go.mod h1:jrpuAogTd400dnrH08LKmI/xc1MbPOebTwRqcT5RDeI= -github.com/xdg-go/scram v1.1.2/go.mod h1:RT/sEzTbU5y00aCK8UOx6R7YryM0iF1N2MOmC3kKLN4= -github.com/xdg-go/stringprep v1.0.4/go.mod h1:mPGuuIYwz7CmR2bT9j4GbQqutWS1zV24gijq1dTyGkM= github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU= github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb h1:zGWFAtiMcyryUHoUjUJX0/lt1H2+i2Ka2n+D3DImSNo= github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU= @@ -854,7 +904,6 @@ github.com/xeipuuv/gojsonschema v1.2.0 h1:LhYJRs+L4fBtjZUfuSZIKGeVu0QRy8e5Xi7D17 github.com/xeipuuv/gojsonschema v1.2.0/go.mod h1:anYRn/JVcOK2ZgGU+IjEV4nwlhoK5sQluxsYJ78Id3Y= github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU= github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q= -github.com/youmark/pkcs8 v0.0.0-20181117223130-1be2e3e5546d/go.mod h1:rHwXgn7JulP+udvsHwJoVG1YGAP6VLg4y9I5dyZdqmA= github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= @@ -869,8 +918,8 @@ go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU= go.etcd.io/bbolt v1.3.5/go.mod h1:G5EMThwa9y8QZGBClrRx5EY+Yw9kAhnjy3bSjsnlVTQ= go.etcd.io/bbolt v1.3.7 h1:j+zJOnnEjF/kyHlDDgGnVL/AIqIJPq8UoB2GSNfkUfQ= 
go.etcd.io/bbolt v1.3.7/go.mod h1:N9Mkw9X8x5fupy0IKsmuqVtoGDyxsaDlbk4Rd05IAQw= -go.mongodb.org/mongo-driver v1.13.1 h1:YIc7HTYsKndGK4RFzJ3covLz1byri52x0IoMB0Pt/vk= -go.mongodb.org/mongo-driver v1.13.1/go.mod h1:wcDf1JBCXy2mOW0bWHwO/IOYqdca1MPCwDtFu/Z9+eo= +go.mongodb.org/mongo-driver v1.14.0 h1:P98w8egYRjYe3XDjxhYJagTokP/H6HzlsnojRgZRd80= +go.mongodb.org/mongo-driver v1.14.0/go.mod h1:Vzb0Mk/pa7e6cWw85R4F/endUC3u0U9jGcNU603k65c= go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU= go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8= go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw= @@ -880,26 +929,33 @@ go.opencensus.io v0.22.5/go.mod h1:5pWMHQbX5EPX2/62yrJeAkowc+lfs/XD7Uxpq3pI6kk= go.opencensus.io v0.23.0/go.mod h1:XItmlyltB5F7CS4xOC1DcqMoFqwtC6OG2xF7mCv7P7E= go.opencensus.io v0.24.0 h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0= go.opencensus.io v0.24.0/go.mod h1:vNK8G9p7aAivkbmorf4v+7Hgx+Zs0yY+0fOtgBfjQKo= -go.opentelemetry.io/otel v1.17.0 h1:MW+phZ6WZ5/uk2nd93ANk/6yJ+dVrvNWUjGhnnFU5jM= -go.opentelemetry.io/otel v1.17.0/go.mod h1:I2vmBGtFaODIVMBSTPVDlJSzBDNf93k60E6Ft0nyjo0= -go.opentelemetry.io/otel/metric v1.17.0 h1:iG6LGVz5Gh+IuO0jmgvpTB6YVrCGngi8QGm+pMd8Pdc= -go.opentelemetry.io/otel/metric v1.17.0/go.mod h1:h4skoxdZI17AxwITdmdZjjYJQH5nzijUUjm+wtPph5o= -go.opentelemetry.io/otel/sdk v1.17.0 h1:FLN2X66Ke/k5Sg3V623Q7h7nt3cHXaW1FOvKKrW0IpE= -go.opentelemetry.io/otel/sdk v1.17.0/go.mod h1:U87sE0f5vQB7hwUoW98pW5Rz4ZDuCFBZFNUBlSgmDFQ= +go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.49.0 h1:4Pp6oUg3+e/6M4C0A/3kJ2VYa++dsWVTtGgLVj5xtHg= +go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.49.0/go.mod h1:Mjt1i1INqiaoZOMGR1RIUJN+i3ChKoFRqzrRQhlkbs0= +go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0 h1:jq9TW8u3so/bN+JPT166wjOI6/vQPF6Xe7nMNIltagk= +go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0/go.mod h1:p8pYQP+m5XfbZm9fxtSKAbM6oIllS7s2AfxrChvc7iw= +go.opentelemetry.io/otel v1.24.0 h1:0LAOdjNmQeSTzGBzduGe/rU4tZhMwL5rWgtp9Ku5Jfo= +go.opentelemetry.io/otel v1.24.0/go.mod h1:W7b9Ozg4nkF5tWI5zsXkaKKDjdVjpD4oAt9Qi/MArHo= +go.opentelemetry.io/otel/metric v1.24.0 h1:6EhoGWWK28x1fbpA4tYTOWBkPefTDQnb8WSGXlc88kI= +go.opentelemetry.io/otel/metric v1.24.0/go.mod h1:VYhLe1rFfxuTXLgj4CBiyz+9WYBA8pNGJgDcSFRKBco= +go.opentelemetry.io/otel/sdk v1.24.0 h1:YMPPDNymmQN3ZgczicBY3B6sf9n62Dlj9pWD3ucgoDw= +go.opentelemetry.io/otel/sdk v1.24.0/go.mod h1:KVrIYw6tEubO9E96HQpcmpTKDVn9gdv35HoYiQWGDFg= go.opentelemetry.io/otel/sdk/metric v0.39.0 h1:Kun8i1eYf48kHH83RucG93ffz0zGV1sh46FAScOTuDI= go.opentelemetry.io/otel/sdk/metric v0.39.0/go.mod h1:piDIRgjcK7u0HCL5pCA4e74qpK/jk3NiUoAHATVAmiI= -go.opentelemetry.io/otel/trace v1.17.0 h1:/SWhSRHmDPOImIAetP1QAeMnZYiQXrTy4fMMYOdSKWQ= -go.opentelemetry.io/otel/trace v1.17.0/go.mod h1:I/4vKTgFclIsXRVucpH25X0mpFSczM7aHeaz0ZBLWjY= +go.opentelemetry.io/otel/trace v1.24.0 h1:CsKnnL4dUAr/0llH9FKuc698G04IrpWV0MQA/Y1YELI= +go.opentelemetry.io/otel/trace v1.24.0/go.mod h1:HPc3Xr/cOApsBI154IU0OI0HJexz+aw5uPdbs3UCjNU= go.opentelemetry.io/proto/otlp v0.7.0/go.mod h1:PqfVotwruBrMGOCsRd/89rSnXhoiJIqeYNgFYFoEGnI= go.opentelemetry.io/proto/otlp v1.0.0 h1:T0TX0tmXU8a3CbNXzEKGeU5mIVOdf0oykP+u2lIVU/I= go.opentelemetry.io/proto/otlp v1.0.0/go.mod h1:Sy6pihPLfYHkr3NkUbEhGHFhINUSI/v80hjKIs5JXpM= go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE= +go.uber.org/atomic v1.7.0/go.mod 
h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc= go.uber.org/atomic v1.9.0 h1:ECmE8Bn/WFTYwEW/bpKD3M8VtR/zQVbavAoalC1PYyE= go.uber.org/atomic v1.9.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc= go.uber.org/goleak v1.1.10 h1:z+mqJhf6ss6BSfSM671tgKyZBFPTTJM+HLxnhPC3wu0= go.uber.org/goleak v1.1.10/go.mod h1:8a7PlsEVH3e/a/GLqe5IIrQx6GzcnRmZEufDUTk4A7A= go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0= +go.uber.org/multierr v1.6.0/go.mod h1:cdWPpRnG4AhwMwsgIHip0KRBQjJy5kYEpYjJxpXp9iU= go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q= +go.uber.org/zap v1.18.1/go.mod h1:xg/QME4nWcxGxrpdeYfq7UvYrLh66cuVKdrbD1XF/NI= golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= golang.org/x/crypto v0.0.0-20190211182817-74369b46fc67/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= @@ -915,12 +971,10 @@ golang.org/x/crypto v0.0.0-20200820211705-5c72a883971a/go.mod h1:LzIPMQfyMNhhGPh golang.org/x/crypto v0.0.0-20210711020723-a769d52b0f97/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4= -golang.org/x/crypto v0.0.0-20220314234659-1baeb1ce4c0b/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4= -golang.org/x/crypto v0.0.0-20220622213112-05595931fe9d/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4= golang.org/x/crypto v0.0.0-20220722155217-630584e8d5aa/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4= golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU= -golang.org/x/crypto v0.32.0 h1:euUpcYgM8WcP71gNpTqQCn6rC2t6ULUPiOzfWaXVVfc= -golang.org/x/crypto v0.32.0/go.mod h1:ZnnJkOaASj8g0AjIduWNlq2NRxL0PlBrbKVyZ6V/Ugc= +golang.org/x/crypto v0.41.0 h1:WKYxWedPGCTVVl5+WHSSrOBT0O8lx32+zxmHxijgXp4= +golang.org/x/crypto v0.41.0/go.mod h1:pO5AFd7FA68rFak7rOAGVuygIISepHftHnr8dr6+sUc= golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8= @@ -931,8 +985,8 @@ golang.org/x/exp v0.0.0-20191227195350-da58074b4299/go.mod h1:2RIsYlXP63K8oxa1u0 golang.org/x/exp v0.0.0-20200119233911-0405dc783f0a/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4= golang.org/x/exp v0.0.0-20200207192155-f17229e696bd/go.mod h1:J/WKrq2StrnmMY6+EHIKF9dgMWnmCNThgcyBT1FY9mM= golang.org/x/exp v0.0.0-20200224162631-6cc2880d07d6/go.mod h1:3jZMyOhIsHpP37uCMkUooju7aAi5cS1Q23tOzKc+0MU= -golang.org/x/exp v0.0.0-20250106191152-7588d65b2ba8 h1:yqrTHse8TCMW1M1ZCP+VAR/l0kKxwaAIqN/il7x4voA= -golang.org/x/exp v0.0.0-20250106191152-7588d65b2ba8/go.mod h1:tujkw807nyEEAamNbDrEGzRav+ilXA7PCRAd6xsmwiU= +golang.org/x/exp v0.0.0-20250808145144-a408d31f581a h1:Y+7uR/b1Mw2iSXZ3G//1haIiSElDQZ8KWh0h+sZPG90= +golang.org/x/exp v0.0.0-20250808145144-a408d31f581a/go.mod h1:rT6SFzZ7oxADUDx58pcaKFTcZ+inxAa9fTrYx/uVYwg= golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js= golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod 
h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0= golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE= @@ -962,8 +1016,8 @@ golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4= golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= -golang.org/x/mod v0.22.0 h1:D4nJWe9zXqHOmWqj4VMOJhvzj7bEZg4wEYa759z1pH4= -golang.org/x/mod v0.22.0/go.mod h1:6SkKJ3Xj0I0BrPOZoBy3bdMptDDU9oJrpohJ3eWZ1fY= +golang.org/x/mod v0.27.0 h1:kb+q2PyFnEADO2IEF935ehFUXlWiNjJWtRNgBLSfbxQ= +golang.org/x/mod v0.27.0/go.mod h1:rWI627Fq0DEoudcK+MBkNkCe0EetEaDSwJJkCcjpazc= golang.org/x/net v0.0.0-20180530234432-1e491301e022/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20180611182652-db08ff08e862/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= @@ -1014,8 +1068,8 @@ golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qx golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c= golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs= golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg= -golang.org/x/net v0.34.0 h1:Mb7Mrk043xzHgnRM88suvJFwzVrRfHEHJEl5/71CKw0= -golang.org/x/net v0.34.0/go.mod h1:di0qlW3YNM5oh6GqDGQr92MyTozJPmybPK4Ev/Gm31k= +golang.org/x/net v0.43.0 h1:lat02VYK2j4aLzMzecihNvTlJNQUq316m2Mr9rnM6YE= +golang.org/x/net v0.43.0/go.mod h1:vhO1fvI4dGsIjh73sWfUVjj3N7CA9WkKJNQm2svM6Jg= golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= @@ -1031,8 +1085,8 @@ golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c/go.mod h1:KelEdhl1UZF7XfJ golang.org/x/oauth2 v0.0.0-20210628180205-a41e5a781914/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= golang.org/x/oauth2 v0.0.0-20210805134026-6f1e6394065a/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= golang.org/x/oauth2 v0.0.0-20210819190943-2bc19b11175f/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= -golang.org/x/oauth2 v0.25.0 h1:CY4y7XT9v0cRI9oupztF8AgiIu99L/ksR/Xp/6jrZ70= -golang.org/x/oauth2 v0.25.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI= +golang.org/x/oauth2 v0.30.0 h1:dnDm7JmhM45NNpd8FDDeLhK6FwqbOf4MLCM9zb1BOHI= +golang.org/x/oauth2 v0.30.0/go.mod h1:B++QgG3ZKulg6sRPGD/mqlHQs5rB3Ml9erfeDY7xKlU= golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= @@ -1046,8 +1100,8 @@ golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJ golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod 
h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.10.0 h1:3NQrjDixjgGwUOCaF8w2+VYHv0Ve/vGYSbdkTa98gmQ= -golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk= +golang.org/x/sync v0.16.0 h1:ycBJEhp9p4vXvUZNszeOq0kGTPghopOL8q0fq3vstxw= +golang.org/x/sync v0.16.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA= golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= @@ -1075,6 +1129,7 @@ golang.org/x/sys v0.0.0-20191008105621-543471e840be/go.mod h1:h1NjWce9XRLGQEsW7w golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20200106162015-b016eb3dc98e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200113162924-86b910548bc1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200122134326-e047566fdf82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= @@ -1089,6 +1144,8 @@ golang.org/x/sys v0.0.0-20200511232937-7e40ca221e25/go.mod h1:h1NjWce9XRLGQEsW7w golang.org/x/sys v0.0.0-20200515095857-1151b9dac4a9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200523222454-059865788121/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200602225109-6fdc65e7d980/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20200615200032-f1bc736245b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20200625212154-ddb9806d33ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200803210538-64077c9b5642/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200905004654-be1d3432aa8f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= @@ -1097,6 +1154,7 @@ golang.org/x/sys v0.0.0-20201201145000-ef89a241ccb3/go.mod h1:h1NjWce9XRLGQEsW7w golang.org/x/sys v0.0.0-20201204225414-ed752295db88/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210104204734-6f8348627aad/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210119212857-b64e53b001e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210220050731-9a76102bfb43/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210303074136-134d130e1a04/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210305230114-8fe3ee5dd75b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= @@ -1107,6 +1165,7 @@ golang.org/x/sys v0.0.0-20210331175145-43e1dd70ce54/go.mod h1:h1NjWce9XRLGQEsW7w golang.org/x/sys 
v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210514084401-e8d321eab015/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20210603081109-ebe580a85c40/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210603125802-9665404d3644/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= @@ -1115,6 +1174,7 @@ golang.org/x/sys v0.0.0-20210806184541-e5e7981a1069/go.mod h1:oPkhp1MJrh7nUepCBc golang.org/x/sys v0.0.0-20210823070655-63515b42dcdf/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210908233432-aa78b53d3365/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210927094055-39ccf1dd6fa6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20211025201205-69cdffdb9359/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220128215802-99c3d69c2c27/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220503163025-988cb79eb6c6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= @@ -1127,15 +1187,15 @@ golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= -golang.org/x/sys v0.29.0 h1:TPYlXGxvx1MGTn2GiZDhnjPA9wZzZeGKHHmKhHYvgaU= -golang.org/x/sys v0.29.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= +golang.org/x/sys v0.35.0 h1:vz1N37gP5bs89s7He8XuIYXpyY0+QlsKmzipCbUtyxI= +golang.org/x/sys v0.35.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k= golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo= golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk= -golang.org/x/term v0.28.0 h1:/Ts8HFuMR2E6IP/jlo7QVLZHggjKQbhu/7H0LJFr3Gg= -golang.org/x/term v0.28.0/go.mod h1:Sw/lC2IAUZ92udQNf3WodGtn4k/XoLyZoh8v/8uiwek= +golang.org/x/term v0.34.0 h1:O/2T7POpk0ZZ7MAzMeWFSg6S5IpWd/RXDlM9hgM3DR4= +golang.org/x/term v0.34.0/go.mod h1:5jC53AEywhIVebHgPVeg0mj8OD3VO9OzclacVrqpaAw= golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= @@ -1145,18 +1205,17 @@ golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.6/go.mod 
h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ= -golang.org/x/text v0.3.8/go.mod h1:E6s5w1FMmriuDzIBO73fBruAKo1PCIq6d2Q6DHfQ8WQ= golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU= -golang.org/x/text v0.21.0 h1:zyQAAkrwaneQ066sspRyJaG9VNi/YJ1NfzcGB3hZ/qo= -golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ= +golang.org/x/text v0.28.0 h1:rhazDwis8INMIwQ4tpjLDzUhx6RlXqZNPEM0huQojng= +golang.org/x/text v0.28.0/go.mod h1:U8nCwOR8jO/marOQ0QbDiOngZVEBB7MAiitBuMjXiNU= golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/time v0.0.0-20200416051211-89c76fbcd5d1/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= -golang.org/x/time v0.9.0 h1:EsRrnYcQiGH+5FfbgvV4AP7qEZstoyrHB0DzarOQ4ZY= -golang.org/x/time v0.9.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM= +golang.org/x/time v0.12.0 h1:ScB/8o8olJvc+CQPWrK3fPZNfh7qgwCrY0zJmoEQLSE= +golang.org/x/time v0.12.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg= golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= @@ -1216,8 +1275,8 @@ golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= golang.org/x/tools v0.1.6-0.20210726203631-07bc1bf47fb2/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc= golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU= -golang.org/x/tools v0.29.0 h1:Xx0h3TtM9rzQpQuR4dKLrdglAmCEN5Oi+P74JdhdzXE= -golang.org/x/tools v0.29.0/go.mod h1:KMQVMRsVxU6nHCFXrBPhDB8XncLNLM0lIy/F14RP588= +golang.org/x/tools v0.36.0 h1:kWS0uv/zsvHEle1LbV5LE8QujrxB3wfQyxHfhOk0Qkg= +golang.org/x/tools v0.36.0/go.mod h1:WBDiHKJK8YgLHlcQPYQzNCkUxUypCaa5ZegCVutKm+s= golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= @@ -1250,8 +1309,8 @@ google.golang.org/api v0.51.0/go.mod h1:t4HdrdoNgyN5cbEfm7Lum0lcLDLiise1F8qDKX00 google.golang.org/api v0.54.0/go.mod h1:7C4bFFOvVDGXjfDTAsgGwDgAxRDeQ4X8NvUedIt6z3k= google.golang.org/api v0.55.0/go.mod h1:38yMfeP1kfjsl8isn0tliTjIb1rJXcQi4UXlbqivdVE= google.golang.org/api v0.57.0/go.mod h1:dVPlbZyBo2/OjBpmvNdpn2GRm6rPy75jyU7bmhdrMgI= -google.golang.org/api v0.126.0 h1:q4GJq+cAdMAC7XP7njvQ4tvohGLiSlytuL4BQxbIZ+o= -google.golang.org/api v0.126.0/go.mod h1:mBwVAtz+87bEN6CbA1GtZPDOqY2R5ONPqJeIlvyo4Aw= +google.golang.org/api v0.195.0 h1:Ude4N8FvTKnnQJHU48RFI40jOBgIrL8Zqr3/QeST6yU= +google.golang.org/api v0.195.0/go.mod h1:DOGRWuv3P8TU8Lnz7uQc4hyNqrBpMtD9ppW3wBJurgc= 
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM= google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= @@ -1259,8 +1318,6 @@ google.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww google.golang.org/appengine v1.6.5/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc= google.golang.org/appengine v1.6.6/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc= google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc= -google.golang.org/appengine v1.6.8 h1:IhEN5q69dyKagZPYMSdIjS2HqprW324FRQZJcGqPAsM= -google.golang.org/appengine v1.6.8/go.mod h1:1jJ3jBArFh5pcgW8gCtRJnepW8FzD1V44FJffLiz/Ds= google.golang.org/genproto v0.0.0-20170818010345-ee236bd376b0/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE= @@ -1320,12 +1377,12 @@ google.golang.org/genproto v0.0.0-20210828152312-66f60bf46e71/go.mod h1:eFjDcFEc google.golang.org/genproto v0.0.0-20210831024726-fe130286e0e2/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY= google.golang.org/genproto v0.0.0-20210903162649-d08c68adba83/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY= google.golang.org/genproto v0.0.0-20210924002016-3dee208752a0/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc= -google.golang.org/genproto v0.0.0-20230711160842-782d3b101e98 h1:Z0hjGZePRE0ZBWotvtrwxFNrNE9CUAGtplaDK5NNI/g= -google.golang.org/genproto v0.0.0-20230711160842-782d3b101e98/go.mod h1:S7mY02OqCJTD0E1OiQy1F72PWFB4bZJ87cAtLPYgDR0= -google.golang.org/genproto/googleapis/api v0.0.0-20230711160842-782d3b101e98 h1:FmF5cCW94Ij59cfpoLiwTgodWmm60eEV0CjlsVg2fuw= -google.golang.org/genproto/googleapis/api v0.0.0-20230711160842-782d3b101e98/go.mod h1:rsr7RhLuwsDKL7RmgDDCUc6yaGr1iqceVb5Wv6f6YvQ= -google.golang.org/genproto/googleapis/rpc v0.0.0-20230711160842-782d3b101e98 h1:bVf09lpb+OJbByTj913DRJioFFAjf/ZGxEz7MajTp2U= -google.golang.org/genproto/googleapis/rpc v0.0.0-20230711160842-782d3b101e98/go.mod h1:TUfxEVdsvPg18p6AslUXFoLdpED4oBnGwyqk3dV1XzM= +google.golang.org/genproto v0.0.0-20240823204242-4ba0660f739c h1:TYOEhrQMrNDTAd2rX9m+WgGr8Ku6YNuj1D7OX6rWSok= +google.golang.org/genproto v0.0.0-20240823204242-4ba0660f739c/go.mod h1:2rC5OendXvZ8wGEo/cSLheztrZDZaSoHanUcd1xtZnw= +google.golang.org/genproto/googleapis/api v0.0.0-20240814211410-ddb44dafa142 h1:wKguEg1hsxI2/L3hUYrpo1RVi48K+uTyzKqprwLXsb8= +google.golang.org/genproto/googleapis/api v0.0.0-20240814211410-ddb44dafa142/go.mod h1:d6be+8HhtEtucleCbxpPW9PA9XwISACu8nvpPqF0BVo= +google.golang.org/genproto/googleapis/rpc v0.0.0-20240823204242-4ba0660f739c h1:Kqjm4WpoWvwhMPcrAczoTyMySQmYa9Wy2iL6Con4zn8= +google.golang.org/genproto/googleapis/rpc v0.0.0-20240823204242-4ba0660f739c/go.mod h1:UqMtugtsSgubUsoxbuAoiCXvqvErP7Gf0so0mK9tHxU= google.golang.org/grpc v1.8.0/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw= google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c= google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38= @@ -1354,9 +1411,8 @@ google.golang.org/grpc v1.39.0/go.mod h1:PImNr+rS9TWYb2O4/emRugxiyHZ5JyHW5F+RPnD google.golang.org/grpc v1.39.1/go.mod 
h1:PImNr+rS9TWYb2O4/emRugxiyHZ5JyHW5F+RPnDzfrE= google.golang.org/grpc v1.40.0/go.mod h1:ogyxbiOoUXAkP+4+xa6PZSE9DZgIHtSpzjDTB9KAK34= google.golang.org/grpc v1.41.0/go.mod h1:U3l9uK9J0sini8mHphKoXyaqDA/8VyGnDee1zzIUK6k= -google.golang.org/grpc v1.45.0/go.mod h1:lN7owxKUQEqMfSyQikvvk5tf/6zMPsrK+ONuO11+0rQ= -google.golang.org/grpc v1.58.3 h1:BjnpXut1btbtgN/6sp+brB2Kbm2LjNXnidYujAVbSoQ= -google.golang.org/grpc v1.58.3/go.mod h1:tgX3ZQDlNJGU96V6yHh1T/JeoBQ2TXdr43YbYSsCJk0= +google.golang.org/grpc v1.65.0 h1:bs/cUb4lp1G5iImFFd3u5ixQzweKizoZJAwBNLR42lc= +google.golang.org/grpc v1.65.0/go.mod h1:WgYC2ypjlB0EiQi6wdKixMqukr6lBc0Vo+oOgjrM5ZQ= google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.1.0/go.mod h1:6Kw0yEErY5E/yWrBtf03jp27GLLJujG4z/JK95pnjjw= google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8= google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0= @@ -1372,8 +1428,8 @@ google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp0 google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc= google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc= google.golang.org/protobuf v1.28.1/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I= -google.golang.org/protobuf v1.33.0 h1:uNO2rsAINq/JlFpSdYEKIZ0uKD/R9cpdv0T+yoGwGmI= -google.golang.org/protobuf v1.33.0/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos= +google.golang.org/protobuf v1.34.2 h1:6xV6lTsCfpGD21XK49h7MhtcApnLqkfYgPcdHftf6hg= +google.golang.org/protobuf v1.34.2/go.mod h1:qYOHts0dSfpeUzUFpOMr/WGzszTmLH+DiWniOlNbLDw= gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= @@ -1422,8 +1478,8 @@ k8s.io/apimachinery v0.26.2 h1:da1u3D5wfR5u2RpLhE/ZtZS2P7QvDgLZTi9wrNZl/tQ= k8s.io/apimachinery v0.26.2/go.mod h1:ats7nN1LExKHvJ9TmwootT00Yz05MuYqPXEXaVeOy5I= k8s.io/client-go v0.26.2 h1:s1WkVujHX3kTp4Zn4yGNFK+dlDXy1bAAkIl+cFAiuYI= k8s.io/client-go v0.26.2/go.mod h1:u5EjOuSyBa09yqqyY7m3abZeovO/7D/WehVVlZ2qcqU= -k8s.io/klog/v2 v2.100.1 h1:7WCHKK6K8fNhTqfBhISHQ97KrnJNFZMcQvKp7gP/tmg= -k8s.io/klog/v2 v2.100.1/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0= +k8s.io/klog/v2 v2.130.1 h1:n9Xl7H1Xvksem4KFG4PYbdQCQxqc/tTUyrgXaOhHSzk= +k8s.io/klog/v2 v2.130.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE= k8s.io/kube-openapi v0.0.0-20221012153701-172d655c2280 h1:+70TFaan3hfJzs+7VK2o+OGxg8HsuBr/5f6tVAjDu6E= k8s.io/kube-openapi v0.0.0-20221012153701-172d655c2280/go.mod h1:+Axhij7bCpeqhklhUTe3xmOn6bWxolyZEeyaFpjGtl4= k8s.io/utils v0.0.0-20230406110748-d93618cff8a2 h1:qY1Ad8PODbnymg2pRbkyMT/ylpTrCM8P2RJ0yroCyIk= diff --git a/grpcmocks/proto-public/pbacl/mock_ACLServiceClient.go b/grpcmocks/proto-public/pbacl/mock_ACLServiceClient.go index b4f5f29e282f..6fe13f343e55 100644 --- a/grpcmocks/proto-public/pbacl/mock_ACLServiceClient.go +++ b/grpcmocks/proto-public/pbacl/mock_ACLServiceClient.go @@ -1,4 +1,4 @@ -// Code generated by mockery v2.41.0. DO NOT EDIT. +// Code generated by mockery v2.53.4. DO NOT EDIT. 
package mockpbacl diff --git a/grpcmocks/proto-public/pbacl/mock_ACLServiceServer.go b/grpcmocks/proto-public/pbacl/mock_ACLServiceServer.go index 98803d82df43..f0c4617b73a8 100644 --- a/grpcmocks/proto-public/pbacl/mock_ACLServiceServer.go +++ b/grpcmocks/proto-public/pbacl/mock_ACLServiceServer.go @@ -1,4 +1,4 @@ -// Code generated by mockery v2.41.0. DO NOT EDIT. +// Code generated by mockery v2.53.4. DO NOT EDIT. package mockpbacl diff --git a/grpcmocks/proto-public/pbacl/mock_UnsafeACLServiceServer.go b/grpcmocks/proto-public/pbacl/mock_UnsafeACLServiceServer.go index 21572b566eef..7c6ccd7ef4cb 100644 --- a/grpcmocks/proto-public/pbacl/mock_UnsafeACLServiceServer.go +++ b/grpcmocks/proto-public/pbacl/mock_UnsafeACLServiceServer.go @@ -1,4 +1,4 @@ -// Code generated by mockery v2.41.0. DO NOT EDIT. +// Code generated by mockery v2.53.4. DO NOT EDIT. package mockpbacl @@ -17,7 +17,7 @@ func (_m *UnsafeACLServiceServer) EXPECT() *UnsafeACLServiceServer_Expecter { return &UnsafeACLServiceServer_Expecter{mock: &_m.Mock} } -// mustEmbedUnimplementedACLServiceServer provides a mock function with given fields: +// mustEmbedUnimplementedACLServiceServer provides a mock function with no fields func (_m *UnsafeACLServiceServer) mustEmbedUnimplementedACLServiceServer() { _m.Called() } @@ -45,7 +45,7 @@ func (_c *UnsafeACLServiceServer_mustEmbedUnimplementedACLServiceServer_Call) Re } func (_c *UnsafeACLServiceServer_mustEmbedUnimplementedACLServiceServer_Call) RunAndReturn(run func()) *UnsafeACLServiceServer_mustEmbedUnimplementedACLServiceServer_Call { - _c.Call.Return(run) + _c.Run(run) return _c } diff --git a/grpcmocks/proto-public/pbconnectca/mock_ConnectCAServiceClient.go b/grpcmocks/proto-public/pbconnectca/mock_ConnectCAServiceClient.go index 58ad21c26c38..a1acee0f81d0 100644 --- a/grpcmocks/proto-public/pbconnectca/mock_ConnectCAServiceClient.go +++ b/grpcmocks/proto-public/pbconnectca/mock_ConnectCAServiceClient.go @@ -1,4 +1,4 @@ -// Code generated by mockery v2.41.0. DO NOT EDIT. +// Code generated by mockery v2.53.4. DO NOT EDIT. package mockpbconnectca diff --git a/grpcmocks/proto-public/pbconnectca/mock_ConnectCAServiceServer.go b/grpcmocks/proto-public/pbconnectca/mock_ConnectCAServiceServer.go index 086d6840fc3d..3bc56f73cf32 100644 --- a/grpcmocks/proto-public/pbconnectca/mock_ConnectCAServiceServer.go +++ b/grpcmocks/proto-public/pbconnectca/mock_ConnectCAServiceServer.go @@ -1,4 +1,4 @@ -// Code generated by mockery v2.41.0. DO NOT EDIT. +// Code generated by mockery v2.53.4. DO NOT EDIT. package mockpbconnectca diff --git a/grpcmocks/proto-public/pbconnectca/mock_ConnectCAService_WatchRootsClient.go b/grpcmocks/proto-public/pbconnectca/mock_ConnectCAService_WatchRootsClient.go index 6f545005b93d..7edd6132df72 100644 --- a/grpcmocks/proto-public/pbconnectca/mock_ConnectCAService_WatchRootsClient.go +++ b/grpcmocks/proto-public/pbconnectca/mock_ConnectCAService_WatchRootsClient.go @@ -1,4 +1,4 @@ -// Code generated by mockery v2.41.0. DO NOT EDIT. +// Code generated by mockery v2.53.4. DO NOT EDIT. 
package mockpbconnectca @@ -24,7 +24,7 @@ func (_m *ConnectCAService_WatchRootsClient) EXPECT() *ConnectCAService_WatchRoo return &ConnectCAService_WatchRootsClient_Expecter{mock: &_m.Mock} } -// CloseSend provides a mock function with given fields: +// CloseSend provides a mock function with no fields func (_m *ConnectCAService_WatchRootsClient) CloseSend() error { ret := _m.Called() @@ -69,7 +69,7 @@ func (_c *ConnectCAService_WatchRootsClient_CloseSend_Call) RunAndReturn(run fun return _c } -// Context provides a mock function with given fields: +// Context provides a mock function with no fields func (_m *ConnectCAService_WatchRootsClient) Context() context.Context { ret := _m.Called() @@ -116,7 +116,7 @@ func (_c *ConnectCAService_WatchRootsClient_Context_Call) RunAndReturn(run func( return _c } -// Header provides a mock function with given fields: +// Header provides a mock function with no fields func (_m *ConnectCAService_WatchRootsClient) Header() (metadata.MD, error) { ret := _m.Called() @@ -173,7 +173,7 @@ func (_c *ConnectCAService_WatchRootsClient_Header_Call) RunAndReturn(run func() return _c } -// Recv provides a mock function with given fields: +// Recv provides a mock function with no fields func (_m *ConnectCAService_WatchRootsClient) Recv() (*pbconnectca.WatchRootsResponse, error) { ret := _m.Called() @@ -322,7 +322,7 @@ func (_c *ConnectCAService_WatchRootsClient_SendMsg_Call) RunAndReturn(run func( return _c } -// Trailer provides a mock function with given fields: +// Trailer provides a mock function with no fields func (_m *ConnectCAService_WatchRootsClient) Trailer() metadata.MD { ret := _m.Called() diff --git a/grpcmocks/proto-public/pbconnectca/mock_ConnectCAService_WatchRootsServer.go b/grpcmocks/proto-public/pbconnectca/mock_ConnectCAService_WatchRootsServer.go index 4ca2d667e2ef..ca78794bf20c 100644 --- a/grpcmocks/proto-public/pbconnectca/mock_ConnectCAService_WatchRootsServer.go +++ b/grpcmocks/proto-public/pbconnectca/mock_ConnectCAService_WatchRootsServer.go @@ -1,4 +1,4 @@ -// Code generated by mockery v2.41.0. DO NOT EDIT. +// Code generated by mockery v2.53.4. DO NOT EDIT. package mockpbconnectca @@ -24,7 +24,7 @@ func (_m *ConnectCAService_WatchRootsServer) EXPECT() *ConnectCAService_WatchRoo return &ConnectCAService_WatchRootsServer_Expecter{mock: &_m.Mock} } -// Context provides a mock function with given fields: +// Context provides a mock function with no fields func (_m *ConnectCAService_WatchRootsServer) Context() context.Context { ret := _m.Called() @@ -330,7 +330,7 @@ func (_c *ConnectCAService_WatchRootsServer_SetTrailer_Call) Return() *ConnectCA } func (_c *ConnectCAService_WatchRootsServer_SetTrailer_Call) RunAndReturn(run func(metadata.MD)) *ConnectCAService_WatchRootsServer_SetTrailer_Call { - _c.Call.Return(run) + _c.Run(run) return _c } diff --git a/grpcmocks/proto-public/pbconnectca/mock_UnsafeConnectCAServiceServer.go b/grpcmocks/proto-public/pbconnectca/mock_UnsafeConnectCAServiceServer.go index e740378bd2e4..94257bb35424 100644 --- a/grpcmocks/proto-public/pbconnectca/mock_UnsafeConnectCAServiceServer.go +++ b/grpcmocks/proto-public/pbconnectca/mock_UnsafeConnectCAServiceServer.go @@ -1,4 +1,4 @@ -// Code generated by mockery v2.41.0. DO NOT EDIT. +// Code generated by mockery v2.53.4. DO NOT EDIT. 
package mockpbconnectca @@ -17,7 +17,7 @@ func (_m *UnsafeConnectCAServiceServer) EXPECT() *UnsafeConnectCAServiceServer_E return &UnsafeConnectCAServiceServer_Expecter{mock: &_m.Mock} } -// mustEmbedUnimplementedConnectCAServiceServer provides a mock function with given fields: +// mustEmbedUnimplementedConnectCAServiceServer provides a mock function with no fields func (_m *UnsafeConnectCAServiceServer) mustEmbedUnimplementedConnectCAServiceServer() { _m.Called() } @@ -45,7 +45,7 @@ func (_c *UnsafeConnectCAServiceServer_mustEmbedUnimplementedConnectCAServiceSer } func (_c *UnsafeConnectCAServiceServer_mustEmbedUnimplementedConnectCAServiceServer_Call) RunAndReturn(run func()) *UnsafeConnectCAServiceServer_mustEmbedUnimplementedConnectCAServiceServer_Call { - _c.Call.Return(run) + _c.Run(run) return _c } diff --git a/grpcmocks/proto-public/pbdataplane/mock_DataplaneServiceClient.go b/grpcmocks/proto-public/pbdataplane/mock_DataplaneServiceClient.go index 3c8332f8976e..bb95d3933b2b 100644 --- a/grpcmocks/proto-public/pbdataplane/mock_DataplaneServiceClient.go +++ b/grpcmocks/proto-public/pbdataplane/mock_DataplaneServiceClient.go @@ -1,4 +1,4 @@ -// Code generated by mockery v2.41.0. DO NOT EDIT. +// Code generated by mockery v2.53.4. DO NOT EDIT. package mockpbdataplane diff --git a/grpcmocks/proto-public/pbdataplane/mock_DataplaneServiceServer.go b/grpcmocks/proto-public/pbdataplane/mock_DataplaneServiceServer.go index 826137e0e81a..b3b77cd58f0f 100644 --- a/grpcmocks/proto-public/pbdataplane/mock_DataplaneServiceServer.go +++ b/grpcmocks/proto-public/pbdataplane/mock_DataplaneServiceServer.go @@ -1,4 +1,4 @@ -// Code generated by mockery v2.41.0. DO NOT EDIT. +// Code generated by mockery v2.53.4. DO NOT EDIT. package mockpbdataplane diff --git a/grpcmocks/proto-public/pbdataplane/mock_UnsafeDataplaneServiceServer.go b/grpcmocks/proto-public/pbdataplane/mock_UnsafeDataplaneServiceServer.go index 077c5371ba19..13aa5455afd3 100644 --- a/grpcmocks/proto-public/pbdataplane/mock_UnsafeDataplaneServiceServer.go +++ b/grpcmocks/proto-public/pbdataplane/mock_UnsafeDataplaneServiceServer.go @@ -1,4 +1,4 @@ -// Code generated by mockery v2.41.0. DO NOT EDIT. +// Code generated by mockery v2.53.4. DO NOT EDIT. package mockpbdataplane @@ -17,7 +17,7 @@ func (_m *UnsafeDataplaneServiceServer) EXPECT() *UnsafeDataplaneServiceServer_E return &UnsafeDataplaneServiceServer_Expecter{mock: &_m.Mock} } -// mustEmbedUnimplementedDataplaneServiceServer provides a mock function with given fields: +// mustEmbedUnimplementedDataplaneServiceServer provides a mock function with no fields func (_m *UnsafeDataplaneServiceServer) mustEmbedUnimplementedDataplaneServiceServer() { _m.Called() } @@ -45,7 +45,7 @@ func (_c *UnsafeDataplaneServiceServer_mustEmbedUnimplementedDataplaneServiceSer } func (_c *UnsafeDataplaneServiceServer_mustEmbedUnimplementedDataplaneServiceServer_Call) RunAndReturn(run func()) *UnsafeDataplaneServiceServer_mustEmbedUnimplementedDataplaneServiceServer_Call { - _c.Call.Return(run) + _c.Run(run) return _c } diff --git a/grpcmocks/proto-public/pbdataplane/mock_isGetEnvoyBootstrapParamsRequest_NodeSpec.go b/grpcmocks/proto-public/pbdataplane/mock_isGetEnvoyBootstrapParamsRequest_NodeSpec.go index 98f54341cece..98916c888b92 100644 --- a/grpcmocks/proto-public/pbdataplane/mock_isGetEnvoyBootstrapParamsRequest_NodeSpec.go +++ b/grpcmocks/proto-public/pbdataplane/mock_isGetEnvoyBootstrapParamsRequest_NodeSpec.go @@ -1,4 +1,4 @@ -// Code generated by mockery v2.41.0. DO NOT EDIT. 
+// Code generated by mockery v2.53.4. DO NOT EDIT. package mockpbdataplane @@ -17,7 +17,7 @@ func (_m *isGetEnvoyBootstrapParamsRequest_NodeSpec) EXPECT() *isGetEnvoyBootstr return &isGetEnvoyBootstrapParamsRequest_NodeSpec_Expecter{mock: &_m.Mock} } -// isGetEnvoyBootstrapParamsRequest_NodeSpec provides a mock function with given fields: +// isGetEnvoyBootstrapParamsRequest_NodeSpec provides a mock function with no fields func (_m *isGetEnvoyBootstrapParamsRequest_NodeSpec) isGetEnvoyBootstrapParamsRequest_NodeSpec() { _m.Called() } @@ -45,7 +45,7 @@ func (_c *isGetEnvoyBootstrapParamsRequest_NodeSpec_isGetEnvoyBootstrapParamsReq } func (_c *isGetEnvoyBootstrapParamsRequest_NodeSpec_isGetEnvoyBootstrapParamsRequest_NodeSpec_Call) RunAndReturn(run func()) *isGetEnvoyBootstrapParamsRequest_NodeSpec_isGetEnvoyBootstrapParamsRequest_NodeSpec_Call { - _c.Call.Return(run) + _c.Run(run) return _c } diff --git a/grpcmocks/proto-public/pbdns/mock_DNSServiceClient.go b/grpcmocks/proto-public/pbdns/mock_DNSServiceClient.go index 0b0e9496a04a..127bc9aa9c5b 100644 --- a/grpcmocks/proto-public/pbdns/mock_DNSServiceClient.go +++ b/grpcmocks/proto-public/pbdns/mock_DNSServiceClient.go @@ -1,4 +1,4 @@ -// Code generated by mockery v2.41.0. DO NOT EDIT. +// Code generated by mockery v2.53.4. DO NOT EDIT. package mockpbdns diff --git a/grpcmocks/proto-public/pbdns/mock_DNSServiceServer.go b/grpcmocks/proto-public/pbdns/mock_DNSServiceServer.go index eceb76755811..b739fad8a8d7 100644 --- a/grpcmocks/proto-public/pbdns/mock_DNSServiceServer.go +++ b/grpcmocks/proto-public/pbdns/mock_DNSServiceServer.go @@ -1,4 +1,4 @@ -// Code generated by mockery v2.41.0. DO NOT EDIT. +// Code generated by mockery v2.53.4. DO NOT EDIT. package mockpbdns diff --git a/grpcmocks/proto-public/pbdns/mock_UnsafeDNSServiceServer.go b/grpcmocks/proto-public/pbdns/mock_UnsafeDNSServiceServer.go index 06a1f30e70e2..a2821fbc633e 100644 --- a/grpcmocks/proto-public/pbdns/mock_UnsafeDNSServiceServer.go +++ b/grpcmocks/proto-public/pbdns/mock_UnsafeDNSServiceServer.go @@ -1,4 +1,4 @@ -// Code generated by mockery v2.41.0. DO NOT EDIT. +// Code generated by mockery v2.53.4. DO NOT EDIT. package mockpbdns @@ -17,7 +17,7 @@ func (_m *UnsafeDNSServiceServer) EXPECT() *UnsafeDNSServiceServer_Expecter { return &UnsafeDNSServiceServer_Expecter{mock: &_m.Mock} } -// mustEmbedUnimplementedDNSServiceServer provides a mock function with given fields: +// mustEmbedUnimplementedDNSServiceServer provides a mock function with no fields func (_m *UnsafeDNSServiceServer) mustEmbedUnimplementedDNSServiceServer() { _m.Called() } @@ -45,7 +45,7 @@ func (_c *UnsafeDNSServiceServer_mustEmbedUnimplementedDNSServiceServer_Call) Re } func (_c *UnsafeDNSServiceServer_mustEmbedUnimplementedDNSServiceServer_Call) RunAndReturn(run func()) *UnsafeDNSServiceServer_mustEmbedUnimplementedDNSServiceServer_Call { - _c.Call.Return(run) + _c.Run(run) return _c } diff --git a/grpcmocks/proto-public/pbresource/mock_ResourceServiceClient.go b/grpcmocks/proto-public/pbresource/mock_ResourceServiceClient.go index fc49e0f50d57..2b0adbb59ee2 100644 --- a/grpcmocks/proto-public/pbresource/mock_ResourceServiceClient.go +++ b/grpcmocks/proto-public/pbresource/mock_ResourceServiceClient.go @@ -1,4 +1,4 @@ -// Code generated by mockery v2.41.0. DO NOT EDIT. +// Code generated by mockery v2.53.4. DO NOT EDIT. 
package mockpbresource diff --git a/grpcmocks/proto-public/pbresource/mock_ResourceServiceServer.go b/grpcmocks/proto-public/pbresource/mock_ResourceServiceServer.go index 3ed07e777dee..192c1cbf9144 100644 --- a/grpcmocks/proto-public/pbresource/mock_ResourceServiceServer.go +++ b/grpcmocks/proto-public/pbresource/mock_ResourceServiceServer.go @@ -1,4 +1,4 @@ -// Code generated by mockery v2.41.0. DO NOT EDIT. +// Code generated by mockery v2.53.4. DO NOT EDIT. package mockpbresource diff --git a/grpcmocks/proto-public/pbresource/mock_ResourceService_WatchListClient.go b/grpcmocks/proto-public/pbresource/mock_ResourceService_WatchListClient.go index 60df7d5cc6ec..f340286f350b 100644 --- a/grpcmocks/proto-public/pbresource/mock_ResourceService_WatchListClient.go +++ b/grpcmocks/proto-public/pbresource/mock_ResourceService_WatchListClient.go @@ -1,4 +1,4 @@ -// Code generated by mockery v2.41.0. DO NOT EDIT. +// Code generated by mockery v2.53.4. DO NOT EDIT. package mockpbresource @@ -24,7 +24,7 @@ func (_m *ResourceService_WatchListClient) EXPECT() *ResourceService_WatchListCl return &ResourceService_WatchListClient_Expecter{mock: &_m.Mock} } -// CloseSend provides a mock function with given fields: +// CloseSend provides a mock function with no fields func (_m *ResourceService_WatchListClient) CloseSend() error { ret := _m.Called() @@ -69,7 +69,7 @@ func (_c *ResourceService_WatchListClient_CloseSend_Call) RunAndReturn(run func( return _c } -// Context provides a mock function with given fields: +// Context provides a mock function with no fields func (_m *ResourceService_WatchListClient) Context() context.Context { ret := _m.Called() @@ -116,7 +116,7 @@ func (_c *ResourceService_WatchListClient_Context_Call) RunAndReturn(run func() return _c } -// Header provides a mock function with given fields: +// Header provides a mock function with no fields func (_m *ResourceService_WatchListClient) Header() (metadata.MD, error) { ret := _m.Called() @@ -173,7 +173,7 @@ func (_c *ResourceService_WatchListClient_Header_Call) RunAndReturn(run func() ( return _c } -// Recv provides a mock function with given fields: +// Recv provides a mock function with no fields func (_m *ResourceService_WatchListClient) Recv() (*pbresource.WatchEvent, error) { ret := _m.Called() @@ -322,7 +322,7 @@ func (_c *ResourceService_WatchListClient_SendMsg_Call) RunAndReturn(run func(in return _c } -// Trailer provides a mock function with given fields: +// Trailer provides a mock function with no fields func (_m *ResourceService_WatchListClient) Trailer() metadata.MD { ret := _m.Called() diff --git a/grpcmocks/proto-public/pbresource/mock_ResourceService_WatchListServer.go b/grpcmocks/proto-public/pbresource/mock_ResourceService_WatchListServer.go index df2158cb6483..9725f4982f92 100644 --- a/grpcmocks/proto-public/pbresource/mock_ResourceService_WatchListServer.go +++ b/grpcmocks/proto-public/pbresource/mock_ResourceService_WatchListServer.go @@ -1,4 +1,4 @@ -// Code generated by mockery v2.41.0. DO NOT EDIT. +// Code generated by mockery v2.53.4. DO NOT EDIT. 
package mockpbresource @@ -24,7 +24,7 @@ func (_m *ResourceService_WatchListServer) EXPECT() *ResourceService_WatchListSe return &ResourceService_WatchListServer_Expecter{mock: &_m.Mock} } -// Context provides a mock function with given fields: +// Context provides a mock function with no fields func (_m *ResourceService_WatchListServer) Context() context.Context { ret := _m.Called() @@ -330,7 +330,7 @@ func (_c *ResourceService_WatchListServer_SetTrailer_Call) Return() *ResourceSer } func (_c *ResourceService_WatchListServer_SetTrailer_Call) RunAndReturn(run func(metadata.MD)) *ResourceService_WatchListServer_SetTrailer_Call { - _c.Call.Return(run) + _c.Run(run) return _c } diff --git a/grpcmocks/proto-public/pbresource/mock_UnsafeResourceServiceServer.go b/grpcmocks/proto-public/pbresource/mock_UnsafeResourceServiceServer.go index 59e1aef0c7e0..293374a9bceb 100644 --- a/grpcmocks/proto-public/pbresource/mock_UnsafeResourceServiceServer.go +++ b/grpcmocks/proto-public/pbresource/mock_UnsafeResourceServiceServer.go @@ -1,4 +1,4 @@ -// Code generated by mockery v2.41.0. DO NOT EDIT. +// Code generated by mockery v2.53.4. DO NOT EDIT. package mockpbresource @@ -17,7 +17,7 @@ func (_m *UnsafeResourceServiceServer) EXPECT() *UnsafeResourceServiceServer_Exp return &UnsafeResourceServiceServer_Expecter{mock: &_m.Mock} } -// mustEmbedUnimplementedResourceServiceServer provides a mock function with given fields: +// mustEmbedUnimplementedResourceServiceServer provides a mock function with no fields func (_m *UnsafeResourceServiceServer) mustEmbedUnimplementedResourceServiceServer() { _m.Called() } @@ -45,7 +45,7 @@ func (_c *UnsafeResourceServiceServer_mustEmbedUnimplementedResourceServiceServe } func (_c *UnsafeResourceServiceServer_mustEmbedUnimplementedResourceServiceServer_Call) RunAndReturn(run func()) *UnsafeResourceServiceServer_mustEmbedUnimplementedResourceServiceServer_Call { - _c.Call.Return(run) + _c.Run(run) return _c } diff --git a/grpcmocks/proto-public/pbresource/mock_isWatchEvent_Event.go b/grpcmocks/proto-public/pbresource/mock_isWatchEvent_Event.go index 1a9acfd5293b..d05b8a35d036 100644 --- a/grpcmocks/proto-public/pbresource/mock_isWatchEvent_Event.go +++ b/grpcmocks/proto-public/pbresource/mock_isWatchEvent_Event.go @@ -1,4 +1,4 @@ -// Code generated by mockery v2.41.0. DO NOT EDIT. +// Code generated by mockery v2.53.4. DO NOT EDIT. package mockpbresource @@ -17,7 +17,7 @@ func (_m *isWatchEvent_Event) EXPECT() *isWatchEvent_Event_Expecter { return &isWatchEvent_Event_Expecter{mock: &_m.Mock} } -// isWatchEvent_Event provides a mock function with given fields: +// isWatchEvent_Event provides a mock function with no fields func (_m *isWatchEvent_Event) isWatchEvent_Event() { _m.Called() } @@ -45,7 +45,7 @@ func (_c *isWatchEvent_Event_isWatchEvent_Event_Call) Return() *isWatchEvent_Eve } func (_c *isWatchEvent_Event_isWatchEvent_Event_Call) RunAndReturn(run func()) *isWatchEvent_Event_isWatchEvent_Event_Call { - _c.Call.Return(run) + _c.Run(run) return _c } diff --git a/grpcmocks/proto-public/pbserverdiscovery/mock_ServerDiscoveryServiceClient.go b/grpcmocks/proto-public/pbserverdiscovery/mock_ServerDiscoveryServiceClient.go index 8ec2fca4ccf5..e2effc043193 100644 --- a/grpcmocks/proto-public/pbserverdiscovery/mock_ServerDiscoveryServiceClient.go +++ b/grpcmocks/proto-public/pbserverdiscovery/mock_ServerDiscoveryServiceClient.go @@ -1,4 +1,4 @@ -// Code generated by mockery v2.41.0. DO NOT EDIT. +// Code generated by mockery v2.53.4. DO NOT EDIT. 
package mockpbserverdiscovery diff --git a/grpcmocks/proto-public/pbserverdiscovery/mock_ServerDiscoveryServiceServer.go b/grpcmocks/proto-public/pbserverdiscovery/mock_ServerDiscoveryServiceServer.go index b61f066e96f7..7bdd4e2390e3 100644 --- a/grpcmocks/proto-public/pbserverdiscovery/mock_ServerDiscoveryServiceServer.go +++ b/grpcmocks/proto-public/pbserverdiscovery/mock_ServerDiscoveryServiceServer.go @@ -1,4 +1,4 @@ -// Code generated by mockery v2.41.0. DO NOT EDIT. +// Code generated by mockery v2.53.4. DO NOT EDIT. package mockpbserverdiscovery diff --git a/grpcmocks/proto-public/pbserverdiscovery/mock_ServerDiscoveryService_WatchServersClient.go b/grpcmocks/proto-public/pbserverdiscovery/mock_ServerDiscoveryService_WatchServersClient.go index eb397adadef5..adee58e4e0f5 100644 --- a/grpcmocks/proto-public/pbserverdiscovery/mock_ServerDiscoveryService_WatchServersClient.go +++ b/grpcmocks/proto-public/pbserverdiscovery/mock_ServerDiscoveryService_WatchServersClient.go @@ -1,4 +1,4 @@ -// Code generated by mockery v2.41.0. DO NOT EDIT. +// Code generated by mockery v2.53.4. DO NOT EDIT. package mockpbserverdiscovery @@ -24,7 +24,7 @@ func (_m *ServerDiscoveryService_WatchServersClient) EXPECT() *ServerDiscoverySe return &ServerDiscoveryService_WatchServersClient_Expecter{mock: &_m.Mock} } -// CloseSend provides a mock function with given fields: +// CloseSend provides a mock function with no fields func (_m *ServerDiscoveryService_WatchServersClient) CloseSend() error { ret := _m.Called() @@ -69,7 +69,7 @@ func (_c *ServerDiscoveryService_WatchServersClient_CloseSend_Call) RunAndReturn return _c } -// Context provides a mock function with given fields: +// Context provides a mock function with no fields func (_m *ServerDiscoveryService_WatchServersClient) Context() context.Context { ret := _m.Called() @@ -116,7 +116,7 @@ func (_c *ServerDiscoveryService_WatchServersClient_Context_Call) RunAndReturn(r return _c } -// Header provides a mock function with given fields: +// Header provides a mock function with no fields func (_m *ServerDiscoveryService_WatchServersClient) Header() (metadata.MD, error) { ret := _m.Called() @@ -173,7 +173,7 @@ func (_c *ServerDiscoveryService_WatchServersClient_Header_Call) RunAndReturn(ru return _c } -// Recv provides a mock function with given fields: +// Recv provides a mock function with no fields func (_m *ServerDiscoveryService_WatchServersClient) Recv() (*pbserverdiscovery.WatchServersResponse, error) { ret := _m.Called() @@ -322,7 +322,7 @@ func (_c *ServerDiscoveryService_WatchServersClient_SendMsg_Call) RunAndReturn(r return _c } -// Trailer provides a mock function with given fields: +// Trailer provides a mock function with no fields func (_m *ServerDiscoveryService_WatchServersClient) Trailer() metadata.MD { ret := _m.Called() diff --git a/grpcmocks/proto-public/pbserverdiscovery/mock_ServerDiscoveryService_WatchServersServer.go b/grpcmocks/proto-public/pbserverdiscovery/mock_ServerDiscoveryService_WatchServersServer.go index ea6234ed5dd9..734c789f6b7e 100644 --- a/grpcmocks/proto-public/pbserverdiscovery/mock_ServerDiscoveryService_WatchServersServer.go +++ b/grpcmocks/proto-public/pbserverdiscovery/mock_ServerDiscoveryService_WatchServersServer.go @@ -1,4 +1,4 @@ -// Code generated by mockery v2.41.0. DO NOT EDIT. +// Code generated by mockery v2.53.4. DO NOT EDIT. 
package mockpbserverdiscovery @@ -24,7 +24,7 @@ func (_m *ServerDiscoveryService_WatchServersServer) EXPECT() *ServerDiscoverySe return &ServerDiscoveryService_WatchServersServer_Expecter{mock: &_m.Mock} } -// Context provides a mock function with given fields: +// Context provides a mock function with no fields func (_m *ServerDiscoveryService_WatchServersServer) Context() context.Context { ret := _m.Called() @@ -330,7 +330,7 @@ func (_c *ServerDiscoveryService_WatchServersServer_SetTrailer_Call) Return() *S } func (_c *ServerDiscoveryService_WatchServersServer_SetTrailer_Call) RunAndReturn(run func(metadata.MD)) *ServerDiscoveryService_WatchServersServer_SetTrailer_Call { - _c.Call.Return(run) + _c.Run(run) return _c } diff --git a/grpcmocks/proto-public/pbserverdiscovery/mock_UnsafeServerDiscoveryServiceServer.go b/grpcmocks/proto-public/pbserverdiscovery/mock_UnsafeServerDiscoveryServiceServer.go index 935dbbdbf5bb..641f5e11f958 100644 --- a/grpcmocks/proto-public/pbserverdiscovery/mock_UnsafeServerDiscoveryServiceServer.go +++ b/grpcmocks/proto-public/pbserverdiscovery/mock_UnsafeServerDiscoveryServiceServer.go @@ -1,4 +1,4 @@ -// Code generated by mockery v2.41.0. DO NOT EDIT. +// Code generated by mockery v2.53.4. DO NOT EDIT. package mockpbserverdiscovery @@ -17,7 +17,7 @@ func (_m *UnsafeServerDiscoveryServiceServer) EXPECT() *UnsafeServerDiscoverySer return &UnsafeServerDiscoveryServiceServer_Expecter{mock: &_m.Mock} } -// mustEmbedUnimplementedServerDiscoveryServiceServer provides a mock function with given fields: +// mustEmbedUnimplementedServerDiscoveryServiceServer provides a mock function with no fields func (_m *UnsafeServerDiscoveryServiceServer) mustEmbedUnimplementedServerDiscoveryServiceServer() { _m.Called() } @@ -45,7 +45,7 @@ func (_c *UnsafeServerDiscoveryServiceServer_mustEmbedUnimplementedServerDiscove } func (_c *UnsafeServerDiscoveryServiceServer_mustEmbedUnimplementedServerDiscoveryServiceServer_Call) RunAndReturn(run func()) *UnsafeServerDiscoveryServiceServer_mustEmbedUnimplementedServerDiscoveryServiceServer_Call { - _c.Call.Return(run) + _c.Run(run) return _c } diff --git a/internal/protohcl/testproto/example.pb.go b/internal/protohcl/testproto/example.pb.go index 1b2504ddd6eb..263245c5132f 100644 --- a/internal/protohcl/testproto/example.pb.go +++ b/internal/protohcl/testproto/example.pb.go @@ -3,7 +3,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.33.0 +// protoc-gen-go v1.34.2 // protoc (unknown) // source: example.proto @@ -837,7 +837,7 @@ func file_example_proto_rawDescGZIP() []byte { var file_example_proto_enumTypes = make([]protoimpl.EnumInfo, 1) var file_example_proto_msgTypes = make([]protoimpl.MessageInfo, 8) -var file_example_proto_goTypes = []interface{}{ +var file_example_proto_goTypes = []any{ (Protocol)(0), // 0: hashicorp.consul.internal.protohcl.testproto.Protocol (*Primitives)(nil), // 1: hashicorp.consul.internal.protohcl.testproto.Primitives (*NestedAndCollections)(nil), // 2: hashicorp.consul.internal.protohcl.testproto.NestedAndCollections @@ -898,7 +898,7 @@ func file_example_proto_init() { return } if !protoimpl.UnsafeEnabled { - file_example_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} { + file_example_proto_msgTypes[0].Exporter = func(v any, i int) any { switch v := v.(*Primitives); i { case 0: return &v.state @@ -910,7 +910,7 @@ func file_example_proto_init() { return nil } } - file_example_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} { + file_example_proto_msgTypes[1].Exporter = func(v any, i int) any { switch v := v.(*NestedAndCollections); i { case 0: return &v.state @@ -922,7 +922,7 @@ func file_example_proto_init() { return nil } } - file_example_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} { + file_example_proto_msgTypes[2].Exporter = func(v any, i int) any { switch v := v.(*Wrappers); i { case 0: return &v.state @@ -934,7 +934,7 @@ func file_example_proto_init() { return nil } } - file_example_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} { + file_example_proto_msgTypes[3].Exporter = func(v any, i int) any { switch v := v.(*OneOf); i { case 0: return &v.state @@ -946,7 +946,7 @@ func file_example_proto_init() { return nil } } - file_example_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} { + file_example_proto_msgTypes[4].Exporter = func(v any, i int) any { switch v := v.(*NonDynamicWellKnown); i { case 0: return &v.state @@ -958,7 +958,7 @@ func file_example_proto_init() { return nil } } - file_example_proto_msgTypes[5].Exporter = func(v interface{}, i int) interface{} { + file_example_proto_msgTypes[5].Exporter = func(v any, i int) any { switch v := v.(*DynamicWellKnown); i { case 0: return &v.state @@ -971,7 +971,7 @@ func file_example_proto_init() { } } } - file_example_proto_msgTypes[3].OneofWrappers = []interface{}{ + file_example_proto_msgTypes[3].OneofWrappers = []any{ (*OneOf_Int32Val)(nil), (*OneOf_Primitives)(nil), } diff --git a/internal/tools/proto-gen-rpc-glue/e2e/consul/go.mod b/internal/tools/proto-gen-rpc-glue/e2e/consul/go.mod index f4a9ed8e0e7b..61c3066e2dd7 100644 --- a/internal/tools/proto-gen-rpc-glue/e2e/consul/go.mod +++ b/internal/tools/proto-gen-rpc-glue/e2e/consul/go.mod @@ -1,5 +1,5 @@ module github.com/hashicorp/consul -go 1.22.12 +go 1.23 require google.golang.org/protobuf v1.28.1 diff --git a/internal/tools/proto-gen-rpc-glue/e2e/consul/proto/pbcommon/common.pb.go b/internal/tools/proto-gen-rpc-glue/e2e/consul/proto/pbcommon/common.pb.go index c5303c923acc..a06f10aaf3ee 100644 --- a/internal/tools/proto-gen-rpc-glue/e2e/consul/proto/pbcommon/common.pb.go +++ b/internal/tools/proto-gen-rpc-glue/e2e/consul/proto/pbcommon/common.pb.go @@ -266,7 +266,7 @@ type QueryOptions struct { // returned. Clients that wish to allow for stale results on error can set // StaleIfError to a longer duration to change this behavior. 
It is ignored // if the endpoint supports background refresh caching. See - // https://www.consul.io/api/index.html#agent-caching for more details. + // https://developer.hashicorp.com/api/index.html#agent-caching for more details. // mog: func-to=structs.DurationFromProto func-from=structs.DurationToProto MaxAge *duration.Duration `protobuf:"bytes,8,opt,name=MaxAge,proto3" json:"MaxAge,omitempty"` // MustRevalidate forces the agent to fetch a fresh version of a cached @@ -279,7 +279,7 @@ type QueryOptions struct { // if the servers are unavailable to fetch a fresh one. Only makes sense when // UseCache is true and MaxAge is set to a lower, non-zero value. It is // ignored if the endpoint supports background refresh caching. See - // https://www.consul.io/api/index.html#agent-caching for more details. + // https://developer.hashicorp.com/api/index.html#agent-caching for more details. StaleIfError *duration.Duration `protobuf:"bytes,10,opt,name=StaleIfError,proto3" json:"StaleIfError,omitempty"` // Filter specifies the go-bexpr filter expression to be used for // filtering the data prior to returning a response diff --git a/internal/tools/proto-gen-rpc-glue/e2e/go.mod b/internal/tools/proto-gen-rpc-glue/e2e/go.mod index 5b285a9ff076..eb5d142270e1 100644 --- a/internal/tools/proto-gen-rpc-glue/e2e/go.mod +++ b/internal/tools/proto-gen-rpc-glue/e2e/go.mod @@ -1,6 +1,6 @@ module github.com/hashicorp/consul/internal/tools/proto-gen-rpc-glue/e2e -go 1.22.12 +go 1.23 replace github.com/hashicorp/consul => ./consul diff --git a/internal/tools/proto-gen-rpc-glue/go.mod b/internal/tools/proto-gen-rpc-glue/go.mod index 49d7fa58fdbf..ce1c05d21e14 100644 --- a/internal/tools/proto-gen-rpc-glue/go.mod +++ b/internal/tools/proto-gen-rpc-glue/go.mod @@ -1,6 +1,6 @@ module github.com/hashicorp/consul/internal/tools/proto-gen-rpc-glue -go 1.22.12 +go 1.23 require github.com/stretchr/testify v1.8.4 diff --git a/internal/tools/protoc-gen-consul-rate-limit/go.mod b/internal/tools/protoc-gen-consul-rate-limit/go.mod index 7dfdbee14068..315399cb9823 100644 --- a/internal/tools/protoc-gen-consul-rate-limit/go.mod +++ b/internal/tools/protoc-gen-consul-rate-limit/go.mod @@ -1,6 +1,6 @@ module github.com/hashicorp/consul/internal/tools/protoc-gen-consul-rate-limit -go 1.22.12 +go 1.23.12 replace github.com/hashicorp/consul/proto-public => ../../../proto-public diff --git a/internal/tools/protoc-gen-grpc-clone/e2e/proto/service.pb.go b/internal/tools/protoc-gen-grpc-clone/e2e/proto/service.pb.go index 4703ce28cb8d..9ea9c6bdbf3d 100644 --- a/internal/tools/protoc-gen-grpc-clone/e2e/proto/service.pb.go +++ b/internal/tools/protoc-gen-grpc-clone/e2e/proto/service.pb.go @@ -6,7 +6,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.33.0 +// protoc-gen-go v1.34.2 // protoc (unknown) // source: service.proto @@ -190,7 +190,7 @@ func file_service_proto_rawDescGZIP() []byte { } var file_service_proto_msgTypes = make([]protoimpl.MessageInfo, 2) -var file_service_proto_goTypes = []interface{}{ +var file_service_proto_goTypes = []any{ (*Req)(nil), // 0: hashicorp.consul.internal.protoc_gen_grpc_clone.testing.Req (*Resp)(nil), // 1: hashicorp.consul.internal.protoc_gen_grpc_clone.testing.Resp } @@ -212,7 +212,7 @@ func file_service_proto_init() { return } if !protoimpl.UnsafeEnabled { - file_service_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} { + file_service_proto_msgTypes[0].Exporter = func(v any, i int) any { switch v := v.(*Req); i { case 0: return &v.state @@ -224,7 +224,7 @@ func file_service_proto_init() { return nil } } - file_service_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} { + file_service_proto_msgTypes[1].Exporter = func(v any, i int) any { switch v := v.(*Resp); i { case 0: return &v.state diff --git a/lib/useragent.go b/lib/useragent.go index 89c73dec5529..3af9b45ce076 100644 --- a/lib/useragent.go +++ b/lib/useragent.go @@ -12,7 +12,7 @@ import ( var ( // projectURL is the project URL. - projectURL = "https://www.consul.io/" + projectURL = "https://developer.hashicorp.com/" // rt is the runtime - variable for tests. rt = runtime.Version() diff --git a/package-lock.json b/package-lock.json new file mode 100644 index 000000000000..9d7f9b74d06a --- /dev/null +++ b/package-lock.json @@ -0,0 +1,6 @@ +{ + "name": "consul-ia-experiments", + "lockfileVersion": 3, + "requires": true, + "packages": {} +} diff --git a/proto-public/annotations/ratelimit/ratelimit.pb.go b/proto-public/annotations/ratelimit/ratelimit.pb.go index 085507402aea..9b4e294a3780 100644 --- a/proto-public/annotations/ratelimit/ratelimit.pb.go +++ b/proto-public/annotations/ratelimit/ratelimit.pb.go @@ -3,7 +3,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.33.0 +// protoc-gen-go v1.34.2 // protoc (unknown) // source: annotations/ratelimit/ratelimit.proto @@ -331,7 +331,7 @@ func file_annotations_ratelimit_ratelimit_proto_rawDescGZIP() []byte { var file_annotations_ratelimit_ratelimit_proto_enumTypes = make([]protoimpl.EnumInfo, 2) var file_annotations_ratelimit_ratelimit_proto_msgTypes = make([]protoimpl.MessageInfo, 1) -var file_annotations_ratelimit_ratelimit_proto_goTypes = []interface{}{ +var file_annotations_ratelimit_ratelimit_proto_goTypes = []any{ (OperationType)(0), // 0: hashicorp.consul.internal.ratelimit.OperationType (OperationCategory)(0), // 1: hashicorp.consul.internal.ratelimit.OperationCategory (*Spec)(nil), // 2: hashicorp.consul.internal.ratelimit.Spec @@ -355,7 +355,7 @@ func file_annotations_ratelimit_ratelimit_proto_init() { return } if !protoimpl.UnsafeEnabled { - file_annotations_ratelimit_ratelimit_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} { + file_annotations_ratelimit_ratelimit_proto_msgTypes[0].Exporter = func(v any, i int) any { switch v := v.(*Spec); i { case 0: return &v.state diff --git a/proto-public/go.mod b/proto-public/go.mod index 238b1d809029..05abd592ed68 100644 --- a/proto-public/go.mod +++ b/proto-public/go.mod @@ -1,6 +1,6 @@ module github.com/hashicorp/consul/proto-public -go 1.22.12 +go 1.23.12 require ( google.golang.org/grpc v1.56.3 @@ -9,8 +9,8 @@ require ( require ( github.com/golang/protobuf v1.5.4 // indirect - golang.org/x/net v0.34.0 // indirect - golang.org/x/sys v0.29.0 // indirect - golang.org/x/text v0.21.0 // indirect + golang.org/x/net v0.43.0 // indirect + golang.org/x/sys v0.35.0 // indirect + golang.org/x/text v0.28.0 // indirect google.golang.org/genproto v0.0.0-20230410155749-daa745c078e1 // indirect ) diff --git a/proto-public/go.sum b/proto-public/go.sum index 8b25c578b0b7..b22031d5fa5d 100644 --- a/proto-public/go.sum +++ b/proto-public/go.sum @@ -2,12 +2,12 @@ github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps= github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38= github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= -golang.org/x/net v0.34.0 h1:Mb7Mrk043xzHgnRM88suvJFwzVrRfHEHJEl5/71CKw0= -golang.org/x/net v0.34.0/go.mod h1:di0qlW3YNM5oh6GqDGQr92MyTozJPmybPK4Ev/Gm31k= -golang.org/x/sys v0.29.0 h1:TPYlXGxvx1MGTn2GiZDhnjPA9wZzZeGKHHmKhHYvgaU= -golang.org/x/sys v0.29.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= -golang.org/x/text v0.21.0 h1:zyQAAkrwaneQ066sspRyJaG9VNi/YJ1NfzcGB3hZ/qo= -golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ= +golang.org/x/net v0.43.0 h1:lat02VYK2j4aLzMzecihNvTlJNQUq316m2Mr9rnM6YE= +golang.org/x/net v0.43.0/go.mod h1:vhO1fvI4dGsIjh73sWfUVjj3N7CA9WkKJNQm2svM6Jg= +golang.org/x/sys v0.35.0 h1:vz1N37gP5bs89s7He8XuIYXpyY0+QlsKmzipCbUtyxI= +golang.org/x/sys v0.35.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k= +golang.org/x/text v0.28.0 h1:rhazDwis8INMIwQ4tpjLDzUhx6RlXqZNPEM0huQojng= +golang.org/x/text v0.28.0/go.mod h1:U8nCwOR8jO/marOQ0QbDiOngZVEBB7MAiitBuMjXiNU= google.golang.org/genproto v0.0.0-20230410155749-daa745c078e1 h1:KpwkzHKEF7B9Zxg18WzOa7djJ+Ha5DzthMyZYQfEn2A= google.golang.org/genproto v0.0.0-20230410155749-daa745c078e1/go.mod h1:nKE/iIaLqn2bQwXBg8f1g2Ylh6r5MN5CmZvuzZCgsCU= google.golang.org/grpc v1.56.3 h1:8I4C0Yq1EjstUzUJzpcRVbuYA2mODtEmpWiQoN/b2nc= diff 
--git a/proto-public/pbacl/acl.pb.go b/proto-public/pbacl/acl.pb.go index e8f18d870f72..695fae00901d 100644 --- a/proto-public/pbacl/acl.pb.go +++ b/proto-public/pbacl/acl.pb.go @@ -3,7 +3,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.33.0 +// protoc-gen-go v1.34.2 // protoc (unknown) // source: pbacl/acl.proto @@ -405,7 +405,7 @@ func file_pbacl_acl_proto_rawDescGZIP() []byte { } var file_pbacl_acl_proto_msgTypes = make([]protoimpl.MessageInfo, 6) -var file_pbacl_acl_proto_goTypes = []interface{}{ +var file_pbacl_acl_proto_goTypes = []any{ (*LogoutResponse)(nil), // 0: hashicorp.consul.acl.LogoutResponse (*LoginRequest)(nil), // 1: hashicorp.consul.acl.LoginRequest (*LoginResponse)(nil), // 2: hashicorp.consul.acl.LoginResponse @@ -433,7 +433,7 @@ func file_pbacl_acl_proto_init() { return } if !protoimpl.UnsafeEnabled { - file_pbacl_acl_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} { + file_pbacl_acl_proto_msgTypes[0].Exporter = func(v any, i int) any { switch v := v.(*LogoutResponse); i { case 0: return &v.state @@ -445,7 +445,7 @@ func file_pbacl_acl_proto_init() { return nil } } - file_pbacl_acl_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} { + file_pbacl_acl_proto_msgTypes[1].Exporter = func(v any, i int) any { switch v := v.(*LoginRequest); i { case 0: return &v.state @@ -457,7 +457,7 @@ func file_pbacl_acl_proto_init() { return nil } } - file_pbacl_acl_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} { + file_pbacl_acl_proto_msgTypes[2].Exporter = func(v any, i int) any { switch v := v.(*LoginResponse); i { case 0: return &v.state @@ -469,7 +469,7 @@ func file_pbacl_acl_proto_init() { return nil } } - file_pbacl_acl_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} { + file_pbacl_acl_proto_msgTypes[3].Exporter = func(v any, i int) any { switch v := v.(*LoginToken); i { case 0: return &v.state @@ -481,7 +481,7 @@ func file_pbacl_acl_proto_init() { return nil } } - file_pbacl_acl_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} { + file_pbacl_acl_proto_msgTypes[4].Exporter = func(v any, i int) any { switch v := v.(*LogoutRequest); i { case 0: return &v.state diff --git a/proto-public/pbconnectca/ca.pb.go b/proto-public/pbconnectca/ca.pb.go index 69d2307565c5..08d6f1c4271f 100644 --- a/proto-public/pbconnectca/ca.pb.go +++ b/proto-public/pbconnectca/ca.pb.go @@ -3,7 +3,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.33.0 +// protoc-gen-go v1.34.2 // protoc (unknown) // source: pbconnectca/ca.proto @@ -450,7 +450,7 @@ func file_pbconnectca_ca_proto_rawDescGZIP() []byte { } var file_pbconnectca_ca_proto_msgTypes = make([]protoimpl.MessageInfo, 5) -var file_pbconnectca_ca_proto_goTypes = []interface{}{ +var file_pbconnectca_ca_proto_goTypes = []any{ (*WatchRootsRequest)(nil), // 0: hashicorp.consul.connectca.WatchRootsRequest (*WatchRootsResponse)(nil), // 1: hashicorp.consul.connectca.WatchRootsResponse (*CARoot)(nil), // 2: hashicorp.consul.connectca.CARoot @@ -478,7 +478,7 @@ func file_pbconnectca_ca_proto_init() { return } if !protoimpl.UnsafeEnabled { - file_pbconnectca_ca_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} { + file_pbconnectca_ca_proto_msgTypes[0].Exporter = func(v any, i int) any { switch v := v.(*WatchRootsRequest); i { case 0: return &v.state @@ -490,7 +490,7 @@ func file_pbconnectca_ca_proto_init() { return nil } } - file_pbconnectca_ca_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} { + file_pbconnectca_ca_proto_msgTypes[1].Exporter = func(v any, i int) any { switch v := v.(*WatchRootsResponse); i { case 0: return &v.state @@ -502,7 +502,7 @@ func file_pbconnectca_ca_proto_init() { return nil } } - file_pbconnectca_ca_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} { + file_pbconnectca_ca_proto_msgTypes[2].Exporter = func(v any, i int) any { switch v := v.(*CARoot); i { case 0: return &v.state @@ -514,7 +514,7 @@ func file_pbconnectca_ca_proto_init() { return nil } } - file_pbconnectca_ca_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} { + file_pbconnectca_ca_proto_msgTypes[3].Exporter = func(v any, i int) any { switch v := v.(*SignRequest); i { case 0: return &v.state @@ -526,7 +526,7 @@ func file_pbconnectca_ca_proto_init() { return nil } } - file_pbconnectca_ca_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} { + file_pbconnectca_ca_proto_msgTypes[4].Exporter = func(v any, i int) any { switch v := v.(*SignResponse); i { case 0: return &v.state diff --git a/proto-public/pbdataplane/dataplane.pb.go b/proto-public/pbdataplane/dataplane.pb.go index aa4b70c794df..f4192ef8a5a7 100644 --- a/proto-public/pbdataplane/dataplane.pb.go +++ b/proto-public/pbdataplane/dataplane.pb.go @@ -5,7 +5,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.33.0 +// protoc-gen-go v1.34.2 // protoc (unknown) // source: pbdataplane/dataplane.proto @@ -685,7 +685,7 @@ func file_pbdataplane_dataplane_proto_rawDescGZIP() []byte { var file_pbdataplane_dataplane_proto_enumTypes = make([]protoimpl.EnumInfo, 2) var file_pbdataplane_dataplane_proto_msgTypes = make([]protoimpl.MessageInfo, 5) -var file_pbdataplane_dataplane_proto_goTypes = []interface{}{ +var file_pbdataplane_dataplane_proto_goTypes = []any{ (DataplaneFeatures)(0), // 0: hashicorp.consul.dataplane.DataplaneFeatures (ServiceKind)(0), // 1: hashicorp.consul.dataplane.ServiceKind (*GetSupportedDataplaneFeaturesRequest)(nil), // 2: hashicorp.consul.dataplane.GetSupportedDataplaneFeaturesRequest @@ -716,7 +716,7 @@ func file_pbdataplane_dataplane_proto_init() { return } if !protoimpl.UnsafeEnabled { - file_pbdataplane_dataplane_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} { + file_pbdataplane_dataplane_proto_msgTypes[0].Exporter = func(v any, i int) any { switch v := v.(*GetSupportedDataplaneFeaturesRequest); i { case 0: return &v.state @@ -728,7 +728,7 @@ func file_pbdataplane_dataplane_proto_init() { return nil } } - file_pbdataplane_dataplane_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} { + file_pbdataplane_dataplane_proto_msgTypes[1].Exporter = func(v any, i int) any { switch v := v.(*DataplaneFeatureSupport); i { case 0: return &v.state @@ -740,7 +740,7 @@ func file_pbdataplane_dataplane_proto_init() { return nil } } - file_pbdataplane_dataplane_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} { + file_pbdataplane_dataplane_proto_msgTypes[2].Exporter = func(v any, i int) any { switch v := v.(*GetSupportedDataplaneFeaturesResponse); i { case 0: return &v.state @@ -752,7 +752,7 @@ func file_pbdataplane_dataplane_proto_init() { return nil } } - file_pbdataplane_dataplane_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} { + file_pbdataplane_dataplane_proto_msgTypes[3].Exporter = func(v any, i int) any { switch v := v.(*GetEnvoyBootstrapParamsRequest); i { case 0: return &v.state @@ -764,7 +764,7 @@ func file_pbdataplane_dataplane_proto_init() { return nil } } - file_pbdataplane_dataplane_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} { + file_pbdataplane_dataplane_proto_msgTypes[4].Exporter = func(v any, i int) any { switch v := v.(*GetEnvoyBootstrapParamsResponse); i { case 0: return &v.state @@ -777,7 +777,7 @@ func file_pbdataplane_dataplane_proto_init() { } } } - file_pbdataplane_dataplane_proto_msgTypes[3].OneofWrappers = []interface{}{ + file_pbdataplane_dataplane_proto_msgTypes[3].OneofWrappers = []any{ (*GetEnvoyBootstrapParamsRequest_NodeId)(nil), (*GetEnvoyBootstrapParamsRequest_NodeName)(nil), } diff --git a/proto-public/pbdns/dns.pb.go b/proto-public/pbdns/dns.pb.go index 3bce69909a68..09f3489b95c8 100644 --- a/proto-public/pbdns/dns.pb.go +++ b/proto-public/pbdns/dns.pb.go @@ -3,7 +3,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.33.0 +// protoc-gen-go v1.34.2 // protoc (unknown) // source: pbdns/dns.proto @@ -235,7 +235,7 @@ func file_pbdns_dns_proto_rawDescGZIP() []byte { var file_pbdns_dns_proto_enumTypes = make([]protoimpl.EnumInfo, 1) var file_pbdns_dns_proto_msgTypes = make([]protoimpl.MessageInfo, 2) -var file_pbdns_dns_proto_goTypes = []interface{}{ +var file_pbdns_dns_proto_goTypes = []any{ (Protocol)(0), // 0: hashicorp.consul.dns.Protocol (*QueryRequest)(nil), // 1: hashicorp.consul.dns.QueryRequest (*QueryResponse)(nil), // 2: hashicorp.consul.dns.QueryResponse @@ -257,7 +257,7 @@ func file_pbdns_dns_proto_init() { return } if !protoimpl.UnsafeEnabled { - file_pbdns_dns_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} { + file_pbdns_dns_proto_msgTypes[0].Exporter = func(v any, i int) any { switch v := v.(*QueryRequest); i { case 0: return &v.state @@ -269,7 +269,7 @@ func file_pbdns_dns_proto_init() { return nil } } - file_pbdns_dns_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} { + file_pbdns_dns_proto_msgTypes[1].Exporter = func(v any, i int) any { switch v := v.(*QueryResponse); i { case 0: return &v.state diff --git a/proto-public/pbmulticluster/v2/computed_exported_services.pb.go b/proto-public/pbmulticluster/v2/computed_exported_services.pb.go index 91319c44c2c5..91ee7e653bc6 100644 --- a/proto-public/pbmulticluster/v2/computed_exported_services.pb.go +++ b/proto-public/pbmulticluster/v2/computed_exported_services.pb.go @@ -3,7 +3,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.33.0 +// protoc-gen-go v1.34.2 // protoc (unknown) // source: pbmulticluster/v2/computed_exported_services.proto @@ -282,7 +282,7 @@ func file_pbmulticluster_v2_computed_exported_services_proto_rawDescGZIP() []byt } var file_pbmulticluster_v2_computed_exported_services_proto_msgTypes = make([]protoimpl.MessageInfo, 3) -var file_pbmulticluster_v2_computed_exported_services_proto_goTypes = []interface{}{ +var file_pbmulticluster_v2_computed_exported_services_proto_goTypes = []any{ (*ComputedExportedServices)(nil), // 0: hashicorp.consul.multicluster.v2.ComputedExportedServices (*ComputedExportedService)(nil), // 1: hashicorp.consul.multicluster.v2.ComputedExportedService (*ComputedExportedServiceConsumer)(nil), // 2: hashicorp.consul.multicluster.v2.ComputedExportedServiceConsumer @@ -305,7 +305,7 @@ func file_pbmulticluster_v2_computed_exported_services_proto_init() { return } if !protoimpl.UnsafeEnabled { - file_pbmulticluster_v2_computed_exported_services_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} { + file_pbmulticluster_v2_computed_exported_services_proto_msgTypes[0].Exporter = func(v any, i int) any { switch v := v.(*ComputedExportedServices); i { case 0: return &v.state @@ -317,7 +317,7 @@ func file_pbmulticluster_v2_computed_exported_services_proto_init() { return nil } } - file_pbmulticluster_v2_computed_exported_services_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} { + file_pbmulticluster_v2_computed_exported_services_proto_msgTypes[1].Exporter = func(v any, i int) any { switch v := v.(*ComputedExportedService); i { case 0: return &v.state @@ -329,7 +329,7 @@ func file_pbmulticluster_v2_computed_exported_services_proto_init() { return nil } } - file_pbmulticluster_v2_computed_exported_services_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} { + file_pbmulticluster_v2_computed_exported_services_proto_msgTypes[2].Exporter = func(v any, i 
int) any { switch v := v.(*ComputedExportedServiceConsumer); i { case 0: return &v.state @@ -342,7 +342,7 @@ func file_pbmulticluster_v2_computed_exported_services_proto_init() { } } } - file_pbmulticluster_v2_computed_exported_services_proto_msgTypes[2].OneofWrappers = []interface{}{ + file_pbmulticluster_v2_computed_exported_services_proto_msgTypes[2].OneofWrappers = []any{ (*ComputedExportedServiceConsumer_Peer)(nil), (*ComputedExportedServiceConsumer_Partition)(nil), } diff --git a/proto-public/pbmulticluster/v2/exported_services.pb.go b/proto-public/pbmulticluster/v2/exported_services.pb.go index 796534e2c273..8c633cb3bfa6 100644 --- a/proto-public/pbmulticluster/v2/exported_services.pb.go +++ b/proto-public/pbmulticluster/v2/exported_services.pb.go @@ -3,7 +3,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.33.0 +// protoc-gen-go v1.34.2 // protoc (unknown) // source: pbmulticluster/v2/exported_services.proto @@ -136,7 +136,7 @@ func file_pbmulticluster_v2_exported_services_proto_rawDescGZIP() []byte { } var file_pbmulticluster_v2_exported_services_proto_msgTypes = make([]protoimpl.MessageInfo, 1) -var file_pbmulticluster_v2_exported_services_proto_goTypes = []interface{}{ +var file_pbmulticluster_v2_exported_services_proto_goTypes = []any{ (*ExportedServices)(nil), // 0: hashicorp.consul.multicluster.v2.ExportedServices (*ExportedServicesConsumer)(nil), // 1: hashicorp.consul.multicluster.v2.ExportedServicesConsumer } @@ -156,7 +156,7 @@ func file_pbmulticluster_v2_exported_services_proto_init() { } file_pbmulticluster_v2_exported_services_consumer_proto_init() if !protoimpl.UnsafeEnabled { - file_pbmulticluster_v2_exported_services_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} { + file_pbmulticluster_v2_exported_services_proto_msgTypes[0].Exporter = func(v any, i int) any { switch v := v.(*ExportedServices); i { case 0: return &v.state diff --git a/proto-public/pbmulticluster/v2/exported_services_consumer.pb.go b/proto-public/pbmulticluster/v2/exported_services_consumer.pb.go index 83817a0101bc..e56892946eab 100644 --- a/proto-public/pbmulticluster/v2/exported_services_consumer.pb.go +++ b/proto-public/pbmulticluster/v2/exported_services_consumer.pb.go @@ -3,7 +3,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.33.0 +// protoc-gen-go v1.34.2 // protoc (unknown) // source: pbmulticluster/v2/exported_services_consumer.proto @@ -174,7 +174,7 @@ func file_pbmulticluster_v2_exported_services_consumer_proto_rawDescGZIP() []byt } var file_pbmulticluster_v2_exported_services_consumer_proto_msgTypes = make([]protoimpl.MessageInfo, 1) -var file_pbmulticluster_v2_exported_services_consumer_proto_goTypes = []interface{}{ +var file_pbmulticluster_v2_exported_services_consumer_proto_goTypes = []any{ (*ExportedServicesConsumer)(nil), // 0: hashicorp.consul.multicluster.v2.ExportedServicesConsumer } var file_pbmulticluster_v2_exported_services_consumer_proto_depIdxs = []int32{ @@ -191,7 +191,7 @@ func file_pbmulticluster_v2_exported_services_consumer_proto_init() { return } if !protoimpl.UnsafeEnabled { - file_pbmulticluster_v2_exported_services_consumer_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} { + file_pbmulticluster_v2_exported_services_consumer_proto_msgTypes[0].Exporter = func(v any, i int) any { switch v := v.(*ExportedServicesConsumer); i { case 0: return &v.state @@ -204,7 +204,7 @@ func file_pbmulticluster_v2_exported_services_consumer_proto_init() { } } } - file_pbmulticluster_v2_exported_services_consumer_proto_msgTypes[0].OneofWrappers = []interface{}{ + file_pbmulticluster_v2_exported_services_consumer_proto_msgTypes[0].OneofWrappers = []any{ (*ExportedServicesConsumer_Peer)(nil), (*ExportedServicesConsumer_Partition)(nil), (*ExportedServicesConsumer_SamenessGroup)(nil), diff --git a/proto-public/pbmulticluster/v2/namespace_exported_services.pb.go b/proto-public/pbmulticluster/v2/namespace_exported_services.pb.go index d156c7d70095..1bbd7bc6b30f 100644 --- a/proto-public/pbmulticluster/v2/namespace_exported_services.pb.go +++ b/proto-public/pbmulticluster/v2/namespace_exported_services.pb.go @@ -3,7 +3,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.33.0 +// protoc-gen-go v1.34.2 // protoc (unknown) // source: pbmulticluster/v2/namespace_exported_services.proto @@ -128,7 +128,7 @@ func file_pbmulticluster_v2_namespace_exported_services_proto_rawDescGZIP() []by } var file_pbmulticluster_v2_namespace_exported_services_proto_msgTypes = make([]protoimpl.MessageInfo, 1) -var file_pbmulticluster_v2_namespace_exported_services_proto_goTypes = []interface{}{ +var file_pbmulticluster_v2_namespace_exported_services_proto_goTypes = []any{ (*NamespaceExportedServices)(nil), // 0: hashicorp.consul.multicluster.v2.NamespaceExportedServices (*ExportedServicesConsumer)(nil), // 1: hashicorp.consul.multicluster.v2.ExportedServicesConsumer } @@ -148,7 +148,7 @@ func file_pbmulticluster_v2_namespace_exported_services_proto_init() { } file_pbmulticluster_v2_exported_services_consumer_proto_init() if !protoimpl.UnsafeEnabled { - file_pbmulticluster_v2_namespace_exported_services_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} { + file_pbmulticluster_v2_namespace_exported_services_proto_msgTypes[0].Exporter = func(v any, i int) any { switch v := v.(*NamespaceExportedServices); i { case 0: return &v.state diff --git a/proto-public/pbmulticluster/v2/partition_exported_services.pb.go b/proto-public/pbmulticluster/v2/partition_exported_services.pb.go index 3b8aa0e18f64..666fc8f7e8e2 100644 --- a/proto-public/pbmulticluster/v2/partition_exported_services.pb.go +++ b/proto-public/pbmulticluster/v2/partition_exported_services.pb.go @@ -3,7 +3,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.33.0 +// protoc-gen-go v1.34.2 // protoc (unknown) // source: pbmulticluster/v2/partition_exported_services.proto @@ -128,7 +128,7 @@ func file_pbmulticluster_v2_partition_exported_services_proto_rawDescGZIP() []by } var file_pbmulticluster_v2_partition_exported_services_proto_msgTypes = make([]protoimpl.MessageInfo, 1) -var file_pbmulticluster_v2_partition_exported_services_proto_goTypes = []interface{}{ +var file_pbmulticluster_v2_partition_exported_services_proto_goTypes = []any{ (*PartitionExportedServices)(nil), // 0: hashicorp.consul.multicluster.v2.PartitionExportedServices (*ExportedServicesConsumer)(nil), // 1: hashicorp.consul.multicluster.v2.ExportedServicesConsumer } @@ -148,7 +148,7 @@ func file_pbmulticluster_v2_partition_exported_services_proto_init() { } file_pbmulticluster_v2_exported_services_consumer_proto_init() if !protoimpl.UnsafeEnabled { - file_pbmulticluster_v2_partition_exported_services_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} { + file_pbmulticluster_v2_partition_exported_services_proto_msgTypes[0].Exporter = func(v any, i int) any { switch v := v.(*PartitionExportedServices); i { case 0: return &v.state diff --git a/proto-public/pbresource/annotations.pb.go b/proto-public/pbresource/annotations.pb.go index f7a670575898..52b3d433e17b 100644 --- a/proto-public/pbresource/annotations.pb.go +++ b/proto-public/pbresource/annotations.pb.go @@ -3,7 +3,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.33.0 +// protoc-gen-go v1.34.2 // protoc (unknown) // source: pbresource/annotations.proto @@ -208,7 +208,7 @@ func file_pbresource_annotations_proto_rawDescGZIP() []byte { var file_pbresource_annotations_proto_enumTypes = make([]protoimpl.EnumInfo, 1) var file_pbresource_annotations_proto_msgTypes = make([]protoimpl.MessageInfo, 1) -var file_pbresource_annotations_proto_goTypes = []interface{}{ +var file_pbresource_annotations_proto_goTypes = []any{ (Scope)(0), // 0: hashicorp.consul.resource.Scope (*ResourceTypeSpec)(nil), // 1: hashicorp.consul.resource.ResourceTypeSpec (*descriptorpb.MessageOptions)(nil), // 2: google.protobuf.MessageOptions @@ -230,7 +230,7 @@ func file_pbresource_annotations_proto_init() { return } if !protoimpl.UnsafeEnabled { - file_pbresource_annotations_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} { + file_pbresource_annotations_proto_msgTypes[0].Exporter = func(v any, i int) any { switch v := v.(*ResourceTypeSpec); i { case 0: return &v.state diff --git a/proto-public/pbresource/resource.pb.go b/proto-public/pbresource/resource.pb.go index a0b65b349485..08908bb20732 100644 --- a/proto-public/pbresource/resource.pb.go +++ b/proto-public/pbresource/resource.pb.go @@ -3,7 +3,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.33.0 +// protoc-gen-go v1.34.2 // protoc (unknown) // source: pbresource/resource.proto @@ -2073,7 +2073,7 @@ func file_pbresource_resource_proto_rawDescGZIP() []byte { var file_pbresource_resource_proto_enumTypes = make([]protoimpl.EnumInfo, 1) var file_pbresource_resource_proto_msgTypes = make([]protoimpl.MessageInfo, 29) -var file_pbresource_resource_proto_goTypes = []interface{}{ +var file_pbresource_resource_proto_goTypes = []any{ (Condition_State)(0), // 0: hashicorp.consul.resource.Condition.State (*Type)(nil), // 1: hashicorp.consul.resource.Type (*Tenancy)(nil), // 2: hashicorp.consul.resource.Tenancy @@ -2174,7 +2174,7 @@ func file_pbresource_resource_proto_init() { return } if !protoimpl.UnsafeEnabled { - file_pbresource_resource_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} { + file_pbresource_resource_proto_msgTypes[0].Exporter = func(v any, i int) any { switch v := v.(*Type); i { case 0: return &v.state @@ -2186,7 +2186,7 @@ func file_pbresource_resource_proto_init() { return nil } } - file_pbresource_resource_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} { + file_pbresource_resource_proto_msgTypes[1].Exporter = func(v any, i int) any { switch v := v.(*Tenancy); i { case 0: return &v.state @@ -2198,7 +2198,7 @@ func file_pbresource_resource_proto_init() { return nil } } - file_pbresource_resource_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} { + file_pbresource_resource_proto_msgTypes[2].Exporter = func(v any, i int) any { switch v := v.(*ID); i { case 0: return &v.state @@ -2210,7 +2210,7 @@ func file_pbresource_resource_proto_init() { return nil } } - file_pbresource_resource_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} { + file_pbresource_resource_proto_msgTypes[3].Exporter = func(v any, i int) any { switch v := v.(*Resource); i { case 0: return &v.state @@ -2222,7 +2222,7 @@ func file_pbresource_resource_proto_init() { return nil } } - file_pbresource_resource_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} { + file_pbresource_resource_proto_msgTypes[4].Exporter = func(v any, i int) any { switch v := v.(*Status); i { case 0: return &v.state @@ -2234,7 +2234,7 @@ func file_pbresource_resource_proto_init() { return nil } } - file_pbresource_resource_proto_msgTypes[5].Exporter = func(v interface{}, i int) interface{} { + file_pbresource_resource_proto_msgTypes[5].Exporter = func(v any, i int) any { switch v := v.(*Condition); i { case 0: return &v.state @@ -2246,7 +2246,7 @@ func file_pbresource_resource_proto_init() { return nil } } - file_pbresource_resource_proto_msgTypes[6].Exporter = func(v interface{}, i int) interface{} { + file_pbresource_resource_proto_msgTypes[6].Exporter = func(v any, i int) any { switch v := v.(*Reference); i { case 0: return &v.state @@ -2258,7 +2258,7 @@ func file_pbresource_resource_proto_init() { return nil } } - file_pbresource_resource_proto_msgTypes[7].Exporter = func(v interface{}, i int) interface{} { + file_pbresource_resource_proto_msgTypes[7].Exporter = func(v any, i int) any { switch v := v.(*Tombstone); i { case 0: return &v.state @@ -2270,7 +2270,7 @@ func file_pbresource_resource_proto_init() { return nil } } - file_pbresource_resource_proto_msgTypes[8].Exporter = func(v interface{}, i int) interface{} { + file_pbresource_resource_proto_msgTypes[8].Exporter = func(v any, i int) any { switch v := v.(*ReadRequest); i { case 0: return &v.state @@ -2282,7 +2282,7 @@ func 
file_pbresource_resource_proto_init() { return nil } } - file_pbresource_resource_proto_msgTypes[9].Exporter = func(v interface{}, i int) interface{} { + file_pbresource_resource_proto_msgTypes[9].Exporter = func(v any, i int) any { switch v := v.(*ReadResponse); i { case 0: return &v.state @@ -2294,7 +2294,7 @@ func file_pbresource_resource_proto_init() { return nil } } - file_pbresource_resource_proto_msgTypes[10].Exporter = func(v interface{}, i int) interface{} { + file_pbresource_resource_proto_msgTypes[10].Exporter = func(v any, i int) any { switch v := v.(*ListRequest); i { case 0: return &v.state @@ -2306,7 +2306,7 @@ func file_pbresource_resource_proto_init() { return nil } } - file_pbresource_resource_proto_msgTypes[11].Exporter = func(v interface{}, i int) interface{} { + file_pbresource_resource_proto_msgTypes[11].Exporter = func(v any, i int) any { switch v := v.(*ListResponse); i { case 0: return &v.state @@ -2318,7 +2318,7 @@ func file_pbresource_resource_proto_init() { return nil } } - file_pbresource_resource_proto_msgTypes[12].Exporter = func(v interface{}, i int) interface{} { + file_pbresource_resource_proto_msgTypes[12].Exporter = func(v any, i int) any { switch v := v.(*ListByOwnerRequest); i { case 0: return &v.state @@ -2330,7 +2330,7 @@ func file_pbresource_resource_proto_init() { return nil } } - file_pbresource_resource_proto_msgTypes[13].Exporter = func(v interface{}, i int) interface{} { + file_pbresource_resource_proto_msgTypes[13].Exporter = func(v any, i int) any { switch v := v.(*ListByOwnerResponse); i { case 0: return &v.state @@ -2342,7 +2342,7 @@ func file_pbresource_resource_proto_init() { return nil } } - file_pbresource_resource_proto_msgTypes[14].Exporter = func(v interface{}, i int) interface{} { + file_pbresource_resource_proto_msgTypes[14].Exporter = func(v any, i int) any { switch v := v.(*WriteRequest); i { case 0: return &v.state @@ -2354,7 +2354,7 @@ func file_pbresource_resource_proto_init() { return nil } } - file_pbresource_resource_proto_msgTypes[15].Exporter = func(v interface{}, i int) interface{} { + file_pbresource_resource_proto_msgTypes[15].Exporter = func(v any, i int) any { switch v := v.(*WriteResponse); i { case 0: return &v.state @@ -2366,7 +2366,7 @@ func file_pbresource_resource_proto_init() { return nil } } - file_pbresource_resource_proto_msgTypes[16].Exporter = func(v interface{}, i int) interface{} { + file_pbresource_resource_proto_msgTypes[16].Exporter = func(v any, i int) any { switch v := v.(*WriteStatusRequest); i { case 0: return &v.state @@ -2378,7 +2378,7 @@ func file_pbresource_resource_proto_init() { return nil } } - file_pbresource_resource_proto_msgTypes[17].Exporter = func(v interface{}, i int) interface{} { + file_pbresource_resource_proto_msgTypes[17].Exporter = func(v any, i int) any { switch v := v.(*WriteStatusResponse); i { case 0: return &v.state @@ -2390,7 +2390,7 @@ func file_pbresource_resource_proto_init() { return nil } } - file_pbresource_resource_proto_msgTypes[18].Exporter = func(v interface{}, i int) interface{} { + file_pbresource_resource_proto_msgTypes[18].Exporter = func(v any, i int) any { switch v := v.(*DeleteRequest); i { case 0: return &v.state @@ -2402,7 +2402,7 @@ func file_pbresource_resource_proto_init() { return nil } } - file_pbresource_resource_proto_msgTypes[19].Exporter = func(v interface{}, i int) interface{} { + file_pbresource_resource_proto_msgTypes[19].Exporter = func(v any, i int) any { switch v := v.(*DeleteResponse); i { case 0: return &v.state @@ -2414,7 +2414,7 @@ 
func file_pbresource_resource_proto_init() { return nil } } - file_pbresource_resource_proto_msgTypes[20].Exporter = func(v interface{}, i int) interface{} { + file_pbresource_resource_proto_msgTypes[20].Exporter = func(v any, i int) any { switch v := v.(*WatchListRequest); i { case 0: return &v.state @@ -2426,7 +2426,7 @@ func file_pbresource_resource_proto_init() { return nil } } - file_pbresource_resource_proto_msgTypes[21].Exporter = func(v interface{}, i int) interface{} { + file_pbresource_resource_proto_msgTypes[21].Exporter = func(v any, i int) any { switch v := v.(*WatchEvent); i { case 0: return &v.state @@ -2438,7 +2438,7 @@ func file_pbresource_resource_proto_init() { return nil } } - file_pbresource_resource_proto_msgTypes[22].Exporter = func(v interface{}, i int) interface{} { + file_pbresource_resource_proto_msgTypes[22].Exporter = func(v any, i int) any { switch v := v.(*MutateAndValidateRequest); i { case 0: return &v.state @@ -2450,7 +2450,7 @@ func file_pbresource_resource_proto_init() { return nil } } - file_pbresource_resource_proto_msgTypes[23].Exporter = func(v interface{}, i int) interface{} { + file_pbresource_resource_proto_msgTypes[23].Exporter = func(v any, i int) any { switch v := v.(*MutateAndValidateResponse); i { case 0: return &v.state @@ -2462,7 +2462,7 @@ func file_pbresource_resource_proto_init() { return nil } } - file_pbresource_resource_proto_msgTypes[26].Exporter = func(v interface{}, i int) interface{} { + file_pbresource_resource_proto_msgTypes[26].Exporter = func(v any, i int) any { switch v := v.(*WatchEvent_Upsert); i { case 0: return &v.state @@ -2474,7 +2474,7 @@ func file_pbresource_resource_proto_init() { return nil } } - file_pbresource_resource_proto_msgTypes[27].Exporter = func(v interface{}, i int) interface{} { + file_pbresource_resource_proto_msgTypes[27].Exporter = func(v any, i int) any { switch v := v.(*WatchEvent_Delete); i { case 0: return &v.state @@ -2486,7 +2486,7 @@ func file_pbresource_resource_proto_init() { return nil } } - file_pbresource_resource_proto_msgTypes[28].Exporter = func(v interface{}, i int) interface{} { + file_pbresource_resource_proto_msgTypes[28].Exporter = func(v any, i int) any { switch v := v.(*WatchEvent_EndOfSnapshot); i { case 0: return &v.state @@ -2499,7 +2499,7 @@ func file_pbresource_resource_proto_init() { } } } - file_pbresource_resource_proto_msgTypes[21].OneofWrappers = []interface{}{ + file_pbresource_resource_proto_msgTypes[21].OneofWrappers = []any{ (*WatchEvent_Upsert_)(nil), (*WatchEvent_Delete_)(nil), (*WatchEvent_EndOfSnapshot_)(nil), diff --git a/proto-public/pbserverdiscovery/serverdiscovery.pb.go b/proto-public/pbserverdiscovery/serverdiscovery.pb.go index 28bdff035183..7e7e6f18aa43 100644 --- a/proto-public/pbserverdiscovery/serverdiscovery.pb.go +++ b/proto-public/pbserverdiscovery/serverdiscovery.pb.go @@ -6,7 +6,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.33.0 +// protoc-gen-go v1.34.2 // protoc (unknown) // source: pbserverdiscovery/serverdiscovery.proto @@ -258,7 +258,7 @@ func file_pbserverdiscovery_serverdiscovery_proto_rawDescGZIP() []byte { } var file_pbserverdiscovery_serverdiscovery_proto_msgTypes = make([]protoimpl.MessageInfo, 3) -var file_pbserverdiscovery_serverdiscovery_proto_goTypes = []interface{}{ +var file_pbserverdiscovery_serverdiscovery_proto_goTypes = []any{ (*WatchServersRequest)(nil), // 0: hashicorp.consul.serverdiscovery.WatchServersRequest (*WatchServersResponse)(nil), // 1: hashicorp.consul.serverdiscovery.WatchServersResponse (*Server)(nil), // 2: hashicorp.consul.serverdiscovery.Server @@ -280,7 +280,7 @@ func file_pbserverdiscovery_serverdiscovery_proto_init() { return } if !protoimpl.UnsafeEnabled { - file_pbserverdiscovery_serverdiscovery_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} { + file_pbserverdiscovery_serverdiscovery_proto_msgTypes[0].Exporter = func(v any, i int) any { switch v := v.(*WatchServersRequest); i { case 0: return &v.state @@ -292,7 +292,7 @@ func file_pbserverdiscovery_serverdiscovery_proto_init() { return nil } } - file_pbserverdiscovery_serverdiscovery_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} { + file_pbserverdiscovery_serverdiscovery_proto_msgTypes[1].Exporter = func(v any, i int) any { switch v := v.(*WatchServersResponse); i { case 0: return &v.state @@ -304,7 +304,7 @@ func file_pbserverdiscovery_serverdiscovery_proto_init() { return nil } } - file_pbserverdiscovery_serverdiscovery_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} { + file_pbserverdiscovery_serverdiscovery_proto_msgTypes[2].Exporter = func(v any, i int) any { switch v := v.(*Server); i { case 0: return &v.state diff --git a/proto/private/pbacl/acl.pb.go b/proto/private/pbacl/acl.pb.go index 323aa44cb5a8..78a866161587 100644 --- a/proto/private/pbacl/acl.pb.go +++ b/proto/private/pbacl/acl.pb.go @@ -3,7 +3,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.33.0 +// protoc-gen-go v1.34.2 // protoc (unknown) // source: private/pbacl/acl.proto @@ -120,7 +120,7 @@ func file_private_pbacl_acl_proto_rawDescGZIP() []byte { } var file_private_pbacl_acl_proto_msgTypes = make([]protoimpl.MessageInfo, 1) -var file_private_pbacl_acl_proto_goTypes = []interface{}{ +var file_private_pbacl_acl_proto_goTypes = []any{ (*ACLLink)(nil), // 0: hashicorp.consul.internal.acl.ACLLink } var file_private_pbacl_acl_proto_depIdxs = []int32{ @@ -137,7 +137,7 @@ func file_private_pbacl_acl_proto_init() { return } if !protoimpl.UnsafeEnabled { - file_private_pbacl_acl_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} { + file_private_pbacl_acl_proto_msgTypes[0].Exporter = func(v any, i int) any { switch v := v.(*ACLLink); i { case 0: return &v.state diff --git a/proto/private/pbautoconf/auto_config.pb.go b/proto/private/pbautoconf/auto_config.pb.go index f68f679255c0..33e375d42190 100644 --- a/proto/private/pbautoconf/auto_config.pb.go +++ b/proto/private/pbautoconf/auto_config.pb.go @@ -3,7 +3,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.33.0 +// protoc-gen-go v1.34.2 // protoc (unknown) // source: private/pbautoconf/auto_config.proto @@ -288,7 +288,7 @@ func file_private_pbautoconf_auto_config_proto_rawDescGZIP() []byte { } var file_private_pbautoconf_auto_config_proto_msgTypes = make([]protoimpl.MessageInfo, 2) -var file_private_pbautoconf_auto_config_proto_goTypes = []interface{}{ +var file_private_pbautoconf_auto_config_proto_goTypes = []any{ (*AutoConfigRequest)(nil), // 0: hashicorp.consul.internal.autoconf.AutoConfigRequest (*AutoConfigResponse)(nil), // 1: hashicorp.consul.internal.autoconf.AutoConfigResponse (*pbconfig.Config)(nil), // 2: hashicorp.consul.internal.config.Config @@ -312,7 +312,7 @@ func file_private_pbautoconf_auto_config_proto_init() { return } if !protoimpl.UnsafeEnabled { - file_private_pbautoconf_auto_config_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} { + file_private_pbautoconf_auto_config_proto_msgTypes[0].Exporter = func(v any, i int) any { switch v := v.(*AutoConfigRequest); i { case 0: return &v.state @@ -324,7 +324,7 @@ func file_private_pbautoconf_auto_config_proto_init() { return nil } } - file_private_pbautoconf_auto_config_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} { + file_private_pbautoconf_auto_config_proto_msgTypes[1].Exporter = func(v any, i int) any { switch v := v.(*AutoConfigResponse); i { case 0: return &v.state diff --git a/proto/private/pbcommon/common.pb.go b/proto/private/pbcommon/common.pb.go index cd6dbe1d3a31..02ae7c8e0b00 100644 --- a/proto/private/pbcommon/common.pb.go +++ b/proto/private/pbcommon/common.pb.go @@ -3,7 +3,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.33.0 +// protoc-gen-go v1.34.2 // protoc (unknown) // source: private/pbcommon/common.proto @@ -306,7 +306,7 @@ type QueryOptions struct { // returned. Clients that wish to allow for stale results on error can set // StaleIfError to a longer duration to change this behavior. It is ignored // if the endpoint supports background refresh caching. See - // https://www.consul.io/api/index.html#agent-caching for more details. + // https://developer.hashicorp.com/api/index.html#agent-caching for more details. // mog: func-to=structs.DurationFromProto func-from=structs.DurationToProto MaxAge *durationpb.Duration `protobuf:"bytes,8,opt,name=MaxAge,proto3" json:"MaxAge,omitempty"` // MustRevalidate forces the agent to fetch a fresh version of a cached @@ -319,7 +319,7 @@ type QueryOptions struct { // if the servers are unavailable to fetch a fresh one. Only makes sense when // UseCache is true and MaxAge is set to a lower, non-zero value. It is // ignored if the endpoint supports background refresh caching. See - // https://www.consul.io/api/index.html#agent-caching for more details. + // https://developer.hashicorp.com/api/index.html#agent-caching for more details. 
StaleIfError *durationpb.Duration `protobuf:"bytes,10,opt,name=StaleIfError,proto3" json:"StaleIfError,omitempty"` // Filter specifies the go-bexpr filter expression to be used for // filtering the data prior to returning a response @@ -867,7 +867,7 @@ func file_private_pbcommon_common_proto_rawDescGZIP() []byte { } var file_private_pbcommon_common_proto_msgTypes = make([]protoimpl.MessageInfo, 9) -var file_private_pbcommon_common_proto_goTypes = []interface{}{ +var file_private_pbcommon_common_proto_goTypes = []any{ (*RaftIndex)(nil), // 0: hashicorp.consul.internal.common.RaftIndex (*TargetDatacenter)(nil), // 1: hashicorp.consul.internal.common.TargetDatacenter (*WriteRequest)(nil), // 2: hashicorp.consul.internal.common.WriteRequest @@ -900,7 +900,7 @@ func file_private_pbcommon_common_proto_init() { return } if !protoimpl.UnsafeEnabled { - file_private_pbcommon_common_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} { + file_private_pbcommon_common_proto_msgTypes[0].Exporter = func(v any, i int) any { switch v := v.(*RaftIndex); i { case 0: return &v.state @@ -912,7 +912,7 @@ func file_private_pbcommon_common_proto_init() { return nil } } - file_private_pbcommon_common_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} { + file_private_pbcommon_common_proto_msgTypes[1].Exporter = func(v any, i int) any { switch v := v.(*TargetDatacenter); i { case 0: return &v.state @@ -924,7 +924,7 @@ func file_private_pbcommon_common_proto_init() { return nil } } - file_private_pbcommon_common_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} { + file_private_pbcommon_common_proto_msgTypes[2].Exporter = func(v any, i int) any { switch v := v.(*WriteRequest); i { case 0: return &v.state @@ -936,7 +936,7 @@ func file_private_pbcommon_common_proto_init() { return nil } } - file_private_pbcommon_common_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} { + file_private_pbcommon_common_proto_msgTypes[3].Exporter = func(v any, i int) any { switch v := v.(*ReadRequest); i { case 0: return &v.state @@ -948,7 +948,7 @@ func file_private_pbcommon_common_proto_init() { return nil } } - file_private_pbcommon_common_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} { + file_private_pbcommon_common_proto_msgTypes[4].Exporter = func(v any, i int) any { switch v := v.(*QueryOptions); i { case 0: return &v.state @@ -960,7 +960,7 @@ func file_private_pbcommon_common_proto_init() { return nil } } - file_private_pbcommon_common_proto_msgTypes[5].Exporter = func(v interface{}, i int) interface{} { + file_private_pbcommon_common_proto_msgTypes[5].Exporter = func(v any, i int) any { switch v := v.(*QueryMeta); i { case 0: return &v.state @@ -972,7 +972,7 @@ func file_private_pbcommon_common_proto_init() { return nil } } - file_private_pbcommon_common_proto_msgTypes[6].Exporter = func(v interface{}, i int) interface{} { + file_private_pbcommon_common_proto_msgTypes[6].Exporter = func(v any, i int) any { switch v := v.(*EnterpriseMeta); i { case 0: return &v.state @@ -984,7 +984,7 @@ func file_private_pbcommon_common_proto_init() { return nil } } - file_private_pbcommon_common_proto_msgTypes[7].Exporter = func(v interface{}, i int) interface{} { + file_private_pbcommon_common_proto_msgTypes[7].Exporter = func(v any, i int) any { switch v := v.(*EnvoyExtension); i { case 0: return &v.state @@ -996,7 +996,7 @@ func file_private_pbcommon_common_proto_init() { return nil } } - file_private_pbcommon_common_proto_msgTypes[8].Exporter = func(v 
interface{}, i int) interface{} { + file_private_pbcommon_common_proto_msgTypes[8].Exporter = func(v any, i int) any { switch v := v.(*Locality); i { case 0: return &v.state diff --git a/proto/private/pbcommon/common.proto b/proto/private/pbcommon/common.proto index 2296dc69d628..af213c1c7a79 100644 --- a/proto/private/pbcommon/common.proto +++ b/proto/private/pbcommon/common.proto @@ -107,7 +107,7 @@ message QueryOptions { // returned. Clients that wish to allow for stale results on error can set // StaleIfError to a longer duration to change this behavior. It is ignored // if the endpoint supports background refresh caching. See - // https://www.consul.io/api/index.html#agent-caching for more details. + // https://developer.hashicorp.com/api/index.html#agent-caching for more details. // mog: func-to=structs.DurationFromProto func-from=structs.DurationToProto google.protobuf.Duration MaxAge = 8; @@ -122,7 +122,7 @@ message QueryOptions { // if the servers are unavailable to fetch a fresh one. Only makes sense when // UseCache is true and MaxAge is set to a lower, non-zero value. It is // ignored if the endpoint supports background refresh caching. See - // https://www.consul.io/api/index.html#agent-caching for more details. + // https://developer.hashicorp.com/api/index.html#agent-caching for more details. google.protobuf.Duration StaleIfError = 10; // Filter specifies the go-bexpr filter expression to be used for diff --git a/proto/private/pbconfig/config.pb.go b/proto/private/pbconfig/config.pb.go index 06f53e7edcf0..f34beee53e2f 100644 --- a/proto/private/pbconfig/config.pb.go +++ b/proto/private/pbconfig/config.pb.go @@ -3,7 +3,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.33.0 +// protoc-gen-go v1.34.2 // protoc (unknown) // source: private/pbconfig/config.proto @@ -840,7 +840,7 @@ func file_private_pbconfig_config_proto_rawDescGZIP() []byte { } var file_private_pbconfig_config_proto_msgTypes = make([]protoimpl.MessageInfo, 8) -var file_private_pbconfig_config_proto_goTypes = []interface{}{ +var file_private_pbconfig_config_proto_goTypes = []any{ (*Config)(nil), // 0: hashicorp.consul.internal.config.Config (*Gossip)(nil), // 1: hashicorp.consul.internal.config.Gossip (*GossipEncryption)(nil), // 2: hashicorp.consul.internal.config.GossipEncryption @@ -871,7 +871,7 @@ func file_private_pbconfig_config_proto_init() { return } if !protoimpl.UnsafeEnabled { - file_private_pbconfig_config_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfig_config_proto_msgTypes[0].Exporter = func(v any, i int) any { switch v := v.(*Config); i { case 0: return &v.state @@ -883,7 +883,7 @@ func file_private_pbconfig_config_proto_init() { return nil } } - file_private_pbconfig_config_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfig_config_proto_msgTypes[1].Exporter = func(v any, i int) any { switch v := v.(*Gossip); i { case 0: return &v.state @@ -895,7 +895,7 @@ func file_private_pbconfig_config_proto_init() { return nil } } - file_private_pbconfig_config_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfig_config_proto_msgTypes[2].Exporter = func(v any, i int) any { switch v := v.(*GossipEncryption); i { case 0: return &v.state @@ -907,7 +907,7 @@ func file_private_pbconfig_config_proto_init() { return nil } } - file_private_pbconfig_config_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} { + 
file_private_pbconfig_config_proto_msgTypes[3].Exporter = func(v any, i int) any { switch v := v.(*TLS); i { case 0: return &v.state @@ -919,7 +919,7 @@ func file_private_pbconfig_config_proto_init() { return nil } } - file_private_pbconfig_config_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfig_config_proto_msgTypes[4].Exporter = func(v any, i int) any { switch v := v.(*ACL); i { case 0: return &v.state @@ -931,7 +931,7 @@ func file_private_pbconfig_config_proto_init() { return nil } } - file_private_pbconfig_config_proto_msgTypes[5].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfig_config_proto_msgTypes[5].Exporter = func(v any, i int) any { switch v := v.(*ACLTokens); i { case 0: return &v.state @@ -943,7 +943,7 @@ func file_private_pbconfig_config_proto_init() { return nil } } - file_private_pbconfig_config_proto_msgTypes[6].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfig_config_proto_msgTypes[6].Exporter = func(v any, i int) any { switch v := v.(*ACLServiceProviderToken); i { case 0: return &v.state @@ -955,7 +955,7 @@ func file_private_pbconfig_config_proto_init() { return nil } } - file_private_pbconfig_config_proto_msgTypes[7].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfig_config_proto_msgTypes[7].Exporter = func(v any, i int) any { switch v := v.(*AutoEncrypt); i { case 0: return &v.state diff --git a/proto/private/pbconfigentry/config_entry.pb.go b/proto/private/pbconfigentry/config_entry.pb.go index 8eac41851ac4..591989f3814f 100644 --- a/proto/private/pbconfigentry/config_entry.pb.go +++ b/proto/private/pbconfigentry/config_entry.pb.go @@ -3,7 +3,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.33.0 +// protoc-gen-go v1.34.2 // protoc (unknown) // source: private/pbconfigentry/config_entry.proto @@ -10534,7 +10534,7 @@ func file_private_pbconfigentry_config_entry_proto_rawDescGZIP() []byte { var file_private_pbconfigentry_config_entry_proto_enumTypes = make([]protoimpl.EnumInfo, 13) var file_private_pbconfigentry_config_entry_proto_msgTypes = make([]protoimpl.MessageInfo, 129) -var file_private_pbconfigentry_config_entry_proto_goTypes = []interface{}{ +var file_private_pbconfigentry_config_entry_proto_goTypes = []any{ (Kind)(0), // 0: hashicorp.consul.internal.configentry.Kind (PathWithEscapedSlashesAction)(0), // 1: hashicorp.consul.internal.configentry.PathWithEscapedSlashesAction (HeadersWithUnderscoresAction)(0), // 2: hashicorp.consul.internal.configentry.HeadersWithUnderscoresAction @@ -10887,7 +10887,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return } if !protoimpl.UnsafeEnabled { - file_private_pbconfigentry_config_entry_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[0].Exporter = func(v any, i int) any { switch v := v.(*GetResolvedExportedServicesRequest); i { case 0: return &v.state @@ -10899,7 +10899,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[1].Exporter = func(v any, i int) any { switch v := v.(*GetResolvedExportedServicesResponse); i { case 0: return &v.state @@ -10911,7 +10911,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - 
file_private_pbconfigentry_config_entry_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[2].Exporter = func(v any, i int) any { switch v := v.(*ResolvedExportedService); i { case 0: return &v.state @@ -10923,7 +10923,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[3].Exporter = func(v any, i int) any { switch v := v.(*Consumers); i { case 0: return &v.state @@ -10935,7 +10935,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[4].Exporter = func(v any, i int) any { switch v := v.(*ConfigEntry); i { case 0: return &v.state @@ -10947,7 +10947,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[5].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[5].Exporter = func(v any, i int) any { switch v := v.(*MeshConfig); i { case 0: return &v.state @@ -10959,7 +10959,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[6].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[6].Exporter = func(v any, i int) any { switch v := v.(*TransparentProxyMeshConfig); i { case 0: return &v.state @@ -10971,7 +10971,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[7].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[7].Exporter = func(v any, i int) any { switch v := v.(*MeshTLSConfig); i { case 0: return &v.state @@ -10983,7 +10983,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[8].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[8].Exporter = func(v any, i int) any { switch v := v.(*MeshDirectionalTLSConfig); i { case 0: return &v.state @@ -10995,7 +10995,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[9].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[9].Exporter = func(v any, i int) any { switch v := v.(*MeshHTTPConfig); i { case 0: return &v.state @@ -11007,7 +11007,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[10].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[10].Exporter = func(v any, i int) any { switch v := v.(*MeshDirectionalHTTPConfig); i { case 0: return &v.state @@ -11019,7 +11019,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[11].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[11].Exporter = func(v any, i int) any { 
switch v := v.(*PeeringMeshConfig); i { case 0: return &v.state @@ -11031,7 +11031,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[12].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[12].Exporter = func(v any, i int) any { switch v := v.(*RequestNormalizationMeshConfig); i { case 0: return &v.state @@ -11043,7 +11043,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[13].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[13].Exporter = func(v any, i int) any { switch v := v.(*ServiceResolver); i { case 0: return &v.state @@ -11055,7 +11055,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[14].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[14].Exporter = func(v any, i int) any { switch v := v.(*ServiceResolverSubset); i { case 0: return &v.state @@ -11067,7 +11067,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[15].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[15].Exporter = func(v any, i int) any { switch v := v.(*ServiceResolverRedirect); i { case 0: return &v.state @@ -11079,7 +11079,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[16].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[16].Exporter = func(v any, i int) any { switch v := v.(*ServiceResolverFailover); i { case 0: return &v.state @@ -11091,7 +11091,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[17].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[17].Exporter = func(v any, i int) any { switch v := v.(*ServiceResolverFailoverPolicy); i { case 0: return &v.state @@ -11103,7 +11103,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[18].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[18].Exporter = func(v any, i int) any { switch v := v.(*ServiceResolverPrioritizeByLocality); i { case 0: return &v.state @@ -11115,7 +11115,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[19].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[19].Exporter = func(v any, i int) any { switch v := v.(*ServiceResolverFailoverTarget); i { case 0: return &v.state @@ -11127,7 +11127,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[20].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[20].Exporter = func(v any, i int) any { switch v := v.(*LoadBalancer); i { case 0: return &v.state @@ -11139,7 +11139,7 @@ func 
file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[21].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[21].Exporter = func(v any, i int) any { switch v := v.(*RingHashConfig); i { case 0: return &v.state @@ -11151,7 +11151,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[22].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[22].Exporter = func(v any, i int) any { switch v := v.(*LeastRequestConfig); i { case 0: return &v.state @@ -11163,7 +11163,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[23].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[23].Exporter = func(v any, i int) any { switch v := v.(*HashPolicy); i { case 0: return &v.state @@ -11175,7 +11175,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[24].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[24].Exporter = func(v any, i int) any { switch v := v.(*CookieConfig); i { case 0: return &v.state @@ -11187,7 +11187,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[25].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[25].Exporter = func(v any, i int) any { switch v := v.(*IngressGateway); i { case 0: return &v.state @@ -11199,7 +11199,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[26].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[26].Exporter = func(v any, i int) any { switch v := v.(*IngressServiceConfig); i { case 0: return &v.state @@ -11211,7 +11211,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[27].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[27].Exporter = func(v any, i int) any { switch v := v.(*GatewayTLSConfig); i { case 0: return &v.state @@ -11223,7 +11223,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[28].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[28].Exporter = func(v any, i int) any { switch v := v.(*GatewayTLSSDSConfig); i { case 0: return &v.state @@ -11235,7 +11235,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[29].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[29].Exporter = func(v any, i int) any { switch v := v.(*IngressListener); i { case 0: return &v.state @@ -11247,7 +11247,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[30].Exporter = func(v interface{}, i int) interface{} { + 
file_private_pbconfigentry_config_entry_proto_msgTypes[30].Exporter = func(v any, i int) any { switch v := v.(*IngressService); i { case 0: return &v.state @@ -11259,7 +11259,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[31].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[31].Exporter = func(v any, i int) any { switch v := v.(*GatewayServiceTLSConfig); i { case 0: return &v.state @@ -11271,7 +11271,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[32].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[32].Exporter = func(v any, i int) any { switch v := v.(*HTTPHeaderModifiers); i { case 0: return &v.state @@ -11283,7 +11283,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[33].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[33].Exporter = func(v any, i int) any { switch v := v.(*ServiceIntentions); i { case 0: return &v.state @@ -11295,7 +11295,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[34].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[34].Exporter = func(v any, i int) any { switch v := v.(*IntentionJWTRequirement); i { case 0: return &v.state @@ -11307,7 +11307,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[35].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[35].Exporter = func(v any, i int) any { switch v := v.(*IntentionJWTProvider); i { case 0: return &v.state @@ -11319,7 +11319,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[36].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[36].Exporter = func(v any, i int) any { switch v := v.(*IntentionJWTClaimVerification); i { case 0: return &v.state @@ -11331,7 +11331,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[37].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[37].Exporter = func(v any, i int) any { switch v := v.(*SourceIntention); i { case 0: return &v.state @@ -11343,7 +11343,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[38].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[38].Exporter = func(v any, i int) any { switch v := v.(*IntentionPermission); i { case 0: return &v.state @@ -11355,7 +11355,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[39].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[39].Exporter = func(v any, i int) any { switch v := v.(*IntentionHTTPPermission); i { case 0: return &v.state @@ 
-11367,7 +11367,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[40].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[40].Exporter = func(v any, i int) any { switch v := v.(*IntentionHTTPHeaderPermission); i { case 0: return &v.state @@ -11379,7 +11379,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[41].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[41].Exporter = func(v any, i int) any { switch v := v.(*ServiceDefaults); i { case 0: return &v.state @@ -11391,7 +11391,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[42].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[42].Exporter = func(v any, i int) any { switch v := v.(*TransparentProxyConfig); i { case 0: return &v.state @@ -11403,7 +11403,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[43].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[43].Exporter = func(v any, i int) any { switch v := v.(*MeshGatewayConfig); i { case 0: return &v.state @@ -11415,7 +11415,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[44].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[44].Exporter = func(v any, i int) any { switch v := v.(*ExposeConfig); i { case 0: return &v.state @@ -11427,7 +11427,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[45].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[45].Exporter = func(v any, i int) any { switch v := v.(*ExposePath); i { case 0: return &v.state @@ -11439,7 +11439,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[46].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[46].Exporter = func(v any, i int) any { switch v := v.(*UpstreamConfiguration); i { case 0: return &v.state @@ -11451,7 +11451,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[47].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[47].Exporter = func(v any, i int) any { switch v := v.(*UpstreamConfig); i { case 0: return &v.state @@ -11463,7 +11463,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[48].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[48].Exporter = func(v any, i int) any { switch v := v.(*UpstreamLimits); i { case 0: return &v.state @@ -11475,7 +11475,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[49].Exporter = func(v interface{}, i int) 
interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[49].Exporter = func(v any, i int) any { switch v := v.(*PassiveHealthCheck); i { case 0: return &v.state @@ -11487,7 +11487,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[50].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[50].Exporter = func(v any, i int) any { switch v := v.(*DestinationConfig); i { case 0: return &v.state @@ -11499,7 +11499,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[51].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[51].Exporter = func(v any, i int) any { switch v := v.(*RateLimits); i { case 0: return &v.state @@ -11511,7 +11511,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[52].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[52].Exporter = func(v any, i int) any { switch v := v.(*InstanceLevelRateLimits); i { case 0: return &v.state @@ -11523,7 +11523,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[53].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[53].Exporter = func(v any, i int) any { switch v := v.(*InstanceLevelRouteRateLimits); i { case 0: return &v.state @@ -11535,7 +11535,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[54].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[54].Exporter = func(v any, i int) any { switch v := v.(*APIGateway); i { case 0: return &v.state @@ -11547,7 +11547,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[55].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[55].Exporter = func(v any, i int) any { switch v := v.(*Status); i { case 0: return &v.state @@ -11559,7 +11559,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[56].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[56].Exporter = func(v any, i int) any { switch v := v.(*Condition); i { case 0: return &v.state @@ -11571,7 +11571,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[57].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[57].Exporter = func(v any, i int) any { switch v := v.(*APIGatewayListener); i { case 0: return &v.state @@ -11583,7 +11583,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[58].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[58].Exporter = func(v any, i int) any { switch v := v.(*APIGatewayTLSConfiguration); i { case 0: return &v.state @@ -11595,7 +11595,7 @@ 
func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[59].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[59].Exporter = func(v any, i int) any { switch v := v.(*APIGatewayPolicy); i { case 0: return &v.state @@ -11607,7 +11607,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[60].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[60].Exporter = func(v any, i int) any { switch v := v.(*APIGatewayJWTRequirement); i { case 0: return &v.state @@ -11619,7 +11619,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[61].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[61].Exporter = func(v any, i int) any { switch v := v.(*APIGatewayJWTProvider); i { case 0: return &v.state @@ -11631,7 +11631,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[62].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[62].Exporter = func(v any, i int) any { switch v := v.(*APIGatewayJWTClaimVerification); i { case 0: return &v.state @@ -11643,7 +11643,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[63].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[63].Exporter = func(v any, i int) any { switch v := v.(*ResourceReference); i { case 0: return &v.state @@ -11655,7 +11655,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[64].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[64].Exporter = func(v any, i int) any { switch v := v.(*BoundAPIGateway); i { case 0: return &v.state @@ -11667,7 +11667,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[65].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[65].Exporter = func(v any, i int) any { switch v := v.(*ListOfResourceReference); i { case 0: return &v.state @@ -11679,7 +11679,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[66].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[66].Exporter = func(v any, i int) any { switch v := v.(*BoundAPIGatewayListener); i { case 0: return &v.state @@ -11691,7 +11691,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[67].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[67].Exporter = func(v any, i int) any { switch v := v.(*FileSystemCertificate); i { case 0: return &v.state @@ -11703,7 +11703,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[68].Exporter = func(v 
interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[68].Exporter = func(v any, i int) any { switch v := v.(*InlineCertificate); i { case 0: return &v.state @@ -11715,7 +11715,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[69].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[69].Exporter = func(v any, i int) any { switch v := v.(*HTTPRoute); i { case 0: return &v.state @@ -11727,7 +11727,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[70].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[70].Exporter = func(v any, i int) any { switch v := v.(*HTTPRouteRule); i { case 0: return &v.state @@ -11739,7 +11739,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[71].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[71].Exporter = func(v any, i int) any { switch v := v.(*HTTPMatch); i { case 0: return &v.state @@ -11751,7 +11751,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[72].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[72].Exporter = func(v any, i int) any { switch v := v.(*HTTPHeaderMatch); i { case 0: return &v.state @@ -11763,7 +11763,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[73].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[73].Exporter = func(v any, i int) any { switch v := v.(*HTTPPathMatch); i { case 0: return &v.state @@ -11775,7 +11775,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[74].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[74].Exporter = func(v any, i int) any { switch v := v.(*HTTPQueryMatch); i { case 0: return &v.state @@ -11787,7 +11787,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[75].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[75].Exporter = func(v any, i int) any { switch v := v.(*HTTPFilters); i { case 0: return &v.state @@ -11799,7 +11799,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[76].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[76].Exporter = func(v any, i int) any { switch v := v.(*HTTPResponseFilters); i { case 0: return &v.state @@ -11811,7 +11811,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[77].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[77].Exporter = func(v any, i int) any { switch v := v.(*URLRewrite); i { case 0: return &v.state @@ -11823,7 +11823,7 @@ func 
file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[78].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[78].Exporter = func(v any, i int) any { switch v := v.(*RetryFilter); i { case 0: return &v.state @@ -11835,7 +11835,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[79].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[79].Exporter = func(v any, i int) any { switch v := v.(*TimeoutFilter); i { case 0: return &v.state @@ -11847,7 +11847,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[80].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[80].Exporter = func(v any, i int) any { switch v := v.(*JWTFilter); i { case 0: return &v.state @@ -11859,7 +11859,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[81].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[81].Exporter = func(v any, i int) any { switch v := v.(*HTTPHeaderFilter); i { case 0: return &v.state @@ -11871,7 +11871,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[82].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[82].Exporter = func(v any, i int) any { switch v := v.(*HTTPService); i { case 0: return &v.state @@ -11883,7 +11883,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[83].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[83].Exporter = func(v any, i int) any { switch v := v.(*TCPRoute); i { case 0: return &v.state @@ -11895,7 +11895,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[84].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[84].Exporter = func(v any, i int) any { switch v := v.(*TCPService); i { case 0: return &v.state @@ -11907,7 +11907,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[85].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[85].Exporter = func(v any, i int) any { switch v := v.(*SamenessGroup); i { case 0: return &v.state @@ -11919,7 +11919,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[86].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[86].Exporter = func(v any, i int) any { switch v := v.(*SamenessGroupMember); i { case 0: return &v.state @@ -11931,7 +11931,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[87].Exporter = func(v interface{}, i int) interface{} { + 
file_private_pbconfigentry_config_entry_proto_msgTypes[87].Exporter = func(v any, i int) any { switch v := v.(*JWTProvider); i { case 0: return &v.state @@ -11943,7 +11943,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[88].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[88].Exporter = func(v any, i int) any { switch v := v.(*JSONWebKeySet); i { case 0: return &v.state @@ -11955,7 +11955,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[89].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[89].Exporter = func(v any, i int) any { switch v := v.(*LocalJWKS); i { case 0: return &v.state @@ -11967,7 +11967,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[90].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[90].Exporter = func(v any, i int) any { switch v := v.(*RemoteJWKS); i { case 0: return &v.state @@ -11979,7 +11979,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[91].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[91].Exporter = func(v any, i int) any { switch v := v.(*JWKSCluster); i { case 0: return &v.state @@ -11991,7 +11991,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[92].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[92].Exporter = func(v any, i int) any { switch v := v.(*JWKSTLSCertificate); i { case 0: return &v.state @@ -12003,7 +12003,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[93].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[93].Exporter = func(v any, i int) any { switch v := v.(*JWKSTLSCertProviderInstance); i { case 0: return &v.state @@ -12015,7 +12015,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[94].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[94].Exporter = func(v any, i int) any { switch v := v.(*JWKSTLSCertTrustedCA); i { case 0: return &v.state @@ -12027,7 +12027,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[95].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[95].Exporter = func(v any, i int) any { switch v := v.(*JWKSRetryPolicy); i { case 0: return &v.state @@ -12039,7 +12039,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[96].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[96].Exporter = func(v any, i int) any { switch v := v.(*RetryPolicyBackOff); i { case 0: return &v.state @@ -12051,7 +12051,7 @@ func 
file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[97].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[97].Exporter = func(v any, i int) any { switch v := v.(*JWTLocation); i { case 0: return &v.state @@ -12063,7 +12063,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[98].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[98].Exporter = func(v any, i int) any { switch v := v.(*JWTLocationHeader); i { case 0: return &v.state @@ -12075,7 +12075,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[99].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[99].Exporter = func(v any, i int) any { switch v := v.(*JWTLocationQueryParam); i { case 0: return &v.state @@ -12087,7 +12087,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[100].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[100].Exporter = func(v any, i int) any { switch v := v.(*JWTLocationCookie); i { case 0: return &v.state @@ -12099,7 +12099,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[101].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[101].Exporter = func(v any, i int) any { switch v := v.(*JWTForwardingConfig); i { case 0: return &v.state @@ -12111,7 +12111,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[102].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[102].Exporter = func(v any, i int) any { switch v := v.(*JWTCacheConfig); i { case 0: return &v.state @@ -12123,7 +12123,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[103].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[103].Exporter = func(v any, i int) any { switch v := v.(*ExportedServices); i { case 0: return &v.state @@ -12135,7 +12135,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[104].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[104].Exporter = func(v any, i int) any { switch v := v.(*ExportedServicesService); i { case 0: return &v.state @@ -12147,7 +12147,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { return nil } } - file_private_pbconfigentry_config_entry_proto_msgTypes[105].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconfigentry_config_entry_proto_msgTypes[105].Exporter = func(v any, i int) any { switch v := v.(*ExportedServicesConsumer); i { case 0: return &v.state @@ -12160,7 +12160,7 @@ func file_private_pbconfigentry_config_entry_proto_init() { } } } - file_private_pbconfigentry_config_entry_proto_msgTypes[4].OneofWrappers = []interface{}{ + 
file_private_pbconfigentry_config_entry_proto_msgTypes[4].OneofWrappers = []any{ (*ConfigEntry_MeshConfig)(nil), (*ConfigEntry_ServiceResolver)(nil), (*ConfigEntry_IngressGateway)(nil), diff --git a/proto/private/pbconnect/connect.pb.go b/proto/private/pbconnect/connect.pb.go index 7b50d9a683b2..ea506e7860f4 100644 --- a/proto/private/pbconnect/connect.pb.go +++ b/proto/private/pbconnect/connect.pb.go @@ -3,7 +3,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.33.0 +// protoc-gen-go v1.34.2 // protoc (unknown) // source: private/pbconnect/connect.proto @@ -663,7 +663,7 @@ func file_private_pbconnect_connect_proto_rawDescGZIP() []byte { } var file_private_pbconnect_connect_proto_msgTypes = make([]protoimpl.MessageInfo, 3) -var file_private_pbconnect_connect_proto_goTypes = []interface{}{ +var file_private_pbconnect_connect_proto_goTypes = []any{ (*CARoots)(nil), // 0: hashicorp.consul.internal.connect.CARoots (*CARoot)(nil), // 1: hashicorp.consul.internal.connect.CARoot (*IssuedCert)(nil), // 2: hashicorp.consul.internal.connect.IssuedCert @@ -696,7 +696,7 @@ func file_private_pbconnect_connect_proto_init() { return } if !protoimpl.UnsafeEnabled { - file_private_pbconnect_connect_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconnect_connect_proto_msgTypes[0].Exporter = func(v any, i int) any { switch v := v.(*CARoots); i { case 0: return &v.state @@ -708,7 +708,7 @@ func file_private_pbconnect_connect_proto_init() { return nil } } - file_private_pbconnect_connect_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconnect_connect_proto_msgTypes[1].Exporter = func(v any, i int) any { switch v := v.(*CARoot); i { case 0: return &v.state @@ -720,7 +720,7 @@ func file_private_pbconnect_connect_proto_init() { return nil } } - file_private_pbconnect_connect_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} { + file_private_pbconnect_connect_proto_msgTypes[2].Exporter = func(v any, i int) any { switch v := v.(*IssuedCert); i { case 0: return &v.state diff --git a/proto/private/pbdemo/v1/demo.pb.go b/proto/private/pbdemo/v1/demo.pb.go index e46c589ffcd1..bdfdcb4b22fe 100644 --- a/proto/private/pbdemo/v1/demo.pb.go +++ b/proto/private/pbdemo/v1/demo.pb.go @@ -3,7 +3,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.33.0 +// protoc-gen-go v1.34.2 // protoc (unknown) // source: private/pbdemo/v1/demo.proto @@ -479,7 +479,7 @@ func file_private_pbdemo_v1_demo_proto_rawDescGZIP() []byte { var file_private_pbdemo_v1_demo_proto_enumTypes = make([]protoimpl.EnumInfo, 1) var file_private_pbdemo_v1_demo_proto_msgTypes = make([]protoimpl.MessageInfo, 5) -var file_private_pbdemo_v1_demo_proto_goTypes = []interface{}{ +var file_private_pbdemo_v1_demo_proto_goTypes = []any{ (Genre)(0), // 0: hashicorp.consul.internal.demo.v1.Genre (*Executive)(nil), // 1: hashicorp.consul.internal.demo.v1.Executive (*RecordLabel)(nil), // 2: hashicorp.consul.internal.demo.v1.RecordLabel @@ -502,7 +502,7 @@ func file_private_pbdemo_v1_demo_proto_init() { return } if !protoimpl.UnsafeEnabled { - file_private_pbdemo_v1_demo_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} { + file_private_pbdemo_v1_demo_proto_msgTypes[0].Exporter = func(v any, i int) any { switch v := v.(*Executive); i { case 0: return &v.state @@ -514,7 +514,7 @@ func file_private_pbdemo_v1_demo_proto_init() { return nil } } - file_private_pbdemo_v1_demo_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} { + file_private_pbdemo_v1_demo_proto_msgTypes[1].Exporter = func(v any, i int) any { switch v := v.(*RecordLabel); i { case 0: return &v.state @@ -526,7 +526,7 @@ func file_private_pbdemo_v1_demo_proto_init() { return nil } } - file_private_pbdemo_v1_demo_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} { + file_private_pbdemo_v1_demo_proto_msgTypes[2].Exporter = func(v any, i int) any { switch v := v.(*Artist); i { case 0: return &v.state @@ -538,7 +538,7 @@ func file_private_pbdemo_v1_demo_proto_init() { return nil } } - file_private_pbdemo_v1_demo_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} { + file_private_pbdemo_v1_demo_proto_msgTypes[3].Exporter = func(v any, i int) any { switch v := v.(*Album); i { case 0: return &v.state @@ -550,7 +550,7 @@ func file_private_pbdemo_v1_demo_proto_init() { return nil } } - file_private_pbdemo_v1_demo_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} { + file_private_pbdemo_v1_demo_proto_msgTypes[4].Exporter = func(v any, i int) any { switch v := v.(*Concept); i { case 0: return &v.state diff --git a/proto/private/pbdemo/v2/demo.pb.go b/proto/private/pbdemo/v2/demo.pb.go index ea11aceef625..fe64d44ba81d 100644 --- a/proto/private/pbdemo/v2/demo.pb.go +++ b/proto/private/pbdemo/v2/demo.pb.go @@ -3,7 +3,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.33.0 +// protoc-gen-go v1.34.2 // protoc (unknown) // source: private/pbdemo/v2/demo.proto @@ -417,7 +417,7 @@ func file_private_pbdemo_v2_demo_proto_rawDescGZIP() []byte { var file_private_pbdemo_v2_demo_proto_enumTypes = make([]protoimpl.EnumInfo, 1) var file_private_pbdemo_v2_demo_proto_msgTypes = make([]protoimpl.MessageInfo, 4) -var file_private_pbdemo_v2_demo_proto_goTypes = []interface{}{ +var file_private_pbdemo_v2_demo_proto_goTypes = []any{ (Genre)(0), // 0: hashicorp.consul.internal.demo.v2.Genre (*Artist)(nil), // 1: hashicorp.consul.internal.demo.v2.Artist (*Album)(nil), // 2: hashicorp.consul.internal.demo.v2.Album @@ -444,7 +444,7 @@ func file_private_pbdemo_v2_demo_proto_init() { return } if !protoimpl.UnsafeEnabled { - file_private_pbdemo_v2_demo_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} { + file_private_pbdemo_v2_demo_proto_msgTypes[0].Exporter = func(v any, i int) any { switch v := v.(*Artist); i { case 0: return &v.state @@ -456,7 +456,7 @@ func file_private_pbdemo_v2_demo_proto_init() { return nil } } - file_private_pbdemo_v2_demo_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} { + file_private_pbdemo_v2_demo_proto_msgTypes[1].Exporter = func(v any, i int) any { switch v := v.(*Album); i { case 0: return &v.state @@ -468,7 +468,7 @@ func file_private_pbdemo_v2_demo_proto_init() { return nil } } - file_private_pbdemo_v2_demo_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} { + file_private_pbdemo_v2_demo_proto_msgTypes[2].Exporter = func(v any, i int) any { switch v := v.(*Festival); i { case 0: return &v.state diff --git a/proto/private/pboperator/operator.pb.go b/proto/private/pboperator/operator.pb.go index 18a800e66341..305addcaa2ee 100644 --- a/proto/private/pboperator/operator.pb.go +++ b/proto/private/pboperator/operator.pb.go @@ -3,7 +3,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.33.0 +// protoc-gen-go v1.34.2 // protoc (unknown) // source: private/pboperator/operator.proto @@ -184,7 +184,7 @@ func file_private_pboperator_operator_proto_rawDescGZIP() []byte { } var file_private_pboperator_operator_proto_msgTypes = make([]protoimpl.MessageInfo, 2) -var file_private_pboperator_operator_proto_goTypes = []interface{}{ +var file_private_pboperator_operator_proto_goTypes = []any{ (*TransferLeaderRequest)(nil), // 0: hashicorp.consul.internal.operator.TransferLeaderRequest (*TransferLeaderResponse)(nil), // 1: hashicorp.consul.internal.operator.TransferLeaderResponse } @@ -204,7 +204,7 @@ func file_private_pboperator_operator_proto_init() { return } if !protoimpl.UnsafeEnabled { - file_private_pboperator_operator_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} { + file_private_pboperator_operator_proto_msgTypes[0].Exporter = func(v any, i int) any { switch v := v.(*TransferLeaderRequest); i { case 0: return &v.state @@ -216,7 +216,7 @@ func file_private_pboperator_operator_proto_init() { return nil } } - file_private_pboperator_operator_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} { + file_private_pboperator_operator_proto_msgTypes[1].Exporter = func(v any, i int) any { switch v := v.(*TransferLeaderResponse); i { case 0: return &v.state diff --git a/proto/private/pbpeering/peering.pb.go b/proto/private/pbpeering/peering.pb.go index 373f316317a2..7a86a8301b2a 100644 --- a/proto/private/pbpeering/peering.pb.go +++ b/proto/private/pbpeering/peering.pb.go @@ -3,7 +3,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.33.0 +// protoc-gen-go v1.34.2 // protoc (unknown) // source: private/pbpeering/peering.proto @@ -2730,7 +2730,7 @@ func file_private_pbpeering_peering_proto_rawDescGZIP() []byte { var file_private_pbpeering_peering_proto_enumTypes = make([]protoimpl.EnumInfo, 1) var file_private_pbpeering_peering_proto_msgTypes = make([]protoimpl.MessageInfo, 39) -var file_private_pbpeering_peering_proto_goTypes = []interface{}{ +var file_private_pbpeering_peering_proto_goTypes = []any{ (PeeringState)(0), // 0: hashicorp.consul.internal.peering.PeeringState (*SecretsWriteRequest)(nil), // 1: hashicorp.consul.internal.peering.SecretsWriteRequest (*PeeringSecrets)(nil), // 2: hashicorp.consul.internal.peering.PeeringSecrets @@ -2829,7 +2829,7 @@ func file_private_pbpeering_peering_proto_init() { return } if !protoimpl.UnsafeEnabled { - file_private_pbpeering_peering_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeering_peering_proto_msgTypes[0].Exporter = func(v any, i int) any { switch v := v.(*SecretsWriteRequest); i { case 0: return &v.state @@ -2841,7 +2841,7 @@ func file_private_pbpeering_peering_proto_init() { return nil } } - file_private_pbpeering_peering_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeering_peering_proto_msgTypes[1].Exporter = func(v any, i int) any { switch v := v.(*PeeringSecrets); i { case 0: return &v.state @@ -2853,7 +2853,7 @@ func file_private_pbpeering_peering_proto_init() { return nil } } - file_private_pbpeering_peering_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeering_peering_proto_msgTypes[2].Exporter = func(v any, i int) any { switch v := v.(*Peering); i { case 0: return &v.state @@ -2865,7 +2865,7 @@ func file_private_pbpeering_peering_proto_init() { return nil } } - 
file_private_pbpeering_peering_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeering_peering_proto_msgTypes[3].Exporter = func(v any, i int) any { switch v := v.(*RemoteInfo); i { case 0: return &v.state @@ -2877,7 +2877,7 @@ func file_private_pbpeering_peering_proto_init() { return nil } } - file_private_pbpeering_peering_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeering_peering_proto_msgTypes[4].Exporter = func(v any, i int) any { switch v := v.(*StreamStatus); i { case 0: return &v.state @@ -2889,7 +2889,7 @@ func file_private_pbpeering_peering_proto_init() { return nil } } - file_private_pbpeering_peering_proto_msgTypes[5].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeering_peering_proto_msgTypes[5].Exporter = func(v any, i int) any { switch v := v.(*PeeringTrustBundle); i { case 0: return &v.state @@ -2901,7 +2901,7 @@ func file_private_pbpeering_peering_proto_init() { return nil } } - file_private_pbpeering_peering_proto_msgTypes[6].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeering_peering_proto_msgTypes[6].Exporter = func(v any, i int) any { switch v := v.(*PeeringServerAddresses); i { case 0: return &v.state @@ -2913,7 +2913,7 @@ func file_private_pbpeering_peering_proto_init() { return nil } } - file_private_pbpeering_peering_proto_msgTypes[7].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeering_peering_proto_msgTypes[7].Exporter = func(v any, i int) any { switch v := v.(*PeeringReadRequest); i { case 0: return &v.state @@ -2925,7 +2925,7 @@ func file_private_pbpeering_peering_proto_init() { return nil } } - file_private_pbpeering_peering_proto_msgTypes[8].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeering_peering_proto_msgTypes[8].Exporter = func(v any, i int) any { switch v := v.(*PeeringReadResponse); i { case 0: return &v.state @@ -2937,7 +2937,7 @@ func file_private_pbpeering_peering_proto_init() { return nil } } - file_private_pbpeering_peering_proto_msgTypes[9].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeering_peering_proto_msgTypes[9].Exporter = func(v any, i int) any { switch v := v.(*PeeringListRequest); i { case 0: return &v.state @@ -2949,7 +2949,7 @@ func file_private_pbpeering_peering_proto_init() { return nil } } - file_private_pbpeering_peering_proto_msgTypes[10].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeering_peering_proto_msgTypes[10].Exporter = func(v any, i int) any { switch v := v.(*PeeringListResponse); i { case 0: return &v.state @@ -2961,7 +2961,7 @@ func file_private_pbpeering_peering_proto_init() { return nil } } - file_private_pbpeering_peering_proto_msgTypes[11].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeering_peering_proto_msgTypes[11].Exporter = func(v any, i int) any { switch v := v.(*PeeringWriteRequest); i { case 0: return &v.state @@ -2973,7 +2973,7 @@ func file_private_pbpeering_peering_proto_init() { return nil } } - file_private_pbpeering_peering_proto_msgTypes[12].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeering_peering_proto_msgTypes[12].Exporter = func(v any, i int) any { switch v := v.(*PeeringWriteResponse); i { case 0: return &v.state @@ -2985,7 +2985,7 @@ func file_private_pbpeering_peering_proto_init() { return nil } } - file_private_pbpeering_peering_proto_msgTypes[13].Exporter = func(v interface{}, i int) interface{} { + 
file_private_pbpeering_peering_proto_msgTypes[13].Exporter = func(v any, i int) any { switch v := v.(*PeeringDeleteRequest); i { case 0: return &v.state @@ -2997,7 +2997,7 @@ func file_private_pbpeering_peering_proto_init() { return nil } } - file_private_pbpeering_peering_proto_msgTypes[14].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeering_peering_proto_msgTypes[14].Exporter = func(v any, i int) any { switch v := v.(*PeeringDeleteResponse); i { case 0: return &v.state @@ -3009,7 +3009,7 @@ func file_private_pbpeering_peering_proto_init() { return nil } } - file_private_pbpeering_peering_proto_msgTypes[15].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeering_peering_proto_msgTypes[15].Exporter = func(v any, i int) any { switch v := v.(*TrustBundleListByServiceRequest); i { case 0: return &v.state @@ -3021,7 +3021,7 @@ func file_private_pbpeering_peering_proto_init() { return nil } } - file_private_pbpeering_peering_proto_msgTypes[16].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeering_peering_proto_msgTypes[16].Exporter = func(v any, i int) any { switch v := v.(*TrustBundleListByServiceResponse); i { case 0: return &v.state @@ -3033,7 +3033,7 @@ func file_private_pbpeering_peering_proto_init() { return nil } } - file_private_pbpeering_peering_proto_msgTypes[17].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeering_peering_proto_msgTypes[17].Exporter = func(v any, i int) any { switch v := v.(*TrustBundleReadRequest); i { case 0: return &v.state @@ -3045,7 +3045,7 @@ func file_private_pbpeering_peering_proto_init() { return nil } } - file_private_pbpeering_peering_proto_msgTypes[18].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeering_peering_proto_msgTypes[18].Exporter = func(v any, i int) any { switch v := v.(*TrustBundleReadResponse); i { case 0: return &v.state @@ -3057,7 +3057,7 @@ func file_private_pbpeering_peering_proto_init() { return nil } } - file_private_pbpeering_peering_proto_msgTypes[19].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeering_peering_proto_msgTypes[19].Exporter = func(v any, i int) any { switch v := v.(*PeeringTerminateByIDRequest); i { case 0: return &v.state @@ -3069,7 +3069,7 @@ func file_private_pbpeering_peering_proto_init() { return nil } } - file_private_pbpeering_peering_proto_msgTypes[20].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeering_peering_proto_msgTypes[20].Exporter = func(v any, i int) any { switch v := v.(*PeeringTerminateByIDResponse); i { case 0: return &v.state @@ -3081,7 +3081,7 @@ func file_private_pbpeering_peering_proto_init() { return nil } } - file_private_pbpeering_peering_proto_msgTypes[21].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeering_peering_proto_msgTypes[21].Exporter = func(v any, i int) any { switch v := v.(*PeeringTrustBundleWriteRequest); i { case 0: return &v.state @@ -3093,7 +3093,7 @@ func file_private_pbpeering_peering_proto_init() { return nil } } - file_private_pbpeering_peering_proto_msgTypes[22].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeering_peering_proto_msgTypes[22].Exporter = func(v any, i int) any { switch v := v.(*PeeringTrustBundleWriteResponse); i { case 0: return &v.state @@ -3105,7 +3105,7 @@ func file_private_pbpeering_peering_proto_init() { return nil } } - file_private_pbpeering_peering_proto_msgTypes[23].Exporter = func(v interface{}, i int) interface{} { + 
file_private_pbpeering_peering_proto_msgTypes[23].Exporter = func(v any, i int) any { switch v := v.(*PeeringTrustBundleDeleteRequest); i { case 0: return &v.state @@ -3117,7 +3117,7 @@ func file_private_pbpeering_peering_proto_init() { return nil } } - file_private_pbpeering_peering_proto_msgTypes[24].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeering_peering_proto_msgTypes[24].Exporter = func(v any, i int) any { switch v := v.(*PeeringTrustBundleDeleteResponse); i { case 0: return &v.state @@ -3129,7 +3129,7 @@ func file_private_pbpeering_peering_proto_init() { return nil } } - file_private_pbpeering_peering_proto_msgTypes[25].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeering_peering_proto_msgTypes[25].Exporter = func(v any, i int) any { switch v := v.(*GenerateTokenRequest); i { case 0: return &v.state @@ -3141,7 +3141,7 @@ func file_private_pbpeering_peering_proto_init() { return nil } } - file_private_pbpeering_peering_proto_msgTypes[26].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeering_peering_proto_msgTypes[26].Exporter = func(v any, i int) any { switch v := v.(*GenerateTokenResponse); i { case 0: return &v.state @@ -3153,7 +3153,7 @@ func file_private_pbpeering_peering_proto_init() { return nil } } - file_private_pbpeering_peering_proto_msgTypes[27].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeering_peering_proto_msgTypes[27].Exporter = func(v any, i int) any { switch v := v.(*EstablishRequest); i { case 0: return &v.state @@ -3165,7 +3165,7 @@ func file_private_pbpeering_peering_proto_init() { return nil } } - file_private_pbpeering_peering_proto_msgTypes[28].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeering_peering_proto_msgTypes[28].Exporter = func(v any, i int) any { switch v := v.(*EstablishResponse); i { case 0: return &v.state @@ -3177,7 +3177,7 @@ func file_private_pbpeering_peering_proto_init() { return nil } } - file_private_pbpeering_peering_proto_msgTypes[29].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeering_peering_proto_msgTypes[29].Exporter = func(v any, i int) any { switch v := v.(*SecretsWriteRequest_GenerateTokenRequest); i { case 0: return &v.state @@ -3189,7 +3189,7 @@ func file_private_pbpeering_peering_proto_init() { return nil } } - file_private_pbpeering_peering_proto_msgTypes[30].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeering_peering_proto_msgTypes[30].Exporter = func(v any, i int) any { switch v := v.(*SecretsWriteRequest_ExchangeSecretRequest); i { case 0: return &v.state @@ -3201,7 +3201,7 @@ func file_private_pbpeering_peering_proto_init() { return nil } } - file_private_pbpeering_peering_proto_msgTypes[31].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeering_peering_proto_msgTypes[31].Exporter = func(v any, i int) any { switch v := v.(*SecretsWriteRequest_PromotePendingRequest); i { case 0: return &v.state @@ -3213,7 +3213,7 @@ func file_private_pbpeering_peering_proto_init() { return nil } } - file_private_pbpeering_peering_proto_msgTypes[32].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeering_peering_proto_msgTypes[32].Exporter = func(v any, i int) any { switch v := v.(*SecretsWriteRequest_EstablishRequest); i { case 0: return &v.state @@ -3225,7 +3225,7 @@ func file_private_pbpeering_peering_proto_init() { return nil } } - file_private_pbpeering_peering_proto_msgTypes[33].Exporter = func(v interface{}, i int) 
interface{} { + file_private_pbpeering_peering_proto_msgTypes[33].Exporter = func(v any, i int) any { switch v := v.(*PeeringSecrets_Establishment); i { case 0: return &v.state @@ -3237,7 +3237,7 @@ func file_private_pbpeering_peering_proto_init() { return nil } } - file_private_pbpeering_peering_proto_msgTypes[34].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeering_peering_proto_msgTypes[34].Exporter = func(v any, i int) any { switch v := v.(*PeeringSecrets_Stream); i { case 0: return &v.state @@ -3250,7 +3250,7 @@ func file_private_pbpeering_peering_proto_init() { } } } - file_private_pbpeering_peering_proto_msgTypes[0].OneofWrappers = []interface{}{ + file_private_pbpeering_peering_proto_msgTypes[0].OneofWrappers = []any{ (*SecretsWriteRequest_GenerateToken)(nil), (*SecretsWriteRequest_ExchangeSecret)(nil), (*SecretsWriteRequest_PromotePending)(nil), diff --git a/proto/private/pbpeerstream/peerstream.pb.go b/proto/private/pbpeerstream/peerstream.pb.go index 97b03c3a6f0e..3b195743ccd8 100644 --- a/proto/private/pbpeerstream/peerstream.pb.go +++ b/proto/private/pbpeerstream/peerstream.pb.go @@ -3,7 +3,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.33.0 +// protoc-gen-go v1.34.2 // protoc (unknown) // source: private/pbpeerstream/peerstream.proto @@ -934,7 +934,7 @@ func file_private_pbpeerstream_peerstream_proto_rawDescGZIP() []byte { var file_private_pbpeerstream_peerstream_proto_enumTypes = make([]protoimpl.EnumInfo, 1) var file_private_pbpeerstream_peerstream_proto_msgTypes = make([]protoimpl.MessageInfo, 11) -var file_private_pbpeerstream_peerstream_proto_goTypes = []interface{}{ +var file_private_pbpeerstream_peerstream_proto_goTypes = []any{ (Operation)(0), // 0: hashicorp.consul.internal.peerstream.Operation (*ReplicationMessage)(nil), // 1: hashicorp.consul.internal.peerstream.ReplicationMessage (*LeaderAddress)(nil), // 2: hashicorp.consul.internal.peerstream.LeaderAddress @@ -980,7 +980,7 @@ func file_private_pbpeerstream_peerstream_proto_init() { return } if !protoimpl.UnsafeEnabled { - file_private_pbpeerstream_peerstream_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeerstream_peerstream_proto_msgTypes[0].Exporter = func(v any, i int) any { switch v := v.(*ReplicationMessage); i { case 0: return &v.state @@ -992,7 +992,7 @@ func file_private_pbpeerstream_peerstream_proto_init() { return nil } } - file_private_pbpeerstream_peerstream_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeerstream_peerstream_proto_msgTypes[1].Exporter = func(v any, i int) any { switch v := v.(*LeaderAddress); i { case 0: return &v.state @@ -1004,7 +1004,7 @@ func file_private_pbpeerstream_peerstream_proto_init() { return nil } } - file_private_pbpeerstream_peerstream_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeerstream_peerstream_proto_msgTypes[2].Exporter = func(v any, i int) any { switch v := v.(*ExportedService); i { case 0: return &v.state @@ -1016,7 +1016,7 @@ func file_private_pbpeerstream_peerstream_proto_init() { return nil } } - file_private_pbpeerstream_peerstream_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeerstream_peerstream_proto_msgTypes[3].Exporter = func(v any, i int) any { switch v := v.(*ExportedServiceList); i { case 0: return &v.state @@ -1028,7 +1028,7 @@ func file_private_pbpeerstream_peerstream_proto_init() { return nil } } - 
file_private_pbpeerstream_peerstream_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeerstream_peerstream_proto_msgTypes[4].Exporter = func(v any, i int) any { switch v := v.(*ExchangeSecretRequest); i { case 0: return &v.state @@ -1040,7 +1040,7 @@ func file_private_pbpeerstream_peerstream_proto_init() { return nil } } - file_private_pbpeerstream_peerstream_proto_msgTypes[5].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeerstream_peerstream_proto_msgTypes[5].Exporter = func(v any, i int) any { switch v := v.(*ExchangeSecretResponse); i { case 0: return &v.state @@ -1052,7 +1052,7 @@ func file_private_pbpeerstream_peerstream_proto_init() { return nil } } - file_private_pbpeerstream_peerstream_proto_msgTypes[6].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeerstream_peerstream_proto_msgTypes[6].Exporter = func(v any, i int) any { switch v := v.(*ReplicationMessage_Open); i { case 0: return &v.state @@ -1064,7 +1064,7 @@ func file_private_pbpeerstream_peerstream_proto_init() { return nil } } - file_private_pbpeerstream_peerstream_proto_msgTypes[7].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeerstream_peerstream_proto_msgTypes[7].Exporter = func(v any, i int) any { switch v := v.(*ReplicationMessage_Request); i { case 0: return &v.state @@ -1076,7 +1076,7 @@ func file_private_pbpeerstream_peerstream_proto_init() { return nil } } - file_private_pbpeerstream_peerstream_proto_msgTypes[8].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeerstream_peerstream_proto_msgTypes[8].Exporter = func(v any, i int) any { switch v := v.(*ReplicationMessage_Response); i { case 0: return &v.state @@ -1088,7 +1088,7 @@ func file_private_pbpeerstream_peerstream_proto_init() { return nil } } - file_private_pbpeerstream_peerstream_proto_msgTypes[9].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeerstream_peerstream_proto_msgTypes[9].Exporter = func(v any, i int) any { switch v := v.(*ReplicationMessage_Terminated); i { case 0: return &v.state @@ -1100,7 +1100,7 @@ func file_private_pbpeerstream_peerstream_proto_init() { return nil } } - file_private_pbpeerstream_peerstream_proto_msgTypes[10].Exporter = func(v interface{}, i int) interface{} { + file_private_pbpeerstream_peerstream_proto_msgTypes[10].Exporter = func(v any, i int) any { switch v := v.(*ReplicationMessage_Heartbeat); i { case 0: return &v.state @@ -1113,7 +1113,7 @@ func file_private_pbpeerstream_peerstream_proto_init() { } } } - file_private_pbpeerstream_peerstream_proto_msgTypes[0].OneofWrappers = []interface{}{ + file_private_pbpeerstream_peerstream_proto_msgTypes[0].OneofWrappers = []any{ (*ReplicationMessage_Open_)(nil), (*ReplicationMessage_Request_)(nil), (*ReplicationMessage_Response_)(nil), diff --git a/proto/private/pbservice/healthcheck.gen.go b/proto/private/pbservice/healthcheck.gen.go index 1c608d88c7a7..31a64cd9ce0f 100644 --- a/proto/private/pbservice/healthcheck.gen.go +++ b/proto/private/pbservice/healthcheck.gen.go @@ -160,6 +160,7 @@ func HealthCheckDefinitionToStructs(s *HealthCheckDefinition, t *structs.HealthC t.GRPCUseTLS = s.GRPCUseTLS t.AliasNode = s.AliasNode t.AliasService = s.AliasService + t.SessionName = s.SessionName t.TTL = structs.DurationFromProto(s.TTL) } func HealthCheckDefinitionFromStructs(t *structs.HealthCheckDefinition, s *HealthCheckDefinition) { @@ -190,5 +191,6 @@ func HealthCheckDefinitionFromStructs(t *structs.HealthCheckDefinition, s *Healt 
s.GRPCUseTLS = t.GRPCUseTLS s.AliasNode = t.AliasNode s.AliasService = t.AliasService + s.SessionName = t.SessionName s.TTL = structs.DurationToProto(t.TTL) } diff --git a/proto/private/pbservice/healthcheck.pb.go b/proto/private/pbservice/healthcheck.pb.go index 252459957b91..e05d60303039 100644 --- a/proto/private/pbservice/healthcheck.pb.go +++ b/proto/private/pbservice/healthcheck.pb.go @@ -3,7 +3,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.33.0 +// protoc-gen-go v1.34.2 // protoc (unknown) // source: private/pbservice/healthcheck.proto @@ -300,7 +300,8 @@ type HealthCheckDefinition struct { AliasNode string `protobuf:"bytes,15,opt,name=AliasNode,proto3" json:"AliasNode,omitempty"` AliasService string `protobuf:"bytes,16,opt,name=AliasService,proto3" json:"AliasService,omitempty"` // mog: func-to=structs.DurationFromProto func-from=structs.DurationToProto - TTL *durationpb.Duration `protobuf:"bytes,17,opt,name=TTL,proto3" json:"TTL,omitempty"` + TTL *durationpb.Duration `protobuf:"bytes,17,opt,name=TTL,proto3" json:"TTL,omitempty"` + SessionName string `protobuf:"bytes,26,opt,name=SessionName,proto3" json:"SessionName,omitempty"` } func (x *HealthCheckDefinition) Reset() { @@ -510,6 +511,13 @@ func (x *HealthCheckDefinition) GetTTL() *durationpb.Duration { return nil } +func (x *HealthCheckDefinition) GetSessionName() string { + if x != nil { + return x.SessionName + } + return "" +} + // CheckType is used to create either the CheckMonitor or the CheckTTL. // The following types are supported: Script, HTTP, TCP, Docker, TTL, GRPC, // Alias. Script, H2PING, @@ -575,6 +583,10 @@ type CheckType struct { DeregisterCriticalServiceAfter *durationpb.Duration `protobuf:"bytes,19,opt,name=DeregisterCriticalServiceAfter,proto3" json:"DeregisterCriticalServiceAfter,omitempty"` // mog: func-to=int func-from=int32 OutputMaxSize int32 `protobuf:"varint,25,opt,name=OutputMaxSize,proto3" json:"OutputMaxSize,omitempty"` + // SessionName is the name of a session registered against the same node as this check. + // The association ties the check's state to that session's lifecycle: + // e.g. if the session is deleted or invalidated, this check's status is marked critical.
+ SessionName string `protobuf:"bytes,35,opt,name=SessionName,proto3" json:"SessionName,omitempty"` } func (x *CheckType) Reset() { @@ -847,6 +859,13 @@ func (x *CheckType) GetOutputMaxSize() int32 { return 0 } +func (x *CheckType) GetSessionName() string { + if x != nil { + return x.SessionName + } + return "" +} + var File_private_pbservice_healthcheck_proto protoreflect.FileDescriptor var file_private_pbservice_healthcheck_proto_rawDesc = []byte{ @@ -900,7 +919,7 @@ var file_private_pbservice_healthcheck_proto_rawDesc = []byte{ 0x50, 0x65, 0x65, 0x72, 0x4e, 0x61, 0x6d, 0x65, 0x18, 0x11, 0x20, 0x01, 0x28, 0x09, 0x52, 0x08, 0x50, 0x65, 0x65, 0x72, 0x4e, 0x61, 0x6d, 0x65, 0x22, 0x23, 0x0a, 0x0b, 0x48, 0x65, 0x61, 0x64, 0x65, 0x72, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x12, 0x14, 0x0a, 0x05, 0x56, 0x61, 0x6c, 0x75, 0x65, - 0x18, 0x01, 0x20, 0x03, 0x28, 0x09, 0x52, 0x05, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x22, 0xb0, 0x08, + 0x18, 0x01, 0x20, 0x03, 0x28, 0x09, 0x52, 0x05, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x22, 0xd2, 0x08, 0x0a, 0x15, 0x48, 0x65, 0x61, 0x6c, 0x74, 0x68, 0x43, 0x68, 0x65, 0x63, 0x6b, 0x44, 0x65, 0x66, 0x69, 0x6e, 0x69, 0x74, 0x69, 0x6f, 0x6e, 0x12, 0x12, 0x0a, 0x04, 0x48, 0x54, 0x54, 0x50, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x48, 0x54, 0x54, 0x50, 0x12, 0x24, 0x0a, 0x0d, 0x54, @@ -961,117 +980,121 @@ var file_private_pbservice_healthcheck_proto_rawDesc = []byte{ 0x73, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x12, 0x2b, 0x0a, 0x03, 0x54, 0x54, 0x4c, 0x18, 0x11, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x19, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x44, 0x75, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, - 0x52, 0x03, 0x54, 0x54, 0x4c, 0x1a, 0x69, 0x0a, 0x0b, 0x48, 0x65, 0x61, 0x64, 0x65, 0x72, 0x45, - 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a, 0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, - 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12, 0x44, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, - 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x2e, 0x2e, 0x68, 0x61, 0x73, 0x68, 0x69, 0x63, 0x6f, 0x72, - 0x70, 0x2e, 0x63, 0x6f, 0x6e, 0x73, 0x75, 0x6c, 0x2e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, - 0x6c, 0x2e, 0x73, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x2e, 0x48, 0x65, 0x61, 0x64, 0x65, 0x72, - 0x56, 0x61, 0x6c, 0x75, 0x65, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38, 0x01, - 0x22, 0xd2, 0x0a, 0x0a, 0x09, 0x43, 0x68, 0x65, 0x63, 0x6b, 0x54, 0x79, 0x70, 0x65, 0x12, 0x18, - 0x0a, 0x07, 0x43, 0x68, 0x65, 0x63, 0x6b, 0x49, 0x44, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, - 0x07, 0x43, 0x68, 0x65, 0x63, 0x6b, 0x49, 0x44, 0x12, 0x12, 0x0a, 0x04, 0x4e, 0x61, 0x6d, 0x65, - 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x4e, 0x61, 0x6d, 0x65, 0x12, 0x16, 0x0a, 0x06, - 0x53, 0x74, 0x61, 0x74, 0x75, 0x73, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x53, 0x74, - 0x61, 0x74, 0x75, 0x73, 0x12, 0x14, 0x0a, 0x05, 0x4e, 0x6f, 0x74, 0x65, 0x73, 0x18, 0x04, 0x20, - 0x01, 0x28, 0x09, 0x52, 0x05, 0x4e, 0x6f, 0x74, 0x65, 0x73, 0x12, 0x1e, 0x0a, 0x0a, 0x53, 0x63, - 0x72, 0x69, 0x70, 0x74, 0x41, 0x72, 0x67, 0x73, 0x18, 0x05, 0x20, 0x03, 0x28, 0x09, 0x52, 0x0a, - 0x53, 0x63, 0x72, 0x69, 0x70, 0x74, 0x41, 0x72, 0x67, 0x73, 0x12, 0x12, 0x0a, 0x04, 0x48, 0x54, - 0x54, 0x50, 0x18, 0x06, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x48, 0x54, 0x54, 0x50, 0x12, 0x50, - 0x0a, 0x06, 0x48, 0x65, 0x61, 0x64, 0x65, 0x72, 0x18, 0x14, 0x20, 0x03, 0x28, 0x0b, 0x32, 0x38, + 0x52, 0x03, 0x54, 0x54, 0x4c, 0x12, 0x20, 0x0a, 0x0b, 0x53, 0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, + 0x4e, 
0x61, 0x6d, 0x65, 0x18, 0x1a, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0b, 0x53, 0x65, 0x73, 0x73, + 0x69, 0x6f, 0x6e, 0x4e, 0x61, 0x6d, 0x65, 0x1a, 0x69, 0x0a, 0x0b, 0x48, 0x65, 0x61, 0x64, 0x65, + 0x72, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a, 0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, + 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12, 0x44, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, + 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x2e, 0x2e, 0x68, 0x61, 0x73, 0x68, 0x69, 0x63, + 0x6f, 0x72, 0x70, 0x2e, 0x63, 0x6f, 0x6e, 0x73, 0x75, 0x6c, 0x2e, 0x69, 0x6e, 0x74, 0x65, 0x72, + 0x6e, 0x61, 0x6c, 0x2e, 0x73, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x2e, 0x48, 0x65, 0x61, 0x64, + 0x65, 0x72, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02, + 0x38, 0x01, 0x22, 0xf4, 0x0a, 0x0a, 0x09, 0x43, 0x68, 0x65, 0x63, 0x6b, 0x54, 0x79, 0x70, 0x65, + 0x12, 0x18, 0x0a, 0x07, 0x43, 0x68, 0x65, 0x63, 0x6b, 0x49, 0x44, 0x18, 0x01, 0x20, 0x01, 0x28, + 0x09, 0x52, 0x07, 0x43, 0x68, 0x65, 0x63, 0x6b, 0x49, 0x44, 0x12, 0x12, 0x0a, 0x04, 0x4e, 0x61, + 0x6d, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x4e, 0x61, 0x6d, 0x65, 0x12, 0x16, + 0x0a, 0x06, 0x53, 0x74, 0x61, 0x74, 0x75, 0x73, 0x18, 0x03, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06, + 0x53, 0x74, 0x61, 0x74, 0x75, 0x73, 0x12, 0x14, 0x0a, 0x05, 0x4e, 0x6f, 0x74, 0x65, 0x73, 0x18, + 0x04, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05, 0x4e, 0x6f, 0x74, 0x65, 0x73, 0x12, 0x1e, 0x0a, 0x0a, + 0x53, 0x63, 0x72, 0x69, 0x70, 0x74, 0x41, 0x72, 0x67, 0x73, 0x18, 0x05, 0x20, 0x03, 0x28, 0x09, + 0x52, 0x0a, 0x53, 0x63, 0x72, 0x69, 0x70, 0x74, 0x41, 0x72, 0x67, 0x73, 0x12, 0x12, 0x0a, 0x04, + 0x48, 0x54, 0x54, 0x50, 0x18, 0x06, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x48, 0x54, 0x54, 0x50, + 0x12, 0x50, 0x0a, 0x06, 0x48, 0x65, 0x61, 0x64, 0x65, 0x72, 0x18, 0x14, 0x20, 0x03, 0x28, 0x0b, + 0x32, 0x38, 0x2e, 0x68, 0x61, 0x73, 0x68, 0x69, 0x63, 0x6f, 0x72, 0x70, 0x2e, 0x63, 0x6f, 0x6e, + 0x73, 0x75, 0x6c, 0x2e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x2e, 0x73, 0x65, 0x72, + 0x76, 0x69, 0x63, 0x65, 0x2e, 0x43, 0x68, 0x65, 0x63, 0x6b, 0x54, 0x79, 0x70, 0x65, 0x2e, 0x48, + 0x65, 0x61, 0x64, 0x65, 0x72, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x52, 0x06, 0x48, 0x65, 0x61, 0x64, + 0x65, 0x72, 0x12, 0x16, 0x0a, 0x06, 0x4d, 0x65, 0x74, 0x68, 0x6f, 0x64, 0x18, 0x07, 0x20, 0x01, + 0x28, 0x09, 0x52, 0x06, 0x4d, 0x65, 0x74, 0x68, 0x6f, 0x64, 0x12, 0x12, 0x0a, 0x04, 0x42, 0x6f, + 0x64, 0x79, 0x18, 0x1a, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x42, 0x6f, 0x64, 0x79, 0x12, 0x2a, + 0x0a, 0x10, 0x44, 0x69, 0x73, 0x61, 0x62, 0x6c, 0x65, 0x52, 0x65, 0x64, 0x69, 0x72, 0x65, 0x63, + 0x74, 0x73, 0x18, 0x1f, 0x20, 0x01, 0x28, 0x08, 0x52, 0x10, 0x44, 0x69, 0x73, 0x61, 0x62, 0x6c, + 0x65, 0x52, 0x65, 0x64, 0x69, 0x72, 0x65, 0x63, 0x74, 0x73, 0x12, 0x10, 0x0a, 0x03, 0x54, 0x43, + 0x50, 0x18, 0x08, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x54, 0x43, 0x50, 0x12, 0x1c, 0x0a, 0x09, + 0x54, 0x43, 0x50, 0x55, 0x73, 0x65, 0x54, 0x4c, 0x53, 0x18, 0x22, 0x20, 0x01, 0x28, 0x08, 0x52, + 0x09, 0x54, 0x43, 0x50, 0x55, 0x73, 0x65, 0x54, 0x4c, 0x53, 0x12, 0x10, 0x0a, 0x03, 0x55, 0x44, + 0x50, 0x18, 0x20, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x55, 0x44, 0x50, 0x12, 0x1c, 0x0a, 0x09, + 0x4f, 0x53, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x18, 0x21, 0x20, 0x01, 0x28, 0x09, 0x52, + 0x09, 0x4f, 0x53, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x12, 0x35, 0x0a, 0x08, 0x49, 0x6e, + 0x74, 0x65, 0x72, 0x76, 0x61, 0x6c, 0x18, 0x09, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x19, 0x2e, 0x67, + 0x6f, 0x6f, 0x67, 0x6c, 0x65, 
0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x44, + 0x75, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x08, 0x49, 0x6e, 0x74, 0x65, 0x72, 0x76, 0x61, + 0x6c, 0x12, 0x1c, 0x0a, 0x09, 0x41, 0x6c, 0x69, 0x61, 0x73, 0x4e, 0x6f, 0x64, 0x65, 0x18, 0x0a, + 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x41, 0x6c, 0x69, 0x61, 0x73, 0x4e, 0x6f, 0x64, 0x65, 0x12, + 0x22, 0x0a, 0x0c, 0x41, 0x6c, 0x69, 0x61, 0x73, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x18, + 0x0b, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0c, 0x41, 0x6c, 0x69, 0x61, 0x73, 0x53, 0x65, 0x72, 0x76, + 0x69, 0x63, 0x65, 0x12, 0x2c, 0x0a, 0x11, 0x44, 0x6f, 0x63, 0x6b, 0x65, 0x72, 0x43, 0x6f, 0x6e, + 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x49, 0x44, 0x18, 0x0c, 0x20, 0x01, 0x28, 0x09, 0x52, 0x11, + 0x44, 0x6f, 0x63, 0x6b, 0x65, 0x72, 0x43, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x49, + 0x44, 0x12, 0x14, 0x0a, 0x05, 0x53, 0x68, 0x65, 0x6c, 0x6c, 0x18, 0x0d, 0x20, 0x01, 0x28, 0x09, + 0x52, 0x05, 0x53, 0x68, 0x65, 0x6c, 0x6c, 0x12, 0x16, 0x0a, 0x06, 0x48, 0x32, 0x50, 0x49, 0x4e, + 0x47, 0x18, 0x1c, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x48, 0x32, 0x50, 0x49, 0x4e, 0x47, 0x12, + 0x22, 0x0a, 0x0c, 0x48, 0x32, 0x50, 0x69, 0x6e, 0x67, 0x55, 0x73, 0x65, 0x54, 0x4c, 0x53, 0x18, + 0x1e, 0x20, 0x01, 0x28, 0x08, 0x52, 0x0c, 0x48, 0x32, 0x50, 0x69, 0x6e, 0x67, 0x55, 0x73, 0x65, + 0x54, 0x4c, 0x53, 0x12, 0x12, 0x0a, 0x04, 0x47, 0x52, 0x50, 0x43, 0x18, 0x0e, 0x20, 0x01, 0x28, + 0x09, 0x52, 0x04, 0x47, 0x52, 0x50, 0x43, 0x12, 0x1e, 0x0a, 0x0a, 0x47, 0x52, 0x50, 0x43, 0x55, + 0x73, 0x65, 0x54, 0x4c, 0x53, 0x18, 0x0f, 0x20, 0x01, 0x28, 0x08, 0x52, 0x0a, 0x47, 0x52, 0x50, + 0x43, 0x55, 0x73, 0x65, 0x54, 0x4c, 0x53, 0x12, 0x24, 0x0a, 0x0d, 0x54, 0x4c, 0x53, 0x53, 0x65, + 0x72, 0x76, 0x65, 0x72, 0x4e, 0x61, 0x6d, 0x65, 0x18, 0x1b, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0d, + 0x54, 0x4c, 0x53, 0x53, 0x65, 0x72, 0x76, 0x65, 0x72, 0x4e, 0x61, 0x6d, 0x65, 0x12, 0x24, 0x0a, + 0x0d, 0x54, 0x4c, 0x53, 0x53, 0x6b, 0x69, 0x70, 0x56, 0x65, 0x72, 0x69, 0x66, 0x79, 0x18, 0x10, + 0x20, 0x01, 0x28, 0x08, 0x52, 0x0d, 0x54, 0x4c, 0x53, 0x53, 0x6b, 0x69, 0x70, 0x56, 0x65, 0x72, + 0x69, 0x66, 0x79, 0x12, 0x33, 0x0a, 0x07, 0x54, 0x69, 0x6d, 0x65, 0x6f, 0x75, 0x74, 0x18, 0x11, + 0x20, 0x01, 0x28, 0x0b, 0x32, 0x19, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, + 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x44, 0x75, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x52, + 0x07, 0x54, 0x69, 0x6d, 0x65, 0x6f, 0x75, 0x74, 0x12, 0x2b, 0x0a, 0x03, 0x54, 0x54, 0x4c, 0x18, + 0x12, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x19, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, + 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x44, 0x75, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, + 0x52, 0x03, 0x54, 0x54, 0x4c, 0x12, 0x32, 0x0a, 0x14, 0x53, 0x75, 0x63, 0x63, 0x65, 0x73, 0x73, + 0x42, 0x65, 0x66, 0x6f, 0x72, 0x65, 0x50, 0x61, 0x73, 0x73, 0x69, 0x6e, 0x67, 0x18, 0x15, 0x20, + 0x01, 0x28, 0x05, 0x52, 0x14, 0x53, 0x75, 0x63, 0x63, 0x65, 0x73, 0x73, 0x42, 0x65, 0x66, 0x6f, + 0x72, 0x65, 0x50, 0x61, 0x73, 0x73, 0x69, 0x6e, 0x67, 0x12, 0x34, 0x0a, 0x15, 0x46, 0x61, 0x69, + 0x6c, 0x75, 0x72, 0x65, 0x73, 0x42, 0x65, 0x66, 0x6f, 0x72, 0x65, 0x57, 0x61, 0x72, 0x6e, 0x69, + 0x6e, 0x67, 0x18, 0x1d, 0x20, 0x01, 0x28, 0x05, 0x52, 0x15, 0x46, 0x61, 0x69, 0x6c, 0x75, 0x72, + 0x65, 0x73, 0x42, 0x65, 0x66, 0x6f, 0x72, 0x65, 0x57, 0x61, 0x72, 0x6e, 0x69, 0x6e, 0x67, 0x12, + 0x36, 0x0a, 0x16, 0x46, 0x61, 0x69, 0x6c, 0x75, 0x72, 0x65, 0x73, 0x42, 0x65, 0x66, 0x6f, 0x72, + 0x65, 0x43, 0x72, 0x69, 0x74, 0x69, 0x63, 0x61, 0x6c, 
0x18, 0x16, 0x20, 0x01, 0x28, 0x05, 0x52, + 0x16, 0x46, 0x61, 0x69, 0x6c, 0x75, 0x72, 0x65, 0x73, 0x42, 0x65, 0x66, 0x6f, 0x72, 0x65, 0x43, + 0x72, 0x69, 0x74, 0x69, 0x63, 0x61, 0x6c, 0x12, 0x1c, 0x0a, 0x09, 0x50, 0x72, 0x6f, 0x78, 0x79, + 0x48, 0x54, 0x54, 0x50, 0x18, 0x17, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x50, 0x72, 0x6f, 0x78, + 0x79, 0x48, 0x54, 0x54, 0x50, 0x12, 0x1c, 0x0a, 0x09, 0x50, 0x72, 0x6f, 0x78, 0x79, 0x47, 0x52, + 0x50, 0x43, 0x18, 0x18, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x50, 0x72, 0x6f, 0x78, 0x79, 0x47, + 0x52, 0x50, 0x43, 0x12, 0x61, 0x0a, 0x1e, 0x44, 0x65, 0x72, 0x65, 0x67, 0x69, 0x73, 0x74, 0x65, + 0x72, 0x43, 0x72, 0x69, 0x74, 0x69, 0x63, 0x61, 0x6c, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, + 0x41, 0x66, 0x74, 0x65, 0x72, 0x18, 0x13, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x19, 0x2e, 0x67, 0x6f, + 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x44, 0x75, + 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x1e, 0x44, 0x65, 0x72, 0x65, 0x67, 0x69, 0x73, 0x74, + 0x65, 0x72, 0x43, 0x72, 0x69, 0x74, 0x69, 0x63, 0x61, 0x6c, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, + 0x65, 0x41, 0x66, 0x74, 0x65, 0x72, 0x12, 0x24, 0x0a, 0x0d, 0x4f, 0x75, 0x74, 0x70, 0x75, 0x74, + 0x4d, 0x61, 0x78, 0x53, 0x69, 0x7a, 0x65, 0x18, 0x19, 0x20, 0x01, 0x28, 0x05, 0x52, 0x0d, 0x4f, + 0x75, 0x74, 0x70, 0x75, 0x74, 0x4d, 0x61, 0x78, 0x53, 0x69, 0x7a, 0x65, 0x12, 0x20, 0x0a, 0x0b, + 0x53, 0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x4e, 0x61, 0x6d, 0x65, 0x18, 0x23, 0x20, 0x01, 0x28, + 0x09, 0x52, 0x0b, 0x53, 0x65, 0x73, 0x73, 0x69, 0x6f, 0x6e, 0x4e, 0x61, 0x6d, 0x65, 0x1a, 0x69, + 0x0a, 0x0b, 0x48, 0x65, 0x61, 0x64, 0x65, 0x72, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a, + 0x03, 0x6b, 0x65, 0x79, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12, + 0x44, 0x0a, 0x05, 0x76, 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x2e, 0x2e, 0x68, 0x61, 0x73, 0x68, 0x69, 0x63, 0x6f, 0x72, 0x70, 0x2e, 0x63, 0x6f, 0x6e, 0x73, 0x75, 0x6c, 0x2e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x2e, 0x73, 0x65, 0x72, 0x76, 0x69, - 0x63, 0x65, 0x2e, 0x43, 0x68, 0x65, 0x63, 0x6b, 0x54, 0x79, 0x70, 0x65, 0x2e, 0x48, 0x65, 0x61, - 0x64, 0x65, 0x72, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x52, 0x06, 0x48, 0x65, 0x61, 0x64, 0x65, 0x72, - 0x12, 0x16, 0x0a, 0x06, 0x4d, 0x65, 0x74, 0x68, 0x6f, 0x64, 0x18, 0x07, 0x20, 0x01, 0x28, 0x09, - 0x52, 0x06, 0x4d, 0x65, 0x74, 0x68, 0x6f, 0x64, 0x12, 0x12, 0x0a, 0x04, 0x42, 0x6f, 0x64, 0x79, - 0x18, 0x1a, 0x20, 0x01, 0x28, 0x09, 0x52, 0x04, 0x42, 0x6f, 0x64, 0x79, 0x12, 0x2a, 0x0a, 0x10, - 0x44, 0x69, 0x73, 0x61, 0x62, 0x6c, 0x65, 0x52, 0x65, 0x64, 0x69, 0x72, 0x65, 0x63, 0x74, 0x73, - 0x18, 0x1f, 0x20, 0x01, 0x28, 0x08, 0x52, 0x10, 0x44, 0x69, 0x73, 0x61, 0x62, 0x6c, 0x65, 0x52, - 0x65, 0x64, 0x69, 0x72, 0x65, 0x63, 0x74, 0x73, 0x12, 0x10, 0x0a, 0x03, 0x54, 0x43, 0x50, 0x18, - 0x08, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x54, 0x43, 0x50, 0x12, 0x1c, 0x0a, 0x09, 0x54, 0x43, - 0x50, 0x55, 0x73, 0x65, 0x54, 0x4c, 0x53, 0x18, 0x22, 0x20, 0x01, 0x28, 0x08, 0x52, 0x09, 0x54, - 0x43, 0x50, 0x55, 0x73, 0x65, 0x54, 0x4c, 0x53, 0x12, 0x10, 0x0a, 0x03, 0x55, 0x44, 0x50, 0x18, - 0x20, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x55, 0x44, 0x50, 0x12, 0x1c, 0x0a, 0x09, 0x4f, 0x53, - 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x18, 0x21, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x4f, - 0x53, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x12, 0x35, 0x0a, 0x08, 0x49, 0x6e, 0x74, 0x65, - 0x72, 0x76, 0x61, 0x6c, 0x18, 0x09, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x19, 0x2e, 0x67, 
0x6f, 0x6f, - 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x44, 0x75, 0x72, - 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x08, 0x49, 0x6e, 0x74, 0x65, 0x72, 0x76, 0x61, 0x6c, 0x12, - 0x1c, 0x0a, 0x09, 0x41, 0x6c, 0x69, 0x61, 0x73, 0x4e, 0x6f, 0x64, 0x65, 0x18, 0x0a, 0x20, 0x01, - 0x28, 0x09, 0x52, 0x09, 0x41, 0x6c, 0x69, 0x61, 0x73, 0x4e, 0x6f, 0x64, 0x65, 0x12, 0x22, 0x0a, - 0x0c, 0x41, 0x6c, 0x69, 0x61, 0x73, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x18, 0x0b, 0x20, - 0x01, 0x28, 0x09, 0x52, 0x0c, 0x41, 0x6c, 0x69, 0x61, 0x73, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, - 0x65, 0x12, 0x2c, 0x0a, 0x11, 0x44, 0x6f, 0x63, 0x6b, 0x65, 0x72, 0x43, 0x6f, 0x6e, 0x74, 0x61, - 0x69, 0x6e, 0x65, 0x72, 0x49, 0x44, 0x18, 0x0c, 0x20, 0x01, 0x28, 0x09, 0x52, 0x11, 0x44, 0x6f, - 0x63, 0x6b, 0x65, 0x72, 0x43, 0x6f, 0x6e, 0x74, 0x61, 0x69, 0x6e, 0x65, 0x72, 0x49, 0x44, 0x12, - 0x14, 0x0a, 0x05, 0x53, 0x68, 0x65, 0x6c, 0x6c, 0x18, 0x0d, 0x20, 0x01, 0x28, 0x09, 0x52, 0x05, - 0x53, 0x68, 0x65, 0x6c, 0x6c, 0x12, 0x16, 0x0a, 0x06, 0x48, 0x32, 0x50, 0x49, 0x4e, 0x47, 0x18, - 0x1c, 0x20, 0x01, 0x28, 0x09, 0x52, 0x06, 0x48, 0x32, 0x50, 0x49, 0x4e, 0x47, 0x12, 0x22, 0x0a, - 0x0c, 0x48, 0x32, 0x50, 0x69, 0x6e, 0x67, 0x55, 0x73, 0x65, 0x54, 0x4c, 0x53, 0x18, 0x1e, 0x20, - 0x01, 0x28, 0x08, 0x52, 0x0c, 0x48, 0x32, 0x50, 0x69, 0x6e, 0x67, 0x55, 0x73, 0x65, 0x54, 0x4c, - 0x53, 0x12, 0x12, 0x0a, 0x04, 0x47, 0x52, 0x50, 0x43, 0x18, 0x0e, 0x20, 0x01, 0x28, 0x09, 0x52, - 0x04, 0x47, 0x52, 0x50, 0x43, 0x12, 0x1e, 0x0a, 0x0a, 0x47, 0x52, 0x50, 0x43, 0x55, 0x73, 0x65, - 0x54, 0x4c, 0x53, 0x18, 0x0f, 0x20, 0x01, 0x28, 0x08, 0x52, 0x0a, 0x47, 0x52, 0x50, 0x43, 0x55, - 0x73, 0x65, 0x54, 0x4c, 0x53, 0x12, 0x24, 0x0a, 0x0d, 0x54, 0x4c, 0x53, 0x53, 0x65, 0x72, 0x76, - 0x65, 0x72, 0x4e, 0x61, 0x6d, 0x65, 0x18, 0x1b, 0x20, 0x01, 0x28, 0x09, 0x52, 0x0d, 0x54, 0x4c, - 0x53, 0x53, 0x65, 0x72, 0x76, 0x65, 0x72, 0x4e, 0x61, 0x6d, 0x65, 0x12, 0x24, 0x0a, 0x0d, 0x54, - 0x4c, 0x53, 0x53, 0x6b, 0x69, 0x70, 0x56, 0x65, 0x72, 0x69, 0x66, 0x79, 0x18, 0x10, 0x20, 0x01, - 0x28, 0x08, 0x52, 0x0d, 0x54, 0x4c, 0x53, 0x53, 0x6b, 0x69, 0x70, 0x56, 0x65, 0x72, 0x69, 0x66, - 0x79, 0x12, 0x33, 0x0a, 0x07, 0x54, 0x69, 0x6d, 0x65, 0x6f, 0x75, 0x74, 0x18, 0x11, 0x20, 0x01, - 0x28, 0x0b, 0x32, 0x19, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, - 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x44, 0x75, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x07, 0x54, - 0x69, 0x6d, 0x65, 0x6f, 0x75, 0x74, 0x12, 0x2b, 0x0a, 0x03, 0x54, 0x54, 0x4c, 0x18, 0x12, 0x20, - 0x01, 0x28, 0x0b, 0x32, 0x19, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, - 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x44, 0x75, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x03, - 0x54, 0x54, 0x4c, 0x12, 0x32, 0x0a, 0x14, 0x53, 0x75, 0x63, 0x63, 0x65, 0x73, 0x73, 0x42, 0x65, - 0x66, 0x6f, 0x72, 0x65, 0x50, 0x61, 0x73, 0x73, 0x69, 0x6e, 0x67, 0x18, 0x15, 0x20, 0x01, 0x28, - 0x05, 0x52, 0x14, 0x53, 0x75, 0x63, 0x63, 0x65, 0x73, 0x73, 0x42, 0x65, 0x66, 0x6f, 0x72, 0x65, - 0x50, 0x61, 0x73, 0x73, 0x69, 0x6e, 0x67, 0x12, 0x34, 0x0a, 0x15, 0x46, 0x61, 0x69, 0x6c, 0x75, - 0x72, 0x65, 0x73, 0x42, 0x65, 0x66, 0x6f, 0x72, 0x65, 0x57, 0x61, 0x72, 0x6e, 0x69, 0x6e, 0x67, - 0x18, 0x1d, 0x20, 0x01, 0x28, 0x05, 0x52, 0x15, 0x46, 0x61, 0x69, 0x6c, 0x75, 0x72, 0x65, 0x73, - 0x42, 0x65, 0x66, 0x6f, 0x72, 0x65, 0x57, 0x61, 0x72, 0x6e, 0x69, 0x6e, 0x67, 0x12, 0x36, 0x0a, - 0x16, 0x46, 0x61, 0x69, 0x6c, 0x75, 0x72, 0x65, 0x73, 0x42, 0x65, 0x66, 0x6f, 0x72, 0x65, 0x43, - 0x72, 0x69, 
0x74, 0x69, 0x63, 0x61, 0x6c, 0x18, 0x16, 0x20, 0x01, 0x28, 0x05, 0x52, 0x16, 0x46, - 0x61, 0x69, 0x6c, 0x75, 0x72, 0x65, 0x73, 0x42, 0x65, 0x66, 0x6f, 0x72, 0x65, 0x43, 0x72, 0x69, - 0x74, 0x69, 0x63, 0x61, 0x6c, 0x12, 0x1c, 0x0a, 0x09, 0x50, 0x72, 0x6f, 0x78, 0x79, 0x48, 0x54, - 0x54, 0x50, 0x18, 0x17, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x50, 0x72, 0x6f, 0x78, 0x79, 0x48, - 0x54, 0x54, 0x50, 0x12, 0x1c, 0x0a, 0x09, 0x50, 0x72, 0x6f, 0x78, 0x79, 0x47, 0x52, 0x50, 0x43, - 0x18, 0x18, 0x20, 0x01, 0x28, 0x09, 0x52, 0x09, 0x50, 0x72, 0x6f, 0x78, 0x79, 0x47, 0x52, 0x50, - 0x43, 0x12, 0x61, 0x0a, 0x1e, 0x44, 0x65, 0x72, 0x65, 0x67, 0x69, 0x73, 0x74, 0x65, 0x72, 0x43, - 0x72, 0x69, 0x74, 0x69, 0x63, 0x61, 0x6c, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x41, 0x66, - 0x74, 0x65, 0x72, 0x18, 0x13, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x19, 0x2e, 0x67, 0x6f, 0x6f, 0x67, - 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x44, 0x75, 0x72, 0x61, - 0x74, 0x69, 0x6f, 0x6e, 0x52, 0x1e, 0x44, 0x65, 0x72, 0x65, 0x67, 0x69, 0x73, 0x74, 0x65, 0x72, - 0x43, 0x72, 0x69, 0x74, 0x69, 0x63, 0x61, 0x6c, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x41, - 0x66, 0x74, 0x65, 0x72, 0x12, 0x24, 0x0a, 0x0d, 0x4f, 0x75, 0x74, 0x70, 0x75, 0x74, 0x4d, 0x61, - 0x78, 0x53, 0x69, 0x7a, 0x65, 0x18, 0x19, 0x20, 0x01, 0x28, 0x05, 0x52, 0x0d, 0x4f, 0x75, 0x74, - 0x70, 0x75, 0x74, 0x4d, 0x61, 0x78, 0x53, 0x69, 0x7a, 0x65, 0x1a, 0x69, 0x0a, 0x0b, 0x48, 0x65, - 0x61, 0x64, 0x65, 0x72, 0x45, 0x6e, 0x74, 0x72, 0x79, 0x12, 0x10, 0x0a, 0x03, 0x6b, 0x65, 0x79, - 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52, 0x03, 0x6b, 0x65, 0x79, 0x12, 0x44, 0x0a, 0x05, 0x76, - 0x61, 0x6c, 0x75, 0x65, 0x18, 0x02, 0x20, 0x01, 0x28, 0x0b, 0x32, 0x2e, 0x2e, 0x68, 0x61, 0x73, - 0x68, 0x69, 0x63, 0x6f, 0x72, 0x70, 0x2e, 0x63, 0x6f, 0x6e, 0x73, 0x75, 0x6c, 0x2e, 0x69, 0x6e, - 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x2e, 0x73, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x2e, 0x48, - 0x65, 0x61, 0x64, 0x65, 0x72, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x52, 0x05, 0x76, 0x61, 0x6c, 0x75, - 0x65, 0x3a, 0x02, 0x38, 0x01, 0x42, 0x96, 0x02, 0x0a, 0x25, 0x63, 0x6f, 0x6d, 0x2e, 0x68, 0x61, - 0x73, 0x68, 0x69, 0x63, 0x6f, 0x72, 0x70, 0x2e, 0x63, 0x6f, 0x6e, 0x73, 0x75, 0x6c, 0x2e, 0x69, - 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x2e, 0x73, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x42, - 0x10, 0x48, 0x65, 0x61, 0x6c, 0x74, 0x68, 0x63, 0x68, 0x65, 0x63, 0x6b, 0x50, 0x72, 0x6f, 0x74, - 0x6f, 0x50, 0x01, 0x5a, 0x33, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, - 0x68, 0x61, 0x73, 0x68, 0x69, 0x63, 0x6f, 0x72, 0x70, 0x2f, 0x63, 0x6f, 0x6e, 0x73, 0x75, 0x6c, - 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x70, 0x72, 0x69, 0x76, 0x61, 0x74, 0x65, 0x2f, 0x70, - 0x62, 0x73, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0xa2, 0x02, 0x04, 0x48, 0x43, 0x49, 0x53, 0xaa, - 0x02, 0x21, 0x48, 0x61, 0x73, 0x68, 0x69, 0x63, 0x6f, 0x72, 0x70, 0x2e, 0x43, 0x6f, 0x6e, 0x73, - 0x75, 0x6c, 0x2e, 0x49, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x2e, 0x53, 0x65, 0x72, 0x76, - 0x69, 0x63, 0x65, 0xca, 0x02, 0x21, 0x48, 0x61, 0x73, 0x68, 0x69, 0x63, 0x6f, 0x72, 0x70, 0x5c, - 0x43, 0x6f, 0x6e, 0x73, 0x75, 0x6c, 0x5c, 0x49, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x5c, - 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0xe2, 0x02, 0x2d, 0x48, 0x61, 0x73, 0x68, 0x69, 0x63, + 0x63, 0x65, 0x2e, 0x48, 0x65, 0x61, 0x64, 0x65, 0x72, 0x56, 0x61, 0x6c, 0x75, 0x65, 0x52, 0x05, + 0x76, 0x61, 0x6c, 0x75, 0x65, 0x3a, 0x02, 0x38, 0x01, 0x42, 0x96, 0x02, 0x0a, 0x25, 0x63, 0x6f, + 0x6d, 0x2e, 0x68, 0x61, 0x73, 0x68, 
0x69, 0x63, 0x6f, 0x72, 0x70, 0x2e, 0x63, 0x6f, 0x6e, 0x73, + 0x75, 0x6c, 0x2e, 0x69, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x2e, 0x73, 0x65, 0x72, 0x76, + 0x69, 0x63, 0x65, 0x42, 0x10, 0x48, 0x65, 0x61, 0x6c, 0x74, 0x68, 0x63, 0x68, 0x65, 0x63, 0x6b, + 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x50, 0x01, 0x5a, 0x33, 0x67, 0x69, 0x74, 0x68, 0x75, 0x62, 0x2e, + 0x63, 0x6f, 0x6d, 0x2f, 0x68, 0x61, 0x73, 0x68, 0x69, 0x63, 0x6f, 0x72, 0x70, 0x2f, 0x63, 0x6f, + 0x6e, 0x73, 0x75, 0x6c, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2f, 0x70, 0x72, 0x69, 0x76, 0x61, + 0x74, 0x65, 0x2f, 0x70, 0x62, 0x73, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0xa2, 0x02, 0x04, 0x48, + 0x43, 0x49, 0x53, 0xaa, 0x02, 0x21, 0x48, 0x61, 0x73, 0x68, 0x69, 0x63, 0x6f, 0x72, 0x70, 0x2e, + 0x43, 0x6f, 0x6e, 0x73, 0x75, 0x6c, 0x2e, 0x49, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x2e, + 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0xca, 0x02, 0x21, 0x48, 0x61, 0x73, 0x68, 0x69, 0x63, 0x6f, 0x72, 0x70, 0x5c, 0x43, 0x6f, 0x6e, 0x73, 0x75, 0x6c, 0x5c, 0x49, 0x6e, 0x74, 0x65, 0x72, - 0x6e, 0x61, 0x6c, 0x5c, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x5c, 0x47, 0x50, 0x42, 0x4d, - 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0xea, 0x02, 0x24, 0x48, 0x61, 0x73, 0x68, 0x69, 0x63, - 0x6f, 0x72, 0x70, 0x3a, 0x3a, 0x43, 0x6f, 0x6e, 0x73, 0x75, 0x6c, 0x3a, 0x3a, 0x49, 0x6e, 0x74, - 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x3a, 0x3a, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x62, 0x06, - 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33, + 0x6e, 0x61, 0x6c, 0x5c, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0xe2, 0x02, 0x2d, 0x48, 0x61, + 0x73, 0x68, 0x69, 0x63, 0x6f, 0x72, 0x70, 0x5c, 0x43, 0x6f, 0x6e, 0x73, 0x75, 0x6c, 0x5c, 0x49, + 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x5c, 0x53, 0x65, 0x72, 0x76, 0x69, 0x63, 0x65, 0x5c, + 0x47, 0x50, 0x42, 0x4d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74, 0x61, 0xea, 0x02, 0x24, 0x48, 0x61, + 0x73, 0x68, 0x69, 0x63, 0x6f, 0x72, 0x70, 0x3a, 0x3a, 0x43, 0x6f, 0x6e, 0x73, 0x75, 0x6c, 0x3a, + 0x3a, 0x49, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x61, 0x6c, 0x3a, 0x3a, 0x53, 0x65, 0x72, 0x76, 0x69, + 0x63, 0x65, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33, } var ( @@ -1087,7 +1110,7 @@ func file_private_pbservice_healthcheck_proto_rawDescGZIP() []byte { } var file_private_pbservice_healthcheck_proto_msgTypes = make([]protoimpl.MessageInfo, 6) -var file_private_pbservice_healthcheck_proto_goTypes = []interface{}{ +var file_private_pbservice_healthcheck_proto_goTypes = []any{ (*HealthCheck)(nil), // 0: hashicorp.consul.internal.service.HealthCheck (*HeaderValue)(nil), // 1: hashicorp.consul.internal.service.HeaderValue (*HealthCheckDefinition)(nil), // 2: hashicorp.consul.internal.service.HealthCheckDefinition @@ -1127,7 +1150,7 @@ func file_private_pbservice_healthcheck_proto_init() { return } if !protoimpl.UnsafeEnabled { - file_private_pbservice_healthcheck_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} { + file_private_pbservice_healthcheck_proto_msgTypes[0].Exporter = func(v any, i int) any { switch v := v.(*HealthCheck); i { case 0: return &v.state @@ -1139,7 +1162,7 @@ func file_private_pbservice_healthcheck_proto_init() { return nil } } - file_private_pbservice_healthcheck_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} { + file_private_pbservice_healthcheck_proto_msgTypes[1].Exporter = func(v any, i int) any { switch v := v.(*HeaderValue); i { case 0: return &v.state @@ -1151,7 +1174,7 @@ func file_private_pbservice_healthcheck_proto_init() { return nil } } - 
file_private_pbservice_healthcheck_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} { + file_private_pbservice_healthcheck_proto_msgTypes[2].Exporter = func(v any, i int) any { switch v := v.(*HealthCheckDefinition); i { case 0: return &v.state @@ -1163,7 +1186,7 @@ func file_private_pbservice_healthcheck_proto_init() { return nil } } - file_private_pbservice_healthcheck_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} { + file_private_pbservice_healthcheck_proto_msgTypes[3].Exporter = func(v any, i int) any { switch v := v.(*CheckType); i { case 0: return &v.state diff --git a/proto/private/pbservice/healthcheck.proto b/proto/private/pbservice/healthcheck.proto index d75a7fc7c71b..6155f7d18113 100644 --- a/proto/private/pbservice/healthcheck.proto +++ b/proto/private/pbservice/healthcheck.proto @@ -89,6 +89,7 @@ message HealthCheckDefinition { string AliasService = 16; // mog: func-to=structs.DurationFromProto func-from=structs.DurationToProto google.protobuf.Duration TTL = 17; + string SessionName = 26; } // CheckType is used to create either the CheckMonitor or the CheckTTL. @@ -159,4 +160,9 @@ message CheckType { // mog: func-to=int func-from=int32 int32 OutputMaxSize = 25; + + // SessionName is the name of the session associated with this check; the session and the check share the same parent node. + // This association ties the state of the check to the lifecycle of the session. + // For example, if the session is deleted or invalidated, the check's status is set to critical. + string SessionName = 35; } diff --git a/proto/private/pbservice/node.pb.go b/proto/private/pbservice/node.pb.go index 0d02640a7d9e..11c114891d5b 100644 --- a/proto/private/pbservice/node.pb.go +++ b/proto/private/pbservice/node.pb.go @@ -3,7 +3,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT.
// versions: -// protoc-gen-go v1.33.0 +// protoc-gen-go v1.34.2 // protoc (unknown) // source: private/pbservice/node.proto @@ -683,7 +683,7 @@ func file_private_pbservice_node_proto_rawDescGZIP() []byte { } var file_private_pbservice_node_proto_msgTypes = make([]protoimpl.MessageInfo, 8) -var file_private_pbservice_node_proto_goTypes = []interface{}{ +var file_private_pbservice_node_proto_goTypes = []any{ (*IndexedCheckServiceNodes)(nil), // 0: hashicorp.consul.internal.service.IndexedCheckServiceNodes (*CheckServiceNode)(nil), // 1: hashicorp.consul.internal.service.CheckServiceNode (*Node)(nil), // 2: hashicorp.consul.internal.service.Node @@ -734,7 +734,7 @@ func file_private_pbservice_node_proto_init() { file_private_pbservice_healthcheck_proto_init() file_private_pbservice_service_proto_init() if !protoimpl.UnsafeEnabled { - file_private_pbservice_node_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} { + file_private_pbservice_node_proto_msgTypes[0].Exporter = func(v any, i int) any { switch v := v.(*IndexedCheckServiceNodes); i { case 0: return &v.state @@ -746,7 +746,7 @@ func file_private_pbservice_node_proto_init() { return nil } } - file_private_pbservice_node_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} { + file_private_pbservice_node_proto_msgTypes[1].Exporter = func(v any, i int) any { switch v := v.(*CheckServiceNode); i { case 0: return &v.state @@ -758,7 +758,7 @@ func file_private_pbservice_node_proto_init() { return nil } } - file_private_pbservice_node_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} { + file_private_pbservice_node_proto_msgTypes[2].Exporter = func(v any, i int) any { switch v := v.(*Node); i { case 0: return &v.state @@ -770,7 +770,7 @@ func file_private_pbservice_node_proto_init() { return nil } } - file_private_pbservice_node_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} { + file_private_pbservice_node_proto_msgTypes[3].Exporter = func(v any, i int) any { switch v := v.(*NodeService); i { case 0: return &v.state diff --git a/proto/private/pbservice/service.pb.go b/proto/private/pbservice/service.pb.go index b913357adfbc..e1cf594b65df 100644 --- a/proto/private/pbservice/service.pb.go +++ b/proto/private/pbservice/service.pb.go @@ -3,7 +3,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.33.0 +// protoc-gen-go v1.34.2 // protoc (unknown) // source: private/pbservice/service.proto @@ -1516,7 +1516,7 @@ func file_private_pbservice_service_proto_rawDescGZIP() []byte { } var file_private_pbservice_service_proto_msgTypes = make([]protoimpl.MessageInfo, 14) -var file_private_pbservice_service_proto_goTypes = []interface{}{ +var file_private_pbservice_service_proto_goTypes = []any{ (*ConnectProxyConfig)(nil), // 0: hashicorp.consul.internal.service.ConnectProxyConfig (*Upstream)(nil), // 1: hashicorp.consul.internal.service.Upstream (*ServiceConnect)(nil), // 2: hashicorp.consul.internal.service.ServiceConnect @@ -1574,7 +1574,7 @@ func file_private_pbservice_service_proto_init() { } file_private_pbservice_healthcheck_proto_init() if !protoimpl.UnsafeEnabled { - file_private_pbservice_service_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} { + file_private_pbservice_service_proto_msgTypes[0].Exporter = func(v any, i int) any { switch v := v.(*ConnectProxyConfig); i { case 0: return &v.state @@ -1586,7 +1586,7 @@ func file_private_pbservice_service_proto_init() { return nil } } - file_private_pbservice_service_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} { + file_private_pbservice_service_proto_msgTypes[1].Exporter = func(v any, i int) any { switch v := v.(*Upstream); i { case 0: return &v.state @@ -1598,7 +1598,7 @@ func file_private_pbservice_service_proto_init() { return nil } } - file_private_pbservice_service_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} { + file_private_pbservice_service_proto_msgTypes[2].Exporter = func(v any, i int) any { switch v := v.(*ServiceConnect); i { case 0: return &v.state @@ -1610,7 +1610,7 @@ func file_private_pbservice_service_proto_init() { return nil } } - file_private_pbservice_service_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} { + file_private_pbservice_service_proto_msgTypes[3].Exporter = func(v any, i int) any { switch v := v.(*PeeringServiceMeta); i { case 0: return &v.state @@ -1622,7 +1622,7 @@ func file_private_pbservice_service_proto_init() { return nil } } - file_private_pbservice_service_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} { + file_private_pbservice_service_proto_msgTypes[4].Exporter = func(v any, i int) any { switch v := v.(*ExposeConfig); i { case 0: return &v.state @@ -1634,7 +1634,7 @@ func file_private_pbservice_service_proto_init() { return nil } } - file_private_pbservice_service_proto_msgTypes[5].Exporter = func(v interface{}, i int) interface{} { + file_private_pbservice_service_proto_msgTypes[5].Exporter = func(v any, i int) any { switch v := v.(*ExposePath); i { case 0: return &v.state @@ -1646,7 +1646,7 @@ func file_private_pbservice_service_proto_init() { return nil } } - file_private_pbservice_service_proto_msgTypes[6].Exporter = func(v interface{}, i int) interface{} { + file_private_pbservice_service_proto_msgTypes[6].Exporter = func(v any, i int) any { switch v := v.(*MeshGatewayConfig); i { case 0: return &v.state @@ -1658,7 +1658,7 @@ func file_private_pbservice_service_proto_init() { return nil } } - file_private_pbservice_service_proto_msgTypes[7].Exporter = func(v interface{}, i int) interface{} { + file_private_pbservice_service_proto_msgTypes[7].Exporter = func(v any, i int) any { switch v := v.(*TransparentProxyConfig); i { case 0: return &v.state @@ -1670,7 +1670,7 @@ func file_private_pbservice_service_proto_init() { return nil } } - 
file_private_pbservice_service_proto_msgTypes[8].Exporter = func(v interface{}, i int) interface{} { + file_private_pbservice_service_proto_msgTypes[8].Exporter = func(v any, i int) any { switch v := v.(*AccessLogsConfig); i { case 0: return &v.state @@ -1682,7 +1682,7 @@ func file_private_pbservice_service_proto_init() { return nil } } - file_private_pbservice_service_proto_msgTypes[9].Exporter = func(v interface{}, i int) interface{} { + file_private_pbservice_service_proto_msgTypes[9].Exporter = func(v any, i int) any { switch v := v.(*ServiceDefinition); i { case 0: return &v.state @@ -1694,7 +1694,7 @@ func file_private_pbservice_service_proto_init() { return nil } } - file_private_pbservice_service_proto_msgTypes[10].Exporter = func(v interface{}, i int) interface{} { + file_private_pbservice_service_proto_msgTypes[10].Exporter = func(v any, i int) any { switch v := v.(*ServiceAddress); i { case 0: return &v.state @@ -1706,7 +1706,7 @@ func file_private_pbservice_service_proto_init() { return nil } } - file_private_pbservice_service_proto_msgTypes[11].Exporter = func(v interface{}, i int) interface{} { + file_private_pbservice_service_proto_msgTypes[11].Exporter = func(v any, i int) any { switch v := v.(*Weights); i { case 0: return &v.state diff --git a/proto/private/pbstatus/status.pb.go b/proto/private/pbstatus/status.pb.go index f381fc385328..aae866f0268d 100644 --- a/proto/private/pbstatus/status.pb.go +++ b/proto/private/pbstatus/status.pb.go @@ -14,7 +14,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.33.0 +// protoc-gen-go v1.34.2 // protoc (unknown) // source: private/pbstatus/status.proto @@ -159,7 +159,7 @@ func file_private_pbstatus_status_proto_rawDescGZIP() []byte { } var file_private_pbstatus_status_proto_msgTypes = make([]protoimpl.MessageInfo, 1) -var file_private_pbstatus_status_proto_goTypes = []interface{}{ +var file_private_pbstatus_status_proto_goTypes = []any{ (*Status)(nil), // 0: hashicorp.consul.internal.status.Status (*anypb.Any)(nil), // 1: google.protobuf.Any } @@ -178,7 +178,7 @@ func file_private_pbstatus_status_proto_init() { return } if !protoimpl.UnsafeEnabled { - file_private_pbstatus_status_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} { + file_private_pbstatus_status_proto_msgTypes[0].Exporter = func(v any, i int) any { switch v := v.(*Status); i { case 0: return &v.state diff --git a/proto/private/pbstorage/raft.pb.go b/proto/private/pbstorage/raft.pb.go index 7871faef0f46..5b02695e4e07 100644 --- a/proto/private/pbstorage/raft.pb.go +++ b/proto/private/pbstorage/raft.pb.go @@ -3,7 +3,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. 
// versions: -// protoc-gen-go v1.33.0 +// protoc-gen-go v1.34.2 // protoc (unknown) // source: private/pbstorage/raft.proto @@ -829,7 +829,7 @@ func file_private_pbstorage_raft_proto_rawDescGZIP() []byte { var file_private_pbstorage_raft_proto_enumTypes = make([]protoimpl.EnumInfo, 1) var file_private_pbstorage_raft_proto_msgTypes = make([]protoimpl.MessageInfo, 10) -var file_private_pbstorage_raft_proto_goTypes = []interface{}{ +var file_private_pbstorage_raft_proto_goTypes = []any{ (LogType)(0), // 0: hashicorp.consul.internal.storage.raft.LogType (*Log)(nil), // 1: hashicorp.consul.internal.storage.raft.Log (*LogResponse)(nil), // 2: hashicorp.consul.internal.storage.raft.LogResponse @@ -884,7 +884,7 @@ func file_private_pbstorage_raft_proto_init() { return } if !protoimpl.UnsafeEnabled { - file_private_pbstorage_raft_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} { + file_private_pbstorage_raft_proto_msgTypes[0].Exporter = func(v any, i int) any { switch v := v.(*Log); i { case 0: return &v.state @@ -896,7 +896,7 @@ func file_private_pbstorage_raft_proto_init() { return nil } } - file_private_pbstorage_raft_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} { + file_private_pbstorage_raft_proto_msgTypes[1].Exporter = func(v any, i int) any { switch v := v.(*LogResponse); i { case 0: return &v.state @@ -908,7 +908,7 @@ func file_private_pbstorage_raft_proto_init() { return nil } } - file_private_pbstorage_raft_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} { + file_private_pbstorage_raft_proto_msgTypes[2].Exporter = func(v any, i int) any { switch v := v.(*WriteRequest); i { case 0: return &v.state @@ -920,7 +920,7 @@ func file_private_pbstorage_raft_proto_init() { return nil } } - file_private_pbstorage_raft_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} { + file_private_pbstorage_raft_proto_msgTypes[3].Exporter = func(v any, i int) any { switch v := v.(*WriteResponse); i { case 0: return &v.state @@ -932,7 +932,7 @@ func file_private_pbstorage_raft_proto_init() { return nil } } - file_private_pbstorage_raft_proto_msgTypes[4].Exporter = func(v interface{}, i int) interface{} { + file_private_pbstorage_raft_proto_msgTypes[4].Exporter = func(v any, i int) any { switch v := v.(*DeleteRequest); i { case 0: return &v.state @@ -944,7 +944,7 @@ func file_private_pbstorage_raft_proto_init() { return nil } } - file_private_pbstorage_raft_proto_msgTypes[5].Exporter = func(v interface{}, i int) interface{} { + file_private_pbstorage_raft_proto_msgTypes[5].Exporter = func(v any, i int) any { switch v := v.(*ReadRequest); i { case 0: return &v.state @@ -956,7 +956,7 @@ func file_private_pbstorage_raft_proto_init() { return nil } } - file_private_pbstorage_raft_proto_msgTypes[6].Exporter = func(v interface{}, i int) interface{} { + file_private_pbstorage_raft_proto_msgTypes[6].Exporter = func(v any, i int) any { switch v := v.(*ReadResponse); i { case 0: return &v.state @@ -968,7 +968,7 @@ func file_private_pbstorage_raft_proto_init() { return nil } } - file_private_pbstorage_raft_proto_msgTypes[7].Exporter = func(v interface{}, i int) interface{} { + file_private_pbstorage_raft_proto_msgTypes[7].Exporter = func(v any, i int) any { switch v := v.(*ListRequest); i { case 0: return &v.state @@ -980,7 +980,7 @@ func file_private_pbstorage_raft_proto_init() { return nil } } - file_private_pbstorage_raft_proto_msgTypes[8].Exporter = func(v interface{}, i int) interface{} { + 
file_private_pbstorage_raft_proto_msgTypes[8].Exporter = func(v any, i int) any { switch v := v.(*ListResponse); i { case 0: return &v.state @@ -992,7 +992,7 @@ func file_private_pbstorage_raft_proto_init() { return nil } } - file_private_pbstorage_raft_proto_msgTypes[9].Exporter = func(v interface{}, i int) interface{} { + file_private_pbstorage_raft_proto_msgTypes[9].Exporter = func(v any, i int) any { switch v := v.(*GroupVersionMismatchErrorDetails); i { case 0: return &v.state @@ -1005,11 +1005,11 @@ func file_private_pbstorage_raft_proto_init() { } } } - file_private_pbstorage_raft_proto_msgTypes[0].OneofWrappers = []interface{}{ + file_private_pbstorage_raft_proto_msgTypes[0].OneofWrappers = []any{ (*Log_Write)(nil), (*Log_Delete)(nil), } - file_private_pbstorage_raft_proto_msgTypes[1].OneofWrappers = []interface{}{ + file_private_pbstorage_raft_proto_msgTypes[1].OneofWrappers = []any{ (*LogResponse_Write)(nil), (*LogResponse_Delete)(nil), } diff --git a/proto/private/pbsubscribe/subscribe.pb.go b/proto/private/pbsubscribe/subscribe.pb.go index 0aee8eb364fb..063fe992d62a 100644 --- a/proto/private/pbsubscribe/subscribe.pb.go +++ b/proto/private/pbsubscribe/subscribe.pb.go @@ -6,7 +6,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: -// protoc-gen-go v1.33.0 +// protoc-gen-go v1.34.2 // protoc (unknown) // source: private/pbsubscribe/subscribe.proto @@ -1068,7 +1068,7 @@ func file_private_pbsubscribe_subscribe_proto_rawDescGZIP() []byte { var file_private_pbsubscribe_subscribe_proto_enumTypes = make([]protoimpl.EnumInfo, 3) var file_private_pbsubscribe_subscribe_proto_msgTypes = make([]protoimpl.MessageInfo, 7) -var file_private_pbsubscribe_subscribe_proto_goTypes = []interface{}{ +var file_private_pbsubscribe_subscribe_proto_goTypes = []any{ (Topic)(0), // 0: subscribe.Topic (CatalogOp)(0), // 1: subscribe.CatalogOp (ConfigEntryUpdate_UpdateOp)(0), // 2: subscribe.ConfigEntryUpdate.UpdateOp @@ -1112,7 +1112,7 @@ func file_private_pbsubscribe_subscribe_proto_init() { return } if !protoimpl.UnsafeEnabled { - file_private_pbsubscribe_subscribe_proto_msgTypes[0].Exporter = func(v interface{}, i int) interface{} { + file_private_pbsubscribe_subscribe_proto_msgTypes[0].Exporter = func(v any, i int) any { switch v := v.(*NamedSubject); i { case 0: return &v.state @@ -1124,7 +1124,7 @@ func file_private_pbsubscribe_subscribe_proto_init() { return nil } } - file_private_pbsubscribe_subscribe_proto_msgTypes[1].Exporter = func(v interface{}, i int) interface{} { + file_private_pbsubscribe_subscribe_proto_msgTypes[1].Exporter = func(v any, i int) any { switch v := v.(*SubscribeRequest); i { case 0: return &v.state @@ -1136,7 +1136,7 @@ func file_private_pbsubscribe_subscribe_proto_init() { return nil } } - file_private_pbsubscribe_subscribe_proto_msgTypes[2].Exporter = func(v interface{}, i int) interface{} { + file_private_pbsubscribe_subscribe_proto_msgTypes[2].Exporter = func(v any, i int) any { switch v := v.(*Event); i { case 0: return &v.state @@ -1148,7 +1148,7 @@ func file_private_pbsubscribe_subscribe_proto_init() { return nil } } - file_private_pbsubscribe_subscribe_proto_msgTypes[3].Exporter = func(v interface{}, i int) interface{} { + file_private_pbsubscribe_subscribe_proto_msgTypes[3].Exporter = func(v any, i int) any { switch v := v.(*EventBatch); i { case 0: return &v.state @@ -1160,7 +1160,7 @@ func file_private_pbsubscribe_subscribe_proto_init() { return nil } } - file_private_pbsubscribe_subscribe_proto_msgTypes[4].Exporter = func(v interface{}, i int) 
interface{} { + file_private_pbsubscribe_subscribe_proto_msgTypes[4].Exporter = func(v any, i int) any { switch v := v.(*ServiceHealthUpdate); i { case 0: return &v.state @@ -1172,7 +1172,7 @@ func file_private_pbsubscribe_subscribe_proto_init() { return nil } } - file_private_pbsubscribe_subscribe_proto_msgTypes[5].Exporter = func(v interface{}, i int) interface{} { + file_private_pbsubscribe_subscribe_proto_msgTypes[5].Exporter = func(v any, i int) any { switch v := v.(*ConfigEntryUpdate); i { case 0: return &v.state @@ -1184,7 +1184,7 @@ func file_private_pbsubscribe_subscribe_proto_init() { return nil } } - file_private_pbsubscribe_subscribe_proto_msgTypes[6].Exporter = func(v interface{}, i int) interface{} { + file_private_pbsubscribe_subscribe_proto_msgTypes[6].Exporter = func(v any, i int) any { switch v := v.(*ServiceListUpdate); i { case 0: return &v.state @@ -1197,11 +1197,11 @@ func file_private_pbsubscribe_subscribe_proto_init() { } } } - file_private_pbsubscribe_subscribe_proto_msgTypes[1].OneofWrappers = []interface{}{ + file_private_pbsubscribe_subscribe_proto_msgTypes[1].OneofWrappers = []any{ (*SubscribeRequest_WildcardSubject)(nil), (*SubscribeRequest_NamedSubject)(nil), } - file_private_pbsubscribe_subscribe_proto_msgTypes[2].OneofWrappers = []interface{}{ + file_private_pbsubscribe_subscribe_proto_msgTypes[2].OneofWrappers = []any{ (*Event_EndOfSnapshot)(nil), (*Event_NewSnapshotToFollow)(nil), (*Event_EventBatch)(nil), diff --git a/sdk/go.mod b/sdk/go.mod index f8c15106c3bd..6ae9f37908a9 100644 --- a/sdk/go.mod +++ b/sdk/go.mod @@ -1,6 +1,6 @@ module github.com/hashicorp/consul/sdk -go 1.22.12 +go 1.23.12 require ( github.com/hashicorp/go-cleanhttp v0.5.2 @@ -9,7 +9,7 @@ require ( github.com/hashicorp/go-version v1.2.1 github.com/pkg/errors v0.9.1 github.com/stretchr/testify v1.8.4 - golang.org/x/sys v0.29.0 + golang.org/x/sys v0.35.0 ) require ( diff --git a/sdk/go.sum b/sdk/go.sum index 4fe7f1925f0e..e391f7d381f1 100644 --- a/sdk/go.sum +++ b/sdk/go.sum @@ -50,8 +50,8 @@ golang.org/x/sys v0.0.0-20210927094055-39ccf1dd6fa6/go.mod h1:oPkhp1MJrh7nUepCBc golang.org/x/sys v0.0.0-20220503163025-988cb79eb6c6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.29.0 h1:TPYlXGxvx1MGTn2GiZDhnjPA9wZzZeGKHHmKhHYvgaU= -golang.org/x/sys v0.29.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= +golang.org/x/sys v0.35.0 h1:vz1N37gP5bs89s7He8XuIYXpyY0+QlsKmzipCbUtyxI= +golang.org/x/sys v0.35.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk= diff --git a/test-integ/go.mod b/test-integ/go.mod index bc5a28e4a538..8424c8eecacf 100644 --- a/test-integ/go.mod +++ b/test-integ/go.mod @@ -1,6 +1,8 @@ module github.com/hashicorp/consul/test-integ -go 1.22.12 +go 1.23.12 + +toolchain go1.24.3 require ( github.com/google/go-cmp v0.6.0 @@ -13,9 +15,9 @@ require ( github.com/itchyny/gojq v0.12.13 github.com/mitchellh/copystructure v1.2.0 github.com/rboyer/blankspace v0.2.1 - github.com/stretchr/testify v1.8.4 - golang.org/x/net v0.34.0 - google.golang.org/grpc 
v1.58.3 + github.com/stretchr/testify v1.9.0 + golang.org/x/net v0.43.0 + google.golang.org/grpc v1.65.0 ) require ( @@ -33,7 +35,7 @@ require ( github.com/avast/retry-go v3.0.0+incompatible // indirect github.com/beorn7/perks v1.0.1 // indirect github.com/cenkalti/backoff/v4 v4.2.1 // indirect - github.com/cespare/xxhash/v2 v2.2.0 // indirect + github.com/cespare/xxhash/v2 v2.3.0 // indirect github.com/containerd/containerd v1.7.3 // indirect github.com/cpuguy83/dockercfg v0.3.1 // indirect github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect @@ -42,13 +44,13 @@ require ( github.com/docker/go-connections v0.4.0 // indirect github.com/docker/go-units v0.5.0 // indirect github.com/evanphx/json-patch v4.12.0+incompatible // indirect - github.com/fatih/color v1.16.0 // indirect - github.com/go-jose/go-jose/v3 v3.0.3 // indirect + github.com/fatih/color v1.18.0 // indirect + github.com/go-jose/go-jose/v3 v3.0.4 // indirect github.com/go-test/deep v1.1.0 // indirect github.com/gogo/protobuf v1.3.2 // indirect github.com/golang/protobuf v1.5.4 // indirect github.com/google/btree v1.0.1 // indirect - github.com/google/uuid v1.4.0 // indirect + github.com/google/uuid v1.6.0 // indirect github.com/hashicorp/consul v1.16.1 // indirect github.com/hashicorp/consul-server-connection-manager v0.1.4 // indirect github.com/hashicorp/errwrap v1.1.0 // indirect @@ -97,15 +99,15 @@ require ( github.com/teris-io/shortid v0.0.0-20220617161101-71ec9f2aa569 // indirect github.com/testcontainers/testcontainers-go v0.22.0 // indirect github.com/zclconf/go-cty v1.12.1 // indirect - golang.org/x/crypto v0.32.0 // indirect - golang.org/x/exp v0.0.0-20250106191152-7588d65b2ba8 // indirect - golang.org/x/mod v0.22.0 // indirect - golang.org/x/sync v0.10.0 // indirect - golang.org/x/sys v0.29.0 // indirect - golang.org/x/text v0.21.0 // indirect - golang.org/x/tools v0.29.0 // indirect - google.golang.org/genproto/googleapis/rpc v0.0.0-20230803162519-f966b187b2e5 // indirect - google.golang.org/protobuf v1.33.0 // indirect + golang.org/x/crypto v0.41.0 // indirect + golang.org/x/exp v0.0.0-20250808145144-a408d31f581a // indirect + golang.org/x/mod v0.27.0 // indirect + golang.org/x/sync v0.16.0 // indirect + golang.org/x/sys v0.35.0 // indirect + golang.org/x/text v0.28.0 // indirect + golang.org/x/tools v0.36.0 // indirect + google.golang.org/genproto/googleapis/rpc v0.0.0-20240823204242-4ba0660f739c // indirect + google.golang.org/protobuf v1.34.2 // indirect gopkg.in/yaml.v3 v3.0.1 // indirect ) diff --git a/test-integ/go.sum b/test-integ/go.sum index cfcc946857b6..e5f73d3c2595 100644 --- a/test-integ/go.sum +++ b/test-integ/go.sum @@ -45,8 +45,8 @@ github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kB github.com/cenkalti/backoff/v4 v4.2.1 h1:y4OZtCnogmCPw98Zjyt5a6+QwPLGkiQsYW5oUqylYbM= github.com/cenkalti/backoff/v4 v4.2.1/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE= github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= -github.com/cespare/xxhash/v2 v2.2.0 h1:DC2CZ1Ep5Y4k3ZQ899DldepgrayRUGE6BBZ/cd9Cj44= -github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= +github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs= +github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= github.com/circonus-labs/circonus-gometrics v2.3.1+incompatible/go.mod h1:nmEj6Dob7S7YxXgwXpfOuvO54S+tGdZdw9fuRZt25Ag= 
github.com/circonus-labs/circonusllhist v0.1.3/go.mod h1:kMXHVDlOchFAehlya5ePtbp5jckzBHf4XRpQvBOLI+I= github.com/containerd/containerd v1.7.3 h1:cKwYKkP1eTj54bP3wCdXXBymmKRQMrWjkLSWZZJDa8o= @@ -74,10 +74,10 @@ github.com/evanphx/json-patch v4.12.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQL github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4= github.com/fatih/color v1.9.0/go.mod h1:eQcE1qtQxscV5RaZvpXrrb8Drkc3/DdQ+uUYCNjL+zU= github.com/fatih/color v1.13.0/go.mod h1:kLAiJbzzSOZDVNGyDpeOxJ47H46qBXwg5ILebYFFOfk= -github.com/fatih/color v1.16.0 h1:zmkK9Ngbjj+K0yRhTVONQh1p/HknKYSlNT+vZCzyokM= -github.com/fatih/color v1.16.0/go.mod h1:fL2Sau1YI5c0pdGEVCbKQbLXB6edEj1ZgiY4NijnWvE= -github.com/go-jose/go-jose/v3 v3.0.3 h1:fFKWeig/irsp7XD2zBxvnmA/XaRWp5V3CBsZXJF7G7k= -github.com/go-jose/go-jose/v3 v3.0.3/go.mod h1:5b+7YgP7ZICgJDBdfjZaIt+H/9L9T/YQrVfLAMboGkQ= +github.com/fatih/color v1.18.0 h1:S8gINlzdQ840/4pfAwic/ZE0djQEH3wM94VfqLTZcOM= +github.com/fatih/color v1.18.0/go.mod h1:4FelSpRwEGDpQ12mAdzqdOukCy4u8WUtOY6lkT/6HfU= +github.com/go-jose/go-jose/v3 v3.0.4 h1:Wp5HA7bLQcKnf6YYao/4kpRpVMp/yf6+pJKV8WFSaNY= +github.com/go-jose/go-jose/v3 v3.0.4/go.mod h1:5b+7YgP7ZICgJDBdfjZaIt+H/9L9T/YQrVfLAMboGkQ= github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE= @@ -102,8 +102,8 @@ github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeN github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI= github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= -github.com/google/uuid v1.4.0 h1:MtMxsa51/r9yyhkyLsVeVt0B+BGQZzpQiTQ4eHZ8bc4= -github.com/google/uuid v1.4.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= +github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0= +github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= github.com/hashicorp/consul-server-connection-manager v0.1.4 h1:wrcSRV6WGXFBNpNbN6XsdoGgBOyso7ZbN5VaWPEX1jY= github.com/hashicorp/consul-server-connection-manager v0.1.4/go.mod h1:LMqHkALoLP0HUQKOG21xXYr0YPUayIQIHNTlmxG100E= github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4= @@ -284,15 +284,15 @@ github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ= github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= -github.com/stretchr/objx v0.5.0 h1:1zr/of2m5FGMsad5YfcqgdqdWrIhu+EBEJRhR1U7z/c= -github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo= +github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY= +github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA= github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs= github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= github.com/stretchr/testify v1.7.0/go.mod 
h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= github.com/stretchr/testify v1.7.2/go.mod h1:R6va5+xMeoiuVRoj+gSkQ7d3FALtqAAGI1FQKckRals= -github.com/stretchr/testify v1.8.4 h1:CcVxjf3Q8PM0mHUKJCdn+eZZtm5yQwehR5yeSVQQcUk= -github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo= +github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg= +github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY= github.com/teris-io/shortid v0.0.0-20220617161101-71ec9f2aa569 h1:xzABM9let0HLLqFypcxvLmlvEciCHL7+Lv+4vwZqecI= github.com/teris-io/shortid v0.0.0-20220617161101-71ec9f2aa569/go.mod h1:2Ly+NIftZN4de9zRmENdYbvPQeaVIYKWpLFStLFEBgI= github.com/testcontainers/testcontainers-go v0.22.0 h1:hOK4NzNu82VZcKEB1aP9LO1xYssVFMvlfeuDW9JMmV0= @@ -311,17 +311,17 @@ golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8U golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU= -golang.org/x/crypto v0.32.0 h1:euUpcYgM8WcP71gNpTqQCn6rC2t6ULUPiOzfWaXVVfc= -golang.org/x/crypto v0.32.0/go.mod h1:ZnnJkOaASj8g0AjIduWNlq2NRxL0PlBrbKVyZ6V/Ugc= -golang.org/x/exp v0.0.0-20250106191152-7588d65b2ba8 h1:yqrTHse8TCMW1M1ZCP+VAR/l0kKxwaAIqN/il7x4voA= -golang.org/x/exp v0.0.0-20250106191152-7588d65b2ba8/go.mod h1:tujkw807nyEEAamNbDrEGzRav+ilXA7PCRAd6xsmwiU= +golang.org/x/crypto v0.41.0 h1:WKYxWedPGCTVVl5+WHSSrOBT0O8lx32+zxmHxijgXp4= +golang.org/x/crypto v0.41.0/go.mod h1:pO5AFd7FA68rFak7rOAGVuygIISepHftHnr8dr6+sUc= +golang.org/x/exp v0.0.0-20250808145144-a408d31f581a h1:Y+7uR/b1Mw2iSXZ3G//1haIiSElDQZ8KWh0h+sZPG90= +golang.org/x/exp v0.0.0-20250808145144-a408d31f581a/go.mod h1:rT6SFzZ7oxADUDx58pcaKFTcZ+inxAa9fTrYx/uVYwg= golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4= golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs= -golang.org/x/mod v0.22.0 h1:D4nJWe9zXqHOmWqj4VMOJhvzj7bEZg4wEYa759z1pH4= -golang.org/x/mod v0.22.0/go.mod h1:6SkKJ3Xj0I0BrPOZoBy3bdMptDDU9oJrpohJ3eWZ1fY= +golang.org/x/mod v0.27.0 h1:kb+q2PyFnEADO2IEF935ehFUXlWiNjJWtRNgBLSfbxQ= +golang.org/x/mod v0.27.0/go.mod h1:rWI627Fq0DEoudcK+MBkNkCe0EetEaDSwJJkCcjpazc= golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= @@ -337,8 +337,8 @@ golang.org/x/net v0.0.0-20210726213435-c6fcb2dbf985/go.mod h1:9nx3DQGgdP8bBQD5qx golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c= golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs= golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg= -golang.org/x/net v0.34.0 h1:Mb7Mrk043xzHgnRM88suvJFwzVrRfHEHJEl5/71CKw0= -golang.org/x/net v0.34.0/go.mod h1:di0qlW3YNM5oh6GqDGQr92MyTozJPmybPK4Ev/Gm31k= 
+golang.org/x/net v0.43.0 h1:lat02VYK2j4aLzMzecihNvTlJNQUq316m2Mr9rnM6YE= +golang.org/x/net v0.43.0/go.mod h1:vhO1fvI4dGsIjh73sWfUVjj3N7CA9WkKJNQm2svM6Jg= golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= @@ -347,8 +347,8 @@ golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJ golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= -golang.org/x/sync v0.10.0 h1:3NQrjDixjgGwUOCaF8w2+VYHv0Ve/vGYSbdkTa98gmQ= -golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk= +golang.org/x/sync v0.16.0 h1:ycBJEhp9p4vXvUZNszeOq0kGTPghopOL8q0fq3vstxw= +golang.org/x/sync v0.16.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA= golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= @@ -382,8 +382,8 @@ golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= -golang.org/x/sys v0.29.0 h1:TPYlXGxvx1MGTn2GiZDhnjPA9wZzZeGKHHmKhHYvgaU= -golang.org/x/sys v0.29.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= +golang.org/x/sys v0.35.0 h1:vz1N37gP5bs89s7He8XuIYXpyY0+QlsKmzipCbUtyxI= +golang.org/x/sys v0.35.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k= @@ -397,10 +397,10 @@ golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ= golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8= golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8= golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU= -golang.org/x/text v0.21.0 h1:zyQAAkrwaneQ066sspRyJaG9VNi/YJ1NfzcGB3hZ/qo= -golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ= -golang.org/x/time v0.9.0 h1:EsRrnYcQiGH+5FfbgvV4AP7qEZstoyrHB0DzarOQ4ZY= -golang.org/x/time v0.9.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM= +golang.org/x/text v0.28.0 h1:rhazDwis8INMIwQ4tpjLDzUhx6RlXqZNPEM0huQojng= +golang.org/x/text v0.28.0/go.mod h1:U8nCwOR8jO/marOQ0QbDiOngZVEBB7MAiitBuMjXiNU= +golang.org/x/time v0.12.0 h1:ScB/8o8olJvc+CQPWrK3fPZNfh7qgwCrY0zJmoEQLSE= +golang.org/x/time v0.12.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg= golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= golang.org/x/tools 
v0.0.0-20190424220101-1e8e1cfdf96b/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= golang.org/x/tools v0.0.0-20190907020128-2ca718005c18/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= @@ -410,18 +410,18 @@ golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4f golang.org/x/tools v0.1.6-0.20210726203631-07bc1bf47fb2/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk= golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc= golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU= -golang.org/x/tools v0.29.0 h1:Xx0h3TtM9rzQpQuR4dKLrdglAmCEN5Oi+P74JdhdzXE= -golang.org/x/tools v0.29.0/go.mod h1:KMQVMRsVxU6nHCFXrBPhDB8XncLNLM0lIy/F14RP588= +golang.org/x/tools v0.36.0 h1:kWS0uv/zsvHEle1LbV5LE8QujrxB3wfQyxHfhOk0Qkg= +golang.org/x/tools v0.36.0/go.mod h1:WBDiHKJK8YgLHlcQPYQzNCkUxUypCaa5ZegCVutKm+s= golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= -google.golang.org/genproto/googleapis/rpc v0.0.0-20230803162519-f966b187b2e5 h1:eSaPbMR4T7WfH9FvABk36NBMacoTUKdWCvV0dx+KfOg= -google.golang.org/genproto/googleapis/rpc v0.0.0-20230803162519-f966b187b2e5/go.mod h1:zBEcrKX2ZOcEkHWxBPAIvYUWOKKMIhYcmNiUIu2ji3I= -google.golang.org/grpc v1.58.3 h1:BjnpXut1btbtgN/6sp+brB2Kbm2LjNXnidYujAVbSoQ= -google.golang.org/grpc v1.58.3/go.mod h1:tgX3ZQDlNJGU96V6yHh1T/JeoBQ2TXdr43YbYSsCJk0= -google.golang.org/protobuf v1.33.0 h1:uNO2rsAINq/JlFpSdYEKIZ0uKD/R9cpdv0T+yoGwGmI= -google.golang.org/protobuf v1.33.0/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos= +google.golang.org/genproto/googleapis/rpc v0.0.0-20240823204242-4ba0660f739c h1:Kqjm4WpoWvwhMPcrAczoTyMySQmYa9Wy2iL6Con4zn8= +google.golang.org/genproto/googleapis/rpc v0.0.0-20240823204242-4ba0660f739c/go.mod h1:UqMtugtsSgubUsoxbuAoiCXvqvErP7Gf0so0mK9tHxU= +google.golang.org/grpc v1.65.0 h1:bs/cUb4lp1G5iImFFd3u5ixQzweKizoZJAwBNLR42lc= +google.golang.org/grpc v1.65.0/go.mod h1:WgYC2ypjlB0EiQi6wdKixMqukr6lBc0Vo+oOgjrM5ZQ= +google.golang.org/protobuf v1.34.2 h1:6xV6lTsCfpGD21XK49h7MhtcApnLqkfYgPcdHftf6hg= +google.golang.org/protobuf v1.34.2/go.mod h1:qYOHts0dSfpeUzUFpOMr/WGzszTmLH+DiWniOlNbLDw= gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= diff --git a/test-integ/peering_commontopo/ac1_basic_test.go b/test-integ/peering_commontopo/ac1_basic_test.go index c09bd3d53772..0af6b9b387bd 100644 --- a/test-integ/peering_commontopo/ac1_basic_test.go +++ b/test-integ/peering_commontopo/ac1_basic_test.go @@ -140,12 +140,12 @@ func (s *ac1BasicSuite) setup(t *testing.T, ct *commonTopo) { Partition: ConfigEntryPartition(httpServerSID.Partition), Sources: []*api.SourceIntention{ { - Name: tcpClient.ID.Name, + Name: tcpClient.Workload.ID.Name, Peer: cluPeerName, Action: api.IntentionActionAllow, }, { - Name: httpClient.ID.Name, + Name: httpClient.Workload.ID.Name, Peer: cluPeerName, Action: api.IntentionActionAllow, }, @@ -171,12 +171,12 @@ func 
(s *ac1BasicSuite) setup(t *testing.T, ct *commonTopo) { Partition: ConfigEntryPartition(tcpServerSID.Partition), Sources: []*api.SourceIntention{ { - Name: tcpClient.ID.Name, + Name: tcpClient.Workload.ID.Name, Peer: cluPeerName, Action: api.IntentionActionAllow, }, { - Name: httpClient.ID.Name, + Name: httpClient.Workload.ID.Name, Peer: cluPeerName, Action: api.IntentionActionAllow, }, @@ -187,9 +187,9 @@ func (s *ac1BasicSuite) setup(t *testing.T, ct *commonTopo) { httpServerNode := ct.AddServiceNode(peerClu, httpServer) tcpServerNode := ct.AddServiceNode(peerClu, tcpServer) - s.sidClientHTTP = httpClient.ID + s.sidClientHTTP = httpClient.Workload.ID s.nodeClientHTTP = httpClientNode.ID() - s.sidClientTCP = tcpClient.ID + s.sidClientTCP = tcpClient.Workload.ID s.nodeClientTCP = tcpClientNode.ID() s.upstreamHTTP = upstreamHTTP s.upstreamTCP = upstreamTCP diff --git a/test-integ/peering_commontopo/ac3_service_defaults_upstream_test.go b/test-integ/peering_commontopo/ac3_service_defaults_upstream_test.go index 30b4b1948053..5c2767c6ebf5 100644 --- a/test-integ/peering_commontopo/ac3_service_defaults_upstream_test.go +++ b/test-integ/peering_commontopo/ac3_service_defaults_upstream_test.go @@ -14,7 +14,7 @@ import ( "github.com/itchyny/gojq" "github.com/stretchr/testify/require" - "github.com/hashicorp/go-cleanhttp" + cleanhttp "github.com/hashicorp/go-cleanhttp" "github.com/hashicorp/consul/api" "github.com/hashicorp/consul/sdk/testutil/retry" @@ -132,7 +132,7 @@ func (s *ac3SvcDefaultsSuite) setup(t *testing.T, ct *commonTopo) { Partition: ConfigEntryPartition(serverSID.Partition), Sources: []*api.SourceIntention{ { - Name: client.ID.Name, + Name: client.Workload.ID.Name, Peer: cluPeerName, Action: api.IntentionActionAllow, }, @@ -142,7 +142,7 @@ func (s *ac3SvcDefaultsSuite) setup(t *testing.T, ct *commonTopo) { serverNode := ct.AddServiceNode(peerClu, server) - s.sidClient = client.ID + s.sidClient = client.Workload.ID s.nodeClient = clientNode.ID() s.upstream = upstream diff --git a/test-integ/peering_commontopo/ac4_proxy_defaults_test.go b/test-integ/peering_commontopo/ac4_proxy_defaults_test.go index c27057a07a4e..4eb4c920fb91 100644 --- a/test-integ/peering_commontopo/ac4_proxy_defaults_test.go +++ b/test-integ/peering_commontopo/ac4_proxy_defaults_test.go @@ -11,7 +11,7 @@ import ( "github.com/stretchr/testify/require" - "github.com/hashicorp/go-cleanhttp" + cleanhttp "github.com/hashicorp/go-cleanhttp" "github.com/hashicorp/consul/api" "github.com/hashicorp/consul/testing/deployer/topology" @@ -112,7 +112,7 @@ func (s *ac4ProxyDefaultsSuite) setup(t *testing.T, ct *commonTopo) { Partition: ConfigEntryPartition(serverSID.Partition), Sources: []*api.SourceIntention{ { - Name: client.ID.Name, + Name: client.Workload.ID.Name, Peer: cluPeerName, Action: api.IntentionActionAllow, }, @@ -124,7 +124,7 @@ func (s *ac4ProxyDefaultsSuite) setup(t *testing.T, ct *commonTopo) { &api.ProxyConfigEntry{ Kind: api.ProxyDefaults, Name: api.ProxyConfigGlobal, - Partition: ConfigEntryPartition(server.ID.Partition), + Partition: ConfigEntryPartition(server.Workload.ID.Partition), Config: map[string]interface{}{ "protocol": "http", "local_request_timeout_ms": 500, diff --git a/test-integ/peering_commontopo/commontopo.go b/test-integ/peering_commontopo/commontopo.go index c4360932557a..8554776680f1 100644 --- a/test-integ/peering_commontopo/commontopo.go +++ b/test-integ/peering_commontopo/commontopo.go @@ -239,10 +239,10 @@ type serviceExt struct { func (ct *commonTopo) AddServiceNode(clu 
*topology.Cluster, svc serviceExt) *topology.Node { clusterName := clu.Name - if _, ok := ct.services[clusterName][svc.ID]; ok { - panic(fmt.Sprintf("duplicate service %q in cluster %q", svc.ID, clusterName)) + if _, ok := ct.services[clusterName][svc.Workload.ID]; ok { + panic(fmt.Sprintf("duplicate service %q in cluster %q", svc.Workload.ID, clusterName)) } - ct.services[clusterName][svc.ID] = struct{}{} + ct.services[clusterName][svc.Workload.ID] = struct{}{} // TODO: inline serviceHostnameString := func(dc string, id topology.ID) string { @@ -268,14 +268,14 @@ func (ct *commonTopo) AddServiceNode(clu *topology.Cluster, svc serviceExt) *top nodeKind := topology.NodeKindClient // TODO: bug in deployer somewhere; it should guard against a KindDataplane node with // DisableServiceMesh services on it; dataplane is only for service-mesh - if !svc.DisableServiceMesh && clu.Datacenter == ct.agentlessDC { + if !svc.Workload.DisableServiceMesh && clu.Datacenter == ct.agentlessDC { nodeKind = topology.NodeKindDataplane } node := &topology.Node{ Kind: nodeKind, - Name: serviceHostnameString(clu.Datacenter, svc.ID), - Partition: svc.ID.Partition, + Name: serviceHostnameString(clu.Datacenter, svc.Workload.ID), + Partition: svc.Workload.ID.Partition, Addresses: []*topology.Address{ {Network: clu.Datacenter}, }, @@ -288,9 +288,9 @@ func (ct *commonTopo) AddServiceNode(clu *topology.Cluster, svc serviceExt) *top // Export if necessary if len(svc.Exports) > 0 { - ct.ExportService(clu, svc.ID.Partition, api.ExportedService{ - Name: svc.ID.Name, - Namespace: svc.ID.Namespace, + ct.ExportService(clu, svc.Workload.ID.Partition, api.ExportedService{ + Name: svc.Workload.ID.Name, + Namespace: svc.Workload.ID.Namespace, Consumers: svc.Exports, }) } diff --git a/test/integration/connect/envoy/Dockerfile-tcpdump b/test/integration/connect/envoy/Dockerfile-tcpdump index ea076961cfb3..57a95d22f17d 100644 --- a/test/integration/connect/envoy/Dockerfile-tcpdump +++ b/test/integration/connect/envoy/Dockerfile-tcpdump @@ -1,4 +1,4 @@ -FROM alpine:3.20 +FROM alpine:3.22 RUN apk add --no-cache tcpdump VOLUME [ "/data" ] diff --git a/test/integration/connect/envoy/case-api-gateway-lua/capture.sh b/test/integration/connect/envoy/case-api-gateway-lua/capture.sh new file mode 100644 index 000000000000..9f53e60da5f7 --- /dev/null +++ b/test/integration/connect/envoy/case-api-gateway-lua/capture.sh @@ -0,0 +1,18 @@ +#!/usr/bin/env bash +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: BUSL-1.1 + +set -euo pipefail + +# Capture Envoy logs +docker logs $(docker ps -q --filter name=envoy) > envoy.log + +# Capture Consul logs +docker logs $(docker ps -q --filter name=consul) > consul.log + +# Check network connectivity +echo "Network connectivity check:" > network.log +docker exec -it $(docker ps -q --filter name=envoy) netstat -tulpn >> network.log +docker exec -it $(docker ps -q --filter name=envoy) curl -v localhost:20000/stats >> network.log 2>&1 + +snapshot_envoy_admin localhost:20000 api-gateway primary || true \ No newline at end of file diff --git a/test/integration/connect/envoy/case-api-gateway-lua/service_gateway.hcl b/test/integration/connect/envoy/case-api-gateway-lua/service_gateway.hcl new file mode 100644 index 000000000000..a0c7f6b887f3 --- /dev/null +++ b/test/integration/connect/envoy/case-api-gateway-lua/service_gateway.hcl @@ -0,0 +1,7 @@ +# Copyright (c) HashiCorp, Inc. 
+# SPDX-License-Identifier: BUSL-1.1 + +services { + name = "api-gateway" + kind = "api-gateway" +} \ No newline at end of file diff --git a/test/integration/connect/envoy/case-api-gateway-lua/service_s1.hcl b/test/integration/connect/envoy/case-api-gateway-lua/service_s1.hcl new file mode 100644 index 000000000000..2e45294d6cf1 --- /dev/null +++ b/test/integration/connect/envoy/case-api-gateway-lua/service_s1.hcl @@ -0,0 +1,10 @@ +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: BUSL-1.1 + +services { + name = "s1" + port = 8080 + connect { + sidecar_service {} + } +} \ No newline at end of file diff --git a/test/integration/connect/envoy/case-api-gateway-lua/setup.sh b/test/integration/connect/envoy/case-api-gateway-lua/setup.sh new file mode 100644 index 000000000000..bdb4b4441514 --- /dev/null +++ b/test/integration/connect/envoy/case-api-gateway-lua/setup.sh @@ -0,0 +1,164 @@ +#!/usr/bin/env bash +# Copyright (c) HashiCorp, Inc. +# SPDX-License-Identifier: BUSL-1.1 + +set -euo pipefail + +# Create proxy defaults +upsert_config_entry primary ' +Kind = "proxy-defaults" +Name = "global" +Config { + protocol = "http" +} +' +# Create API Gateway +upsert_config_entry primary ' +kind = "api-gateway" +name = "api-gateway" +listeners = [ + { + name = "listener-one" + port = 9999 + protocol = "http" + } +] +' + +# Create HTTP route +upsert_config_entry primary ' +kind = "http-route" +name = "api-gateway-route-one" +rules = [ + { + matches = [ + { + path = { + match = "prefix" + value = "/echo" + } + } + ] + services = [ + { + name = "s1" + } + ] + } +] +parents = [ + { + kind = "api-gateway" + name = "api-gateway" + } +] +' + +# Create service defaults for API Gateway with LUA extension +upsert_config_entry primary "$(cat <<'EOF' +Kind = "service-defaults" +Name = "api-gateway" +Protocol = "http" +EnvoyExtensions = [ + { + name = "builtin/lua" + required = true + arguments = { + proxyType = "api-gateway" + listener = "outbound" + script = < ../api @@ -56,10 +56,10 @@ require ( github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect github.com/prometheus/client_model v0.5.0 // indirect go.opentelemetry.io/proto/otlp v1.0.0 // indirect - golang.org/x/exp v0.0.0-20250106191152-7588d65b2ba8 // indirect - golang.org/x/net v0.34.0 // indirect - golang.org/x/sys v0.29.0 // indirect - golang.org/x/text v0.21.0 // indirect + golang.org/x/exp v0.0.0-20250808145144-a408d31f581a // indirect + golang.org/x/net v0.43.0 // indirect + golang.org/x/sys v0.35.0 // indirect + golang.org/x/text v0.28.0 // indirect google.golang.org/genproto v0.0.0-20230711160842-782d3b101e98 // indirect google.golang.org/genproto/googleapis/api v0.0.0-20230711160842-782d3b101e98 // indirect google.golang.org/genproto/googleapis/rpc v0.0.0-20230711160842-782d3b101e98 // indirect diff --git a/troubleshoot/go.sum b/troubleshoot/go.sum index 7457a6e6fdb6..7794c7e3cdc5 100644 --- a/troubleshoot/go.sum +++ b/troubleshoot/go.sum @@ -194,8 +194,8 @@ golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnf golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= golang.org/x/crypto v0.0.0-20190923035154-9ee001bba392/go.mod h1:/lpIB1dKB+9EgE3H3cr1v9wB50oz8l4C4h62xy7jSTY= golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= -golang.org/x/exp v0.0.0-20250106191152-7588d65b2ba8 h1:yqrTHse8TCMW1M1ZCP+VAR/l0kKxwaAIqN/il7x4voA= -golang.org/x/exp 
v0.0.0-20250106191152-7588d65b2ba8/go.mod h1:tujkw807nyEEAamNbDrEGzRav+ilXA7PCRAd6xsmwiU= +golang.org/x/exp v0.0.0-20250808145144-a408d31f581a h1:Y+7uR/b1Mw2iSXZ3G//1haIiSElDQZ8KWh0h+sZPG90= +golang.org/x/exp v0.0.0-20250808145144-a408d31f581a/go.mod h1:rT6SFzZ7oxADUDx58pcaKFTcZ+inxAa9fTrYx/uVYwg= golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE= golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU= golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= @@ -210,8 +210,8 @@ golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLL golang.org/x/net v0.0.0-20190923162816-aa69164e4478/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg= golang.org/x/net v0.0.0-20210410081132-afb366fc7cd1/go.mod h1:9tjilg8BloeKEkVJvy7fQ90B1CfIiPueXVOjqfkSzI8= -golang.org/x/net v0.34.0 h1:Mb7Mrk043xzHgnRM88suvJFwzVrRfHEHJEl5/71CKw0= -golang.org/x/net v0.34.0/go.mod h1:di0qlW3YNM5oh6GqDGQr92MyTozJPmybPK4Ev/Gm31k= +golang.org/x/net v0.43.0 h1:lat02VYK2j4aLzMzecihNvTlJNQUq316m2Mr9rnM6YE= +golang.org/x/net v0.43.0/go.mod h1:vhO1fvI4dGsIjh73sWfUVjj3N7CA9WkKJNQm2svM6Jg= golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= @@ -241,15 +241,15 @@ golang.org/x/sys v0.0.0-20220503163025-988cb79eb6c6/go.mod h1:oPkhp1MJrh7nUepCBc golang.org/x/sys v0.0.0-20220728004956-3c1f35247d10/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.29.0 h1:TPYlXGxvx1MGTn2GiZDhnjPA9wZzZeGKHHmKhHYvgaU= -golang.org/x/sys v0.29.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= +golang.org/x/sys v0.35.0 h1:vz1N37gP5bs89s7He8XuIYXpyY0+QlsKmzipCbUtyxI= +golang.org/x/sys v0.35.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= -golang.org/x/text v0.21.0 h1:zyQAAkrwaneQ066sspRyJaG9VNi/YJ1NfzcGB3hZ/qo= -golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ= +golang.org/x/text v0.28.0 h1:rhazDwis8INMIwQ4tpjLDzUhx6RlXqZNPEM0huQojng= +golang.org/x/text v0.28.0/go.mod h1:U8nCwOR8jO/marOQ0QbDiOngZVEBB7MAiitBuMjXiNU= golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY= diff --git a/ui/.nvmrc b/ui/.nvmrc index 3c032078a4a2..209e3ef4b624 100644 --- a/ui/.nvmrc +++ 
b/ui/.nvmrc @@ -1 +1 @@ -18 +20 diff --git a/ui/package.json b/ui/package.json index 2fe24dda1693..b4f775ec12af 100644 --- a/ui/package.json +++ b/ui/package.json @@ -26,9 +26,21 @@ "braces": "^3.0.0", "markdown-it": "^12.3.2", "codemirror": "5.58.2", - "ansi-html": "0.0.8" + "ansi-html": "0.0.8", + "json5": "^2.2.3", + "**/express/path-to-regexp": "0.1.12", + "**/nise/path-to-regexp": "1.9.0", + "body-parser": "1.20.3", + "elliptic": "6.6.1", + "nanoid": "3.3.8", + "prismjs": "1.30.0", + "@babel/runtime": "8.0.0-alpha.17", + "@babel/helpers": "7.26.10", + "micromatch": "4.0.8", + "webpack": "5.94.0", + "rollup": "2.79.2" }, "engines": { - "node": "18" + "node": "20" } } diff --git a/ui/packages/consul-ui/app/adapters/token.js b/ui/packages/consul-ui/app/adapters/token.js index 7e8e46dacb1b..897b1007003f 100644 --- a/ui/packages/consul-ui/app/adapters/token.js +++ b/ui/packages/consul-ui/app/adapters/token.js @@ -67,7 +67,7 @@ export default class TokenAdapter extends Adapter { // If a token has Rules, use the old API if (typeof data['Rules'] !== 'undefined') { - // https://www.consul.io/api/acl/legacy.html#update-acl-token + // https://developer.hashicorp.com/api/acl/legacy.html#update-acl-token // as we are using the old API we don't need to specify a nspace return request` PUT /v1/acl/update?${this.formatDatacenter(data.Datacenter)} diff --git a/ui/packages/consul-ui/app/components/action/README.mdx b/ui/packages/consul-ui/app/components/action/README.mdx index 0cd77ac39bae..1e289dc41799 100644 --- a/ui/packages/consul-ui/app/components/action/README.mdx +++ b/ui/packages/consul-ui/app/components/action/README.mdx @@ -52,7 +52,7 @@ in the future. ```hbs preview-template Click Me diff --git a/ui/packages/consul-ui/app/components/code-editor/README.mdx b/ui/packages/consul-ui/app/components/code-editor/README.mdx deleted file mode 100644 index 7893f2bbe467..000000000000 --- a/ui/packages/consul-ui/app/components/code-editor/README.mdx +++ /dev/null @@ -1,50 +0,0 @@ ---- -class: ember -state: needs-love ---- - -# CodeEditor - -```hbs preview-template - - <:label> - Rules (HCL Format) - - <:content> - {"content": "Initial Content"} - - -``` - -A code-editor with syntax highlighting supporting multiple languages (JSON, HCL, YAML), validation and simple tools such as "Copy to clipboard" - - -### Arguments - -| Argument | Type | Default | Description | -| --- | --- | --- | --- | -| `readonly` | `Boolean` | false | If true, the content (value) of the CodeEditor cannot be changed by the user | -| `name` | `String` | | The name attribute of the form element | -| `syntax` | `String` | | Identifies the language used to validate/syntax highlight the code (possible values: hcl, json, yaml) | -| `oninput` | `Action` | noop | Action/callback that is triggered when the user inputs data | - -### Named Blocks - -| Name | Description | Behaviour if empty | -| --- | --- | --- | -| `:label` | Used to define the title that's displayed on the left inside the toolbar above the CodeEditor | Nothing is displayed | -| `:tools` | Used to define tools, buttons, widgets that will be displayed on the right inside the toolbar above the CodeEditor | By default it renders a `language selector` dropdown (if `readonly`== false and `syntax` is falsy) and a `ConsulCopyButton` -| `:content` | Used to display specific content such as code templates inside the CodeEditor | if the block is defined, @value will be displayed instead | - - -### See - -- [Component Source Code](./index.js) -- [Template Source Code](./index.hbs) - 
---- diff --git a/ui/packages/consul-ui/app/components/code-editor/index.hbs b/ui/packages/consul-ui/app/components/code-editor/index.hbs deleted file mode 100644 index 3582f5c407de..000000000000 --- a/ui/packages/consul-ui/app/components/code-editor/index.hbs +++ /dev/null @@ -1,42 +0,0 @@ -{{! - Copyright (c) HashiCorp, Inc. - SPDX-License-Identifier: BUSL-1.1 -}} - -
-
- -
- {{#if (has-block "tools")}} - {{yield to="tools"}} - {{else}} - {{#if (and (not readonly) (not syntax))}} - - {{mode.name}} - -
- - {{/if}} - {{/if}} -
-
-
- -
{{#if (has-block 'content')}}{{yield to='content'}}{{else}}{{value}}{{/if}}
diff --git a/ui/packages/consul-ui/app/components/code-editor/index.js b/ui/packages/consul-ui/app/components/code-editor/index.js deleted file mode 100644 index 012f9698dde9..000000000000 --- a/ui/packages/consul-ui/app/components/code-editor/index.js +++ /dev/null @@ -1,100 +0,0 @@ -/** - * Copyright (c) HashiCorp, Inc. - * SPDX-License-Identifier: BUSL-1.1 - */ - -import Component from '@ember/component'; -import { set } from '@ember/object'; -import { inject as service } from '@ember/service'; -const DEFAULTS = { - tabSize: 2, - lineNumbers: true, - theme: 'hashi', - showCursorWhenSelecting: true, -}; -export default Component.extend({ - settings: service('settings'), - dom: service('dom'), - helper: service('code-mirror/linter'), - classNames: ['code-editor'], - readonly: false, - syntax: '', - // TODO: Change this to oninput to be consistent? We'll have to do it throughout the templates - onkeyup: function () {}, - oninput: function () {}, - init: function () { - this._super(...arguments); - set(this, 'modes', this.helper.modes()); - }, - didReceiveAttrs: function () { - this._super(...arguments); - const editor = this.editor; - if (editor) { - editor.setOption('readOnly', this.readonly); - } - }, - setMode: function (mode) { - if (!this.isDestroying && !this.isDestroyed) { - let options = { - ...DEFAULTS, - mode: mode.mime, - readOnly: this.readonly, - }; - if (mode.name === 'XML') { - options.htmlMode = mode.htmlMode; - options.matchClosing = mode.matchClosing; - options.alignCDATA = mode.alignCDATA; - } - set(this, 'options', options); - - const editor = this.editor; - editor.setOption('mode', mode.mime); - set(this, 'mode', mode); - } - }, - willDestroyElement: function () { - this._super(...arguments); - if (this.observer) { - this.observer.disconnect(); - } - }, - didInsertElement: function () { - this._super(...arguments); - const $code = this.dom.element('textarea ~ pre code', this.element); - if ($code.firstChild) { - this.observer = new MutationObserver(([e]) => { - this.oninput(set(this, 'value', e.target.wholeText)); - }); - this.observer.observe($code, { - attributes: false, - subtree: true, - childList: false, - characterData: true, - }); - set(this, 'value', $code.firstChild.wholeText); - } - set(this, 'editor', this.helper.getEditor(this.element)); - this.settings.findBySlug('code-editor').then((mode) => { - const modes = this.modes; - const syntax = this.syntax; - if (syntax) { - mode = modes.find(function (item) { - return item.name.toLowerCase() == syntax.toLowerCase(); - }); - } - mode = !mode ? modes[0] : mode; - this.setMode(mode); - }); - }, - didAppear: function () { - this.editor.refresh(); - }, - actions: { - change: function (value) { - this.settings.persist({ - 'code-editor': value, - }); - this.setMode(value); - }, - }, -}); diff --git a/ui/packages/consul-ui/app/components/code-editor/index.scss b/ui/packages/consul-ui/app/components/code-editor/index.scss deleted file mode 100644 index 58c7c66596cc..000000000000 --- a/ui/packages/consul-ui/app/components/code-editor/index.scss +++ /dev/null @@ -1,11 +0,0 @@ -/** - * Copyright (c) HashiCorp, Inc. 
- * SPDX-License-Identifier: BUSL-1.1 - */ - -@import './skin'; -@import './layout'; -// yet to pull the CodeMirror skin into the placeholder -.code-editor { - @extend %code-editor; -} diff --git a/ui/packages/consul-ui/app/components/code-editor/layout.scss b/ui/packages/consul-ui/app/components/code-editor/layout.scss deleted file mode 100644 index b8194d850bc9..000000000000 --- a/ui/packages/consul-ui/app/components/code-editor/layout.scss +++ /dev/null @@ -1,73 +0,0 @@ -/** - * Copyright (c) HashiCorp, Inc. - * SPDX-License-Identifier: BUSL-1.1 - */ - -%code-editor { - display: block; - border: 10px; - overflow: hidden; - position: relative; - clear: both; -} - -%code-editor::after { - position: absolute; - bottom: 0px; - width: 100%; - height: 25px; - background-color: var(--token-color-hashicorp-brand); - content: ''; - display: block; -} -%code-editor > pre { - display: none; -} - -%code-editor { - .toolbar-container, - .toolbar-container .toolbar { - align-items: center; - justify-content: space-between; - display: flex; - } - - .toolbar-container { - position: relative; - margin-top: 4px; - height: 44px; - - .toolbar { - flex: 1; - white-space: nowrap; - - .title { - padding: 0 8px; - } - - .toolbar-separator { - height: 32px; - margin: 0 4px; - width: 0; - } - - .tools { - display: flex; - flex-direction: row; - margin: 0 10px; - align-items: center; - .copy-button { - margin-left: 10px; - } - } - } - .ember-basic-dropdown-trigger { - margin: 0 8px; - width: 120px; - height: 32px; - display: flex; - align-items: center; - flex-direction: row; - } - } -} diff --git a/ui/packages/consul-ui/app/components/code-editor/skin.scss b/ui/packages/consul-ui/app/components/code-editor/skin.scss deleted file mode 100644 index 0a8dafc65dde..000000000000 --- a/ui/packages/consul-ui/app/components/code-editor/skin.scss +++ /dev/null @@ -1,227 +0,0 @@ -/** - * Copyright (c) HashiCorp, Inc. 
- * SPDX-License-Identifier: BUSL-1.1 - */ - -$syntax-consul: #69499a; -$syntax-dark-gray: #535f73; -:root { - --syntax-light-grey: #dde3e7; - --syntax-light-gray: #a4a4a4; - --syntax-light-grey-blue: #6c7b81; - --syntax-dark-grey: #788290; - --syntax-faded-gray: #eaeaea; - // Product colors - --syntax-atlas: #127eff; - --syntax-vagrant: #2f88f7; - --syntax-consul: #{$syntax-consul}; - --syntax-terraform: #822ff7; - --syntax-serf: #dd4e58; - --syntax-packer: #1ddba3; - - // Our colors - --syntax-gray: lighten(#000, 89%); - --syntax-red: #ff3d3d; - --syntax-green: #39b54a; - --syntax-dark-gray: #{$syntax-dark-gray}; - - --syntax-gutter-grey: #2a2f36; - --syntax-yellow: var(--token-color-vault-brand); -} -.CodeMirror { - max-width: 1260px; - min-height: 300px; - height: auto; - /* adds some space at the bottom of the editor for when a horizonal-scroll has appeared */ - padding-bottom: 20px; -} -.CodeMirror-scroll { - overflow-x: hidden !important; -} -.CodeMirror-lint-tooltip { - @extend %code-100-regular; - background-color: #f9f9fa; - border: 1px solid var(--syntax-light-gray); - border-radius: 0; - color: lighten(#000, 13%); - padding: 7px 8px 9px; -} - -.cm-s-hashi { - &.CodeMirror { - width: 100%; - background-color: var(--token-color-hashicorp-brand) !important; - color: #cfd2d1 !important; - border: none; - font-family: var(--token-typography-font-stack-code); - -webkit-font-smoothing: auto; - line-height: 1.4; - } - - .CodeMirror-gutters { - color: var(--syntax-dark-grey); - background-color: var(--syntax-gutter-grey); - border: none; - } - - .CodeMirror-cursor { - border-left: solid thin #f8f8f0; - } - - .CodeMirror-linenumber { - color: #6d8a88; - } - - &.CodeMirror-focused div.CodeMirror-selected { - background: rgb(33, 66, 131); - } - - .CodeMirror-line::selection, - .CodeMirror-line > span::selection, - .CodeMirror-line > span > span::selection { - background: rgb(33, 66, 131); - } - - .CodeMirror-line::-moz-selection, - .CodeMirror-line > span::-moz-selection, - .CodeMirror-line > span > span::-moz-selection { - background: var(--token-color-surface-interactive); - } - - span.cm-comment { - color: var(--syntax-light-grey); - } - - span.cm-string, - span.cm-string-2 { - color: var(--syntax-packer); - } - - span.cm-number { - color: var(--syntax-serf); - } - - span.cm-variable { - color: lighten($syntax-consul, 20%); - } - - span.cm-variable-2 { - color: lighten($syntax-consul, 20%); - } - - span.cm-def { - color: var(--syntax-packer); - } - - span.cm-operator { - color: var(--syntax-gray); - } - span.cm-keyword { - color: var(--syntax-yellow); - } - - span.cm-atom { - color: var(--syntax-serf); - } - - span.cm-meta { - color: var(--syntax-packer); - } - - span.cm-tag { - color: var(--syntax-packer); - } - - span.cm-error { - color: var(--syntax-red); - } - - span.cm-attribute { - color: #9fca56; - } - - span.cm-qualifier { - color: #9fca56; - } - - span.cm-property { - color: lighten($syntax-consul, 20%); - } - - span.cm-variable-3 { - color: #9fca56; - } - - span.cm-builtin { - color: #9fca56; - } - - .CodeMirror-activeline-background { - background: #101213; - } - - .CodeMirror-matchingbracket { - text-decoration: underline; - color: var(--token-color-surface-primary) !important; - } -} - -.readonly-codemirror { - .CodeMirror-cursors { - display: none; - } - - .cm-s-hashi { - span { - color: var(--syntax-light-grey); - } - - span.cm-string, - span.cm-string-2 { - color: var(--syntax-faded-gray); - } - - span.cm-number { - color: lighten($syntax-dark-gray, 30%); - } - - 
span.cm-property { - color: var(--token-color-surface-primary); - } - - span.cm-variable-2 { - color: var(--syntax-light-grey-blue); - } - } -} - -%code-editor { - .toolbar-container { - background: var(--token-color-surface-strong); - background: linear-gradient( - 180deg, - var(--token-color-surface-strong) 50%, - var(--token-color-surface-interactive-active) 100% - ); - border: 1px solid var(--token-color-surface-interactive-active); - border-bottom-color: var(--token-color-foreground-faint); - border-top-color: var(--token-color-foreground-disabled); - - .toolbar { - .title { - @extend %body-200-semibold; - color: var(--token-color-foreground-strong); - } - .toolbar-separator { - border-right: 1px solid var(--token-color-palette-neutral-300); - } - } - .ember-power-select-trigger { - background-color: var(--token-color-surface-primary); - color: var(--token-color-hashicorp-brand); - border-radius: var(--decor-radius-100); - border: var(--decor-border-100); - border-color: var(--token-color-foreground-faint); - } - } -} diff --git a/ui/packages/consul-ui/app/components/consul/kv/form/index.hbs b/ui/packages/consul-ui/app/components/consul/kv/form/index.hbs index 5351bd6a4b3f..1d6672c1968b 100644 --- a/ui/packages/consul-ui/app/components/consul/kv/form/index.hbs +++ b/ui/packages/consul-ui/app/components/consul/kv/form/index.hbs @@ -7,116 +7,141 @@ @dc={{dc}} @nspace={{nspace}} @partition={{partition}} - @type="kv" - @label="key" + @type='kv' + @label='key' @autofill={{autofill}} @item={{item}} @src={{src}} - @onchange={{action "change"}} + @onchange={{action 'change'}} @onsubmit={{action onsubmit}} as |api| > - -{{#let (cannot 'write kv' item=api.data) as |disabld|}} -
-
-{{#if api.isCreate}} - -{{/if}} -{{#if (or (eq (left-trim api.data.Key parent) '') (not-eq (last api.data.Key) '/'))}} -
-
- -
+ + {{#let (cannot 'write kv' item=api.data) as |disabld|}} + +
+ {{#if api.isCreate}} + + {{/if}} + {{#if (or (eq (left-trim api.data.Key parent) '') (not-eq (last api.data.Key) '/'))}} + - -
-{{/if}} -
- - {{#if api.isCreate}} - {{#if (not disabld)}} - + Code + + + + {{/if}} + + + {{#if api.isCreate}} + {{#if (not disabld)}} + + {{/if}} - - {{else}} - {{#if (not disabld)}} + {{else}} + {{#if (not disabld)}} + + {{/if}} + + {{#if (not disabld)}} + + + + + + + + + {{/if}} {{/if}} - - {{#if (not disabld)}} - - - - - - - - - {{/if}} - {{/if}} - - -{{/let}} -
- + + + {{/let}} + + \ No newline at end of file diff --git a/ui/packages/consul-ui/app/components/consul/node-identity/template/README.mdx b/ui/packages/consul-ui/app/components/consul/node-identity/template/README.mdx deleted file mode 100644 index 7c0ab1c86027..000000000000 --- a/ui/packages/consul-ui/app/components/consul/node-identity/template/README.mdx +++ /dev/null @@ -1,27 +0,0 @@ -# Consul::Node::Identity::Template - -The component is a text-only template that represents what a NodeIdentity -policy looks like. The policy generated here is **not** what is sent back to -the backend, instead its just a visual representation of what happens in the -backend when you save a NodeIdentity. - -```hbs preview-template -
-``` - -## Arguments - -| Argument/Attribute | Type | Default | Description | -| --- | --- | --- | --- | -| `partition` | `string` | `default` | The name of the current partition | -| `name` | `string` | | The name of the policy the will be used to -interpolate the various policy names | - -## See - -- [Template Source Code](./index.hbs) - ---- diff --git a/ui/packages/consul-ui/app/components/consul/node-identity/template/index.hbs b/ui/packages/consul-ui/app/components/consul/node-identity/template/index.hbs deleted file mode 100644 index 301559b31e4b..000000000000 --- a/ui/packages/consul-ui/app/components/consul/node-identity/template/index.hbs +++ /dev/null @@ -1,48 +0,0 @@ -{{! - Copyright (c) HashiCorp, Inc. - SPDX-License-Identifier: BUSL-1.1 -}} - -{{#if (can "use partitions")~}} -partition "{{or @partition 'default'}}" { - {{#if (can "use nspaces")}} - namespace "default" { - node "{{@name}}" { - policy = "write" - } - } - namespace_prefix "" { - service_prefix "" { - policy = "read" - } - } - {{else}} - node "{{@name}}" { - policy = "write" - } - service_prefix "" { - policy = "read" - } - {{/if}} -} -{{~else~}} -{{~#if (can "use nspaces")~}} -namespace "default" { - node "{{@name}}" { - policy = "write" - } -} -namespace_prefix "" { - service_prefix "" { - policy = "read" - } -} -{{else}} -node "{{@name}}" { - policy = "write" -} -service_prefix "" { - policy = "read" -} -{{~/if~}} -{{~/if~}} \ No newline at end of file diff --git a/ui/packages/consul-ui/app/components/consul/node/list/index.hbs b/ui/packages/consul-ui/app/components/consul/node/list/index.hbs index 6ff04bc817fc..28df85ebcbc1 100644 --- a/ui/packages/consul-ui/app/components/consul/node/list/index.hbs +++ b/ui/packages/consul-ui/app/components/consul/node/list/index.hbs @@ -47,7 +47,7 @@ as |item index|> @value={{item.Address}} @name="Address" /> - {{item.Address}} + {{format-ipaddr item.Address}}
diff --git a/ui/packages/consul-ui/app/components/consul/service-identity/template/README.mdx b/ui/packages/consul-ui/app/components/consul/service-identity/template/README.mdx deleted file mode 100644 index 89eb4cb255d1..000000000000 --- a/ui/packages/consul-ui/app/components/consul/service-identity/template/README.mdx +++ /dev/null @@ -1,29 +0,0 @@ -# Consul::ServiceIdentity::Template - -The component is a text-only template that represents what a NodeIdentity -policy looks like. The policy generated here is **not** what is sent back to -the backend, instead its just a visual representation of what happens in the -backend when you save a NodeIdentity. - -```hbs preview-template -
-``` - -## Arguments - -| Argument/Attribute | Type | Default | Description | -| --- | --- | --- | --- | -| `nspace` | `string` | `default` | The name of the current namespace | -| `partition` | `string` | `default` | The name of the current partition | -| `name` | `string` | | The name of the policy the will be used to -interpolate the various policy names | - -## See - -- [Template Source Code](./index.hbs) - ---- diff --git a/ui/packages/consul-ui/app/components/consul/service-identity/template/index.hbs b/ui/packages/consul-ui/app/components/consul/service-identity/template/index.hbs deleted file mode 100644 index b822316bd598..000000000000 --- a/ui/packages/consul-ui/app/components/consul/service-identity/template/index.hbs +++ /dev/null @@ -1,68 +0,0 @@ -{{! - Copyright (c) HashiCorp, Inc. - SPDX-License-Identifier: BUSL-1.1 -}} - -{{#if (can "use partitions")}} -partition "{{or @partition 'default'}}" { - {{#if (can 'use nspaces')}} - namespace "{{or @nspace 'default'}}" { - service "{{@name}}" { - policy = "write" - } - service "{{@name}}-sidecar-proxy" { - policy = "write" - } - service_prefix "" { - policy = "read" - } - node_prefix "" { - policy = "read" - } - } - {{else}} - service "{{@name}}" { - policy = "write" - } - service "{{@name}}-sidecar-proxy" { - policy = "write" - } - service_prefix "" { - policy = "read" - } - node_prefix "" { - policy = "read" - } - {{/if}} -} -{{else}} -{{#if (can 'use nspaces')}} -namespace "{{or @nspace 'default'}}" { - service "{{@name}}" { - policy = "write" - } - service "{{@name}}-sidecar-proxy" { - policy = "write" - } - service_prefix "" { - policy = "read" - } - node_prefix "" { - policy = "read" - } -} -{{else}} -service "{{@name}}" { - policy = "write" -} -service "{{@name}}-sidecar-proxy" { - policy = "write" -} -service_prefix "" { - policy = "read" -} -node_prefix "" { - policy = "read" -} -{{/if}} -{{/if}} \ No newline at end of file diff --git a/ui/packages/consul-ui/app/components/consul/service-instance/list/index.hbs b/ui/packages/consul-ui/app/components/consul/service-instance/list/index.hbs index e341dcf29edf..c0dd5e143b7c 100644 --- a/ui/packages/consul-ui/app/components/consul/service-instance/list/index.hbs +++ b/ui/packages/consul-ui/app/components/consul/service-instance/list/index.hbs @@ -102,9 +102,9 @@ as |proxy|}}
{{#if (not-eq item.Service.Address '')}} - {{item.Service.Address}}:{{item.Service.Port}} + {{format-ipaddr item.Service.Address}}:{{item.Service.Port}} {{else}} - {{item.Node.Address}}:{{item.Service.Port}} + {{format-ipaddr item.Node.Address}}:{{item.Service.Port}} {{/if}}
diff --git a/ui/packages/consul-ui/app/components/hashicorp-consul/index.scss b/ui/packages/consul-ui/app/components/hashicorp-consul/index.scss index ca32e51ede8f..55726eaeae37 100644 --- a/ui/packages/consul-ui/app/components/hashicorp-consul/index.scss +++ b/ui/packages/consul-ui/app/components/hashicorp-consul/index.scss @@ -4,6 +4,9 @@ */ %hashicorp-consul { + .hds-side-nav { + height: auto; + } .consul-side-nav { li.consul-disabled-nav { width: 100%; diff --git a/ui/packages/consul-ui/app/components/policy-form/index.hbs b/ui/packages/consul-ui/app/components/policy-form/index.hbs index 98f85069be4e..190fbec352ef 100644 --- a/ui/packages/consul-ui/app/components/policy-form/index.hbs +++ b/ui/packages/consul-ui/app/components/policy-form/index.hbs @@ -5,150 +5,201 @@ {{yield}}
- {{#yield-slot name='template'}} - {{else}} + {{#yield-slot name='template'}}{{else}}
Policy{{if allowIdentity ' or identity?' ''}}
- {{#if allowIdentity}} -

- Identities are default policies with configurable names. They save you some time and effort if you're using Consul for Connect features. -

- {{! this should use radio-group }} -
+ {{#if allowIdentity}} +

+ Identities are default policies with configurable names. They save you some time and effort + if you're using Consul for Connect features. +

+ {{! this should use radio-group }} +
{{#each templates as |template|}} - + {{/each}} -
- {{else}} - - {{/if}} +
+ {{else}} + + {{/if}} {{/yield-slot}} - - -{{#if (eq item.template 'node-identity')}} + + + {{#if (eq item.template 'node-identity')}} -
- + {{#if (eq item.template '')}} + + {{/if}} + \ No newline at end of file diff --git a/ui/packages/consul-ui/app/components/policy-selector/index.hbs b/ui/packages/consul-ui/app/components/policy-selector/index.hbs index 286f6bc4f7c2..caa3de320285 100644 --- a/ui/packages/consul-ui/app/components/policy-selector/index.hbs +++ b/ui/packages/consul-ui/app/components/policy-selector/index.hbs @@ -9,16 +9,16 @@ @dc={{dc}} @partition={{partition}} @nspace={{nspace}} - @type="policy" - @placeholder="Search for policy" + @type='policy' + @placeholder='Search for policy' @items={{items}} ...attributes > {{yield}} - + Apply an existing policy - + {{#yield-slot name='trigger'}} {{yield}} {{else}} @@ -29,23 +29,22 @@ @icon='plus' class='type-dialog' data-test-policy-create - {{on "click" (action this.openModal)}} + {{on 'click' (action this.openModal)}} /> {{!TODO: potentially call trigger something else}} {{!the modal has to go here so that if you provide a slot to trigger it doesn't get rendered}} - - + id='new-policy' + @onopen={{action 'open'}} + @aria={{hash label='New Policy'}} + as |modal| + > + +

New Policy

- + - + {{/yield-slot}} - + {{option.Name}} - + - + Name - + -{{#if item.ID }} - {{item.Name}} -{{else}} - {{item.Name}} -{{/if}} + {{#if item.ID}} + {{item.Name}} + {{else}} + {{item.Name}} + {{/if}} - -{{#if (eq item.template '')}} - -{{/if}} -{{#if (eq item.template 'node-identity')}} + + {{#if (eq item.template '')}} + + {{/if}} + {{#if (eq item.template 'node-identity')}}
Datacenter:
{{item.Datacenter}}
-{{else}} + {{else}}
Datacenters:
{{join ', ' (policy/datacenters (or loadedItem item))}}
-{{/if}} - -{{#if (not disabled)}} + + Rules + (HCL Format) + + {{else}} + + + Rules + (HCL Format) + + + {{/if}} + + {{#if (not disabled)}}
- - + + - +

{{message}}

@@ -205,9 +212,9 @@
-{{/if}} + {{/if}}
- + \ No newline at end of file diff --git a/ui/packages/consul-ui/app/components/policy-selector/index.js b/ui/packages/consul-ui/app/components/policy-selector/index.js index 3a8d14fdeeac..8e2f952e602b 100644 --- a/ui/packages/consul-ui/app/components/policy-selector/index.js +++ b/ui/packages/consul-ui/app/components/policy-selector/index.js @@ -32,9 +32,12 @@ export default ChildSelectorComponent.extend({ this._super(...arguments); set(this, 'isScoped', false); }, - refreshCodeEditor: function (e, target) { - const selector = '.code-editor'; - this.dom.component(selector, target).didAppear(); + refreshCodeEditor: function () { + // Could be better, but this is a hack to clear the code editor + const codeEditor = document.querySelector('[aria-label="policy[Rules]"]'); + if (codeEditor) { + codeEditor.innerHTML = ''; + } }, error: function (e) { const item = this.item; @@ -71,7 +74,7 @@ export default ChildSelectorComponent.extend({ }, actions: { open: function (e) { - this.refreshCodeEditor(e, e.target.parentElement); + this.refreshCodeEditor(); }, }, }); diff --git a/ui/packages/consul-ui/app/components/role-selector/index.js b/ui/packages/consul-ui/app/components/role-selector/index.js index ac9e5f7140b7..14b5c266c38f 100644 --- a/ui/packages/consul-ui/app/components/role-selector/index.js +++ b/ui/packages/consul-ui/app/components/role-selector/index.js @@ -40,7 +40,10 @@ export default ChildSelectorComponent.extend({ case 'role[state]': set(this, 'state', target.value); if (target.value === 'policy') { - this.dom.component('.code-editor', target.nextElementSibling).didAppear(); + const codeEditor = document.querySelector('[aria-label="role[policy][Rules]"]'); + if (codeEditor) { + codeEditor.innerHTML = ''; + } } break; default: diff --git a/ui/packages/consul-ui/app/helpers/format-ipaddr.js b/ui/packages/consul-ui/app/helpers/format-ipaddr.js new file mode 100644 index 000000000000..a0d3d3f4c787 --- /dev/null +++ b/ui/packages/consul-ui/app/helpers/format-ipaddr.js @@ -0,0 +1,12 @@ +/** + * Copyright (c) HashiCorp, Inc. + * SPDX-License-Identifier: BUSL-1.1 + */ + +import { helper } from '@ember/component/helper'; +import { processIpAddress } from 'consul-ui/utils/process-ip-address'; + +export default helper(function formatIpaddr([ipaddress]) { + const value = processIpAddress(ipaddress); + return value ? value : ''; +}); diff --git a/ui/packages/consul-ui/app/helpers/node-identity-template.js b/ui/packages/consul-ui/app/helpers/node-identity-template.js new file mode 100644 index 000000000000..a1cf6a6ef854 --- /dev/null +++ b/ui/packages/consul-ui/app/helpers/node-identity-template.js @@ -0,0 +1,33 @@ +/** + * Copyright (c) HashiCorp, Inc. 
+ * SPDX-License-Identifier: BUSL-1.1 + */ + +import Helper from '@ember/component/helper'; + +export default class NodeIdentityTemplate extends Helper { + compute([_name], { partition = 'default', canUsePartitions = false, canUseNspaces = false }) { + const name = _name || ''; + if (canUsePartitions) { + let block = `partition "${partition}" {\n`; + if (canUseNspaces) { + block += ` namespace "default" {\n node "${name}" {\n policy = "write"\n }\n }\n`; + block += ` namespace_prefix "" {\n service_prefix "" {\n policy = "read"\n }\n }\n`; + } else { + block += ` node "${name}" {\n policy = "write"\n }\n`; + block += ` service_prefix "" {\n policy = "read"\n }\n`; + } + block += `}`; + return block; + } else if (canUseNspaces) { + return ( + `namespace "default" {\n node "${name}" {\n policy = "write"\n }\n}\n` + + `namespace_prefix "" {\n service_prefix "" {\n policy = "read"\n }\n}` + ); + } else { + return ( + `node "${name}" {\n policy = "write"\n}\n` + `service_prefix "" {\n policy = "read"\n}` + ); + } + } +} diff --git a/ui/packages/consul-ui/app/helpers/service-identity-template.js b/ui/packages/consul-ui/app/helpers/service-identity-template.js new file mode 100644 index 000000000000..5ca3862a15ac --- /dev/null +++ b/ui/packages/consul-ui/app/helpers/service-identity-template.js @@ -0,0 +1,46 @@ +/** + * Copyright (c) HashiCorp, Inc. + * SPDX-License-Identifier: BUSL-1.1 + */ + +import Helper from '@ember/component/helper'; + +export default class ServiceIdentityTemplate extends Helper { + compute( + [_name], + { partition = 'default', nspace = 'default', canUsePartitions = false, canUseNspaces = false } + ) { + const name = _name || ''; + const indent = (text, level = 1) => + text + .split('\n') + .map((line) => ' '.repeat(level) + line) + .join('\n'); + + const baseBlock = () => { + return [ + `service "${name}" {\n policy = "write"\n}`, + `service "${name}-sidecar-proxy" {\n policy = "write"\n}`, + `service_prefix "" {\n policy = "read"\n}`, + `node_prefix "" {\n policy = "read"\n}`, + ].join('\n'); + }; + + if (canUsePartitions) { + let block = `partition "${partition}" {\n`; + + if (canUseNspaces) { + block += indent(`namespace "${nspace}" {\n${indent(baseBlock(), 1)}\n}`); + } else { + block += indent(baseBlock()); + } + + block += `\n}`; + return block; + } else if (canUseNspaces) { + return `namespace "${nspace}" {\n${indent(baseBlock())}\n}`; + } else { + return baseBlock(); + } + } +} diff --git a/ui/packages/consul-ui/app/instance-initializers/ivy-codemirror.js b/ui/packages/consul-ui/app/instance-initializers/ivy-codemirror.js deleted file mode 100644 index 43ab72509106..000000000000 --- a/ui/packages/consul-ui/app/instance-initializers/ivy-codemirror.js +++ /dev/null @@ -1,39 +0,0 @@ -/** - * Copyright (c) HashiCorp, Inc. 
- * SPDX-License-Identifier: BUSL-1.1 - */ - -/* globals CodeMirror */ -export function initialize(application) { - const appName = application.application.name; - const doc = application.lookup('service:-document'); - // pick codemirror syntax highlighting paths out of index.html - const fs = new Map( - Object.entries(JSON.parse(doc.querySelector(`[data-${appName}-fs]`).textContent)) - ); - // configure syntax highlighting for CodeMirror - CodeMirror.modeURL = { - replace: function (n, mode) { - switch (mode.trim()) { - case 'javascript': - return fs.get(['codemirror', 'mode', 'javascript', 'javascript.js'].join('/')); - case 'ruby': - return fs.get(['codemirror', 'mode', 'ruby', 'ruby.js'].join('/')); - case 'yaml': - return fs.get(['codemirror', 'mode', 'yaml', 'yaml.js'].join('/')); - case 'xml': - return fs.get(['codemirror', 'mode', 'xml', 'xml.js'].join('/')); - } - }, - }; - - const IvyCodeMirrorComponent = application.resolveRegistration('component:ivy-codemirror'); - // Make sure ivy-codemirror respects/maintains a `name=""` attribute - IvyCodeMirrorComponent.reopen({ - attributeBindings: ['name'], - }); -} - -export default { - initialize, -}; diff --git a/ui/packages/consul-ui/app/services/code-mirror/linter.js b/ui/packages/consul-ui/app/services/code-mirror/linter.js deleted file mode 100644 index 74a0a45b0140..000000000000 --- a/ui/packages/consul-ui/app/services/code-mirror/linter.js +++ /dev/null @@ -1,46 +0,0 @@ -/** - * Copyright (c) HashiCorp, Inc. - * SPDX-License-Identifier: BUSL-1.1 - */ - -import Service, { inject as service } from '@ember/service'; -const MODES = [ - { - name: 'JSON', - mime: 'application/json', - mode: 'javascript', - ext: ['json', 'map'], - alias: ['json5'], - }, - { - name: 'HCL', - mime: 'text/x-ruby', - mode: 'ruby', - ext: ['rb'], - alias: ['jruby', 'macruby', 'rake', 'rb', 'rbx'], - }, - { name: 'YAML', mime: 'text/x-yaml', mode: 'yaml', ext: ['yaml', 'yml'], alias: ['yml'] }, - { - name: 'XML', - mime: 'application/xml', - mode: 'xml', - htmlMode: false, - matchClosing: true, - alignCDATA: false, - ext: ['xml'], - alias: ['xml'], - }, -]; - -export default class LinterService extends Service { - @service('dom') - dom; - - modes() { - return MODES; - } - - getEditor(element) { - return this.dom.element('textarea + div', element).CodeMirror; - } -} diff --git a/ui/packages/consul-ui/app/services/repository/kv.js b/ui/packages/consul-ui/app/services/repository/kv.js index c618cff5f2ca..d031186f7a0b 100644 --- a/ui/packages/consul-ui/app/services/repository/kv.js +++ b/ui/packages/consul-ui/app/services/repository/kv.js @@ -66,7 +66,7 @@ export default class KvService extends RepositoryService { } // this one only gives you keys - // https://www.consul.io/api/kv.html + // https://developer.hashicorp.com/api/kv.html @dataSource('/:partition/:ns/:dc/kvs/:id') async findAllBySlug(params, configuration = {}) { params.separator = '/'; diff --git a/ui/packages/consul-ui/app/styles/app.scss b/ui/packages/consul-ui/app/styles/app.scss index 11597cb190df..078807b9c311 100644 --- a/ui/packages/consul-ui/app/styles/app.scss +++ b/ui/packages/consul-ui/app/styles/app.scss @@ -5,9 +5,6 @@ @charset 'utf-8'; -/* tailwind before all the customizations */ -@import 'tailwind'; - /* css for hds */ @import '@hashicorp/design-system-components'; @@ -30,3 +27,24 @@ /* debug only, this is empty during a production build */ /* but uses the contents of ./debug.scss during dev */ @import '_debug'; + +// Should be removed when we remove FlightIcons +.w-4 { + width: 1rem; 
+} + +.h-4 { + height: 1rem; +} + +svg { + vertical-align: middle; +} + +.mt-2 { + margin-top: .5rem; +} + +.mb-3 { + margin-bottom: .75rem; +} diff --git a/ui/packages/consul-ui/app/styles/base/reset/system.scss b/ui/packages/consul-ui/app/styles/base/reset/system.scss index 42b03d9e00c2..d3f32cc51c7f 100644 --- a/ui/packages/consul-ui/app/styles/base/reset/system.scss +++ b/ui/packages/consul-ui/app/styles/base/reset/system.scss @@ -3,6 +3,7 @@ * SPDX-License-Identifier: BUSL-1.1 */ + a { text-decoration: none; } @@ -103,3 +104,12 @@ hr { height: 1px; margin: 1.5rem 0; } + +button, +input:where([type='button']), +input:where([type='reset']), +input:where([type='submit']) { + -webkit-appearance: button; /* 1 */ + background-color: transparent; /* 2 */ + background-image: none; /* 2 */ +} diff --git a/ui/packages/consul-ui/app/styles/components.scss b/ui/packages/consul-ui/app/styles/components.scss index f4baa2ca74a4..db5b649b2fc4 100644 --- a/ui/packages/consul-ui/app/styles/components.scss +++ b/ui/packages/consul-ui/app/styles/components.scss @@ -12,7 +12,6 @@ @import 'consul-ui/components/buttons'; @import 'consul-ui/components/card'; @import 'consul-ui/components/checkbox-group'; -@import 'consul-ui/components/code-editor'; @import 'consul-ui/components/composite-row'; @import 'consul-ui/components/confirmation-dialog'; @import 'consul-ui/components/consul-copy-button'; diff --git a/ui/packages/consul-ui/app/styles/routes.scss b/ui/packages/consul-ui/app/styles/routes.scss index 9b8ad0b0d871..4d4b05ad6c75 100644 --- a/ui/packages/consul-ui/app/styles/routes.scss +++ b/ui/packages/consul-ui/app/styles/routes.scss @@ -5,7 +5,6 @@ @import 'routes/dc/services/index'; @import 'routes/dc/nodes/index'; -@import 'routes/dc/kv/index'; @import 'routes/dc/acls/index'; @import 'routes/dc/intentions/index'; @import 'routes/dc/overview/serverstatus'; diff --git a/ui/packages/consul-ui/app/styles/routes/dc/kv/index.scss b/ui/packages/consul-ui/app/styles/routes/dc/kv/index.scss deleted file mode 100644 index a8c5b0bc85ec..000000000000 --- a/ui/packages/consul-ui/app/styles/routes/dc/kv/index.scss +++ /dev/null @@ -1,16 +0,0 @@ -/** - * Copyright (c) HashiCorp, Inc. - * SPDX-License-Identifier: BUSL-1.1 - */ - -html[data-route^='dc.kv'] .type-toggle { - float: right; - margin-bottom: 0 !important; -} -html[data-route^='dc.kv.edit'] h2 { - @extend %display-400-semibold; - border-bottom: var(--decor-border-200); - border-color: var(--token-color-surface-interactive-active); - padding-bottom: 0.2em; - margin-bottom: 0.5em; -} diff --git a/ui/packages/consul-ui/app/styles/tailwind.scss b/ui/packages/consul-ui/app/styles/tailwind.scss deleted file mode 100644 index c03d22ee28d9..000000000000 --- a/ui/packages/consul-ui/app/styles/tailwind.scss +++ /dev/null @@ -1,14 +0,0 @@ -/** - * Copyright (c) HashiCorp, Inc. - * SPDX-License-Identifier: BUSL-1.1 - */ - -@tailwind base; -@tailwind components; -@tailwind utilities; - -@layer utilities { - .consul-surface-nav { - background: var(--token-color-palette-neutral-700); - } -} diff --git a/ui/packages/consul-ui/app/templates/dc/acls/tokens/-fieldsets-legacy.hbs b/ui/packages/consul-ui/app/templates/dc/acls/tokens/-fieldsets-legacy.hbs index dbe188fc2ff7..c20a894f18a9 100644 --- a/ui/packages/consul-ui/app/templates/dc/acls/tokens/-fieldsets-legacy.hbs +++ b/ui/packages/consul-ui/app/templates/dc/acls/tokens/-fieldsets-legacy.hbs @@ -3,37 +3,50 @@ SPDX-License-Identifier: BUSL-1.1 }} -
- -{{#if false}} -
- {{#each (array 'management' 'client') as |type|}} - - {{/each}} -
-{{/if}} - -{{#if create }} -
\ No newline at end of file diff --git a/ui/packages/consul-ui/app/templates/dc/nodes/show.hbs b/ui/packages/consul-ui/app/templates/dc/nodes/show.hbs index 83bc19e43a85..147fb3947257 100644 --- a/ui/packages/consul-ui/app/templates/dc/nodes/show.hbs +++ b/ui/packages/consul-ui/app/templates/dc/nodes/show.hbs @@ -123,7 +123,7 @@ as |item tomography|}} }}/>
- {{item.Address}} + {{format-ipaddr item.Address}} {{#let (or item.Service.Address item.Node.Address) as |address|}} - {{address}} + {{format-ipaddr address}} {{/let}} diff --git a/ui/packages/consul-ui/app/utils/process-ip-address.js b/ui/packages/consul-ui/app/utils/process-ip-address.js new file mode 100644 index 000000000000..9e7bffa04d12 --- /dev/null +++ b/ui/packages/consul-ui/app/utils/process-ip-address.js @@ -0,0 +1,28 @@ +/** + * Copyright (c) HashiCorp, Inc. + * SPDX-License-Identifier: BUSL-1.1 + */ + +export function processIpAddress(ip) { + // Simple IPv4 validation + const ipv4Pattern = + /^(25[0-5]|2[0-4]\d|1\d\d|\d\d|\d)\.(25[0-5]|2[0-4]\d|1\d\d|\d\d|\d)\.(25[0-5]|2[0-4]\d|1\d\d|\d\d|\d)\.(25[0-5]|2[0-4]\d|1\d\d|\d\d|\d)$/; + + // Basic IPv6 pattern (loose, just enough to pass to URL) + const ipv6Pattern = /^[0-9a-fA-F:]+$/; + + if (ipv4Pattern.test(ip)) { + return ip; // Valid IPv4, return as-is + } + + if (ipv6Pattern.test(ip)) { + try { + const url = new URL(`http://[${ip}]/`); + return url.hostname; // Returns collapsed IPv6 + } catch (e) { + return null; // Invalid IPv6 + } + } + + return null; // Not valid IPv4 or IPv6 +} diff --git a/ui/packages/consul-ui/config/environment.js b/ui/packages/consul-ui/config/environment.js index a2deafdd8c4a..f48f9dc61f6b 100644 --- a/ui/packages/consul-ui/config/environment.js +++ b/ui/packages/consul-ui/config/environment.js @@ -98,11 +98,11 @@ module.exports = function (environment, $ = process.env) { }, // Static variables used in multiple places throughout the UI - CONSUL_HOME_URL: 'https://www.consul.io', + CONSUL_HOME_URL: 'https://developer.hashicorp.com', CONSUL_REPO_ISSUES_URL: 'https://github.com/hashicorp/consul/issues/new/choose', - CONSUL_DOCS_URL: 'https://www.consul.io/docs', + CONSUL_DOCS_URL: 'https://developer.hashicorp.com/docs', CONSUL_DOCS_LEARN_URL: 'https://learn.hashicorp.com', - CONSUL_DOCS_API_URL: 'https://www.consul.io/api', + CONSUL_DOCS_API_URL: 'https://developer.hashicorp.com/api', CONSUL_DOCS_DEVELOPER_URL: 'https://developer.hashicorp.com/consul/docs', CONSUL_COPYRIGHT_URL: 'https://www.hashicorp.com', }); diff --git a/ui/packages/consul-ui/ember-cli-build.js b/ui/packages/consul-ui/ember-cli-build.js index ecfdc1b35ac6..8f39a23a0b7b 100644 --- a/ui/packages/consul-ui/ember-cli-build.js +++ b/ui/packages/consul-ui/ember-cli-build.js @@ -160,30 +160,6 @@ module.exports = function (defaults, $ = process.env) { 'ember-cli-babel': { includePolyfill: true, }, - postcssOptions: { - compile: { - extension: 'scss', - plugins: [ - { - module: require('@csstools/postcss-sass'), - options: { - includePaths: [ - '../../node_modules/@hashicorp/design-system-tokens/dist/products/css', - ], - }, - }, - { - module: require('tailwindcss'), - options: { - config: './tailwind.config.js', - }, - }, - { - module: require('autoprefixer'), - }, - ], - }, - }, 'ember-cli-string-helpers': { only: [ 'capitalize', @@ -204,13 +180,14 @@ module.exports = function (defaults, $ = process.env) { forbidEval: true, publicAssetURL: isProd ? 
'{{.ContentPath}}assets' : undefined, }, - codemirror: { - keyMaps: ['sublime'], - addonFiles: ['lint/lint.css', 'lint/yaml-lint.js', 'mode/loadmode.js'], - }, sassOptions: { implementation: require('sass'), sourceMapEmbed: sourcemaps, + precision: 4, + includePaths: [ + './../../node_modules/@hashicorp/design-system-tokens/dist/products/css', + './../../node_modules/@hashicorp/design-system-components/dist/styles', + ], }, } ); @@ -264,27 +241,6 @@ module.exports = function (defaults, $ = process.env) { // CSS.escape polyfill app.import('node_modules/css.escape/css.escape.js', { outputFile: 'assets/css.escape.js' }); - // Possibly dynamically loaded via CodeMirror linting. See components/code-editor.js - app.import('node_modules/codemirror/mode/javascript/javascript.js', { - outputFile: 'assets/codemirror/mode/javascript/javascript.js', - }); - - // HCL/Ruby linting support. Possibly dynamically loaded via CodeMirror linting. See components/code-editor.js - app.import('node_modules/codemirror/mode/ruby/ruby.js', { - outputFile: 'assets/codemirror/mode/ruby/ruby.js', - }); - - // YAML linting support. Possibly dynamically loaded via CodeMirror linting. See components/code-editor.js - app.import('node_modules/js-yaml/dist/js-yaml.js', { - outputFile: 'assets/codemirror/mode/yaml/yaml.js', - }); - app.import('node_modules/codemirror/mode/yaml/yaml.js', { - outputFile: 'assets/codemirror/mode/yaml/yaml.js', - }); - // XML linting support. Possibly dynamically loaded via CodeMirror linting. See services/code-mirror/linter.js - app.import('node_modules/codemirror/mode/xml/xml.js', { - outputFile: 'assets/codemirror/mode/xml/xml.js', - }); // metrics-providers app.import('vendor/metrics-providers/consul.js', { outputFile: 'assets/metrics-providers/consul.js', diff --git a/ui/packages/consul-ui/package.json b/ui/packages/consul-ui/package.json index 44149453e5e0..59fb6a135ecc 100644 --- a/ui/packages/consul-ui/package.json +++ b/ui/packages/consul-ui/package.json @@ -62,17 +62,13 @@ "@babel/helper-call-delegate": "^7.10.1", "@babel/plugin-proposal-class-properties": "^7.10.1", "@babel/plugin-proposal-object-rest-spread": "^7.5.5", - "@csstools/postcss-sass": "^5.0.1", "@docfy/ember": "^0.4.1", "@ember/optional-features": "^2.0.0", "@ember/render-modifiers": "^1.0.2", "@ember/test-helpers": "^2.6.0", "@glimmer/component": "^1.0.4", "@glimmer/tracking": "^1.0.4", - "@hashicorp/design-system-components": "^3.0.2", - "@hashicorp/design-system-tokens": "^1.9.0", "@hashicorp/ember-cli-api-double": "^4.0.0", - "@hashicorp/ember-flight-icons": "^4.0.1", "@html-next/vertical-collection": "^4.0.0", "@lit/reactive-element": "^1.2.1", "@xstate/fsm": "^1.4.0", @@ -104,6 +100,7 @@ "d3-shape": "^2.0.0", "dayjs": "^1.9.3", "deepmerge": "^4.2.2", + "doctoc": "^2.0.0", "ember-array-fns": "^1.4.0", "ember-assign-helper": "^0.3.0", "ember-auto-import": "^2.4.2", @@ -119,7 +116,7 @@ "ember-cli-htmlbars": "^5.7.2", "ember-cli-inject-live-reload": "^2.1.0", "ember-cli-page-object": "^1.17.11", - "ember-cli-postcss": "^8.1.0", + "ember-cli-sass": "^11.0.1", "ember-cli-sri": "^2.1.1", "ember-cli-string-helpers": "^6.1.0", "ember-cli-template-lint": "^2.0.1", @@ -168,13 +165,12 @@ "flat": "^5.0.0", "hast-util-to-string": "^1.0.4", "husky": "^4.2.5", - "ivy-codemirror": "^2.1.0", "js-yaml": "^4.0.0", "lint-staged": "^10.2.11", "loader.js": "^4.7.0", "mnemonist": "^0.38.0", "ngraph.graph": "^19.1.0", - "parse-duration": "^1.0.0", + "parse-duration": "^2.1.3", "pretender": "^3.2.0", "prettier": "^2.5.1", "pretty-ms": 
"^7.0.1", @@ -184,8 +180,7 @@ "refractor": "^3.5.0", "remark-autolink-headings": "^6.0.1", "remark-hbs": "^0.4.0", - "sass": "^1.28.0", - "tailwindcss": "^3.1.8", + "sass": "^1.89.2", "tape": "^5.0.1", "text-encoding": "^0.7.0", "tippy.js": "^6.2.7", @@ -196,7 +191,7 @@ "webpack": "^5.74.0" }, "engines": { - "node": "18" + "node": "20" }, "ember": { "edition": "octane" @@ -214,7 +209,8 @@ "node": "18" }, "dependencies": { - "doctoc": "^2.0.0", + "@hashicorp/design-system-components": "^4.20.2", + "@hashicorp/ember-flight-icons": "4.0.0", "ember-element-helper": "0.6.1" } } diff --git a/ui/packages/consul-ui/tailwind.config.js b/ui/packages/consul-ui/tailwind.config.js deleted file mode 100644 index fce13d56ec38..000000000000 --- a/ui/packages/consul-ui/tailwind.config.js +++ /dev/null @@ -1,15 +0,0 @@ -/** - * Copyright (c) HashiCorp, Inc. - * SPDX-License-Identifier: BUSL-1.1 - */ - -/** @type {import('tailwindcss').Config} */ -module.exports = { - content: ['../**/*.{html.js,hbs,mdx}'], - theme: { - // disable all color utilities - we want to use HDS instead - colors: {}, - extend: {}, - }, - plugins: [], -}; diff --git a/ui/packages/consul-ui/tests/acceptance/dc/kvs/create.feature b/ui/packages/consul-ui/tests/acceptance/dc/kvs/create.feature index 9a98ba71fc21..2044e569f914 100644 --- a/ui/packages/consul-ui/tests/acceptance/dc/kvs/create.feature +++ b/ui/packages/consul-ui/tests/acceptance/dc/kvs/create.feature @@ -8,6 +8,7 @@ Feature: dc / kvs / create --- Then the url should be /datacenter/kv/create And the title should be "New Key / Value - Consul" + And pause for 200 Then I fill in with yaml --- additional: key-value diff --git a/ui/packages/consul-ui/tests/integration/components/code-editor-test.js b/ui/packages/consul-ui/tests/integration/components/code-editor-test.js deleted file mode 100644 index f0a5d9d2cbcc..000000000000 --- a/ui/packages/consul-ui/tests/integration/components/code-editor-test.js +++ /dev/null @@ -1,31 +0,0 @@ -/** - * Copyright (c) HashiCorp, Inc. - * SPDX-License-Identifier: BUSL-1.1 - */ - -import { module, test } from 'qunit'; -import { setupRenderingTest } from 'ember-qunit'; -import { render } from '@ember/test-helpers'; -import hbs from 'htmlbars-inline-precompile'; - -module('Integration | Component | code editor', function (hooks) { - setupRenderingTest(hooks); - - test('it renders', async function (assert) { - // Set any properties with this.set('myProperty', 'value'); - // Handle any actions with this.on('myAction', function(val) { ... }); - - await render(hbs`{{code-editor}}`); - - // this test is just to prove it renders something without producing - // an error. It renders the number 1, but seems to also render some sort of trailing space - // so just check for presence of CodeMirror - assert.equal(this.element.querySelectorAll('.CodeMirror').length, 1); - - // Template block usage: - await render(hbs` - {{#code-editor}}{{/code-editor}} - `); - assert.equal(this.element.querySelectorAll('.CodeMirror').length, 1); - }); -}); diff --git a/ui/packages/consul-ui/tests/integration/helpers/format-ipaddr-test.js b/ui/packages/consul-ui/tests/integration/helpers/format-ipaddr-test.js new file mode 100644 index 000000000000..ba9592562514 --- /dev/null +++ b/ui/packages/consul-ui/tests/integration/helpers/format-ipaddr-test.js @@ -0,0 +1,40 @@ +/** + * Copyright (c) HashiCorp, Inc. 
+ * SPDX-License-Identifier: BUSL-1.1 + */ + +import { module, test } from 'qunit'; +import { setupRenderingTest } from 'ember-qunit'; +import { render } from '@ember/test-helpers'; +import { hbs } from 'ember-cli-htmlbars'; + +module('Integration | Helper | format-ipaddr', function (hooks) { + setupRenderingTest(hooks); + + test('it renders the given value', async function (assert) { + this.set('inputValue', '192.168.1.1'); + + await render(hbs`
{{format-ipaddr this.inputValue}}
`); + + assert.dom(this.element).hasText('192.168.1.1'); + + // await render(hbs`
{{format-ipaddr '2001::85a3::8a2e:370:7334'}}
`); + // assert.dom(this.element).doesNotHaveText('2001::85a3::8a2e:370:7334'); + }); + + test('it should return an empty string for invalid IP addresses', async function (assert) { + this.set('inputValue', '2001::85a3::8a2e:370:7334'); + + await render(hbs`
{{format-ipaddr this.inputValue}}
`); + + assert.dom(this.element).hasText(''); + }); + + test('it should return a collapsed IPv6 address', async function (assert) { + this.set('inputValue', '2001:0db8:0000:0000:0000:ff00:0042:8329'); + + await render(hbs`
{{format-ipaddr this.inputValue}}
`); + + assert.dom(this.element).hasText('[2001:db8::ff00:42:8329]'); + }); +}); diff --git a/ui/packages/consul-ui/tests/integration/helpers/node-identity-template-test.js b/ui/packages/consul-ui/tests/integration/helpers/node-identity-template-test.js new file mode 100644 index 000000000000..7d589b865274 --- /dev/null +++ b/ui/packages/consul-ui/tests/integration/helpers/node-identity-template-test.js @@ -0,0 +1,81 @@ +/** + * Copyright (c) HashiCorp, Inc. + * SPDX-License-Identifier: BUSL-1.1 + */ + +import { module, test } from 'qunit'; +import { setupRenderingTest } from 'ember-qunit'; +import { render } from '@ember/test-helpers'; +import { hbs } from 'ember-cli-htmlbars'; + +module('Integration | Helper | node-identity-template', function (hooks) { + setupRenderingTest(hooks); + + test('renders default output with minimal args', async function (assert) { + this.set('name', 'node-1'); + await render(hbs`
<pre>{{node-identity-template this.name}}</pre>
`); + + assert + .dom('pre') + .hasText( + `node "node-1" {\n policy = "write"\n}\nservice_prefix "" {\n policy = "read"\n}`, + 'outputs default node and service_prefix block' + ); + }); + + test('renders with canUseNspaces true', async function (assert) { + this.setProperties({ + name: 'node-1', + canUseNspaces: true, + }); + + await render( + hbs`
<pre>{{node-identity-template this.name canUseNspaces=this.canUseNspaces}}</pre>
` + ); + + assert.dom('pre').includesText('namespace "default"'); + assert.dom('pre').includesText('node "node-1"'); + assert.dom('pre').includesText('service_prefix ""'); + }); + + test('renders with canUsePartitions true and canUseNspaces false', async function (assert) { + this.setProperties({ + name: 'node-1', + canUsePartitions: true, + canUseNspaces: false, + partition: 'alpha', + }); + + await render( + hbs`
<pre>{{node-identity-template this.name partition=this.partition canUsePartitions=this.canUsePartitions canUseNspaces=this.canUseNspaces}}</pre>
` + ); + + assert.dom('pre').includesText('partition "alpha"'); + assert.dom('pre').includesText('node "node-1"'); + assert.dom('pre').includesText('service_prefix ""'); + }); + + test('renders full structure when both canUsePartitions and canUseNspaces are true', async function (assert) { + this.setProperties({ + name: 'node-1', + canUsePartitions: true, + canUseNspaces: true, + partition: 'beta', + }); + + await render( + hbs`
<pre>{{node-identity-template this.name partition=this.partition canUsePartitions=this.canUsePartitions canUseNspaces=this.canUseNspaces}}</pre>
` + ); + + assert.dom('pre').includesText('partition "beta"'); + assert.dom('pre').includesText('namespace "default"'); + assert.dom('pre').includesText('node "node-1"'); + assert.dom('pre').includesText('service_prefix ""'); + }); + + test('handles undefined name safely', async function (assert) { + await render(hbs`
<pre>{{node-identity-template null}}</pre>
`); + + assert.dom('pre').includesText('node ""'); + }); +}); diff --git a/ui/packages/consul-ui/tests/integration/helpers/service-identity-template-test.js b/ui/packages/consul-ui/tests/integration/helpers/service-identity-template-test.js new file mode 100644 index 000000000000..995e9ec8af07 --- /dev/null +++ b/ui/packages/consul-ui/tests/integration/helpers/service-identity-template-test.js @@ -0,0 +1,101 @@ +/** + * Copyright (c) HashiCorp, Inc. + * SPDX-License-Identifier: BUSL-1.1 + */ + +import { module, test } from 'qunit'; +import { setupRenderingTest } from 'ember-qunit'; +import { render } from '@ember/test-helpers'; +import { hbs } from 'ember-cli-htmlbars'; + +module('Integration | Helper | service-identity-template', function (hooks) { + setupRenderingTest(hooks); + + test('renders default output with only name', async function (assert) { + this.set('name', 'api'); + await render(hbs`
<pre>{{service-identity-template this.name}}</pre>
`); + + assert.dom('pre').hasText( + `service "api" { + policy = "write" + } + service "api-sidecar-proxy" { + policy = "write" + } + service_prefix "" { + policy = "read" + } + node_prefix "" { + policy = "read" + }` + ); + }); + + test('renders with canUseNspaces = true', async function (assert) { + this.setProperties({ + name: 'api', + canUseNspaces: true, + }); + + await render( + hbs`
<pre>{{service-identity-template this.name canUseNspaces=this.canUseNspaces}}</pre>
` + ); + + assert.dom('pre').includesText('namespace "default"'); + assert.dom('pre').includesText('service "api"'); + assert.dom('pre').includesText('service_prefix ""'); + assert.dom('pre').includesText('node_prefix ""'); + }); + + test('renders with canUsePartitions = true only', async function (assert) { + this.setProperties({ + name: 'api', + canUsePartitions: true, + partition: 'p1', + }); + + await render(hbs` +
+      <pre>
+        {{service-identity-template this.name
+          canUsePartitions=this.canUsePartitions
+          partition=this.partition}}
+      </pre>
+ `); + + assert.dom('pre').includesText('partition "p1"'); + assert.dom('pre').includesText('service "api"'); + assert.dom('pre').doesNotIncludeText('namespace'); + }); + + test('renders with both canUsePartitions and canUseNspaces', async function (assert) { + this.setProperties({ + name: 'api', + canUsePartitions: true, + canUseNspaces: true, + partition: 'prod', + nspace: 'secure', + }); + + await render(hbs` +
+      <pre>
+        {{service-identity-template this.name
+          partition=this.partition
+          nspace=this.nspace
+          canUsePartitions=this.canUsePartitions
+          canUseNspaces=this.canUseNspaces}}
+      </pre>
+ `); + + assert.dom('pre').includesText('partition "prod"'); + assert.dom('pre').includesText('namespace "secure"'); + assert.dom('pre').includesText('service "api-sidecar-proxy"'); + }); + + test('handles null name gracefully', async function (assert) { + this.set('name', null); + await render(hbs`
<pre>{{service-identity-template this.name}}</pre>
`); + + assert.dom('pre').includesText('service ""'); + assert.dom('pre').includesText('sidecar-proxy'); + }); +}); diff --git a/ui/packages/consul-ui/tests/steps/assertions/form.js b/ui/packages/consul-ui/tests/steps/assertions/form.js index 1b7ef6f85f5d..fd31c688e08f 100644 --- a/ui/packages/consul-ui/tests/steps/assertions/form.js +++ b/ui/packages/consul-ui/tests/steps/assertions/form.js @@ -13,8 +13,10 @@ export default function (scenario, assert, find, currentPage) { } return Object.keys(data).reduce(function (prev, item, i, arr) { const name = `${obj.prefix || property}[${item}]`; - const $el = document.querySelector(`[name="${name}"]`); - const actual = $el.value; + const $el = + document.querySelector(`[name="${name}"]`) || + document.querySelector(`[aria-label="${name}"]`); + const actual = $el.value || $el.textContent; const expected = data[item]; assert.strictEqual(actual, expected, `Expected settings to be ${expected} was ${actual}`); }, obj); diff --git a/ui/packages/consul-ui/tests/steps/interactions/form.js b/ui/packages/consul-ui/tests/steps/interactions/form.js index bdd14a539616..8b12539fe2a6 100644 --- a/ui/packages/consul-ui/tests/steps/interactions/form.js +++ b/ui/packages/consul-ui/tests/steps/interactions/form.js @@ -3,10 +3,47 @@ * SPDX-License-Identifier: BUSL-1.1 */ +const isClassParentOfElement = function (_class, element) { + if (!element || !element.parentElement) { + return false; + } + if (element.parentElement.classList.contains(_class)) { + return true; + } + + return isClassParentOfElement(_class, element.parentElement); +}; + export default function (scenario, find, fillIn, triggerKeyEvent, currentPage) { const dont = `( don't| shouldn't| can't)?`; + + const fillInCodeEditor = function (page, name, value) { + const valueElement = document.querySelector(`[aria-label="${name}"]`); + + const isCodeEditorElement = isClassParentOfElement('cm-editor', valueElement); + const isCodeBlockElement = isClassParentOfElement('hds-code-block', valueElement); + if (isCodeEditorElement) { + const valueBlock = document.createElement('div'); + valueBlock.innerHTML = value; + valueElement.innerHTML = ''; + valueElement.appendChild(valueBlock); + } else { + if (isCodeBlockElement) { + throw new Error(`The ${name} editor is set to readonly`); + } + + return page; + } + + return page; + }; + const fillInElement = async function (page, name, value) { const cm = document.querySelector(`textarea[name="${name}"] + .CodeMirror`); + const codeEditor = document.querySelector(`[aria-label="${name}"]`); + if (isClassParentOfElement('cm-editor', codeEditor)) { + return fillInCodeEditor(page, name, value); + } if (cm) { if (!cm.CodeMirror.options.readOnly) { cm.CodeMirror.setValue(value); @@ -82,6 +119,9 @@ export default function (scenario, find, fillIn, triggerKeyEvent, currentPage) { return res; } ) + .then(['I fill in code editor "$name" with "$value"'], function (name, value) { + return fillInCodeEditor(currentPage(), name, value); + }) .then(['I type "$text" into "$selector"'], function (text, selector) { return fillIn(selector, text); }) diff --git a/ui/packages/consul-ui/tests/unit/services/code-mirror/linter-test.js b/ui/packages/consul-ui/tests/unit/services/code-mirror/linter-test.js deleted file mode 100644 index ec7c35002e39..000000000000 --- a/ui/packages/consul-ui/tests/unit/services/code-mirror/linter-test.js +++ /dev/null @@ -1,17 +0,0 @@ -/** - * Copyright (c) HashiCorp, Inc. 
- * SPDX-License-Identifier: BUSL-1.1 - */ - -import { module, test } from 'qunit'; -import { setupTest } from 'ember-qunit'; - -module('Unit | Service | code mirror/linter', function (hooks) { - setupTest(hooks); - - // Replace this with your real tests. - test('it exists', function (assert) { - let service = this.owner.lookup('service:code-mirror/linter'); - assert.ok(service); - }); -}); diff --git a/ui/packages/consul-ui/tests/unit/utils/process-ip-address-test.js b/ui/packages/consul-ui/tests/unit/utils/process-ip-address-test.js new file mode 100644 index 000000000000..5ebf5b2642c0 --- /dev/null +++ b/ui/packages/consul-ui/tests/unit/utils/process-ip-address-test.js @@ -0,0 +1,45 @@ +/** + * Copyright (c) HashiCorp, Inc. + * SPDX-License-Identifier: BUSL-1.1 + */ + +import { processIpAddress } from 'consul-ui/utils/process-ip-address'; +import { module, test } from 'qunit'; + +module('Unit | Utility | Process Ip Address', function () { + test('Returns as it is for ipv4 and already collapsed', function (assert) { + let result = processIpAddress('192.168.1.1'); + assert.equal(result, '192.168.1.1'); + + assert.equal(processIpAddress('255.255.255.255'), '255.255.255.255'); + + assert.equal(processIpAddress('2001:db8::ff00:42:8329'), '[2001:db8::ff00:42:8329]'); + + assert.equal(processIpAddress('::1'), '[::1]'); + + assert.equal(processIpAddress('fe80::202:b3ff:fe1e:8329'), '[fe80::202:b3ff:fe1e:8329]'); + + assert.equal(processIpAddress('::'), '[::]'); + }); + + test('Returns null for invalid IP address', function (assert) { + assert.equal(processIpAddress('2001::85a3::8a2e:370:7334'), null); + + assert.equal(processIpAddress('2001:db8:0:0:0:0:0:0:1:2'), null); + assert.equal(processIpAddress('2001:db8:g::1'), null); + assert.equal(processIpAddress('2001:db8:1::2:3:4:5:6'), null); + }); + + test('Returns collapsed IP address', function (assert) { + assert.equal( + processIpAddress('2001:0db8:0000:0000:0000:ff00:0042:8329'), + '[2001:db8::ff00:42:8329]' + ); + + assert.equal(processIpAddress('2001:db8:0:0:0:ff00:42:8329'), '[2001:db8::ff00:42:8329]'); + + assert.equal(processIpAddress('2001:db8::ff00:42:8329'), '[2001:db8::ff00:42:8329]'); + + assert.equal(processIpAddress('fe80::202:b3ff:fe1e:8329'), '[fe80::202:b3ff:fe1e:8329]'); + }); +}); diff --git a/ui/yarn.lock b/ui/yarn.lock index 10cbc0221d58..ee5467421975 100644 --- a/ui/yarn.lock +++ b/ui/yarn.lock @@ -2,11 +2,6 @@ # yarn lockfile v1 -"@alloc/quick-lru@^5.2.0": - version "5.2.0" - resolved "https://registry.npmjs.org/@alloc/quick-lru/-/quick-lru-5.2.0.tgz#7bf68b20c0a350f936915fcae06f58e32007ce30" - integrity sha512-UrcABB+4bUrFABwbluTIBErXwvbsU/V7TZWfmbgJfbkwiBuziS9gxdODUyuiecfdGQ85jglMW6juS3+z5TsKLw== - "@ampproject/remapping@^2.2.0": version "2.3.0" resolved "https://registry.npmjs.org/@ampproject/remapping/-/remapping-2.3.0.tgz#ed441b6fa600072520ce18b43d2c8cc8caecc7f4" @@ -30,6 +25,15 @@ "@babel/highlight" "^7.24.7" picocolors "^1.0.0" +"@babel/code-frame@^7.27.1": + version "7.27.1" + resolved "https://registry.npmjs.org/@babel/code-frame/-/code-frame-7.27.1.tgz#200f715e66d52a23b221a9435534a91cc13ad5be" + integrity sha512-cjQ7ZlQ0Mv3b47hABuTevyTuYN4i+loJKGeV9flcCgIK37cCXRh+L1bd3iBHlynerhQ7BhCkn2BPbQUL+rGqFg== + dependencies: + "@babel/helper-validator-identifier" "^7.27.1" + js-tokens "^4.0.0" + picocolors "^1.1.1" + "@babel/compat-data@^7.20.5", "@babel/compat-data@^7.22.6", "@babel/compat-data@^7.24.7": version "7.24.7" resolved 
"https://registry.npmjs.org/@babel/compat-data/-/compat-data-7.24.7.tgz#d23bbea508c3883ba8251fb4164982c36ea577ed" @@ -66,6 +70,17 @@ "@jridgewell/trace-mapping" "^0.3.25" jsesc "^2.5.1" +"@babel/generator@^7.27.5": + version "7.27.5" + resolved "https://registry.npmjs.org/@babel/generator/-/generator-7.27.5.tgz#3eb01866b345ba261b04911020cbe22dd4be8c8c" + integrity sha512-ZGhA37l0e/g2s1Cnzdix0O3aLYm66eF8aufiVteOgnwxgnRP8GoyMj7VWsgWnQbVKXyge7hqrFh2K2TQM6t1Hw== + dependencies: + "@babel/parser" "^7.27.5" + "@babel/types" "^7.27.3" + "@jridgewell/gen-mapping" "^0.3.5" + "@jridgewell/trace-mapping" "^0.3.25" + jsesc "^3.0.2" + "@babel/helper-annotate-as-pure@^7.18.6", "@babel/helper-annotate-as-pure@^7.24.7": version "7.24.7" resolved "https://registry.npmjs.org/@babel/helper-annotate-as-pure/-/helper-annotate-as-pure-7.24.7.tgz#5373c7bc8366b12a033b4be1ac13a206c6656aab" @@ -165,6 +180,14 @@ "@babel/traverse" "^7.24.7" "@babel/types" "^7.24.7" +"@babel/helper-module-imports@^7.22.15": + version "7.27.1" + resolved "https://registry.npmjs.org/@babel/helper-module-imports/-/helper-module-imports-7.27.1.tgz#7ef769a323e2655e126673bb6d2d6913bbead204" + integrity sha512-0gSFWUPNXNopqtIPQvlD5WgXYI5GY2kP2cCvoT8kczjbfcfuIljTbcWrulD1CIPIX2gt1wghbDy08yE1p+/r3w== + dependencies: + "@babel/traverse" "^7.27.1" + "@babel/types" "^7.27.1" + "@babel/helper-module-imports@^7.24.7", "@babel/helper-module-imports@^7.8.3": version "7.24.7" resolved "https://registry.npmjs.org/@babel/helper-module-imports/-/helper-module-imports-7.24.7.tgz#f2f980392de5b84c3328fc71d38bd81bbb83042b" @@ -242,11 +265,21 @@ resolved "https://registry.npmjs.org/@babel/helper-string-parser/-/helper-string-parser-7.24.7.tgz#4d2d0f14820ede3b9807ea5fc36dfc8cd7da07f2" integrity sha512-7MbVt6xrwFQbunH2DNQsAP5sTGxfqQtErvBIvIMi6EQnbgUOuVYanvREcmFrOPhoXBrTtjhhP+lW+o5UfK+tDg== +"@babel/helper-string-parser@^7.27.1": + version "7.27.1" + resolved "https://registry.npmjs.org/@babel/helper-string-parser/-/helper-string-parser-7.27.1.tgz#54da796097ab19ce67ed9f88b47bb2ec49367687" + integrity sha512-qMlSxKbpRlAridDExk92nSobyDdpPijUq2DW6oDnUqd0iOGxmQjyqhMIihI9+zv4LPyZdRje2cavWPbCbWm3eA== + "@babel/helper-validator-identifier@^7.24.7": version "7.24.7" resolved "https://registry.npmjs.org/@babel/helper-validator-identifier/-/helper-validator-identifier-7.24.7.tgz#75b889cfaf9e35c2aaf42cf0d72c8e91719251db" integrity sha512-rR+PBcQ1SMQDDyF6X0wxtG8QyLCgUB0eRAGguqRLfkCA87l7yAP7ehq8SNj96OOGTO8OBV70KhuFYcIkHXOg0w== +"@babel/helper-validator-identifier@^7.27.1": + version "7.27.1" + resolved "https://registry.npmjs.org/@babel/helper-validator-identifier/-/helper-validator-identifier-7.27.1.tgz#a7054dcc145a967dd4dc8fee845a57c1316c9df8" + integrity sha512-D2hP9eA+Sqx1kBZgzxZh0y1trbuU+JoDkiEwqhQ36nodYqJwyEIhPSdMNd7lOm/4io72luTPWH20Yda0xOuUow== + "@babel/helper-validator-option@^7.24.7": version "7.24.7" resolved "https://registry.npmjs.org/@babel/helper-validator-option/-/helper-validator-option-7.24.7.tgz#24c3bb77c7a425d1742eec8fb433b5a1b38e62f6" @@ -262,13 +295,13 @@ "@babel/traverse" "^7.24.7" "@babel/types" "^7.24.7" -"@babel/helpers@^7.24.7": - version "7.24.7" - resolved "https://registry.npmjs.org/@babel/helpers/-/helpers-7.24.7.tgz#aa2ccda29f62185acb5d42fb4a3a1b1082107416" - integrity sha512-NlmJJtvcw72yRJRcnCmGvSi+3jDEg8qFu3z0AFoymmzLx5ERVWyzd9kVXr7Th9/8yIJi2Zc6av4Tqz3wFs8QWg== +"@babel/helpers@7.26.10", "@babel/helpers@^7.24.7": + version "7.26.10" + resolved 
"https://registry.npmjs.org/@babel/helpers/-/helpers-7.26.10.tgz#6baea3cd62ec2d0c1068778d63cb1314f6637384" + integrity sha512-UPYc3SauzZ3JGgj87GgZ89JVdC5dj0AoetR5Bw6wj4niittNyFh6+eOGonYvJ1ao6B8lEa3Q3klS7ADZ53bc5g== dependencies: - "@babel/template" "^7.24.7" - "@babel/types" "^7.24.7" + "@babel/template" "^7.26.9" + "@babel/types" "^7.26.10" "@babel/highlight@^7.10.4", "@babel/highlight@^7.24.7": version "7.24.7" @@ -285,6 +318,20 @@ resolved "https://registry.npmjs.org/@babel/parser/-/parser-7.24.7.tgz#9a5226f92f0c5c8ead550b750f5608e766c8ce85" integrity sha512-9uUYRm6OqQrCqQdG1iCBwBPZgN8ciDBro2nIOFaiRz1/BCxaI7CNvQbDHvsArAC7Tw9Hda/B3U+6ui9u4HWXPw== +"@babel/parser@^7.27.2": + version "7.27.5" + resolved "https://registry.npmjs.org/@babel/parser/-/parser-7.27.5.tgz#ed22f871f110aa285a6fd934a0efed621d118826" + integrity sha512-OsQd175SxWkGlzbny8J3K8TnnDD0N3lrIUtB92xwyRpzaenGZhxDvxN/JgU00U3CDZNj9tPuDJ5H0WS4Nt3vKg== + dependencies: + "@babel/types" "^7.27.3" + +"@babel/parser@^7.27.5", "@babel/parser@^7.27.7": + version "7.27.7" + resolved "https://registry.npmjs.org/@babel/parser/-/parser-7.27.7.tgz#1687f5294b45039c159730e3b9c1f1b242e425e9" + integrity sha512-qnzXzDXdr/po3bOTbTIQZ7+TxNKxpkN5IifVLXS+r7qwynkZfPyjZfE7hCXbo7IoO9TNcSyibgONsf2HauUd3Q== + dependencies: + "@babel/types" "^7.27.7" + "@babel/plugin-bugfix-firefox-class-in-computed-class-key@^7.24.7": version "7.24.7" resolved "https://registry.npmjs.org/@babel/plugin-bugfix-firefox-class-in-computed-class-key/-/plugin-bugfix-firefox-class-in-computed-class-key-7.24.7.tgz#fd059fd27b184ea2b4c7e646868a9a381bbc3055" @@ -325,7 +372,7 @@ "@babel/helper-create-class-features-plugin" "^7.18.6" "@babel/helper-plugin-utils" "^7.18.6" -"@babel/plugin-proposal-decorators@^7.13.5", "@babel/plugin-proposal-decorators@^7.16.7", "@babel/plugin-proposal-decorators@^7.20.13": +"@babel/plugin-proposal-decorators@^7.13.5", "@babel/plugin-proposal-decorators@^7.16.7": version "7.24.7" resolved "https://registry.npmjs.org/@babel/plugin-proposal-decorators/-/plugin-proposal-decorators-7.24.7.tgz#7e2dcfeda4a42596b57c4c9de1f5176bbfc532e3" integrity sha512-RL9GR0pUG5Kc8BUWLNDm2T5OpYwSX15r98I0IkgmRQTXuELq/OynH8xtMTMvTJFjXbMWFVTKtYkTaYQsuAwQlQ== @@ -375,7 +422,7 @@ resolved "https://registry.npmjs.org/@babel/plugin-proposal-private-property-in-object/-/plugin-proposal-private-property-in-object-7.21.0-placeholder-for-preset-env.2.tgz#7844f9289546efa9febac2de4cfe358a050bd703" integrity sha512-SOSkfJDddaM7mak6cPEpswyTRnuRltl429hMraQEglW+OkovnCzsiszTmsrlY//qLFjCpQDFRvjdm2wA5pPm9w== -"@babel/plugin-proposal-private-property-in-object@^7.16.5", "@babel/plugin-proposal-private-property-in-object@^7.20.5": +"@babel/plugin-proposal-private-property-in-object@^7.16.5": version "7.21.11" resolved "https://registry.npmjs.org/@babel/plugin-proposal-private-property-in-object/-/plugin-proposal-private-property-in-object-7.21.11.tgz#69d597086b6760c4126525cfa154f34631ff272c" integrity sha512-0QZ8qP/3RLDVBwBFoWAwCtgcDZJVwA5LUJRZU8x2YFfKNuFq161wK3cuGrALu5yiPu+vzwTAg/sMWVNeWeNyaw== @@ -574,7 +621,7 @@ "@babel/helper-create-class-features-plugin" "^7.24.7" "@babel/helper-plugin-utils" "^7.24.7" -"@babel/plugin-transform-class-static-block@^7.16.7", "@babel/plugin-transform-class-static-block@^7.22.11", "@babel/plugin-transform-class-static-block@^7.24.7": +"@babel/plugin-transform-class-static-block@^7.16.7", "@babel/plugin-transform-class-static-block@^7.24.7": version "7.24.7" resolved 
"https://registry.npmjs.org/@babel/plugin-transform-class-static-block/-/plugin-transform-class-static-block-7.24.7.tgz#c82027ebb7010bc33c116d4b5044fbbf8c05484d" integrity sha512-HMXK3WbBPpZQufbMG4B46A90PkuuhN9vBCb5T8+VAHqvAqvcLi+2cKoukcpmUYkszLhScU3l1iudhrks3DggRQ== @@ -698,7 +745,7 @@ dependencies: "@babel/helper-plugin-utils" "^7.24.7" -"@babel/plugin-transform-modules-amd@^7.12.1", "@babel/plugin-transform-modules-amd@^7.13.0", "@babel/plugin-transform-modules-amd@^7.20.11", "@babel/plugin-transform-modules-amd@^7.24.7": +"@babel/plugin-transform-modules-amd@^7.12.1", "@babel/plugin-transform-modules-amd@^7.13.0", "@babel/plugin-transform-modules-amd@^7.24.7": version "7.24.7" resolved "https://registry.npmjs.org/@babel/plugin-transform-modules-amd/-/plugin-transform-modules-amd-7.24.7.tgz#65090ed493c4a834976a3ca1cde776e6ccff32d7" integrity sha512-9+pB1qxV3vs/8Hdmz/CulFB8w2tuu6EB94JZFsjdqxQokwGa9Unap7Bo2gGBGIvPmDIVvQrom7r5m/TCDMURhg== @@ -901,7 +948,7 @@ dependencies: "@babel/helper-plugin-utils" "^7.24.7" -"@babel/plugin-transform-typescript@^7.13.0", "@babel/plugin-transform-typescript@^7.20.13": +"@babel/plugin-transform-typescript@^7.13.0": version "7.24.7" resolved "https://registry.npmjs.org/@babel/plugin-transform-typescript/-/plugin-transform-typescript-7.24.7.tgz#b006b3e0094bf0813d505e0c5485679eeaf4a881" integrity sha512-iLD3UNkgx2n/HrjBesVbYX6j0yqn/sJktvbtKKgcaLIQ4bTTQ8obAypc1VpyHPD2y4Phh9zHOaAt8e/L14wCpw== @@ -976,7 +1023,7 @@ core-js "^2.6.5" regenerator-runtime "^0.13.4" -"@babel/preset-env@^7.10.2", "@babel/preset-env@^7.16.5", "@babel/preset-env@^7.16.7", "@babel/preset-env@^7.20.2": +"@babel/preset-env@^7.10.2", "@babel/preset-env@^7.16.5", "@babel/preset-env@^7.16.7": version "7.24.7" resolved "https://registry.npmjs.org/@babel/preset-env/-/preset-env-7.24.7.tgz#ff067b4e30ba4a72f225f12f123173e77b987f37" integrity sha512-1YZNsc+y6cTvWlDHidMBsQZrZfEFjRIo/BZCT906PMdzOyXtSLTgqGdrpcuTDCXyd11Am5uQULtDIcCfnTc8fQ== @@ -1077,17 +1124,10 @@ resolved "https://registry.npmjs.org/@babel/regjsgen/-/regjsgen-0.8.0.tgz#f0ba69b075e1f05fb2825b7fad991e7adbb18310" integrity sha512-x/rqGMdzj+fWZvCOYForTghzbtqPDZ5gPwaoNGHdgDfF2QA/XZbCBp4Moo5scrkAMPhB7z26XM/AaHuIJdgauA== -"@babel/runtime@7.12.18": - version "7.12.18" - resolved "https://registry.npmjs.org/@babel/runtime/-/runtime-7.12.18.tgz#af137bd7e7d9705a412b3caaf991fe6aaa97831b" - integrity sha512-BogPQ7ciE6SYAUPtlm9tWbgI9+2AgqSam6QivMgXgAT+fKbgppaj4ZX15MHeLC1PVF5sNk70huBu20XxWOs8Cg== - dependencies: - regenerator-runtime "^0.13.4" - -"@babel/runtime@^7.12.5", "@babel/runtime@^7.17.8", "@babel/runtime@^7.8.4": - version "7.24.7" - resolved "https://registry.npmjs.org/@babel/runtime/-/runtime-7.24.7.tgz#f4f0d5530e8dbdf59b3451b9b3e594b6ba082e12" - integrity sha512-UwgBRMjJP+xv857DCngvqXI3Iq6J4v0wXmwc6sapg+zyhbwmQX67LUEFrkK5tbyJ30jGuG3ZvWpBiB9LCy1kWw== +"@babel/runtime@7.12.18", "@babel/runtime@8.0.0-alpha.17", "@babel/runtime@^7.12.5", "@babel/runtime@^7.17.8", "@babel/runtime@^7.8.4": + version "8.0.0-alpha.17" + resolved "https://registry.npmjs.org/@babel/runtime/-/runtime-8.0.0-alpha.17.tgz#325cdc17591b6b0e96ff6d07a136eb0a73022f14" + integrity sha512-jeV3fYCLTbEwor7EBzOxhZbW+bxHJpm0V0xhaHGfWQwjsHENO2RBHVxFRTG2zfczCgOpz6TqP7EXVSUaooex6g== dependencies: regenerator-runtime "^0.14.0" @@ -1100,6 +1140,15 @@ "@babel/parser" "^7.24.7" "@babel/types" "^7.24.7" +"@babel/template@^7.26.9", "@babel/template@^7.27.2": + version "7.27.2" + resolved 
"https://registry.npmjs.org/@babel/template/-/template-7.27.2.tgz#fa78ceed3c4e7b63ebf6cb39e5852fca45f6809d" + integrity sha512-LPDZ85aEJyYSd18/DkjNh4/y1ntkE5KwUHWTiqgRxruuZL2F1yuHligVHLvcHY2vMHXttKFpJn6LwfI7cw7ODw== + dependencies: + "@babel/code-frame" "^7.27.1" + "@babel/parser" "^7.27.2" + "@babel/types" "^7.27.1" + "@babel/traverse@^7.1.6", "@babel/traverse@^7.12.1", "@babel/traverse@^7.24.7", "@babel/traverse@^7.4.5", "@babel/traverse@^7.7.0": version "7.24.7" resolved "https://registry.npmjs.org/@babel/traverse/-/traverse-7.24.7.tgz#de2b900163fa741721ba382163fe46a936c40cf5" @@ -1116,6 +1165,19 @@ debug "^4.3.1" globals "^11.1.0" +"@babel/traverse@^7.27.1": + version "7.27.7" + resolved "https://registry.npmjs.org/@babel/traverse/-/traverse-7.27.7.tgz#8355c39be6818362eace058cf7f3e25ac2ec3b55" + integrity sha512-X6ZlfR/O/s5EQ/SnUSLzr+6kGnkg8HXGMzpgsMsrJVcfDtH1vIp6ctCN4eZ1LS5c0+te5Cb6Y514fASjMRJ1nw== + dependencies: + "@babel/code-frame" "^7.27.1" + "@babel/generator" "^7.27.5" + "@babel/parser" "^7.27.7" + "@babel/template" "^7.27.2" + "@babel/types" "^7.27.7" + debug "^4.3.1" + globals "^11.1.0" + "@babel/types@^7.1.6", "@babel/types@^7.12.1", "@babel/types@^7.12.13", "@babel/types@^7.24.7", "@babel/types@^7.4.4", "@babel/types@^7.7.0", "@babel/types@^7.7.2": version "7.24.7" resolved "https://registry.npmjs.org/@babel/types/-/types-7.24.7.tgz#6027fe12bc1aa724cd32ab113fb7f1988f1f66f2" @@ -1125,6 +1187,22 @@ "@babel/helper-validator-identifier" "^7.24.7" to-fast-properties "^2.0.0" +"@babel/types@^7.26.10", "@babel/types@^7.27.1", "@babel/types@^7.27.3": + version "7.27.6" + resolved "https://registry.npmjs.org/@babel/types/-/types-7.27.6.tgz#a434ca7add514d4e646c80f7375c0aa2befc5535" + integrity sha512-ETyHEk2VHHvl9b9jZP5IHPavHYk57EhanlRRuae9XCpb/j5bDCbPPMOBfCWhnl/7EDJz0jEMCi/RhccCE8r1+Q== + dependencies: + "@babel/helper-string-parser" "^7.27.1" + "@babel/helper-validator-identifier" "^7.27.1" + +"@babel/types@^7.27.7": + version "7.27.7" + resolved "https://registry.npmjs.org/@babel/types/-/types-7.27.7.tgz#40eabd562049b2ee1a205fa589e629f945dce20f" + integrity sha512-8OLQgDScAOHXnAz2cV+RfzzNMipuLVBz2biuAJFMV9bfkNf393je3VM8CLkjQodW5+iWsSJdSgSWT6rsZoXHPw== + dependencies: + "@babel/helper-string-parser" "^7.27.1" + "@babel/helper-validator-identifier" "^7.27.1" + "@cnakazawa/watch@^1.0.3": version "1.0.4" resolved "https://registry.npmjs.org/@cnakazawa/watch/-/watch-1.0.4.tgz#f864ae85004d0fcab6f50be9141c4da368d1656a" @@ -1133,25 +1211,172 @@ exec-sh "^0.3.2" minimist "^1.2.0" +"@codemirror/autocomplete@^6.0.0", "@codemirror/autocomplete@^6.7.1": + version "6.18.6" + resolved "https://registry.npmjs.org/@codemirror/autocomplete/-/autocomplete-6.18.6.tgz#de26e864a1ec8192a1b241eb86addbb612964ddb" + integrity sha512-PHHBXFomUs5DF+9tCOM/UoW6XQ4R44lLNNhRaW9PKPTU0D7lIjRg3ElxaJnTwsl/oHiR93WSXDBrekhoUGCPtg== + dependencies: + "@codemirror/language" "^6.0.0" + "@codemirror/state" "^6.0.0" + "@codemirror/view" "^6.17.0" + "@lezer/common" "^1.0.0" + +"@codemirror/commands@^6.8.0": + version "6.8.1" + resolved "https://registry.npmjs.org/@codemirror/commands/-/commands-6.8.1.tgz#639f5559d2f33f2582a2429c58cb0c1b925c7a30" + integrity sha512-KlGVYufHMQzxbdQONiLyGQDUW0itrLZwq3CcY7xpv9ZLRHqzkBSoteocBHtMCoY7/Ci4xhzSrToIeLg7FxHuaw== + dependencies: + "@codemirror/language" "^6.0.0" + "@codemirror/state" "^6.4.0" + "@codemirror/view" "^6.27.0" + "@lezer/common" "^1.1.0" + +"@codemirror/lang-css@^6.0.0": + version "6.3.1" + resolved 
"https://registry.npmjs.org/@codemirror/lang-css/-/lang-css-6.3.1.tgz#763ca41aee81bb2431be55e3cfcc7cc8e91421a3" + integrity sha512-kr5fwBGiGtmz6l0LSJIbno9QrifNMUusivHbnA1H6Dmqy4HZFte3UAICix1VuKo0lMPKQr2rqB+0BkKi/S3Ejg== + dependencies: + "@codemirror/autocomplete" "^6.0.0" + "@codemirror/language" "^6.0.0" + "@codemirror/state" "^6.0.0" + "@lezer/common" "^1.0.2" + "@lezer/css" "^1.1.7" + +"@codemirror/lang-go@^6.0.1": + version "6.0.1" + resolved "https://registry.npmjs.org/@codemirror/lang-go/-/lang-go-6.0.1.tgz#598222c90f56eae28d11069c612ca64d0306b057" + integrity sha512-7fNvbyNylvqCphW9HD6WFnRpcDjr+KXX/FgqXy5H5ZS0eC5edDljukm/yNgYkwTsgp2busdod50AOTIy6Jikfg== + dependencies: + "@codemirror/autocomplete" "^6.0.0" + "@codemirror/language" "^6.6.0" + "@codemirror/state" "^6.0.0" + "@lezer/common" "^1.0.0" + "@lezer/go" "^1.0.0" + +"@codemirror/lang-html@^6.0.0": + version "6.4.9" + resolved "https://registry.npmjs.org/@codemirror/lang-html/-/lang-html-6.4.9.tgz#d586f2cc9c341391ae07d1d7c545990dfa069727" + integrity sha512-aQv37pIMSlueybId/2PVSP6NPnmurFDVmZwzc7jszd2KAF8qd4VBbvNYPXWQq90WIARjsdVkPbw29pszmHws3Q== + dependencies: + "@codemirror/autocomplete" "^6.0.0" + "@codemirror/lang-css" "^6.0.0" + "@codemirror/lang-javascript" "^6.0.0" + "@codemirror/language" "^6.4.0" + "@codemirror/state" "^6.0.0" + "@codemirror/view" "^6.17.0" + "@lezer/common" "^1.0.0" + "@lezer/css" "^1.1.0" + "@lezer/html" "^1.3.0" + +"@codemirror/lang-javascript@^6.0.0", "@codemirror/lang-javascript@^6.2.2": + version "6.2.4" + resolved "https://registry.npmjs.org/@codemirror/lang-javascript/-/lang-javascript-6.2.4.tgz#eef2227d1892aae762f3a0f212f72bec868a02c5" + integrity sha512-0WVmhp1QOqZ4Rt6GlVGwKJN3KW7Xh4H2q8ZZNGZaP6lRdxXJzmjm4FqvmOojVj6khWJHIb9sp7U/72W7xQgqAA== + dependencies: + "@codemirror/autocomplete" "^6.0.0" + "@codemirror/language" "^6.6.0" + "@codemirror/lint" "^6.0.0" + "@codemirror/state" "^6.0.0" + "@codemirror/view" "^6.17.0" + "@lezer/common" "^1.0.0" + "@lezer/javascript" "^1.0.0" + +"@codemirror/lang-json@^6.0.1": + version "6.0.2" + resolved "https://registry.npmjs.org/@codemirror/lang-json/-/lang-json-6.0.2.tgz#054b160671306667e25d80385286049841836179" + integrity sha512-x2OtO+AvwEHrEwR0FyyPtfDUiloG3rnVTSZV1W8UteaLL8/MajQd8DpvUb2YVzC+/T18aSDv0H9mu+xw0EStoQ== + dependencies: + "@codemirror/language" "^6.0.0" + "@lezer/json" "^1.0.0" + +"@codemirror/lang-markdown@^6.3.2": + version "6.3.3" + resolved "https://registry.npmjs.org/@codemirror/lang-markdown/-/lang-markdown-6.3.3.tgz#457f93bd8a2d422dae0625b20b61adf5c6d23def" + integrity sha512-1fn1hQAPWlSSMCvnF810AkhWpNLkJpl66CRfIy3vVl20Sl4NwChkorCHqpMtNbXr1EuMJsrDnhEpjZxKZ2UX3A== + dependencies: + "@codemirror/autocomplete" "^6.7.1" + "@codemirror/lang-html" "^6.0.0" + "@codemirror/language" "^6.3.0" + "@codemirror/state" "^6.0.0" + "@codemirror/view" "^6.0.0" + "@lezer/common" "^1.2.1" + "@lezer/markdown" "^1.0.0" + +"@codemirror/lang-sql@^6.8.0": + version "6.9.0" + resolved "https://registry.npmjs.org/@codemirror/lang-sql/-/lang-sql-6.9.0.tgz#0130da09c7d827b0aa5f9598f61bca975a5480c7" + integrity sha512-xmtpWqKSgum1B1J3Ro6rf7nuPqf2+kJQg5SjrofCAcyCThOe0ihSktSoXfXuhQBnwx1QbmreBbLJM5Jru6zitg== + dependencies: + "@codemirror/autocomplete" "^6.0.0" + "@codemirror/language" "^6.0.0" + "@codemirror/state" "^6.0.0" + "@lezer/common" "^1.2.0" + "@lezer/highlight" "^1.0.0" + "@lezer/lr" "^1.0.0" + +"@codemirror/lang-yaml@^6.1.2": + version "6.1.2" + resolved 
"https://registry.npmjs.org/@codemirror/lang-yaml/-/lang-yaml-6.1.2.tgz#c84280c68fa7af456a355d91183b5e537e9b7038" + integrity sha512-dxrfG8w5Ce/QbT7YID7mWZFKhdhsaTNOYjOkSIMt1qmC4VQnXSDSYVHHHn8k6kJUfIhtLo8t1JJgltlxWdsITw== + dependencies: + "@codemirror/autocomplete" "^6.0.0" + "@codemirror/language" "^6.0.0" + "@codemirror/state" "^6.0.0" + "@lezer/common" "^1.2.0" + "@lezer/highlight" "^1.2.0" + "@lezer/lr" "^1.0.0" + "@lezer/yaml" "^1.0.0" + +"@codemirror/language@^6.0.0", "@codemirror/language@^6.10.3", "@codemirror/language@^6.3.0", "@codemirror/language@^6.4.0", "@codemirror/language@^6.6.0": + version "6.11.2" + resolved "https://registry.npmjs.org/@codemirror/language/-/language-6.11.2.tgz#90d2d094cfbd14263bc5354ebd2445ee4e81bdc3" + integrity sha512-p44TsNArL4IVXDTbapUmEkAlvWs2CFQbcfc0ymDsis1kH2wh0gcY96AS29c/vp2d0y2Tquk1EDSaawpzilUiAw== + dependencies: + "@codemirror/state" "^6.0.0" + "@codemirror/view" "^6.23.0" + "@lezer/common" "^1.1.0" + "@lezer/highlight" "^1.0.0" + "@lezer/lr" "^1.0.0" + style-mod "^4.0.0" + +"@codemirror/legacy-modes@^6.4.2": + version "6.5.1" + resolved "https://registry.npmjs.org/@codemirror/legacy-modes/-/legacy-modes-6.5.1.tgz#6bd13fac94f67a825e5420017e0d2f3c35d09342" + integrity sha512-DJYQQ00N1/KdESpZV7jg9hafof/iBNp9h7TYo1SLMk86TWl9uDsVdho2dzd81K+v4retmK6mdC7WpuOQDytQqw== + dependencies: + "@codemirror/language" "^6.0.0" + +"@codemirror/lint@^6.0.0", "@codemirror/lint@^6.8.4": + version "6.8.5" + resolved "https://registry.npmjs.org/@codemirror/lint/-/lint-6.8.5.tgz#9edaa808e764e28e07665b015951934c8ec3a418" + integrity sha512-s3n3KisH7dx3vsoeGMxsbRAgKe4O1vbrnKBClm99PU0fWxmxsx5rR2PfqQgIt+2MMJBHbiJ5rfIdLYfB9NNvsA== + dependencies: + "@codemirror/state" "^6.0.0" + "@codemirror/view" "^6.35.0" + crelt "^1.0.5" + +"@codemirror/state@^6.0.0", "@codemirror/state@^6.4.0", "@codemirror/state@^6.5.0": + version "6.5.2" + resolved "https://registry.npmjs.org/@codemirror/state/-/state-6.5.2.tgz#8eca3a64212a83367dc85475b7d78d5c9b7076c6" + integrity sha512-FVqsPqtPWKVVL3dPSxy8wEF/ymIEuVzF1PK3VbUgrxXpJUSHQWWZz4JMToquRxnkw+36LTamCZG2iua2Ptq0fA== + dependencies: + "@marijn/find-cluster-break" "^1.0.0" + +"@codemirror/view@^6.0.0", "@codemirror/view@^6.17.0", "@codemirror/view@^6.23.0", "@codemirror/view@^6.27.0", "@codemirror/view@^6.35.0", "@codemirror/view@^6.36.2": + version "6.38.0" + resolved "https://registry.npmjs.org/@codemirror/view/-/view-6.38.0.tgz#4486062b791a4247793e0953e05ae71a9e172217" + integrity sha512-yvSchUwHOdupXkd7xJ0ob36jdsSR/I+/C+VbY0ffBiL5NiSTEBDfB1ZGWbbIlDd5xgdUkody+lukAdOxYrOBeg== + dependencies: + "@codemirror/state" "^6.5.0" + crelt "^1.0.6" + style-mod "^4.1.0" + w3c-keyname "^2.2.4" + "@colors/colors@1.5.0": version "1.5.0" resolved "https://registry.npmjs.org/@colors/colors/-/colors-1.5.0.tgz#bb504579c1cae923e6576a4f5da43d25f97bdbd9" integrity sha512-ooWCrlZP11i8GImSjTHYHLkvFDP48nS4+204nGb1RiX/WXYHmJA2III9/e2DWVabCESdW7hBAEzHRqUn9OUVvQ== -"@csstools/postcss-sass@^5.0.1": - version "5.1.1" - resolved "https://registry.npmjs.org/@csstools/postcss-sass/-/postcss-sass-5.1.1.tgz#135921df13bc56bee50c7470a66e4e9f3d5c89ae" - integrity sha512-La7bgTcM6YwPBLqlaXg7lMLry82iLv1a+S1RmgvHq2mH2Zd57L2anjZvJC8ACUHWc4M9fXws93dq6gaK0kZyAw== - dependencies: - "@csstools/sass-import-resolve" "^1.0.0" - sass "^1.69.5" - source-map "~0.7.4" - -"@csstools/sass-import-resolve@^1.0.0": - version "1.0.0" - resolved "https://registry.npmjs.org/@csstools/sass-import-resolve/-/sass-import-resolve-1.0.0.tgz#32c3cdb2f7af3cd8f0dca357b592e7271f3831b5" - integrity 
sha512-pH4KCsbtBLLe7eqUrw8brcuFO8IZlN36JjdKlOublibVdAIPHCzEnpBWOVUXK5sCf+DpBi8ZtuWtjF0srybdeA== - "@docfy/core@^0.4.4": version "0.4.4" resolved "https://registry.npmjs.org/@docfy/core/-/core-0.4.4.tgz#041157870abcde99e64068cdbd79767b2c1a97b4" @@ -1377,7 +1602,7 @@ ember-cli-babel "^7.10.0" ember-modifier-manager-polyfill "^1.1.0" -"@ember/render-modifiers@^2.0.0", "@ember/render-modifiers@^2.0.5": +"@ember/render-modifiers@^2.0.0", "@ember/render-modifiers@^2.1.0": version "2.1.0" resolved "https://registry.npmjs.org/@ember/render-modifiers/-/render-modifiers-2.1.0.tgz#f4fff95a8b5cfbe947ec46644732d511711c5bf9" integrity sha512-LruhfoDv2itpk0fA0IC76Sxjcnq/7BC6txpQo40hOko8Dn6OxwQfxkPIbZGV0Cz7df+iX+VJrcYzNIvlc3w2EQ== @@ -1417,7 +1642,7 @@ ember-cli-version-checker "^5.1.2" semver "^7.3.5" -"@embroider/addon-shim@^1.0.0", "@embroider/addon-shim@^1.2.0", "@embroider/addon-shim@^1.8.3", "@embroider/addon-shim@^1.8.4", "@embroider/addon-shim@^1.8.7": +"@embroider/addon-shim@^1.0.0", "@embroider/addon-shim@^1.2.0", "@embroider/addon-shim@^1.8.3", "@embroider/addon-shim@^1.8.7": version "1.8.9" resolved "https://registry.npmjs.org/@embroider/addon-shim/-/addon-shim-1.8.9.tgz#ef37eba069d391b2d2a80aa62880c469051c4d43" integrity sha512-qyN64T1jMHZ99ihlk7VFHCWHYZHLE1DOdHi0J7lmn5waV1DoW7gD8JLi1i7FregzXtKhbDc7shyEmTmWPTs8MQ== @@ -1427,6 +1652,16 @@ common-ancestor-path "^1.0.1" semver "^7.3.8" +"@embroider/addon-shim@^1.10.0", "@embroider/addon-shim@^1.6.0", "@embroider/addon-shim@^1.8.6", "@embroider/addon-shim@^1.9.0": + version "1.10.0" + resolved "https://registry.npmjs.org/@embroider/addon-shim/-/addon-shim-1.10.0.tgz#7c3325e0939674290a9ca4ad7d744ee69313c0a0" + integrity sha512-gcJuHiXgnrzaU8NyU+2bMbtS6PNOr5v5B8OXBqaBvTCsMpXLvKo8OBOQFCoUN0rPX2J6VaFqrbi/371sMvzZug== + dependencies: + "@embroider/shared-internals" "^3.0.0" + broccoli-funnel "^3.0.8" + common-ancestor-path "^1.0.1" + semver "^7.3.8" + "@embroider/core@0.36.0": version "0.36.0" resolved "https://registry.npmjs.org/@embroider/core/-/core-0.36.0.tgz#fbbd60d29c3fcbe02b4e3e63e6043a43de2b9ce3" @@ -1518,7 +1753,7 @@ resolve "^1.20.0" semver "^7.3.2" -"@embroider/macros@^0.50.0 || ^1.0.0", "@embroider/macros@^1.0.0", "@embroider/macros@^1.10.0", "@embroider/macros@^1.16.1", "@embroider/macros@^1.2.0": +"@embroider/macros@^0.50.0 || ^1.0.0", "@embroider/macros@^1.0.0", "@embroider/macros@^1.10.0", "@embroider/macros@^1.16.1": version "1.16.5" resolved "https://registry.npmjs.org/@embroider/macros/-/macros-1.16.5.tgz#871addab2103b554c6b6a3a337c00e3f0a0462ac" integrity sha512-Oz8bUZvZzOV1Gk3qSgIzZJJzs6acclSTcEFyB+KdKbKqjTC3uebn53aU2gAlLU7/YdTRZrg2gNbQuwAp+tGkGg== @@ -1532,6 +1767,34 @@ resolve "^1.20.0" semver "^7.3.2" +"@embroider/macros@^1.12.3", "@embroider/macros@^1.18.0": + version "1.18.0" + resolved "https://registry.npmjs.org/@embroider/macros/-/macros-1.18.0.tgz#d79c4474667559ac9baf903e8fb89f1b00a0c45a" + integrity sha512-KanP80XxNK4bmQ1HKTcUjy/cdCt9n7knPMLK1vzHdOFymACHo+GbhgUjXjYdOCuBTv+ZwcjL2P2XDmBcYS9r8g== + dependencies: + "@embroider/shared-internals" "3.0.0" + assert-never "^1.2.1" + babel-import-util "^3.0.1" + ember-cli-babel "^7.26.6" + find-up "^5.0.0" + lodash "^4.17.21" + resolve "^1.20.0" + semver "^7.3.2" + +"@embroider/macros@~1.16.0": + version "1.16.13" + resolved "https://registry.npmjs.org/@embroider/macros/-/macros-1.16.13.tgz#3647839de7154400115e0b874bbf5aed9312a7a8" + integrity sha512-2oGZh0m1byBYQFWEa8b2cvHJB2LzaF3DdMCLCqcRAccABMROt1G3sultnNCT30NhfdGWMEsJOT3Jm4nFxXmTRw== + dependencies: + 
"@embroider/shared-internals" "2.9.0" + assert-never "^1.2.1" + babel-import-util "^2.0.0" + ember-cli-babel "^7.26.6" + find-up "^5.0.0" + lodash "^4.17.21" + resolve "^1.20.0" + semver "^7.3.2" + "@embroider/shared-internals@0.41.0": version "0.41.0" resolved "https://registry.npmjs.org/@embroider/shared-internals/-/shared-internals-0.41.0.tgz#2553f026d4f48ea1fd11235501feb63bf49fa306" @@ -1587,6 +1850,43 @@ semver "^7.3.5" typescript-memoize "^1.0.1" +"@embroider/shared-internals@2.9.0": + version "2.9.0" + resolved "https://registry.npmjs.org/@embroider/shared-internals/-/shared-internals-2.9.0.tgz#5d945b92e08db163de60d82f7c388e2b7260f0cc" + integrity sha512-8untWEvGy6av/oYibqZWMz/yB+LHsKxEOoUZiLvcpFwWj2Sipc0DcXeTJQZQZ++otNkLCWyDrDhOLrOkgjOPSg== + dependencies: + babel-import-util "^2.0.0" + debug "^4.3.2" + ember-rfc176-data "^0.3.17" + fs-extra "^9.1.0" + is-subdir "^1.2.0" + js-string-escape "^1.0.1" + lodash "^4.17.21" + minimatch "^3.0.4" + pkg-entry-points "^1.1.0" + resolve-package-path "^4.0.1" + semver "^7.3.5" + typescript-memoize "^1.0.1" + +"@embroider/shared-internals@3.0.0", "@embroider/shared-internals@^3.0.0": + version "3.0.0" + resolved "https://registry.npmjs.org/@embroider/shared-internals/-/shared-internals-3.0.0.tgz#98251e6b99d36d64120361a449569ef5384b3812" + integrity sha512-5J5ipUMCAinQS38WW7wedruq5Z4VnHvNo+ZgOduw0PtI9w0CQWx7/HE+98PBDW8jclikeF+aHwF317vc1hwuzg== + dependencies: + babel-import-util "^3.0.1" + debug "^4.3.2" + ember-rfc176-data "^0.3.17" + fs-extra "^9.1.0" + is-subdir "^1.2.0" + js-string-escape "^1.0.1" + lodash "^4.17.21" + minimatch "^3.0.4" + pkg-entry-points "^1.1.0" + resolve-package-path "^4.0.1" + resolve.exports "^2.0.2" + semver "^7.3.5" + typescript-memoize "^1.0.1" + "@embroider/shared-internals@^1.0.0": version "1.8.3" resolved "https://registry.npmjs.org/@embroider/shared-internals/-/shared-internals-1.8.3.tgz#52d868dc80016e9fe983552c0e516f437bf9b9f9" @@ -1610,7 +1910,7 @@ broccoli-funnel "^3.0.5" ember-cli-babel "^7.23.1" -"@embroider/util@^0.39.1 || ^0.40.0 || ^0.41.0 || ^1.0.0", "@embroider/util@^1.0.0", "@embroider/util@^1.9.0": +"@embroider/util@^0.39.1 || ^0.40.0 || ^0.41.0 || ^1.0.0", "@embroider/util@^1.9.0": version "1.13.1" resolved "https://registry.npmjs.org/@embroider/util/-/util-1.13.1.tgz#c6d4a569b331cbf805e68e7fa6602f248438bde6" integrity sha512-MRbs2FPO4doQ31YHIYk+QKChEs7k15aTsMk8QmO4eKiuQq9OT0sr1oasObZyGB8cVVbr29WWRWmsNirxzQtHIg== @@ -1628,6 +1928,15 @@ broccoli-funnel "^3.0.5" ember-cli-babel "^7.23.1" +"@embroider/util@^1.13.2": + version "1.13.3" + resolved "https://registry.npmjs.org/@embroider/util/-/util-1.13.3.tgz#ac6a12f54097173167a9a1189a49ea6cd5b45755" + integrity sha512-fb9S137zZqSI1IeWpGKVJ+WZHsRiIrD9D2A4aVwVH0dZeBKDg6lMaMN2MiXJ/ldUAG3DUFxnClnpiG5m2g3JFA== + dependencies: + "@embroider/macros" "~1.16.0" + broccoli-funnel "^3.0.5" + ember-cli-babel "^7.26.11" + "@eslint/eslintrc@^0.4.3": version "0.4.3" resolved "https://registry.npmjs.org/@eslint/eslintrc/-/eslintrc-0.4.3.tgz#9e42981ef035beb3dd49add17acb96e8ff6f394c" @@ -1643,6 +1952,26 @@ minimatch "^3.0.4" strip-json-comments "^3.1.1" +"@floating-ui/core@^1.7.2": + version "1.7.2" + resolved "https://registry.npmjs.org/@floating-ui/core/-/core-1.7.2.tgz#3d1c35263950b314b6d5a72c8bfb9e3c1551aefd" + integrity sha512-wNB5ooIKHQc+Kui96jE/n69rHFWAVoxn5CAzL1Xdd8FG03cgY3MLO+GF9U3W737fYDSgPWA6MReKhBQBop6Pcw== + dependencies: + "@floating-ui/utils" "^0.2.10" + +"@floating-ui/dom@^1.6.12": + version "1.7.2" + resolved 
"https://registry.npmjs.org/@floating-ui/dom/-/dom-1.7.2.tgz#3540b051cf5ce0d4f4db5fb2507a76e8ea5b4a45" + integrity sha512-7cfaOQuCS27HD7DX+6ib2OrnW+b4ZBwDNnCcT0uTyidcmyWb03FnQqJybDBoCnpdxwBSfA94UAYlRCt7mV+TbA== + dependencies: + "@floating-ui/core" "^1.7.2" + "@floating-ui/utils" "^0.2.10" + +"@floating-ui/utils@^0.2.10": + version "0.2.10" + resolved "https://registry.npmjs.org/@floating-ui/utils/-/utils-0.2.10.tgz#a2a1e3812d14525f725d011a73eceb41fef5bc1c" + integrity sha512-aGTxbpbg8/b5JfU1HXSrbH3wXZuLPJcNEcZQFMxLs3oSzgtVu6nFPkbbGGUvBcUjKV2YyB9Wxxabo+HEH9tcRQ== + "@formatjs/ecma402-abstract@1.11.4": version "1.11.4" resolved "https://registry.npmjs.org/@formatjs/ecma402-abstract/-/ecma402-abstract-1.11.4.tgz#b962dfc4ae84361f9f08fbce411b4e4340930eda" @@ -1923,37 +2252,58 @@ faker "^4.1.0" js-yaml "^3.13.1" -"@hashicorp/design-system-components@^3.0.2": - version "3.6.0" - resolved "https://registry.npmjs.org/@hashicorp/design-system-components/-/design-system-components-3.6.0.tgz#e678123f9d88eef7df2edfdf1666997214fe3273" - integrity sha512-HV8Wa9fTFCfwCCze7gzg2+U3oSmwNkwYtnrrxln7MXzEKRWl1E2p4BM7ZdyaXu3n092R3wcjy04VwaVSmQqrLw== - dependencies: - "@ember/render-modifiers" "^2.0.5" +"@hashicorp/design-system-components@^4.20.2": + version "4.20.2" + resolved "https://registry.npmjs.org/@hashicorp/design-system-components/-/design-system-components-4.20.2.tgz#0503dc09493d5647fa835207d766cc1f38faf37d" + integrity sha512-0FDaDlvaQQVVXoSoWsExmW1TUgmuJNoCz11JuwaOwin59Vl4ttVLsNvY8DviGJlh6VhV1yYlGJa7X2xhQG+ESQ== + dependencies: + "@codemirror/commands" "^6.8.0" + "@codemirror/lang-go" "^6.0.1" + "@codemirror/lang-javascript" "^6.2.2" + "@codemirror/lang-json" "^6.0.1" + "@codemirror/lang-markdown" "^6.3.2" + "@codemirror/lang-sql" "^6.8.0" + "@codemirror/lang-yaml" "^6.1.2" + "@codemirror/language" "^6.10.3" + "@codemirror/legacy-modes" "^6.4.2" + "@codemirror/lint" "^6.8.4" + "@codemirror/state" "^6.5.0" + "@codemirror/view" "^6.36.2" + "@ember/render-modifiers" "^2.1.0" "@ember/string" "^3.1.1" "@ember/test-waiters" "^3.1.0" - "@hashicorp/design-system-tokens" "^1.11.0" - "@hashicorp/ember-flight-icons" "^4.1.0" - dialog-polyfill "^0.5.6" - ember-a11y-refocus "^3.0.2" - ember-auto-import "^2.6.3" - ember-cli-babel "^8.2.0" - ember-cli-htmlbars "^6.3.0" + "@embroider/addon-shim" "^1.10.0" + "@embroider/macros" "^1.18.0" + "@embroider/util" "^1.13.2" + "@floating-ui/dom" "^1.6.12" + "@hashicorp/design-system-tokens" "^2.3.0" + "@hashicorp/flight-icons" "^3.11.1" + "@lezer/highlight" "^1.2.1" + "@nullvoxpopuli/ember-composable-helpers" "^5.2.10" + clipboard-polyfill "^4.1.1" + codemirror-lang-hcl "^0.0.0-beta.2" + decorator-transforms "^2.3.0" + ember-a11y-refocus "^4.1.4" ember-cli-sass "^11.0.1" - ember-composable-helpers "^5.0.0" - ember-element-helper "^0.8.5" - ember-focus-trap "^1.1.0" - ember-keyboard "^8.2.1" - ember-stargate "^0.4.3" - ember-style-modifier "^3.0.1" - ember-truth-helpers "^3.1.1" - prismjs "^1.29.0" - sass "^1.69.5" + ember-concurrency "^4.0.4" + ember-element-helper "^0.8.6" + ember-focus-trap "^1.1.1" + ember-get-config "^2.1.1" + ember-modifier "^4.2.2" + ember-power-select "^8.7.1" + ember-stargate "^0.5.0" + ember-style-modifier "^4.4.0" + ember-truth-helpers "^4.0.3" + luxon "^3.4.2" + prismjs "^1.30.0" + sass "^1.83.0" + tabbable "^6.2.0" tippy.js "^6.3.7" -"@hashicorp/design-system-tokens@^1.11.0", "@hashicorp/design-system-tokens@^1.9.0": - version "1.11.0" - resolved 
"https://registry.npmjs.org/@hashicorp/design-system-tokens/-/design-system-tokens-1.11.0.tgz#0ae68d06d4297e891ce4ba63465d1f6e742e5554" - integrity sha512-LPj8IAznpEEhKrrosg3+sW9ss6fVKt8zOC9Ic4Kt5/KZPWLFaP6S8pff5ytvede+cZrAzY3UzZF55u+ev5J9GQ== +"@hashicorp/design-system-tokens@^2.3.0": + version "2.3.0" + resolved "https://registry.npmjs.org/@hashicorp/design-system-tokens/-/design-system-tokens-2.3.0.tgz#ea05796cad7e573245db90cd9089e44ac5cae5e1" + integrity sha512-T2XhcgUeiGkNqvPu73yittDghEccUpIZc7Fh/g4PG7KEvJwbXItFWTRWoHSGR8T6r6LpOP5E6CC4hSVwGRugRg== "@hashicorp/ember-cli-api-double@^4.0.0": version "4.0.0" @@ -1970,21 +2320,25 @@ pretender "^3.2.0" recursive-readdir-sync "^1.0.6" -"@hashicorp/ember-flight-icons@^4.0.1", "@hashicorp/ember-flight-icons@^4.1.0": - version "4.1.0" - resolved "https://registry.npmjs.org/@hashicorp/ember-flight-icons/-/ember-flight-icons-4.1.0.tgz#4f73fc6145c94ecd46ef38802722ea2d1fe0e876" - integrity sha512-X1AL475EPuGu6UkZiS/zqRFgymnIhGfgpY1HwPdavePARmgMr9CcPSwsTeZeV+OXq6yUxMzidijSJUAfEpLb5Q== +"@hashicorp/ember-flight-icons@4.0.0": + version "4.0.0" + resolved "https://registry.npmjs.org/@hashicorp/ember-flight-icons/-/ember-flight-icons-4.0.0.tgz#266344c64491be23d7a2e6cef796108eb3208abe" + integrity sha512-6uSFNnyqCO4IDLZnybAwvpfLKP81Hkjbx8zD3tT0Ib/YitNFxq3AURnAnnxaMybeuq4pJpA3kav+Bx+8infPZQ== dependencies: - "@hashicorp/flight-icons" "^3.0.0" + "@hashicorp/flight-icons" "^2.20.0" ember-auto-import "^2.6.3" - ember-cli-babel "^8.2.0" - ember-cli-htmlbars "^6.3.0" - ember-get-config "^2.1.1" + ember-cli-babel "^7.26.11" + ember-cli-htmlbars "^6.2.0" -"@hashicorp/flight-icons@^3.0.0": - version "3.4.0" - resolved "https://registry.npmjs.org/@hashicorp/flight-icons/-/flight-icons-3.4.0.tgz#fbd30a9748c36d92934784623e93ce9af48ce957" - integrity sha512-ddbiKkaXW3LMlXE1MZz0fsaO0rJvpbLQ2Js+Qa1e2yWKQbtSvJluAu9V8mg5jvOlR3HFDskTm8knSxVRd0VjGw== +"@hashicorp/flight-icons@^2.20.0": + version "2.25.0" + resolved "https://registry.npmjs.org/@hashicorp/flight-icons/-/flight-icons-2.25.0.tgz#a9f3266525a5824b0c19c8dab22d45a27f1d3d3d" + integrity sha512-BFR+xnC7hHgo9QahwFKXUCao4MJLYAnYBb9i924Wz6WAkyNey880nyULedh6J3z/lGx+7VVa7H/xnv4WSFyZyA== + +"@hashicorp/flight-icons@^3.11.1": + version "3.11.1" + resolved "https://registry.npmjs.org/@hashicorp/flight-icons/-/flight-icons-3.11.1.tgz#4c34e9511f8a3fe6d4089da8f539a96cd196359e" + integrity sha512-FQOHB2qCzHoG3dm6zidS39D4U0ida/7Sge5EG+KqcebH5jsbJQiMyB/qMc3YQBo5vGBe8XUa+rVW8v4JNpzk1Q== "@html-next/vertical-collection@^4.0.0": version "4.0.2" @@ -2030,7 +2384,7 @@ resolved "https://registry.npmjs.org/@istanbuljs/schema/-/schema-0.1.3.tgz#e45e384e4b8ec16bce2fd903af78450f6bf7ec98" integrity sha512-ZXRY4jNvVgSVQ8DL3LTcakaAtXwTVUxE81hslsyD2AtoXW/wVob10HkOJ1X/pAlcI7D+2YoZKg5do8G/w6RYgA== -"@jridgewell/gen-mapping@^0.3.2", "@jridgewell/gen-mapping@^0.3.5": +"@jridgewell/gen-mapping@^0.3.5": version "0.3.5" resolved "https://registry.npmjs.org/@jridgewell/gen-mapping/-/gen-mapping-0.3.5.tgz#dcce6aff74bdf6dad1a95802b69b04a2fcb1fb36" integrity sha512-IzL8ZoEDIBRWEzlCcRhOaCupYyN5gdIK+Q6fbFdPDg6HqX6jpkItn7DFIpW9LQzXG6Df9sA7+OKnq0qlz/GaQg== @@ -2070,6 +2424,87 @@ "@jridgewell/resolve-uri" "^3.1.0" "@jridgewell/sourcemap-codec" "^1.4.14" +"@lezer/common@^1.0.0", "@lezer/common@^1.0.2", "@lezer/common@^1.1.0", "@lezer/common@^1.2.0", "@lezer/common@^1.2.1": + version "1.2.3" + resolved "https://registry.npmjs.org/@lezer/common/-/common-1.2.3.tgz#138fcddab157d83da557554851017c6c1e5667fd" + integrity 
sha512-w7ojc8ejBqr2REPsWxJjrMFsA/ysDCFICn8zEOR9mrqzOu2amhITYuLD8ag6XZf0CFXDrhKqw7+tW8cX66NaDA== + +"@lezer/css@^1.1.0", "@lezer/css@^1.1.7": + version "1.2.1" + resolved "https://registry.npmjs.org/@lezer/css/-/css-1.2.1.tgz#b35f6d0459e9be4de1cdf4d3132a59efd7cf2ba3" + integrity sha512-2F5tOqzKEKbCUNraIXc0f6HKeyKlmMWJnBB0i4XW6dJgssrZO/YlZ2pY5xgyqDleqqhiNJ3dQhbrV2aClZQMvg== + dependencies: + "@lezer/common" "^1.2.0" + "@lezer/highlight" "^1.0.0" + "@lezer/lr" "^1.3.0" + +"@lezer/go@^1.0.0": + version "1.0.1" + resolved "https://registry.npmjs.org/@lezer/go/-/go-1.0.1.tgz#3004b54f5e4c9719edcba98653f380baf8c0d1a2" + integrity sha512-xToRsYxwsgJNHTgNdStpcvmbVuKxTapV0dM0wey1geMMRc9aggoVyKgzYp41D2/vVOx+Ii4hmE206kvxIXBVXQ== + dependencies: + "@lezer/common" "^1.2.0" + "@lezer/highlight" "^1.0.0" + "@lezer/lr" "^1.3.0" + +"@lezer/highlight@^1.0.0", "@lezer/highlight@^1.1.3", "@lezer/highlight@^1.2.0", "@lezer/highlight@^1.2.1": + version "1.2.1" + resolved "https://registry.npmjs.org/@lezer/highlight/-/highlight-1.2.1.tgz#596fa8f9aeb58a608be0a563e960c373cbf23f8b" + integrity sha512-Z5duk4RN/3zuVO7Jq0pGLJ3qynpxUVsh7IbUbGj88+uV2ApSAn6kWg2au3iJb+0Zi7kKtqffIESgNcRXWZWmSA== + dependencies: + "@lezer/common" "^1.0.0" + +"@lezer/html@^1.3.0": + version "1.3.10" + resolved "https://registry.npmjs.org/@lezer/html/-/html-1.3.10.tgz#1be9a029a6fe835c823b20a98a449a630416b2af" + integrity sha512-dqpT8nISx/p9Do3AchvYGV3qYc4/rKr3IBZxlHmpIKam56P47RSHkSF5f13Vu9hebS1jM0HmtJIwLbWz1VIY6w== + dependencies: + "@lezer/common" "^1.2.0" + "@lezer/highlight" "^1.0.0" + "@lezer/lr" "^1.0.0" + +"@lezer/javascript@^1.0.0": + version "1.5.1" + resolved "https://registry.npmjs.org/@lezer/javascript/-/javascript-1.5.1.tgz#2a424a6ec29f1d4ef3c34cbccc5447e373618ad8" + integrity sha512-ATOImjeVJuvgm3JQ/bpo2Tmv55HSScE2MTPnKRMRIPx2cLhHGyX2VnqpHhtIV1tVzIjZDbcWQm+NCTF40ggZVw== + dependencies: + "@lezer/common" "^1.2.0" + "@lezer/highlight" "^1.1.3" + "@lezer/lr" "^1.3.0" + +"@lezer/json@^1.0.0": + version "1.0.3" + resolved "https://registry.npmjs.org/@lezer/json/-/json-1.0.3.tgz#e773a012ad0088fbf07ce49cfba875cc9e5bc05f" + integrity sha512-BP9KzdF9Y35PDpv04r0VeSTKDeox5vVr3efE7eBbx3r4s3oNLfunchejZhjArmeieBH+nVOpgIiBJpEAv8ilqQ== + dependencies: + "@lezer/common" "^1.2.0" + "@lezer/highlight" "^1.0.0" + "@lezer/lr" "^1.0.0" + +"@lezer/lr@^1.0.0", "@lezer/lr@^1.3.0", "@lezer/lr@^1.4.0": + version "1.4.2" + resolved "https://registry.npmjs.org/@lezer/lr/-/lr-1.4.2.tgz#931ea3dea8e9de84e90781001dae30dea9ff1727" + integrity sha512-pu0K1jCIdnQ12aWNaAVU5bzi7Bd1w54J3ECgANPmYLtQKP0HBj2cE/5coBD66MT10xbtIuUr7tg0Shbsvk0mDA== + dependencies: + "@lezer/common" "^1.0.0" + +"@lezer/markdown@^1.0.0": + version "1.4.3" + resolved "https://registry.npmjs.org/@lezer/markdown/-/markdown-1.4.3.tgz#a742ed5e782ac4913a621dfd1e6a8e409f4dd589" + integrity sha512-kfw+2uMrQ/wy/+ONfrH83OkdFNM0ye5Xq96cLlaCy7h5UT9FO54DU4oRoIc0CSBh5NWmWuiIJA7NGLMJbQ+Oxg== + dependencies: + "@lezer/common" "^1.0.0" + "@lezer/highlight" "^1.0.0" + +"@lezer/yaml@^1.0.0": + version "1.0.3" + resolved "https://registry.npmjs.org/@lezer/yaml/-/yaml-1.0.3.tgz#b23770ab42b390056da6b187d861b998fd60b1ff" + integrity sha512-GuBLekbw9jDBDhGur82nuwkxKQ+a3W5H0GfaAthDXcAu+XdpS43VlnxA9E9hllkpSP5ellRDKjLLj7Lu9Wr6xA== + dependencies: + "@lezer/common" "^1.2.0" + "@lezer/highlight" "^1.0.0" + "@lezer/lr" "^1.4.0" + "@lit-labs/ssr-dom-shim@^1.0.0": version "1.2.0" resolved "https://registry.npmjs.org/@lit-labs/ssr-dom-shim/-/ssr-dom-shim-1.2.0.tgz#353ce4a76c83fadec272ea5674ede767650762fd" @@ -2097,6 +2532,11 @@ 
dependencies: call-bind "^1.0.7" +"@marijn/find-cluster-break@^1.0.0": + version "1.0.2" + resolved "https://registry.npmjs.org/@marijn/find-cluster-break/-/find-cluster-break-1.0.2.tgz#775374306116d51c0c500b8c4face0f9a04752d8" + integrity sha512-l0h88YhZFyKdXIFNfSWpyjStDjGHwZ/U7iobcK1cQQD8sejsONdQtTVU+1wVN1PBw40PiiHB1vA5S7VTfQiP9g== + "@mrmlnc/readdir-enhanced@^2.2.1": version "2.2.1" resolved "https://registry.npmjs.org/@mrmlnc/readdir-enhanced/-/readdir-enhanced-2.2.1.tgz#524af240d1a360527b730475ecfa1344aa540dde" @@ -2131,6 +2571,104 @@ "@nodelib/fs.scandir" "2.1.5" fastq "^1.6.0" +"@nullvoxpopuli/ember-composable-helpers@^5.2.10": + version "5.2.11" + resolved "https://registry.npmjs.org/@nullvoxpopuli/ember-composable-helpers/-/ember-composable-helpers-5.2.11.tgz#ecea309e85efb29bace4a84dc3168d03e8e9fc73" + integrity sha512-hdDDhYru0TelepDbh1WpxJlyFYy9bIqdKx3u6Y8FkEjgNnF5RFV7gIUk4u8XB28/3llHAILepCMvRmze9176OA== + dependencies: + "@embroider/addon-shim" "^1.9.0" + decorator-transforms "^2.3.0" + ember-functions-as-helper-polyfill "^2.1.2" + +"@parcel/watcher-android-arm64@2.5.1": + version "2.5.1" + resolved "https://registry.npmjs.org/@parcel/watcher-android-arm64/-/watcher-android-arm64-2.5.1.tgz#507f836d7e2042f798c7d07ad19c3546f9848ac1" + integrity sha512-KF8+j9nNbUN8vzOFDpRMsaKBHZ/mcjEjMToVMJOhTozkDonQFFrRcfdLWn6yWKCmJKmdVxSgHiYvTCef4/qcBA== + +"@parcel/watcher-darwin-arm64@2.5.1": + version "2.5.1" + resolved "https://registry.npmjs.org/@parcel/watcher-darwin-arm64/-/watcher-darwin-arm64-2.5.1.tgz#3d26dce38de6590ef79c47ec2c55793c06ad4f67" + integrity sha512-eAzPv5osDmZyBhou8PoF4i6RQXAfeKL9tjb3QzYuccXFMQU0ruIc/POh30ePnaOyD1UXdlKguHBmsTs53tVoPw== + +"@parcel/watcher-darwin-x64@2.5.1": + version "2.5.1" + resolved "https://registry.npmjs.org/@parcel/watcher-darwin-x64/-/watcher-darwin-x64-2.5.1.tgz#99f3af3869069ccf774e4ddfccf7e64fd2311ef8" + integrity sha512-1ZXDthrnNmwv10A0/3AJNZ9JGlzrF82i3gNQcWOzd7nJ8aj+ILyW1MTxVk35Db0u91oD5Nlk9MBiujMlwmeXZg== + +"@parcel/watcher-freebsd-x64@2.5.1": + version "2.5.1" + resolved "https://registry.npmjs.org/@parcel/watcher-freebsd-x64/-/watcher-freebsd-x64-2.5.1.tgz#14d6857741a9f51dfe51d5b08b7c8afdbc73ad9b" + integrity sha512-SI4eljM7Flp9yPuKi8W0ird8TI/JK6CSxju3NojVI6BjHsTyK7zxA9urjVjEKJ5MBYC+bLmMcbAWlZ+rFkLpJQ== + +"@parcel/watcher-linux-arm-glibc@2.5.1": + version "2.5.1" + resolved "https://registry.npmjs.org/@parcel/watcher-linux-arm-glibc/-/watcher-linux-arm-glibc-2.5.1.tgz#43c3246d6892381db473bb4f663229ad20b609a1" + integrity sha512-RCdZlEyTs8geyBkkcnPWvtXLY44BCeZKmGYRtSgtwwnHR4dxfHRG3gR99XdMEdQ7KeiDdasJwwvNSF5jKtDwdA== + +"@parcel/watcher-linux-arm-musl@2.5.1": + version "2.5.1" + resolved "https://registry.npmjs.org/@parcel/watcher-linux-arm-musl/-/watcher-linux-arm-musl-2.5.1.tgz#663750f7090bb6278d2210de643eb8a3f780d08e" + integrity sha512-6E+m/Mm1t1yhB8X412stiKFG3XykmgdIOqhjWj+VL8oHkKABfu/gjFj8DvLrYVHSBNC+/u5PeNrujiSQ1zwd1Q== + +"@parcel/watcher-linux-arm64-glibc@2.5.1": + version "2.5.1" + resolved "https://registry.npmjs.org/@parcel/watcher-linux-arm64-glibc/-/watcher-linux-arm64-glibc-2.5.1.tgz#ba60e1f56977f7e47cd7e31ad65d15fdcbd07e30" + integrity sha512-LrGp+f02yU3BN9A+DGuY3v3bmnFUggAITBGriZHUREfNEzZh/GO06FF5u2kx8x+GBEUYfyTGamol4j3m9ANe8w== + +"@parcel/watcher-linux-arm64-musl@2.5.1": + version "2.5.1" + resolved "https://registry.npmjs.org/@parcel/watcher-linux-arm64-musl/-/watcher-linux-arm64-musl-2.5.1.tgz#f7fbcdff2f04c526f96eac01f97419a6a99855d2" + integrity 
sha512-cFOjABi92pMYRXS7AcQv9/M1YuKRw8SZniCDw0ssQb/noPkRzA+HBDkwmyOJYp5wXcsTrhxO0zq1U11cK9jsFg== + +"@parcel/watcher-linux-x64-glibc@2.5.1": + version "2.5.1" + resolved "https://registry.npmjs.org/@parcel/watcher-linux-x64-glibc/-/watcher-linux-x64-glibc-2.5.1.tgz#4d2ea0f633eb1917d83d483392ce6181b6a92e4e" + integrity sha512-GcESn8NZySmfwlTsIur+49yDqSny2IhPeZfXunQi48DMugKeZ7uy1FX83pO0X22sHntJ4Ub+9k34XQCX+oHt2A== + +"@parcel/watcher-linux-x64-musl@2.5.1": + version "2.5.1" + resolved "https://registry.npmjs.org/@parcel/watcher-linux-x64-musl/-/watcher-linux-x64-musl-2.5.1.tgz#277b346b05db54f55657301dd77bdf99d63606ee" + integrity sha512-n0E2EQbatQ3bXhcH2D1XIAANAcTZkQICBPVaxMeaCVBtOpBZpWJuf7LwyWPSBDITb7In8mqQgJ7gH8CILCURXg== + +"@parcel/watcher-win32-arm64@2.5.1": + version "2.5.1" + resolved "https://registry.npmjs.org/@parcel/watcher-win32-arm64/-/watcher-win32-arm64-2.5.1.tgz#7e9e02a26784d47503de1d10e8eab6cceb524243" + integrity sha512-RFzklRvmc3PkjKjry3hLF9wD7ppR4AKcWNzH7kXR7GUe0Igb3Nz8fyPwtZCSquGrhU5HhUNDr/mKBqj7tqA2Vw== + +"@parcel/watcher-win32-ia32@2.5.1": + version "2.5.1" + resolved "https://registry.npmjs.org/@parcel/watcher-win32-ia32/-/watcher-win32-ia32-2.5.1.tgz#2d0f94fa59a873cdc584bf7f6b1dc628ddf976e6" + integrity sha512-c2KkcVN+NJmuA7CGlaGD1qJh1cLfDnQsHjE89E60vUEMlqduHGCdCLJCID5geFVM0dOtA3ZiIO8BoEQmzQVfpQ== + +"@parcel/watcher-win32-x64@2.5.1": + version "2.5.1" + resolved "https://registry.npmjs.org/@parcel/watcher-win32-x64/-/watcher-win32-x64-2.5.1.tgz#ae52693259664ba6f2228fa61d7ee44b64ea0947" + integrity sha512-9lHBdJITeNR++EvSQVUcaZoWupyHfXe1jZvGZ06O/5MflPcuPLtEphScIBL+AiCWBO46tDSHzWyD0uDmmZqsgA== + +"@parcel/watcher@^2.4.1": + version "2.5.1" + resolved "https://registry.npmjs.org/@parcel/watcher/-/watcher-2.5.1.tgz#342507a9cfaaf172479a882309def1e991fb1200" + integrity sha512-dfUnCxiN9H4ap84DvD2ubjw+3vUNpstxa0TneY/Paat8a3R4uQZDLSvWjmznAY/DoahqTHl9V46HF/Zs3F29pg== + dependencies: + detect-libc "^1.0.3" + is-glob "^4.0.3" + micromatch "^4.0.5" + node-addon-api "^7.0.0" + optionalDependencies: + "@parcel/watcher-android-arm64" "2.5.1" + "@parcel/watcher-darwin-arm64" "2.5.1" + "@parcel/watcher-darwin-x64" "2.5.1" + "@parcel/watcher-freebsd-x64" "2.5.1" + "@parcel/watcher-linux-arm-glibc" "2.5.1" + "@parcel/watcher-linux-arm-musl" "2.5.1" + "@parcel/watcher-linux-arm64-glibc" "2.5.1" + "@parcel/watcher-linux-arm64-musl" "2.5.1" + "@parcel/watcher-linux-x64-glibc" "2.5.1" + "@parcel/watcher-linux-x64-musl" "2.5.1" + "@parcel/watcher-win32-arm64" "2.5.1" + "@parcel/watcher-win32-ia32" "2.5.1" + "@parcel/watcher-win32-x64" "2.5.1" + "@popperjs/core@^2.9.0": version "2.11.8" resolved "https://registry.npmjs.org/@popperjs/core/-/core-2.11.8.tgz#6b79032e760a0899cd4204710beede972a3a185f" @@ -2245,23 +2783,7 @@ dependencies: "@types/node" "*" -"@types/eslint-scope@^3.7.3": - version "3.7.7" - resolved "https://registry.npmjs.org/@types/eslint-scope/-/eslint-scope-3.7.7.tgz#3108bd5f18b0cdb277c867b3dd449c9ed7079ac5" - integrity sha512-MzMFlSLBqNF2gcHWO0G1vP/YQyfvrxZ0bF+u7mzUdZ1/xK4A4sru+nraZz5i3iEIk1l1uyicaDVTB4QbbEkAYg== - dependencies: - "@types/eslint" "*" - "@types/estree" "*" - -"@types/eslint@*": - version "8.56.10" - resolved "https://registry.npmjs.org/@types/eslint/-/eslint-8.56.10.tgz#eb2370a73bf04a901eeba8f22595c7ee0f7eb58d" - integrity sha512-Shavhk87gCtY2fhXDctcfS3e6FdxWkCx1iUZ9eEUbh7rTqlZT0/IzOkCOVt0fCjcFuZ9FPYfuezTBImfHCDBGQ== - dependencies: - "@types/estree" "*" - "@types/json-schema" "*" - -"@types/estree@*", "@types/estree@^1.0.5": +"@types/estree@^1.0.5": version 
"1.0.5" resolved "https://registry.npmjs.org/@types/estree/-/estree-1.0.5.tgz#a6ce3e556e00fd9895dd872dd172ad0d4bd687f4" integrity sha512-/kYRxGDLWzHOB7q+wtSUQlFrtcdUccpfy+X+9iMBpHK8QLLhx2wIPYuS5DYtR9Wa/YlZAbIovy7qVdB1Aq6Lyw== @@ -2328,7 +2850,7 @@ resolved "https://registry.npmjs.org/@types/http-errors/-/http-errors-2.0.4.tgz#7eb47726c391b7345a6ec35ad7f4de469cf5ba4f" integrity sha512-D0CFMMtydbJAegzOyHjtiKPLlvnm3iTZyZRSZoLq2mRhDdmLfIWOCYPfQJ4cu2erKghU++QvjcUjp/5h7hESpA== -"@types/json-schema@*", "@types/json-schema@^7.0.5", "@types/json-schema@^7.0.8", "@types/json-schema@^7.0.9": +"@types/json-schema@^7.0.5", "@types/json-schema@^7.0.8", "@types/json-schema@^7.0.9": version "7.0.15" resolved "https://registry.npmjs.org/@types/json-schema/-/json-schema-7.0.15.tgz#596a1747233694d50f6ad8a7869fcb6f56cf5841" integrity sha512-5+fP8P8MFNC+AyZCDxrB2pkZFPGzqQWUzpSeuuVLvm8VMcorNYavBqoFcxK8bQz4Qsbn4oUEEem4wDLfcysGHA== @@ -2420,64 +2942,21 @@ "@webassemblyjs/helper-numbers" "1.11.6" "@webassemblyjs/helper-wasm-bytecode" "1.11.6" -"@webassemblyjs/ast@1.9.0": - version "1.9.0" - resolved "https://registry.npmjs.org/@webassemblyjs/ast/-/ast-1.9.0.tgz#bd850604b4042459a5a41cd7d338cbed695ed964" - integrity sha512-C6wW5L+b7ogSDVqymbkkvuW9kruN//YisMED04xzeBBqjHa2FYnmvOlS6Xj68xWQRgWvI9cIglsjFowH/RJyEA== - dependencies: - "@webassemblyjs/helper-module-context" "1.9.0" - "@webassemblyjs/helper-wasm-bytecode" "1.9.0" - "@webassemblyjs/wast-parser" "1.9.0" - "@webassemblyjs/floating-point-hex-parser@1.11.6": version "1.11.6" resolved "https://registry.npmjs.org/@webassemblyjs/floating-point-hex-parser/-/floating-point-hex-parser-1.11.6.tgz#dacbcb95aff135c8260f77fa3b4c5fea600a6431" integrity sha512-ejAj9hfRJ2XMsNHk/v6Fu2dGS+i4UaXBXGemOfQ/JfQ6mdQg/WXtwleQRLLS4OvfDhv8rYnVwH27YJLMyYsxhw== -"@webassemblyjs/floating-point-hex-parser@1.9.0": - version "1.9.0" - resolved "https://registry.npmjs.org/@webassemblyjs/floating-point-hex-parser/-/floating-point-hex-parser-1.9.0.tgz#3c3d3b271bddfc84deb00f71344438311d52ffb4" - integrity sha512-TG5qcFsS8QB4g4MhrxK5TqfdNe7Ey/7YL/xN+36rRjl/BlGE/NcBvJcqsRgCP6Z92mRE+7N50pRIi8SmKUbcQA== - "@webassemblyjs/helper-api-error@1.11.6": version "1.11.6" resolved "https://registry.npmjs.org/@webassemblyjs/helper-api-error/-/helper-api-error-1.11.6.tgz#6132f68c4acd59dcd141c44b18cbebbd9f2fa768" integrity sha512-o0YkoP4pVu4rN8aTJgAyj9hC2Sv5UlkzCHhxqWj8butaLvnpdc2jOwh4ewE6CX0txSfLn/UYaV/pheS2Txg//Q== -"@webassemblyjs/helper-api-error@1.9.0": - version "1.9.0" - resolved "https://registry.npmjs.org/@webassemblyjs/helper-api-error/-/helper-api-error-1.9.0.tgz#203f676e333b96c9da2eeab3ccef33c45928b6a2" - integrity sha512-NcMLjoFMXpsASZFxJ5h2HZRcEhDkvnNFOAKneP5RbKRzaWJN36NC4jqQHKwStIhGXu5mUWlUUk7ygdtrO8lbmw== - "@webassemblyjs/helper-buffer@1.12.1": version "1.12.1" resolved "https://registry.npmjs.org/@webassemblyjs/helper-buffer/-/helper-buffer-1.12.1.tgz#6df20d272ea5439bf20ab3492b7fb70e9bfcb3f6" integrity sha512-nzJwQw99DNDKr9BVCOZcLuJJUlqkJh+kVzVl6Fmq/tI5ZtEyWT1KZMyOXltXLZJmDtvLCDgwsyrkohEtopTXCw== -"@webassemblyjs/helper-buffer@1.9.0": - version "1.9.0" - resolved "https://registry.npmjs.org/@webassemblyjs/helper-buffer/-/helper-buffer-1.9.0.tgz#a1442d269c5feb23fcbc9ef759dac3547f29de00" - integrity sha512-qZol43oqhq6yBPx7YM3m9Bv7WMV9Eevj6kMi6InKOuZxhw+q9hOkvq5e/PpKSiLfyetpaBnogSbNCfBwyB00CA== - -"@webassemblyjs/helper-code-frame@1.9.0": - version "1.9.0" - resolved "https://registry.npmjs.org/@webassemblyjs/helper-code-frame/-/helper-code-frame-1.9.0.tgz#647f8892cd2043a82ac0c8c5e75c36f1d9159f27" 
- integrity sha512-ERCYdJBkD9Vu4vtjUYe8LZruWuNIToYq/ME22igL+2vj2dQ2OOujIZr3MEFvfEaqKoVqpsFKAGsRdBSBjrIvZA== - dependencies: - "@webassemblyjs/wast-printer" "1.9.0" - -"@webassemblyjs/helper-fsm@1.9.0": - version "1.9.0" - resolved "https://registry.npmjs.org/@webassemblyjs/helper-fsm/-/helper-fsm-1.9.0.tgz#c05256b71244214671f4b08ec108ad63b70eddb8" - integrity sha512-OPRowhGbshCb5PxJ8LocpdX9Kl0uB4XsAjl6jH/dWKlk/mzsANvhwbiULsaiqT5GZGT9qinTICdj6PLuM5gslw== - -"@webassemblyjs/helper-module-context@1.9.0": - version "1.9.0" - resolved "https://registry.npmjs.org/@webassemblyjs/helper-module-context/-/helper-module-context-1.9.0.tgz#25d8884b76839871a08a6c6f806c3979ef712f07" - integrity sha512-MJCW8iGC08tMk2enck1aPW+BE5Cw8/7ph/VGZxwyvGbJwjktKkDK7vy7gAmMDx88D7mhDTCNKAW5tED+gZ0W8g== - dependencies: - "@webassemblyjs/ast" "1.9.0" - "@webassemblyjs/helper-numbers@1.11.6": version "1.11.6" resolved "https://registry.npmjs.org/@webassemblyjs/helper-numbers/-/helper-numbers-1.11.6.tgz#cbce5e7e0c1bd32cf4905ae444ef64cea919f1b5" @@ -2492,11 +2971,6 @@ resolved "https://registry.npmjs.org/@webassemblyjs/helper-wasm-bytecode/-/helper-wasm-bytecode-1.11.6.tgz#bb2ebdb3b83aa26d9baad4c46d4315283acd51e9" integrity sha512-sFFHKwcmBprO9e7Icf0+gddyWYDViL8bpPjJJl0WHxCdETktXdmtWLGVzoHbqUcY4Be1LkNfwTmXOJUFZYSJdA== -"@webassemblyjs/helper-wasm-bytecode@1.9.0": - version "1.9.0" - resolved "https://registry.npmjs.org/@webassemblyjs/helper-wasm-bytecode/-/helper-wasm-bytecode-1.9.0.tgz#4fed8beac9b8c14f8c58b70d124d549dd1fe5790" - integrity sha512-R7FStIzyNcd7xKxCZH5lE0Bqy+hGTwS3LJjuv1ZVxd9O7eHCedSdrId/hMOd20I+v8wDXEn+bjfKDLzTepoaUw== - "@webassemblyjs/helper-wasm-section@1.12.1": version "1.12.1" resolved "https://registry.npmjs.org/@webassemblyjs/helper-wasm-section/-/helper-wasm-section-1.12.1.tgz#3da623233ae1a60409b509a52ade9bc22a37f7bf" @@ -2507,16 +2981,6 @@ "@webassemblyjs/helper-wasm-bytecode" "1.11.6" "@webassemblyjs/wasm-gen" "1.12.1" -"@webassemblyjs/helper-wasm-section@1.9.0": - version "1.9.0" - resolved "https://registry.npmjs.org/@webassemblyjs/helper-wasm-section/-/helper-wasm-section-1.9.0.tgz#5a4138d5a6292ba18b04c5ae49717e4167965346" - integrity sha512-XnMB8l3ek4tvrKUUku+IVaXNHz2YsJyOOmz+MMkZvh8h1uSJpSen6vYnw3IoQ7WwEuAhL8Efjms1ZWjqh2agvw== - dependencies: - "@webassemblyjs/ast" "1.9.0" - "@webassemblyjs/helper-buffer" "1.9.0" - "@webassemblyjs/helper-wasm-bytecode" "1.9.0" - "@webassemblyjs/wasm-gen" "1.9.0" - "@webassemblyjs/ieee754@1.11.6": version "1.11.6" resolved "https://registry.npmjs.org/@webassemblyjs/ieee754/-/ieee754-1.11.6.tgz#bb665c91d0b14fffceb0e38298c329af043c6e3a" @@ -2524,13 +2988,6 @@ dependencies: "@xtuc/ieee754" "^1.2.0" -"@webassemblyjs/ieee754@1.9.0": - version "1.9.0" - resolved "https://registry.npmjs.org/@webassemblyjs/ieee754/-/ieee754-1.9.0.tgz#15c7a0fbaae83fb26143bbacf6d6df1702ad39e4" - integrity sha512-dcX8JuYU/gvymzIHc9DgxTzUUTLexWwt8uCTWP3otys596io0L5aW02Gb1RjYpx2+0Jus1h4ZFqjla7umFniTg== - dependencies: - "@xtuc/ieee754" "^1.2.0" - "@webassemblyjs/leb128@1.11.6": version "1.11.6" resolved "https://registry.npmjs.org/@webassemblyjs/leb128/-/leb128-1.11.6.tgz#70e60e5e82f9ac81118bc25381a0b283893240d7" @@ -2538,37 +2995,11 @@ dependencies: "@xtuc/long" "4.2.2" -"@webassemblyjs/leb128@1.9.0": - version "1.9.0" - resolved "https://registry.npmjs.org/@webassemblyjs/leb128/-/leb128-1.9.0.tgz#f19ca0b76a6dc55623a09cffa769e838fa1e1c95" - integrity sha512-ENVzM5VwV1ojs9jam6vPys97B/S65YQtv/aanqnU7D8aSoHFX8GyhGg0CMfyKNIHBuAVjy3tlzd5QMMINa7wpw== - dependencies: - "@xtuc/long" "4.2.2" - 
"@webassemblyjs/utf8@1.11.6": version "1.11.6" resolved "https://registry.npmjs.org/@webassemblyjs/utf8/-/utf8-1.11.6.tgz#90f8bc34c561595fe156603be7253cdbcd0fab5a" integrity sha512-vtXf2wTQ3+up9Zsg8sa2yWiQpzSsMyXj0qViVP6xKGCUT8p8YJ6HqI7l5eCnWx1T/FYdsv07HQs2wTFbbof/RA== -"@webassemblyjs/utf8@1.9.0": - version "1.9.0" - resolved "https://registry.npmjs.org/@webassemblyjs/utf8/-/utf8-1.9.0.tgz#04d33b636f78e6a6813227e82402f7637b6229ab" - integrity sha512-GZbQlWtopBTP0u7cHrEx+73yZKrQoBMpwkGEIqlacljhXCkVM1kMQge/Mf+csMJAjEdSwhOyLAS0AoR3AG5P8w== - -"@webassemblyjs/wasm-edit@1.9.0": - version "1.9.0" - resolved "https://registry.npmjs.org/@webassemblyjs/wasm-edit/-/wasm-edit-1.9.0.tgz#3fe6d79d3f0f922183aa86002c42dd256cfee9cf" - integrity sha512-FgHzBm80uwz5M8WKnMTn6j/sVbqilPdQXTWraSjBwFXSYGirpkSWE2R9Qvz9tNiTKQvoKILpCuTjBKzOIm0nxw== - dependencies: - "@webassemblyjs/ast" "1.9.0" - "@webassemblyjs/helper-buffer" "1.9.0" - "@webassemblyjs/helper-wasm-bytecode" "1.9.0" - "@webassemblyjs/helper-wasm-section" "1.9.0" - "@webassemblyjs/wasm-gen" "1.9.0" - "@webassemblyjs/wasm-opt" "1.9.0" - "@webassemblyjs/wasm-parser" "1.9.0" - "@webassemblyjs/wast-printer" "1.9.0" - "@webassemblyjs/wasm-edit@^1.12.1": version "1.12.1" resolved "https://registry.npmjs.org/@webassemblyjs/wasm-edit/-/wasm-edit-1.12.1.tgz#9f9f3ff52a14c980939be0ef9d5df9ebc678ae3b" @@ -2594,17 +3025,6 @@ "@webassemblyjs/leb128" "1.11.6" "@webassemblyjs/utf8" "1.11.6" -"@webassemblyjs/wasm-gen@1.9.0": - version "1.9.0" - resolved "https://registry.npmjs.org/@webassemblyjs/wasm-gen/-/wasm-gen-1.9.0.tgz#50bc70ec68ded8e2763b01a1418bf43491a7a49c" - integrity sha512-cPE3o44YzOOHvlsb4+E9qSqjc9Qf9Na1OO/BHFy4OI91XDE14MjFN4lTMezzaIWdPqHnsTodGGNP+iRSYfGkjA== - dependencies: - "@webassemblyjs/ast" "1.9.0" - "@webassemblyjs/helper-wasm-bytecode" "1.9.0" - "@webassemblyjs/ieee754" "1.9.0" - "@webassemblyjs/leb128" "1.9.0" - "@webassemblyjs/utf8" "1.9.0" - "@webassemblyjs/wasm-opt@1.12.1": version "1.12.1" resolved "https://registry.npmjs.org/@webassemblyjs/wasm-opt/-/wasm-opt-1.12.1.tgz#9e6e81475dfcfb62dab574ac2dda38226c232bc5" @@ -2615,16 +3035,6 @@ "@webassemblyjs/wasm-gen" "1.12.1" "@webassemblyjs/wasm-parser" "1.12.1" -"@webassemblyjs/wasm-opt@1.9.0": - version "1.9.0" - resolved "https://registry.npmjs.org/@webassemblyjs/wasm-opt/-/wasm-opt-1.9.0.tgz#2211181e5b31326443cc8112eb9f0b9028721a61" - integrity sha512-Qkjgm6Anhm+OMbIL0iokO7meajkzQD71ioelnfPEj6r4eOFuqm4YC3VBPqXjFyyNwowzbMD+hizmprP/Fwkl2A== - dependencies: - "@webassemblyjs/ast" "1.9.0" - "@webassemblyjs/helper-buffer" "1.9.0" - "@webassemblyjs/wasm-gen" "1.9.0" - "@webassemblyjs/wasm-parser" "1.9.0" - "@webassemblyjs/wasm-parser@1.12.1", "@webassemblyjs/wasm-parser@^1.12.1": version "1.12.1" resolved "https://registry.npmjs.org/@webassemblyjs/wasm-parser/-/wasm-parser-1.12.1.tgz#c47acb90e6f083391e3fa61d113650eea1e95937" @@ -2637,30 +3047,6 @@ "@webassemblyjs/leb128" "1.11.6" "@webassemblyjs/utf8" "1.11.6" -"@webassemblyjs/wasm-parser@1.9.0": - version "1.9.0" - resolved "https://registry.npmjs.org/@webassemblyjs/wasm-parser/-/wasm-parser-1.9.0.tgz#9d48e44826df4a6598294aa6c87469d642fff65e" - integrity sha512-9+wkMowR2AmdSWQzsPEjFU7njh8HTO5MqO8vjwEHuM+AMHioNqSBONRdr0NQQ3dVQrzp0s8lTcYqzUdb7YgELA== - dependencies: - "@webassemblyjs/ast" "1.9.0" - "@webassemblyjs/helper-api-error" "1.9.0" - "@webassemblyjs/helper-wasm-bytecode" "1.9.0" - "@webassemblyjs/ieee754" "1.9.0" - "@webassemblyjs/leb128" "1.9.0" - "@webassemblyjs/utf8" "1.9.0" - -"@webassemblyjs/wast-parser@1.9.0": - version "1.9.0" - 
resolved "https://registry.npmjs.org/@webassemblyjs/wast-parser/-/wast-parser-1.9.0.tgz#3031115d79ac5bd261556cecc3fa90a3ef451914" - integrity sha512-qsqSAP3QQ3LyZjNC/0jBJ/ToSxfYJ8kYyuiGvtn/8MK89VrNEfwj7BPQzJVHi0jGTRK2dGdJ5PRqhtjzoww+bw== - dependencies: - "@webassemblyjs/ast" "1.9.0" - "@webassemblyjs/floating-point-hex-parser" "1.9.0" - "@webassemblyjs/helper-api-error" "1.9.0" - "@webassemblyjs/helper-code-frame" "1.9.0" - "@webassemblyjs/helper-fsm" "1.9.0" - "@xtuc/long" "4.2.2" - "@webassemblyjs/wast-printer@1.12.1": version "1.12.1" resolved "https://registry.npmjs.org/@webassemblyjs/wast-printer/-/wast-printer-1.12.1.tgz#bcecf661d7d1abdaf989d8341a4833e33e2b31ac" @@ -2669,15 +3055,6 @@ "@webassemblyjs/ast" "1.12.1" "@xtuc/long" "4.2.2" -"@webassemblyjs/wast-printer@1.9.0": - version "1.9.0" - resolved "https://registry.npmjs.org/@webassemblyjs/wast-printer/-/wast-printer-1.9.0.tgz#4935d54c85fef637b00ce9f52377451d00d47899" - integrity sha512-2J0nE95rHXHyQ24cWjMKJ1tqB/ds8z/cyeOZxJhcb+rW+SQASVjuznUSmdz5GpVJTzU8JkhYut0D3siFDD6wsA== - dependencies: - "@webassemblyjs/ast" "1.9.0" - "@webassemblyjs/wast-parser" "1.9.0" - "@xtuc/long" "4.2.2" - "@xmldom/xmldom@^0.8.0": version "0.8.10" resolved "https://registry.npmjs.org/@xmldom/xmldom/-/xmldom-0.8.10.tgz#a1337ca426aa61cef9fe15b5b28e340a72f6fa99" @@ -2746,11 +3123,6 @@ acorn-walk@^7.1.1: resolved "https://registry.npmjs.org/acorn-walk/-/acorn-walk-7.2.0.tgz#0de889a601203909b0fbe07b8938dc21d2e967bc" integrity sha512-OPdCF6GsMIP+Az+aWfAAOEt2/+iVDKE7oy6lJ098aoe59oAmK76qV6Gw60SbZ8jHuG2wH058GF4pLFbYamYrVA== -acorn@^6.4.1: - version "6.4.2" - resolved "https://registry.npmjs.org/acorn/-/acorn-6.4.2.tgz#35866fd710528e92de10cf06016498e47e39e1e6" - integrity sha512-XtGIhXwF8YM8bJhGxG5kXgjkEuNGLTkoYqVE+KMR+aspr4KGYmKYg7yUe3KghyQ9yheNwLnjmzh/7+gfDBmHCQ== - acorn@^7.1.1, acorn@^7.4.0: version "7.4.1" resolved "https://registry.npmjs.org/acorn/-/acorn-7.4.1.tgz#feaed255973d2e77555b83dbc08851a6c63520fa" @@ -2776,11 +3148,6 @@ aggregate-error@^3.0.0: clean-stack "^2.0.0" indent-string "^4.0.0" -ajv-errors@^1.0.0: - version "1.0.1" - resolved "https://registry.npmjs.org/ajv-errors/-/ajv-errors-1.0.1.tgz#f35986aceb91afadec4102fbd85014950cefa64d" - integrity sha512-DCRfO/4nQ+89p/RK43i8Ezd41EqdGIU4ld7nGF8OQ14oc/we5rEntLCUa7+jrn3nn83BosfwZA0wb4pon2o8iQ== - ajv-formats@^2.1.1: version "2.1.1" resolved "https://registry.npmjs.org/ajv-formats/-/ajv-formats-2.1.1.tgz#6e669400659eb74973bbf2e33327180a0996b520" @@ -2788,7 +3155,7 @@ ajv-formats@^2.1.1: dependencies: ajv "^8.0.0" -ajv-keywords@^3.1.0, ajv-keywords@^3.4.1, ajv-keywords@^3.5.2: +ajv-keywords@^3.5.2: version "3.5.2" resolved "https://registry.npmjs.org/ajv-keywords/-/ajv-keywords-3.5.2.tgz#31f29da5ab6e00d1c2d329acf7b5929614d5014d" integrity sha512-5p6WTN0DdTGVQk6VjcEju19IgaHudalcfabD7yhDGeA6bcQnmL+CpveLJq/3hvfwd1aof6L386Ougkx6RfyMIQ== @@ -2800,7 +3167,7 @@ ajv-keywords@^5.1.0: dependencies: fast-deep-equal "^3.1.3" -ajv@^6.1.0, ajv@^6.10.0, ajv@^6.10.2, ajv@^6.12.4, ajv@^6.12.5: +ajv@^6.10.0, ajv@^6.12.4, ajv@^6.12.5: version "6.12.6" resolved "https://registry.npmjs.org/ajv/-/ajv-6.12.6.tgz#baf5a62e802b07d977034586f8c3baf5adf26df4" integrity sha512-j3fVLgvTo527anyYyJOGTYJbG+vnnQYvE0m5mmkc1TK+nxAppkCLMIL0aZ4dblVCNoGShhm+kzE4ZUykBoMg4g== @@ -2920,11 +3287,6 @@ ansicolors@~0.2.1: resolved "https://registry.npmjs.org/ansicolors/-/ansicolors-0.2.1.tgz#be089599097b74a5c9c4a84a0cdbcdb62bd87aef" integrity sha512-tOIuy1/SK/dr94ZA0ckDohKXNeBNqZ4us6PjMVLs5h1w2GBB6uPtOknp2+VF4F/zcy9LI70W+Z+pE2Soajky1w== 
-any-promise@^1.0.0: - version "1.3.0" - resolved "https://registry.npmjs.org/any-promise/-/any-promise-1.3.0.tgz#abc6afeedcea52e809cdc0376aed3ce39635d17f" - integrity sha512-7UvmKalWRt1wgjL1RrGxoSJW/0QZFIegpeGvZG9kjp8vrRu55XTHbwnqq2GpXm9uLbcuhxm3IqX9OB4MZR1b2A== - anymatch@^2.0.0: version "2.0.0" resolved "https://registry.npmjs.org/anymatch/-/anymatch-2.0.0.tgz#bcb24b4f37934d9aa7ac17b4adaf89e7c76ef2eb" @@ -2933,14 +3295,6 @@ anymatch@^2.0.0: micromatch "^3.1.4" normalize-path "^2.1.1" -anymatch@~3.1.2: - version "3.1.3" - resolved "https://registry.npmjs.org/anymatch/-/anymatch-3.1.3.tgz#790c58b19ba1720a84205b57c618d5ad8524973e" - integrity sha512-KMReFUr0B4t+D+OBkjR3KYqvocp2XaSzO55UcB6mgQMd3KbcE+mWTyvVV7D/zsdEbNnV6acZUutkiHQXvTr1Rw== - dependencies: - normalize-path "^3.0.0" - picomatch "^2.0.4" - aot-test-generators@^0.1.0: version "0.1.0" resolved "https://registry.npmjs.org/aot-test-generators/-/aot-test-generators-0.1.0.tgz#43f0f615f97cb298d7919c1b0b4e6b7310b03cd0" @@ -2948,7 +3302,7 @@ aot-test-generators@^0.1.0: dependencies: jsesc "^2.5.0" -aproba@^1.0.3, aproba@^1.1.1: +aproba@^1.0.3: version "1.2.0" resolved "https://registry.npmjs.org/aproba/-/aproba-1.2.0.tgz#6802e6264efd18c790a1b0d517f0f2627bf2c94a" integrity sha512-Y9J6ZjXtoYh8RnXVCMOU/ttDmk1aBjunq9vO0ta5x85WDQiQfUF9sIPBITdbiiIVcBo03Hi3jMxigBtsddlXRw== @@ -2974,11 +3328,6 @@ are-we-there-yet@~1.1.2: delegates "^1.0.0" readable-stream "^2.0.6" -arg@^5.0.2: - version "5.0.2" - resolved "https://registry.npmjs.org/arg/-/arg-5.0.2.tgz#c81433cc427c92c4dcf4865142dbca6f15acd59c" - integrity sha512-PYjyFOLKQ9y57JvQ6QLo8dAgNqswh8M1RMJYdQduT6xbWSgK36P/Z/v+p888pM69jMMfS8Xd8F6I1kQ/I9HUGg== - argparse@^1.0.7: version "1.0.10" resolved "https://registry.npmjs.org/argparse/-/argparse-1.0.10.tgz#bcd6791ea5ae09725e17e5ad988134cd40b3d911" @@ -2991,16 +3340,6 @@ argparse@^2.0.1: resolved "https://registry.npmjs.org/argparse/-/argparse-2.0.1.tgz#246f50f3ca78a3240f6c997e8a9bd1eac49e4b38" integrity sha512-8+9WqebbFzpX9OR+Wa6O29asIogeRMzcGtAINdpMHHyAg10f05aSFVBbcEqGf/PXw1EjAZ+q2/bEBg3DvurK3Q== -arr-diff@^4.0.0: - version "4.0.0" - resolved "https://registry.npmjs.org/arr-diff/-/arr-diff-4.0.0.tgz#d6461074febfec71e7e15235761a329a5dc7c520" - integrity sha512-YVIQ82gZPGBebQV/a8dar4AitzCQs0jjXwMPZllpXMaGjXPYVUawSxQrRsjhjupyVxEvbHgUmIhKVlND+j02kA== - -arr-union@^3.1.0: - version "3.1.0" - resolved "https://registry.npmjs.org/arr-union/-/arr-union-3.1.0.tgz#e39b09aea9def866a8f206e288af63919bae39c4" - integrity sha512-sKpyeERZ02v1FeCZT8lrfJq5u6goHCtpTAzPwJYe7c8SPFOboNjNg1vz2L4VTn9T4PQxEx13TbXLmYUcS6Ug7Q== - array-buffer-byte-length@^1.0.0, array-buffer-byte-length@^1.0.1: version "1.0.1" resolved "https://registry.npmjs.org/array-buffer-byte-length/-/array-buffer-byte-length-1.0.1.tgz#1e5583ec16763540a27ae52eed99ff899223568f" @@ -3046,11 +3385,6 @@ array-union@^2.1.0: resolved "https://registry.npmjs.org/array-union/-/array-union-2.1.0.tgz#b798420adbeb1de828d84acd8a2e23d3efe85e8d" integrity sha512-HGyxoOTYUyCM6stUe6EJgnd4EoewAI7zMdfqO+kGjnlZmBDz/cR5pf8r/cR4Wq60sL/p0IkcjUEEPwS3GFrIyw== -array-unique@^0.3.2: - version "0.3.2" - resolved "https://registry.npmjs.org/array-unique/-/array-unique-0.3.2.tgz#a894b75d4bc4f6cd679ef3244a9fd8f46ae2d428" - integrity sha512-SleRWjh9JUud2wH1hPs9rZBZ33H6T9HOiL0uwGnGx9FpE6wKGyfWugmbkEOIs6qWrZhg0LWeLziLrEwQJhs5mQ== - array.prototype.every@^1.1.6: version "1.1.6" resolved "https://registry.npmjs.org/array.prototype.every/-/array.prototype.every-1.1.6.tgz#1717b407d019913250317300d814a1b6660f10d7" @@ -3081,33 +3415,11 @@ 
asap@^2.0.0: resolved "https://registry.npmjs.org/asap/-/asap-2.0.6.tgz#e50347611d7e690943208bbdafebcbc2fb866d46" integrity sha512-BSHWgDSAiKs50o2Re8ppvp3seVHXSRM44cdSsT9FfNEUUZLOGWVCsiWaRPWM1Znn+mqZ1OfVZ3z3DWEzSp7hRA== -asn1.js@^4.10.1: - version "4.10.1" - resolved "https://registry.npmjs.org/asn1.js/-/asn1.js-4.10.1.tgz#b9c2bf5805f1e64aadeed6df3a2bfafb5a73f5a0" - integrity sha512-p32cOF5q0Zqs9uBiONKYLm6BClCoBCM5O9JfeUSlnQLBTxYdTK+pW+nXflm8UkKd2UYlEbYz5qEi0JuZR9ckSw== - dependencies: - bn.js "^4.0.0" - inherits "^2.0.1" - minimalistic-assert "^1.0.0" - assert-never@^1.1.0, assert-never@^1.2.1: version "1.3.0" resolved "https://registry.npmjs.org/assert-never/-/assert-never-1.3.0.tgz#c53cf3ad8fcdb67f400a941dea66dac7fe82dd2e" integrity sha512-9Z3vxQ+berkL/JJo0dK+EY3Lp0s3NtSnP3VCLsh5HDcZPrh0M+KQRK5sWhUeyPPH+/RCxZqOxLMR+YC6vlviEQ== -assert@^1.1.1: - version "1.5.1" - resolved "https://registry.npmjs.org/assert/-/assert-1.5.1.tgz#038ab248e4ff078e7bc2485ba6e6388466c78f76" - integrity sha512-zzw1uCAgLbsKwBfFc8CX78DDg+xZeBksSO3vwVIDDN5i94eOrPsSSyiVhmsSABFDM/OcpE2aagCat9dnWQLG1A== - dependencies: - object.assign "^4.1.4" - util "^0.10.4" - -assign-symbols@^1.0.0: - version "1.0.0" - resolved "https://registry.npmjs.org/assign-symbols/-/assign-symbols-1.0.0.tgz#59667f41fadd4f20ccbc2bb96b8d4f7f78ec0367" - integrity sha512-Q+JC7Whu8HhmTdBph/Tq59IoRtoy6KAm5zzPv00WdujX82lbAL8K7WVjne7vdCsAmbF4AYaDOPyO3k0kl8qIrw== - ast-types@0.13.3: version "0.13.3" resolved "https://registry.npmjs.org/ast-types/-/ast-types-0.13.3.tgz#50da3f28d17bdbc7969a3a2d83a0e4a72ae755a7" @@ -3144,11 +3456,6 @@ async-disk-cache@^2.0.0: rsvp "^4.8.5" username-sync "^1.0.2" -async-each@^1.0.1: - version "1.0.6" - resolved "https://registry.npmjs.org/async-each/-/async-each-1.0.6.tgz#52f1d9403818c179b7561e11a5d1b77eb2160e77" - integrity sha512-c646jH1avxr+aVpndVMeAfYw7wAa6idufrlN3LPA4PmKS0QEGp6PIC9nwz0WQkkvBGAMEki3pFdtxaF39J9vvg== - async-promise-queue@^1.0.3, async-promise-queue@^1.0.5: version "1.0.5" resolved "https://registry.npmjs.org/async-promise-queue/-/async-promise-queue-1.0.5.tgz#cb23bce9fce903a133946a700cc85f27f09ea49d" @@ -3386,7 +3693,7 @@ babel-import-util@^1.1.0: resolved "https://registry.npmjs.org/babel-import-util/-/babel-import-util-1.4.1.tgz#1df6fd679845df45494bac9ca12461d49497fdd4" integrity sha512-TNdiTQdPhXlx02pzG//UyVPSKE7SNWjY0n4So/ZnjQpWwaM5LvWBLkWa1JKll5u06HNscHD91XZPuwrMg1kadQ== -babel-import-util@^2.0.0: +babel-import-util@^2.0.0, babel-import-util@^2.0.1: version "2.1.1" resolved "https://registry.npmjs.org/babel-import-util/-/babel-import-util-2.1.1.tgz#0f4905fe899abfb8cd835dd52f3df1966d1ffbb0" integrity sha512-3qBQWRjzP9NreSH/YrOEU1Lj5F60+pWSLP0kIdCWxjFHH7pX2YPHIxQ67el4gnMNfYoDxSDGcT0zpVlZ+gVtQA== @@ -3396,6 +3703,11 @@ babel-import-util@^3.0.0: resolved "https://registry.npmjs.org/babel-import-util/-/babel-import-util-3.0.0.tgz#5814c6a58e7b80e64156b48fdfd34d48e6e0b1df" integrity sha512-4YNPkuVsxAW5lnSTa6cn4Wk49RX6GAB6vX+M6LqEtN0YePqoFczv1/x0EyLK/o+4E1j9jEuYj5Su7IEPab5JHQ== +babel-import-util@^3.0.1: + version "3.0.1" + resolved "https://registry.npmjs.org/babel-import-util/-/babel-import-util-3.0.1.tgz#62dd0476e855bf57522e1d0027916dc0c0b0fdb2" + integrity sha512-2copPaWQFUrzooJVIVZA/Oppx/S/KOoZ4Uhr+XWEQDMZ8Rvq/0SNQpbdIyMBJ8IELWt10dewuJw+tX4XjOo7Rg== + babel-loader@^8.0.6, babel-loader@^8.1.0: version "8.3.0" resolved "https://registry.npmjs.org/babel-loader/-/babel-loader-8.3.0.tgz#124936e841ba4fe8176786d6ff28add1f134d6a8" @@ -3520,17 +3832,6 @@ babel-plugin-module-resolver@^4.1.0: reselect "^4.0.0" 
resolve "^1.13.1" -babel-plugin-module-resolver@^5.0.0: - version "5.0.2" - resolved "https://registry.npmjs.org/babel-plugin-module-resolver/-/babel-plugin-module-resolver-5.0.2.tgz#cdeac5d4aaa3b08dd1ac23ddbf516660ed2d293e" - integrity sha512-9KtaCazHee2xc0ibfqsDeamwDps6FZNo5S0Q81dUqEuFzVwPhcT4J5jOqIVvgCA3Q/wO9hKYxN/Ds3tIsp5ygg== - dependencies: - find-babel-config "^2.1.1" - glob "^9.3.3" - pkg-up "^3.1.0" - reselect "^4.1.7" - resolve "^1.22.8" - babel-plugin-polyfill-corejs2@^0.4.10: version "0.4.11" resolved "https://registry.npmjs.org/babel-plugin-polyfill-corejs2/-/babel-plugin-polyfill-corejs2-0.4.11.tgz#30320dfe3ffe1a336c15afdcdafd6fd615b25e33" @@ -3937,7 +4238,7 @@ balanced-match@^1.0.0: resolved "https://registry.npmjs.org/balanced-match/-/balanced-match-1.0.2.tgz#e83e3a7e3f300b34cb9d87f615fa0cbf357690ee" integrity sha512-3oSeUO0TMV67hN1AmbXsK4yaqU7tjiHlbxRDZOpH0KW9+CeX4bRAaX0Anxt0tx2MrpRpWwQaPwIlISEJhYU5Pw== -base64-js@^1.0.2, base64-js@^1.3.0, base64-js@^1.3.1: +base64-js@^1.3.0, base64-js@^1.3.1: version "1.5.1" resolved "https://registry.npmjs.org/base64-js/-/base64-js-1.5.1.tgz#1b1b440160a5bf7ad40b650f095963481903930a" integrity sha512-AKpaYlHn8t4SVbOHCy+b5+KKgvR4vrsD8vbvrbiQJps7fKDTkjkDry6ji0rUJjC0kzbNePLwzxq8iypo41qeWA== @@ -3947,19 +4248,6 @@ base64id@2.0.0, base64id@~2.0.0: resolved "https://registry.npmjs.org/base64id/-/base64id-2.0.0.tgz#2770ac6bc47d312af97a8bf9a634342e0cd25cb6" integrity sha512-lGe34o6EHj9y3Kts9R4ZYs/Gr+6N7MCaMlIFA3F1R2O5/m7K06AxfSeO5530PEERE6/WyEg3lsuyw4GHlPZHog== -base@^0.11.1: - version "0.11.2" - resolved "https://registry.npmjs.org/base/-/base-0.11.2.tgz#7bde5ced145b6d551a90db87f83c558b4eb48a8f" - integrity sha512-5T6P4xPgpp0YDFvSWwEZ4NoE3aM4QBQXDzmVbraCkFj8zHM+mba8SyqB5DbZWyR7mYHo6Y7BdQo3MoA4m0TeQg== - dependencies: - cache-base "^1.0.1" - class-utils "^0.3.5" - component-emitter "^1.2.1" - define-property "^1.0.0" - isobject "^3.0.1" - mixin-deep "^1.2.0" - pascalcase "^0.1.1" - basic-auth@~2.0.1: version "2.0.1" resolved "https://registry.npmjs.org/basic-auth/-/basic-auth-2.0.1.tgz#b998279bf47ce38344b4f3cf916d4679bbf51e3a" @@ -3967,33 +4255,23 @@ basic-auth@~2.0.1: dependencies: safe-buffer "5.1.2" +better-path-resolve@1.0.0: + version "1.0.0" + resolved "https://registry.npmjs.org/better-path-resolve/-/better-path-resolve-1.0.0.tgz#13a35a1104cdd48a7b74bf8758f96a1ee613f99d" + integrity sha512-pbnl5XzGBdrFU/wT4jqmJVPn2B6UHPBOhzMQkY/SPUPB6QtUXtmBHBIwCbXJol93mOpGMnQyP/+BB19q04xj7g== + dependencies: + is-windows "^1.0.0" + big.js@^5.2.2: version "5.2.2" resolved "https://registry.npmjs.org/big.js/-/big.js-5.2.2.tgz#65f0af382f578bcdc742bd9c281e9cb2d7768328" integrity sha512-vyL2OymJxmarO8gxMr0mhChsO9QGwhynfuu4+MHTAW6czfq9humCB7rKpUjDd9YUiDPU4mzpyupFSvOClAwbmQ== -binary-extensions@^1.0.0: - version "1.13.1" - resolved "https://registry.npmjs.org/binary-extensions/-/binary-extensions-1.13.1.tgz#598afe54755b2868a5330d2aff9d4ebb53209b65" - integrity sha512-Un7MIEDdUC5gNpcGDV97op1Ywk748MpHcFTHoYs6qnj1Z3j7I53VG3nwZhKzoBZmbdRNnb6WRdFlwl7tSDuZGw== - -binary-extensions@^2.0.0: - version "2.3.0" - resolved "https://registry.npmjs.org/binary-extensions/-/binary-extensions-2.3.0.tgz#f6e14a97858d327252200242d4ccfe522c445522" - integrity sha512-Ceh+7ox5qe7LJuLHoY0feh3pHuUDHAcRUeyL2VYghZwfpkNIy/+8Ocg0a3UuSoYzavmylwuLWQOf3hl0jjMMIw== - "binaryextensions@1 || 2", binaryextensions@^2.1.2: version "2.3.0" resolved "https://registry.npmjs.org/binaryextensions/-/binaryextensions-2.3.0.tgz#1d269cbf7e6243ea886aa41453c3651ccbe13c22" integrity 
sha512-nAihlQsYGyc5Bwq6+EsubvANYGExeJKHDO3RjnvwU042fawQTQfM3Kxn7IHUXQOz4bzfwsGYYHGSvXyW4zOGLg== -bindings@^1.5.0: - version "1.5.0" - resolved "https://registry.npmjs.org/bindings/-/bindings-1.5.0.tgz#10353c9e945334bc0511a6d90b38fbc7c9c504df" - integrity sha512-p2q/t/mhvuOj/UeLlV6566GD/guowlr0hHxClI0W9m7MWYkL1F0hLo+0Aexs9HSPCtR1SXQ0TD3MMKrXZajbiQ== - dependencies: - file-uri-to-path "1.0.0" - bl@^4.1.0: version "4.1.0" resolved "https://registry.npmjs.org/bl/-/bl-4.1.0.tgz#451535264182bec2fbbc83a62ab98cf11d9f7b3a" @@ -4008,7 +4286,7 @@ blank-object@^1.0.1: resolved "https://registry.npmjs.org/blank-object/-/blank-object-1.0.2.tgz#f990793fbe9a8c8dd013fb3219420bec81d5f4b9" integrity sha512-kXQ19Xhoghiyw66CUiGypnuRpWlbHAzY/+NyvqTEdTfhfQGH1/dbEMYiXju7fYKIFePpzp/y9dsu5Cu/PkmawQ== -bluebird@^3.4.6, bluebird@^3.5.5, bluebird@^3.7.2: +bluebird@^3.4.6, bluebird@^3.7.2: version "3.7.2" resolved "https://registry.npmjs.org/bluebird/-/bluebird-3.7.2.tgz#9f229c15be272454ffa973ace0dbee79a1b0c36f" integrity sha512-XpNj6GDQzdfW+r2Wnn7xiSAd7TM3jzkxGXBGTtWKuSXv1xUV+azxAm8jdWZN06QTQk+2N2XB9jRDkvbmQmcRtg== @@ -4018,20 +4296,15 @@ blueimp-md5@^2.10.0: resolved "https://registry.npmjs.org/blueimp-md5/-/blueimp-md5-2.19.0.tgz#b53feea5498dcb53dc6ec4b823adb84b729c4af0" integrity sha512-DRQrD6gJyy8FbiE4s+bDoXS9hiW3Vbx5uCdwvcCf3zLHL+Iv7LtGHLpr+GZV8rHG8tK766FGYBwRbu8pELTt+w== -bn.js@^4.0.0, bn.js@^4.1.0, bn.js@^4.11.9: +bn.js@^4.11.9: version "4.12.0" resolved "https://registry.npmjs.org/bn.js/-/bn.js-4.12.0.tgz#775b3f278efbb9718eec7361f483fb36fbbfea88" integrity sha512-c98Bf3tPniI+scsdk237ku1Dc3ujXQTSgyiPUDEOe7tRkhrqridvh8klBv0HCEso1OLOYcHuCv/cS6DNxKH+ZA== -bn.js@^5.0.0, bn.js@^5.2.1: - version "5.2.1" - resolved "https://registry.npmjs.org/bn.js/-/bn.js-5.2.1.tgz#0bc527a6a0d18d0aa8d5b0538ce4a77dccfa7b70" - integrity sha512-eXRvHzWyYPBuB4NBy0cmYQjGitUrtqwbvlzP3G6VFnNRbsZQIxQ10PbKKHt8gZ/HW/D/747aDl+QkDqg3KQLMQ== - -body-parser@1.20.2, body-parser@^1.19.0: - version "1.20.2" - resolved "https://registry.npmjs.org/body-parser/-/body-parser-1.20.2.tgz#6feb0e21c4724d06de7ff38da36dad4f57a747fd" - integrity sha512-ml9pReCu3M61kGlqoTm2umSXTlRTuGTx0bfYj+uIUKKYycG5NtSbeetV3faSU6R7ajOPw0g/J1PvK4qNy7s5bA== +body-parser@1.20.2, body-parser@1.20.3, body-parser@^1.19.0: + version "1.20.3" + resolved "https://registry.npmjs.org/body-parser/-/body-parser-1.20.3.tgz#1953431221c6fb5cd63c4b36d53fab0928e548c6" + integrity sha512-7rAxByjUMqQ3/bHJy7D6OGXvx/MMc4IqBn/X0fcM1QUcAItpZrBEYhWGem+tzXH90c+G01ypMcYJBO9Y30203g== dependencies: bytes "3.1.2" content-type "~1.0.5" @@ -4041,7 +4314,7 @@ body-parser@1.20.2, body-parser@^1.19.0: http-errors "2.0.0" iconv-lite "0.4.24" on-finished "2.4.1" - qs "6.11.0" + qs "6.13.0" raw-body "2.5.2" type-is "~1.6.18" unpipe "1.0.0" @@ -4081,7 +4354,7 @@ brace-expansion@^1.1.7: balanced-match "^1.0.0" concat-map "0.0.1" -braces@^2.3.1, braces@^2.3.2, braces@^3.0.0, braces@^3.0.3, braces@~3.0.2: +braces@^3.0.0, braces@^3.0.3: version "3.0.3" resolved "https://registry.npmjs.org/braces/-/braces-3.0.3.tgz#490332f40919452272d55a8480adc0c441358789" integrity sha512-yQbXgO/OSZVD2IsiLlro+7Hf6Q18EJrKSEsdoMzKePKXct3gvD8oLcOQdIzGupr5Fj+EDe8gO/lxc1BzfMpxvA== @@ -4149,20 +4422,6 @@ broccoli-babel-transpiler@^7.8.0: rsvp "^4.8.4" workerpool "^3.1.1" -broccoli-babel-transpiler@^8.0.0: - version "8.0.0" - resolved "https://registry.npmjs.org/broccoli-babel-transpiler/-/broccoli-babel-transpiler-8.0.0.tgz#07576728a95b840a99d5f0f9b07b71a737f69319" - integrity 
sha512-3HEp3flvasUKJGWERcrPgM1SWvHJ0O/fmbEtY9L4kDyMSnqjY6hTYvNvgWCIgbwXAYAUlZP0vjAQsmyLNGLwFw== - dependencies: - broccoli-persistent-filter "^3.0.0" - clone "^2.1.2" - hash-for-dep "^1.4.7" - heimdalljs "^0.2.1" - heimdalljs-logger "^0.1.9" - json-stable-stringify "^1.0.1" - rsvp "^4.8.4" - workerpool "^6.0.2" - broccoli-bridge@^1.0.0: version "1.0.0" resolved "https://registry.npmjs.org/broccoli-bridge/-/broccoli-bridge-1.0.0.tgz#6223fd64b62062c31333539f0f3c42d0acd92fb1" @@ -4359,7 +4618,7 @@ broccoli-funnel@^2.0.0, broccoli-funnel@^2.0.1, broccoli-funnel@^2.0.2: symlink-or-copy "^1.0.0" walk-sync "^0.3.1" -broccoli-funnel@^3.0.0, broccoli-funnel@^3.0.2, broccoli-funnel@^3.0.3, broccoli-funnel@^3.0.5, broccoli-funnel@^3.0.8: +broccoli-funnel@^3.0.2, broccoli-funnel@^3.0.3, broccoli-funnel@^3.0.5, broccoli-funnel@^3.0.8: version "3.0.8" resolved "https://registry.npmjs.org/broccoli-funnel/-/broccoli-funnel-3.0.8.tgz#f5b62e2763c3918026a15a3c833edc889971279b" integrity sha512-ng4eIhPYiXqMw6SyGoxPHR3YAwEd2lr9FgBI1CyTbspl4txZovOsmzFkMkGAlu88xyvYXJqHiM2crfLa65T1BQ== @@ -4516,7 +4775,7 @@ broccoli-persistent-filter@^2.1.0, broccoli-persistent-filter@^2.2.1, broccoli-p sync-disk-cache "^1.3.3" walk-sync "^1.0.0" -broccoli-persistent-filter@^3.0.0, broccoli-persistent-filter@^3.1.1, broccoli-persistent-filter@^3.1.2: +broccoli-persistent-filter@^3.1.2: version "3.1.3" resolved "https://registry.npmjs.org/broccoli-persistent-filter/-/broccoli-persistent-filter-3.1.3.tgz#aca815bf3e3b0247bd0a7b567fdb0d0e08c99cc2" integrity sha512-Q+8iezprZzL9voaBsDY3rQVl7c7H5h+bvv8SpzCZXPZgfBFCbx7KFQ2c3rZR6lW5k4Kwoqt7jG+rZMUg67Gwxw== @@ -4589,29 +4848,6 @@ broccoli-plugin@^3.1.0: rimraf "^2.3.4" symlink-or-copy "^1.1.8" -broccoli-postcss-single@^5.0.1: - version "5.0.2" - resolved "https://registry.npmjs.org/broccoli-postcss-single/-/broccoli-postcss-single-5.0.2.tgz#f23661b3011494d8a2dbd8ff39eb394e80313682" - integrity sha512-r4eWtz/5uihtHwOszViWwV6weJr9VryvaqtVo1DOh4gL+TbTyU+NX+Y+t9TqUw99OtuivMz4uHLLH7zZECbZmw== - dependencies: - broccoli-caching-writer "^3.0.3" - include-path-searcher "^0.1.0" - minimist ">=1.2.5" - mkdirp "^1.0.3" - object-assign "^4.1.1" - postcss "^8.1.4" - -broccoli-postcss@^6.0.1: - version "6.1.0" - resolved "https://registry.npmjs.org/broccoli-postcss/-/broccoli-postcss-6.1.0.tgz#1e15c5e8a65a984544224f083cbd1e6763691b60" - integrity sha512-I8+DHq5xcCBHU0PpCtDMayAmSUVx07CqAquUpdlNUHckXeD//cUFf4aFQllnZBhF8Z86YLhuA+j7qvCYYgBXRg== - dependencies: - broccoli-funnel "^3.0.0" - broccoli-persistent-filter "^3.1.1" - minimist ">=1.2.5" - object-assign "^4.1.1" - postcss "^8.1.4" - broccoli-rollup@^5.0.0: version "5.0.0" resolved "https://registry.npmjs.org/broccoli-rollup/-/broccoli-rollup-5.0.0.tgz#a77b53bcef1b70e988913fee82265c0a4ca530da" @@ -4653,7 +4889,7 @@ broccoli-source@^2.1.2: resolved "https://registry.npmjs.org/broccoli-source/-/broccoli-source-2.1.2.tgz#e9ae834f143b607e9ec114ade66731500c38b90b" integrity sha512-1lLayO4wfS0c0Sj50VfHJXNWf94FYY0WUhxj0R77thbs6uWI7USiOWFqQV5dRmhAJnoKaGN4WyLGQbgjgiYFwQ== -broccoli-source@^3.0.0, broccoli-source@^3.0.1: +broccoli-source@^3.0.0: version "3.0.1" resolved "https://registry.npmjs.org/broccoli-source/-/broccoli-source-3.0.1.tgz#fd581b2f3877ca1338f724f6ef70acec8c7e1444" integrity sha512-ZbGVQjivWi0k220fEeIUioN6Y68xjMy0xiLAc0LdieHI99gw+tafU8w0CggBDYVNsJMKUr006AZaM7gNEwCxEg== @@ -4737,7 +4973,7 @@ broccoli@^3.5.1: underscore.string "^3.2.2" watch-detector "^1.0.0" -brorand@^1.0.1, brorand@^1.1.0: +brorand@^1.1.0: version "1.1.0" resolved 
"https://registry.npmjs.org/brorand/-/brorand-1.1.0.tgz#12c25efe40a45e3c323eb8675a0a0ce57b22371f" integrity sha512-cKV8tMCEpQs4hK/ik71d6LrPOnpkpGBR0wzxqr68g2m/LB2GxVYQroAjMJZRVM1Y4BCjCKc3vAamxSzOY2RP+w== @@ -4747,68 +4983,6 @@ browser-process-hrtime@^1.0.0: resolved "https://registry.npmjs.org/browser-process-hrtime/-/browser-process-hrtime-1.0.0.tgz#3c9b4b7d782c8121e56f10106d84c0d0ffc94626" integrity sha512-9o5UecI3GhkpM6DrXr69PblIuWxPKk9Y0jHBRhdocZ2y7YECBFCsHm79Pr3OyR2AvjhDkabFJaDJMYRazHgsow== -browserify-aes@^1.0.4, browserify-aes@^1.2.0: - version "1.2.0" - resolved "https://registry.npmjs.org/browserify-aes/-/browserify-aes-1.2.0.tgz#326734642f403dabc3003209853bb70ad428ef48" - integrity sha512-+7CHXqGuspUn/Sl5aO7Ea0xWGAtETPXNSAjHo48JfLdPWcMng33Xe4znFvQweqc/uzk5zSOI3H52CYnjCfb5hA== - dependencies: - buffer-xor "^1.0.3" - cipher-base "^1.0.0" - create-hash "^1.1.0" - evp_bytestokey "^1.0.3" - inherits "^2.0.1" - safe-buffer "^5.0.1" - -browserify-cipher@^1.0.0: - version "1.0.1" - resolved "https://registry.npmjs.org/browserify-cipher/-/browserify-cipher-1.0.1.tgz#8d6474c1b870bfdabcd3bcfcc1934a10e94f15f0" - integrity sha512-sPhkz0ARKbf4rRQt2hTpAHqn47X3llLkUGn+xEJzLjwY8LRs2p0v7ljvI5EyoRO/mexrNunNECisZs+gw2zz1w== - dependencies: - browserify-aes "^1.0.4" - browserify-des "^1.0.0" - evp_bytestokey "^1.0.0" - -browserify-des@^1.0.0: - version "1.0.2" - resolved "https://registry.npmjs.org/browserify-des/-/browserify-des-1.0.2.tgz#3af4f1f59839403572f1c66204375f7a7f703e9c" - integrity sha512-BioO1xf3hFwz4kc6iBhI3ieDFompMhrMlnDFC4/0/vd5MokpuAc3R+LYbwTA9A5Yc9pq9UYPqffKpW2ObuwX5A== - dependencies: - cipher-base "^1.0.1" - des.js "^1.0.0" - inherits "^2.0.1" - safe-buffer "^5.1.2" - -browserify-rsa@^4.0.0, browserify-rsa@^4.1.0: - version "4.1.0" - resolved "https://registry.npmjs.org/browserify-rsa/-/browserify-rsa-4.1.0.tgz#b2fd06b5b75ae297f7ce2dc651f918f5be158c8d" - integrity sha512-AdEER0Hkspgno2aR97SAf6vi0y0k8NuOpGnVH3O99rcA5Q6sh8QxcngtHuJ6uXwnfAXNM4Gn1Gb7/MV1+Ymbog== - dependencies: - bn.js "^5.0.0" - randombytes "^2.0.1" - -browserify-sign@^4.0.0: - version "4.2.3" - resolved "https://registry.npmjs.org/browserify-sign/-/browserify-sign-4.2.3.tgz#7afe4c01ec7ee59a89a558a4b75bd85ae62d4208" - integrity sha512-JWCZW6SKhfhjJxO8Tyiiy+XYB7cqd2S5/+WeYHsKdNKFlCBhKbblba1A/HN/90YwtxKc8tCErjffZl++UNmGiw== - dependencies: - bn.js "^5.2.1" - browserify-rsa "^4.1.0" - create-hash "^1.2.0" - create-hmac "^1.1.7" - elliptic "^6.5.5" - hash-base "~3.0" - inherits "^2.0.4" - parse-asn1 "^5.1.7" - readable-stream "^2.3.8" - safe-buffer "^5.2.1" - -browserify-zlib@^0.2.0: - version "0.2.0" - resolved "https://registry.npmjs.org/browserify-zlib/-/browserify-zlib-0.2.0.tgz#2869459d9aa3be245fe8fe2ca1f46e2e7f54d73f" - integrity sha512-Z942RysHXmJrhqk88FmKBVq/v5tqmSkDz7p54G/MGyjMnCFFnC79XWNbg+Vta8W6Wb2qtSZTSxIGkJrRpCFEiA== - dependencies: - pako "~1.0.5" - browserslist@^3.2.6: version "3.2.8" resolved "https://registry.npmjs.org/browserslist/-/browserslist-3.2.8.tgz#b0005361d6471f0f5952797a76fc985f1f978fc6" @@ -4839,20 +5013,6 @@ buffer-from@^1.0.0: resolved "https://registry.npmjs.org/buffer-from/-/buffer-from-1.1.2.tgz#2b146a6fd72e80b4f55d255f35ed59a3a9a41bd5" integrity sha512-E+XQCRwSbaaiChtv6k6Dwgc+bx+Bs6vuKJHHl5kox/BaKbhiXzqQOwK4cO22yElGp2OCmjwVhT3HmxgyPGnJfQ== -buffer-xor@^1.0.3: - version "1.0.3" - resolved "https://registry.npmjs.org/buffer-xor/-/buffer-xor-1.0.3.tgz#26e61ed1422fb70dd42e6e36729ed51d855fe8d9" - integrity 
sha512-571s0T7nZWK6vB67HI5dyUF7wXiNcfaPPPTl6zYCNApANjIvYJTg7hlud/+cJpdAhS7dVzqMLmfhfHR3rAcOjQ== - -buffer@^4.3.0: - version "4.9.2" - resolved "https://registry.npmjs.org/buffer/-/buffer-4.9.2.tgz#230ead344002988644841ab0244af8c44bbe3ef8" - integrity sha512-xq+q3SRMOxGivLhBNaUdC64hDTQwejJ+H0T/NB1XMtTVEwNTrfFF3gAxiyW0Bu/xWEGhjVKgUcMhCrUy2+uCWg== - dependencies: - base64-js "^1.0.2" - ieee754 "^1.1.4" - isarray "^1.0.0" - buffer@^5.5.0: version "5.7.1" resolved "https://registry.npmjs.org/buffer/-/buffer-5.7.1.tgz#ba62e7c13133053582197160851a8f648e99eed0" @@ -4861,11 +5021,6 @@ buffer@^5.5.0: base64-js "^1.3.1" ieee754 "^1.1.13" -builtin-status-codes@^3.0.0: - version "3.0.0" - resolved "https://registry.npmjs.org/builtin-status-codes/-/builtin-status-codes-3.0.0.tgz#85982878e21b98e1c66425e03d0174788f569ee8" - integrity sha512-HpGFw18DgFWlncDfjTa2rcQ4W88O1mC8e8yZ2AvQY5KDaktSTwo+KRf6nHK6FRI5FyRyb/5T6+TSxfP7QyGsmQ== - builtins@^1.0.3: version "1.0.3" resolved "https://registry.npmjs.org/builtins/-/builtins-1.0.3.tgz#cb94faeb61c8696451db36534e1422f94f0aee88" @@ -4886,42 +5041,6 @@ bytes@3.1.2: resolved "https://registry.npmjs.org/bytes/-/bytes-3.1.2.tgz#8b0beeb98605adf1b128fa4386403c009e0221a5" integrity sha512-/Nf7TyzTx6S3yRJObOAV7956r8cr2+Oj8AC5dt8wSP3BQAoeX58NoHyCU8P8zGkNXStjTSi6fzO6F0pBdcYbEg== -cacache@^12.0.2: - version "12.0.4" - resolved "https://registry.npmjs.org/cacache/-/cacache-12.0.4.tgz#668bcbd105aeb5f1d92fe25570ec9525c8faa40c" - integrity sha512-a0tMB40oefvuInr4Cwb3GerbL9xTj1D5yg0T5xrjGCGyfvbxseIXX7BAO/u/hIXdafzOI5JC3wDwHyf24buOAQ== - dependencies: - bluebird "^3.5.5" - chownr "^1.1.1" - figgy-pudding "^3.5.1" - glob "^7.1.4" - graceful-fs "^4.1.15" - infer-owner "^1.0.3" - lru-cache "^5.1.1" - mississippi "^3.0.0" - mkdirp "^0.5.1" - move-concurrently "^1.0.1" - promise-inflight "^1.0.1" - rimraf "^2.6.3" - ssri "^6.0.1" - unique-filename "^1.1.1" - y18n "^4.0.0" - -cache-base@^1.0.1: - version "1.0.1" - resolved "https://registry.npmjs.org/cache-base/-/cache-base-1.0.1.tgz#0a7f46416831c8b662ee36fe4e7c59d76f666ab2" - integrity sha512-AKcdTnFSWATd5/GCPRxr2ChwIJ85CeyrEyjRHlKxQ56d4XJMGym0uAiKn0xbLOGOl3+yRpOTi484dVCEc5AUzQ== - dependencies: - collection-visit "^1.0.0" - component-emitter "^1.2.1" - get-value "^2.0.6" - has-value "^1.0.0" - isobject "^3.0.1" - set-value "^2.0.0" - to-object-path "^0.3.0" - union-value "^1.0.0" - unset-value "^1.0.0" - calculate-cache-key-for-tree@2.0.0, calculate-cache-key-for-tree@^2.0.0: version "2.0.0" resolved "https://registry.npmjs.org/calculate-cache-key-for-tree/-/calculate-cache-key-for-tree-2.0.0.tgz#7ac57f149a4188eacb0a45b210689215d3fef8d6" @@ -4936,6 +5055,14 @@ calculate-cache-key-for-tree@^1.1.0: dependencies: json-stable-stringify "^1.0.1" +call-bind-apply-helpers@^1.0.1, call-bind-apply-helpers@^1.0.2: + version "1.0.2" + resolved "https://registry.npmjs.org/call-bind-apply-helpers/-/call-bind-apply-helpers-1.0.2.tgz#4b5428c222be985d79c3d82657479dbe0b59b2d6" + integrity sha512-Sp1ablJ0ivDkSzjcaJdxEunN5/XvksFJ2sMBFfq6x0ryhQV/2b/KwFe21cMpmHtPOSij8K99/wSfoEuTObmuMQ== + dependencies: + es-errors "^1.3.0" + function-bind "^1.1.2" + call-bind@^1.0.2, call-bind@^1.0.5, call-bind@^1.0.6, call-bind@^1.0.7: version "1.0.7" resolved "https://registry.npmjs.org/call-bind/-/call-bind-1.0.7.tgz#06016599c40c56498c18769d2730be242b6fa3b9" @@ -4957,11 +5084,6 @@ callsites@^3.0.0, callsites@^3.1.0: resolved "https://registry.npmjs.org/callsites/-/callsites-3.1.0.tgz#b3630abd8943432f54b3f0519238e33cd7df2f73" integrity 
sha512-P8BjAsXvZS+VIDUI11hHCQEv74YT67YUi5JJFNWIqL235sBmjX4+qx9Muvls5ivyNENctx46xQLQ3aTuE7ssaQ== -camelcase-css@^2.0.1: - version "2.0.1" - resolved "https://registry.npmjs.org/camelcase-css/-/camelcase-css-2.0.1.tgz#ee978f6947914cc30c6b44741b6ed1df7f043fd5" - integrity sha512-QOSvevhslijgYwRx6Rv7zKdMF8lbRmx+uQGx2+vDc+KI/eBnsy9kit5aj23AgGu3pa4t9AgwbnXWqS+iOY+2aA== - camelcase@^5.3.1: version "5.3.1" resolved "https://registry.npmjs.org/camelcase/-/camelcase-5.3.1.tgz#e3c9b31569e106811df242f715725a1f4c494320" @@ -5072,44 +5194,12 @@ charm@^1.0.0: dependencies: inherits "^2.0.1" -"chokidar@>=3.0.0 <4.0.0", chokidar@^3.4.1, chokidar@^3.5.3: - version "3.6.0" - resolved "https://registry.npmjs.org/chokidar/-/chokidar-3.6.0.tgz#197c6cc669ef2a8dc5e7b4d97ee4e092c3eb0d5b" - integrity sha512-7VT13fmjotKpGipCW9JEQAusEPE+Ei8nl6/g4FBAmIm0GOOLMua9NDDo/DWp0ZAxCr3cPq5ZpBqmPAQgDda2Pw== - dependencies: - anymatch "~3.1.2" - braces "~3.0.2" - glob-parent "~5.1.2" - is-binary-path "~2.1.0" - is-glob "~4.0.1" - normalize-path "~3.0.0" - readdirp "~3.6.0" - optionalDependencies: - fsevents "~2.3.2" - -chokidar@^2.1.8: - version "2.1.8" - resolved "https://registry.npmjs.org/chokidar/-/chokidar-2.1.8.tgz#804b3a7b6a99358c3c5c61e71d8728f041cff917" - integrity sha512-ZmZUazfOzf0Nve7duiCKD23PFSCs4JPoYyccjUFF3aQkQadqBhfzhjkwBH2mNOG9cTBwhamM37EIsIkZw3nRgg== +chokidar@^4.0.0: + version "4.0.3" + resolved "https://registry.npmjs.org/chokidar/-/chokidar-4.0.3.tgz#7be37a4c03c9aee1ecfe862a4a23b2c70c205d30" + integrity sha512-Qgzu8kfBvo+cA4962jnP1KkS6Dop5NS6g7R5LFYJr4b8Ub94PPQXUksCw9PvXoeXPRRddRNC5C1JQUR2SMGtnA== dependencies: - anymatch "^2.0.0" - async-each "^1.0.1" - braces "^2.3.2" - glob-parent "^3.1.0" - inherits "^2.0.3" - is-binary-path "^1.0.0" - is-glob "^4.0.0" - normalize-path "^3.0.0" - path-is-absolute "^1.0.0" - readdirp "^2.2.1" - upath "^1.1.1" - optionalDependencies: - fsevents "^1.2.7" - -chownr@^1.1.1: - version "1.1.4" - resolved "https://registry.npmjs.org/chownr/-/chownr-1.1.4.tgz#6fc9d7b42d32a583596337666e7d08084da2cc6b" - integrity sha512-jJ0bqzaylmJtVnNgzTeSOs8DPavpbYgEr/b0YL8/2GO3xJEhInFmhKMUnEJQjZumK7KXGFhUy89PrsJWlakBVg== + readdirp "^4.0.1" chrome-trace-event@^1.0.2: version "1.0.4" @@ -5121,24 +5211,6 @@ ci-info@^2.0.0: resolved "https://registry.npmjs.org/ci-info/-/ci-info-2.0.0.tgz#67a9e964be31a51e15e5010d58e6f12834002f46" integrity sha512-5tK7EtrZ0N+OLFMthtqOj4fI2Jeb88C4CAZPu25LDVUgXJ0A3Js4PMGqrn0JU1W0Mh1/Z8wZzYPxqUrXeBboCQ== -cipher-base@^1.0.0, cipher-base@^1.0.1, cipher-base@^1.0.3: - version "1.0.4" - resolved "https://registry.npmjs.org/cipher-base/-/cipher-base-1.0.4.tgz#8760e4ecc272f4c363532f926d874aae2c1397de" - integrity sha512-Kkht5ye6ZGmwv40uUDZztayT2ThLQGfnj/T71N/XzeZeo3nf8foyW7zGTsPYkEya3m5f3cAypH+qe7YOrM1U2Q== - dependencies: - inherits "^2.0.1" - safe-buffer "^5.0.1" - -class-utils@^0.3.5: - version "0.3.6" - resolved "https://registry.npmjs.org/class-utils/-/class-utils-0.3.6.tgz#f93369ae8b9a7ce02fd41faad0ca83033190c463" - integrity sha512-qOhPa/Fj7s6TY8H8esGu5QNpMMQxz79h+urzrNYN6mn+9BnxlDGf5QZ+XeCDsxSjPqsSR56XOZOJmpeurnLMeg== - dependencies: - arr-union "^3.1.0" - define-property "^0.2.5" - isobject "^3.0.0" - static-extend "^0.1.1" - cldr-core@^36.0.0: version "36.0.0" resolved "https://registry.npmjs.org/cldr-core/-/cldr-core-36.0.0.tgz#1d2148ed6802411845baeeb21432d7bbfde7d4f7" @@ -5229,6 +5301,11 @@ cli-width@^3.0.0: resolved "https://registry.npmjs.org/cli-width/-/cli-width-3.0.0.tgz#a2f48437a2caa9a22436e794bf071ec9e61cedf6" integrity 
sha512-FxqpkPPwu1HjuN93Omfm4h8uIanXofW0RxVEW3k5RKx+mJJYSthzNhp32Kzxxy3YAEZ/Dc/EWN1vZRY0+kOhbw== +clipboard-polyfill@^4.1.1: + version "4.1.1" + resolved "https://registry.npmjs.org/clipboard-polyfill/-/clipboard-polyfill-4.1.1.tgz#eaf074f91c0a55aa4c12fcfd4862d2cfb9a0cab9" + integrity sha512-nbvNLrcX0zviek5QHLFRAaLrx8y/s8+RF2stH43tuS+kP5XlHMrcD0UGBWq43Hwp6WuuK7KefRMP56S45ibZkA== + clipboard@^2.0.11: version "2.0.11" resolved "https://registry.npmjs.org/clipboard/-/clipboard-2.0.11.tgz#62180360b97dd668b6b3a84ec226975762a70be5" @@ -5262,19 +5339,20 @@ code-point-at@^1.0.0: resolved "https://registry.npmjs.org/code-point-at/-/code-point-at-1.1.0.tgz#0d070b4d043a5bea33a2f1a40e2edb3d9a4ccf77" integrity sha512-RpAVKQA5T63xEj6/giIbUEtZwJ4UFIc3ZtvEkiaUERylqe8xb5IvqcgOurZLahv93CLKfxcw5YI+DZcUBRyLXA== -codemirror@5.58.2, codemirror@~5.15.0: +codemirror-lang-hcl@^0.0.0-beta.2: + version "0.0.0-beta.2" + resolved "https://registry.npmjs.org/codemirror-lang-hcl/-/codemirror-lang-hcl-0.0.0-beta.2.tgz#05ab6dfa6399c5987942e2eb5051f3426d44aad5" + integrity sha512-R3ew7Z2EYTdHTMXsWKBW9zxnLoLPYO+CrAa3dPZjXLrIR96Q3GR4cwJKF7zkSsujsnWgwRQZonyWpXYXfhQYuQ== + dependencies: + "@codemirror/language" "^6.0.0" + "@lezer/highlight" "^1.0.0" + "@lezer/lr" "^1.0.0" + +codemirror@5.58.2: version "5.58.2" resolved "https://registry.npmjs.org/codemirror/-/codemirror-5.58.2.tgz#ed54a1796de1498688bea1cdd4e9eeb187565d1b" integrity sha512-K/hOh24cCwRutd1Mk3uLtjWzNISOkm4fvXiMO7LucCrqbh6aJDdtqUziim3MZUI6wOY0rvY1SlL1Ork01uMy6w== -collection-visit@^1.0.0: - version "1.0.0" - resolved "https://registry.npmjs.org/collection-visit/-/collection-visit-1.0.0.tgz#4bc0373c164bc3291b4d368c829cf1a80a59dca0" - integrity sha512-lNkKvzEeMBBjUGHZ+q6z9pSJla0KWAQPvtzhEV9+iGyQYG+pBpl7xKDhxoNSOZH2hhv0v5k0y2yAM4o4SjoSkw== - dependencies: - map-visit "^1.0.0" - object-visit "^1.0.0" - color-convert@^1.9.0: version "1.9.3" resolved "https://registry.npmjs.org/color-convert/-/color-convert-1.9.3.tgz#bb71850690e1f136567de629d2d5471deda4c1e8" @@ -5348,7 +5426,7 @@ commander@^2.20.0, commander@^2.6.0: resolved "https://registry.npmjs.org/commander/-/commander-2.20.3.tgz#fd485e84c03eb4881c20722ba48035e8531aeb33" integrity sha512-GpVkmM8vF2vQUkj2LvZmD35JxeJOLCwJ9cUkugyk2nuhbv3+mJvpLYYt+0+USMxE+oj+ey/lJEnhZw75x/OMcQ== -commander@^4.0.0, commander@^4.1.1: +commander@^4.1.1: version "4.1.1" resolved "https://registry.npmjs.org/commander/-/commander-4.1.1.tgz#9fd602bd936294e9e9ef46a3f4d6964044b18068" integrity sha512-NOKm8xhkzAjzFx8B2v5OAHT+u5pRQc2UCa2Vq9jYL/31o2wi9mxBA7LIFs3sV5VSC49z6pEhfbMULvShKj26WA== @@ -5378,11 +5456,6 @@ compare-versions@^3.6.0: resolved "https://registry.npmjs.org/compare-versions/-/compare-versions-3.6.0.tgz#1a5689913685e5a87637b8d3ffca75514ec41d62" integrity sha512-W6Af2Iw1z4CB7q4uU4hv646dW9GQuBM+YpC0UvUCWSD8w90SJjp+ujJuXaEMtAXBtSqGfMPuFOVn4/+FlaqfBA== -component-emitter@^1.2.1: - version "1.3.1" - resolved "https://registry.npmjs.org/component-emitter/-/component-emitter-1.3.1.tgz#ef1d5796f7d93f135ee6fb684340b26403c97d17" - integrity sha512-T0+barUSQRTUQASh8bx02dl+DhF54GtIDY13Y3m9oWTklKbb3Wv974meRpeZ3lp1JpLVECWWNHC4vaG2XHXouQ== - compressible@~2.0.16: version "2.0.18" resolved "https://registry.npmjs.org/compressible/-/compressible-2.0.18.tgz#af53cca6b070d4c3c0750fbd77286a6d7cc46fba" @@ -5408,16 +5481,6 @@ concat-map@0.0.1: resolved "https://registry.npmjs.org/concat-map/-/concat-map-0.0.1.tgz#d8a96bd77fd68df7793a73036a3ba0d5405d477b" integrity 
sha512-/Srv4dswyQNBfohGpz9o6Yb3Gz3SrUDqBH5rTuhGR7ahtlbYKnVxw2bCFMRljaA7EXHaXZ8wsHdodFvbkhKmqg== -concat-stream@^1.5.0: - version "1.6.2" - resolved "https://registry.npmjs.org/concat-stream/-/concat-stream-1.6.2.tgz#904bdf194cd3122fc675c77fc4ac3d4ff0fd1a34" - integrity sha512-27HBghJxjiZtIk3Ycvn/4kbJk/1uZuJFfuPEns6LaEvpvG1f0hTea8lilrouyo9mVc2GWdcEZ8OLoGmSADlrCw== - dependencies: - buffer-from "^1.0.0" - inherits "^2.0.3" - readable-stream "^2.2.2" - typedarray "^0.0.6" - configstore@^5.0.1: version "5.0.1" resolved "https://registry.npmjs.org/configstore/-/configstore-5.0.1.tgz#d365021b5df4b98cdd187d6a3b0e3f6a7cc5ed96" @@ -5440,11 +5503,6 @@ connect@^3.6.6: parseurl "~1.3.3" utils-merge "1.0.1" -console-browserify@^1.1.0: - version "1.2.0" - resolved "https://registry.npmjs.org/console-browserify/-/console-browserify-1.2.0.tgz#67063cef57ceb6cf4993a2ab3a55840ae8c49336" - integrity sha512-ZMkYO/LkF17QvCPqM0gxw8yUzigAOZOSWSHg91FH6orS7vcEj5dVZTidN2fQ14yBSdg97RqhSNwLUXInd52OTA== - console-control-strings@^1.0.0, console-control-strings@^1.1.0, console-control-strings@~1.1.0: version "1.1.0" resolved "https://registry.npmjs.org/console-control-strings/-/console-control-strings-1.1.0.tgz#3d7cf4464db6446ea644bf4b39507f9851008e8e" @@ -5468,11 +5526,6 @@ consolidate@^0.16.0: dependencies: bluebird "^3.7.2" -constants-browserify@^1.0.0: - version "1.0.0" - resolved "https://registry.npmjs.org/constants-browserify/-/constants-browserify-1.0.0.tgz#c20b96d8c617748aaf1c16021760cd27fcb8cb75" - integrity sha512-xFxOwqIzR/e1k1gLiWEophSCMqXcwVHIH7akf7b/vxcUeGunlj3hvZaaqxwHsTgn+IndtkQJgSztIDWeumWJDQ== - "consul-acls@file:packages/consul-acls": version "0.1.0" @@ -5543,28 +5596,11 @@ cookie@~0.4.1: resolved "https://registry.npmjs.org/cookie/-/cookie-0.4.2.tgz#0e41f24de5ecf317947c82fc789e06a884824432" integrity sha512-aSWTXFzaKWkvHO1Ny/s+ePFpvKsPnjc551iI41v3ny/ow6tBG5Vd+FuqGNhh1LxOmVzOlGUriIlOaokOvhaStA== -copy-concurrently@^1.0.0: - version "1.0.5" - resolved "https://registry.npmjs.org/copy-concurrently/-/copy-concurrently-1.0.5.tgz#92297398cae34937fcafd6ec8139c18051f0b5e0" - integrity sha512-f2domd9fsVDFtaFcbaRZuYXwtdmnzqbADSwhSWYxYB/Q8zsdUUFMXVRwXGDMWmbEzAn1kdRrtI1T/KTFOL4X2A== - dependencies: - aproba "^1.1.1" - fs-write-stream-atomic "^1.0.8" - iferr "^0.1.5" - mkdirp "^0.5.1" - rimraf "^2.5.4" - run-queue "^1.0.0" - copy-dereference@^1.0.0: version "1.0.0" resolved "https://registry.npmjs.org/copy-dereference/-/copy-dereference-1.0.0.tgz#6b131865420fd81b413ba994b44d3655311152b6" integrity sha512-40TSLuhhbiKeszZhK9LfNdazC67Ue4kq/gGwN5sdxEUWPXTIMmKmGmgD9mPfNKVAeecEW+NfEIpBaZoACCQLLw== -copy-descriptor@^0.1.0: - version "0.1.1" - resolved "https://registry.npmjs.org/copy-descriptor/-/copy-descriptor-0.1.1.tgz#676f6eb3c39997c2ee1ac3a924fd6124748f578d" - integrity sha512-XgZ0pFcakEUlbwQEVNg3+QAis1FyTL3Qel9FYy8pSkQqoG3PNoT0bOCQtOXcOkur21r2Eq2kI+IE+gsmAEVlYw== - core-js-compat@^3.31.0, core-js-compat@^3.36.1: version "3.37.1" resolved "https://registry.npmjs.org/core-js-compat/-/core-js-compat-3.37.1.tgz#c844310c7852f4bdf49b8d339730b97e17ff09ee" @@ -5608,36 +5644,10 @@ cosmiconfig@^7.0.0: path-type "^4.0.0" yaml "^1.10.0" -create-ecdh@^4.0.0: - version "4.0.4" - resolved "https://registry.npmjs.org/create-ecdh/-/create-ecdh-4.0.4.tgz#d6e7f4bffa66736085a0762fd3a632684dabcc4e" - integrity sha512-mf+TCx8wWc9VpuxfP2ht0iSISLZnt0JgWlrOKZiNqyUZWnjIaCIVNQArMHnCZKfEYRg6IM7A+NeJoN8gf/Ws0A== - dependencies: - bn.js "^4.1.0" - elliptic "^6.5.3" - -create-hash@^1.1.0, create-hash@^1.1.2, create-hash@^1.2.0: - version 
"1.2.0" - resolved "https://registry.npmjs.org/create-hash/-/create-hash-1.2.0.tgz#889078af11a63756bcfb59bd221996be3a9ef196" - integrity sha512-z00bCGNHDG8mHAkP7CtT1qVu+bFQUPjYq/4Iv3C3kWjTFV10zIjfSoeqXo9Asws8gwSHDGj/hl2u4OGIjapeCg== - dependencies: - cipher-base "^1.0.1" - inherits "^2.0.1" - md5.js "^1.3.4" - ripemd160 "^2.0.1" - sha.js "^2.4.0" - -create-hmac@^1.1.0, create-hmac@^1.1.4, create-hmac@^1.1.7: - version "1.1.7" - resolved "https://registry.npmjs.org/create-hmac/-/create-hmac-1.1.7.tgz#69170c78b3ab957147b2b8b04572e47ead2243ff" - integrity sha512-MJG9liiZ+ogc4TzUwuvbER1JRdgvUFSB5+VR/g5h82fGaIRWMWddtKBHi7/sVhfjQZ6SehlyhvQYrcYkaUIpLg== - dependencies: - cipher-base "^1.0.3" - create-hash "^1.1.0" - inherits "^2.0.1" - ripemd160 "^2.0.0" - safe-buffer "^5.0.1" - sha.js "^2.4.8" +crelt@^1.0.5, crelt@^1.0.6: + version "1.0.6" + resolved "https://registry.npmjs.org/crelt/-/crelt-1.0.6.tgz#7cc898ea74e190fb6ef9dae57f8f81cf7302df72" + integrity sha512-VQ2MBenTq1fWZUH9DJNGti7kKv6EeAuYr3cLwxUWhIu1baTaXh4Ib5W2CqHVqib4/MqbYGJqiL3Zb8GJZr3l4g== cross-spawn@^6.0.0, cross-spawn@^6.0.5: version "6.0.5" @@ -5659,23 +5669,6 @@ cross-spawn@^7.0.0, cross-spawn@^7.0.2, cross-spawn@^7.0.3: shebang-command "^2.0.0" which "^2.0.1" -crypto-browserify@^3.11.0: - version "3.12.0" - resolved "https://registry.npmjs.org/crypto-browserify/-/crypto-browserify-3.12.0.tgz#396cf9f3137f03e4b8e532c58f698254e00f80ec" - integrity sha512-fz4spIh+znjO2VjL+IdhEpRJ3YN6sMzITSBijk6FK2UvTqruSQW+/cCZTSNsMiZNvUeq0CqurF+dAbyiGOY6Wg== - dependencies: - browserify-cipher "^1.0.0" - browserify-sign "^4.0.0" - create-ecdh "^4.0.0" - create-hash "^1.1.0" - create-hmac "^1.1.0" - diffie-hellman "^5.0.0" - inherits "^2.0.1" - pbkdf2 "^3.0.3" - public-encrypt "^4.0.0" - randombytes "^2.0.0" - randomfill "^1.0.3" - crypto-random-string@^2.0.0: version "2.0.0" resolved "https://registry.npmjs.org/crypto-random-string/-/crypto-random-string-2.0.0.tgz#ef2a7a966ec11083388369baa02ebead229b30d5" @@ -5741,10 +5734,10 @@ cssstyle@^2.3.0: dependencies: cssom "~0.3.6" -cyclist@^1.0.1: - version "1.0.2" - resolved "https://registry.npmjs.org/cyclist/-/cyclist-1.0.2.tgz#673b5f233bf34d8e602b949429f8171d9121bea3" - integrity sha512-0sVXIohTfLqVIW3kb/0n6IiWF3Ifj5nm2XaSrLq2DI6fKIGa2fYAZdk917rUneaeLVpYfFcyXE2ft0fe3remsA== +csstype@^3.1.3: + version "3.1.3" + resolved "https://registry.npmjs.org/csstype/-/csstype-3.1.3.tgz#d80ff294d114fb0e6ac500fbf85b60137d7eff81" + integrity sha512-M1uQkMl8rQK/szD0LNhtqxIPLpimGm8sOBwU7lLnCpSbTyY3yeU1Vc7l4KT5zT4s/yOxHH5O7tIuuLOCnLADRw== "d3-array@2 - 3", "d3-array@2.10.0 - 3": version "3.2.4" @@ -5873,7 +5866,7 @@ dayjs@^1.9.3: resolved "https://registry.npmjs.org/dayjs/-/dayjs-1.11.11.tgz#dfe0e9d54c5f8b68ccf8ca5f72ac603e7e5ed59e" integrity sha512-okzr3f11N6WuqYtZSvm+F776mB41wRZMhKP+hc34YdW+KmtYYK9iqvHSwo2k9FEH3fhGXvOPV6yz2IcSrfRUDg== -debug@2.6.9, debug@^2.1.0, debug@^2.1.1, debug@^2.1.3, debug@^2.2.0, debug@^2.3.3, debug@^2.6.8, debug@^2.6.9: +debug@2.6.9, debug@^2.1.0, debug@^2.1.1, debug@^2.1.3, debug@^2.2.0, debug@^2.6.8, debug@^2.6.9: version "2.6.9" resolved "https://registry.npmjs.org/debug/-/debug-2.6.9.tgz#5d128515df134ff327e90a4c93f4e077a536341f" integrity sha512-bC7ElrdJaJnPbAP+1EotYvqZsb3ecl5wi6Bfi6BJTUcNowp6cvspg0jXznRTKDjm/E7AdgFBVeAPVMNcKGsHMA== @@ -5909,6 +5902,14 @@ decode-uri-component@^0.2.0: resolved "https://registry.npmjs.org/decode-uri-component/-/decode-uri-component-0.2.2.tgz#e69dbe25d37941171dd540e024c444cd5188e1e9" integrity 
sha512-FqUYQ+8o158GyGTrMFJms9qh3CqTKvAqgqsTnkLI8sKu0028orqBhxNMFkFen0zGyg6epACD32pjVk58ngIErQ== +decorator-transforms@^1.0.1: + version "1.2.1" + resolved "https://registry.npmjs.org/decorator-transforms/-/decorator-transforms-1.2.1.tgz#d72e39b95c9e3d63465f82b148d021919e9d198f" + integrity sha512-UUtmyfdlHvYoX3VSG1w5rbvBQ2r5TX1JsE4hmKU9snleFymadA3VACjl6SRfi9YgBCSjBbfQvR1bs9PRW9yBKw== + dependencies: + "@babel/plugin-syntax-decorators" "^7.23.3" + babel-import-util "^2.0.1" + decorator-transforms@^2.0.0: version "2.0.0" resolved "https://registry.npmjs.org/decorator-transforms/-/decorator-transforms-2.0.0.tgz#4e9178a8905c81ff79f4078dc6dfb716244ecd37" @@ -5917,6 +5918,14 @@ decorator-transforms@^2.0.0: "@babel/plugin-syntax-decorators" "^7.23.3" babel-import-util "^3.0.0" +decorator-transforms@^2.3.0: + version "2.3.0" + resolved "https://registry.npmjs.org/decorator-transforms/-/decorator-transforms-2.3.0.tgz#521d0617627e289dc47c2186787ac80390ee988a" + integrity sha512-jo8c1ss9yFPudHuYYcrJ9jpkDZIoi+lOGvt+Uyp9B+dz32i50icRMx9Bfa8hEt7TnX1FyKWKkjV+cUdT/ep2kA== + dependencies: + "@babel/plugin-syntax-decorators" "^7.23.3" + babel-import-util "^3.0.0" + dedent@^0.7.0: version "0.7.0" resolved "https://registry.npmjs.org/dedent/-/dedent-0.7.0.tgz#2495ddbaf6eb874abb0e1be9df22d2e5a544326c" @@ -5981,28 +5990,6 @@ define-properties@^1.2.0, define-properties@^1.2.1: has-property-descriptors "^1.0.0" object-keys "^1.1.1" -define-property@^0.2.5: - version "0.2.5" - resolved "https://registry.npmjs.org/define-property/-/define-property-0.2.5.tgz#c35b1ef918ec3c990f9a5bc57be04aacec5c8116" - integrity sha512-Rr7ADjQZenceVOAKop6ALkkRAmH1A4Gx9hV/7ZujPUN2rkATqFO0JZLZInbAjpZYoJ1gUx8MRMQVkYemcbMSTA== - dependencies: - is-descriptor "^0.1.0" - -define-property@^1.0.0: - version "1.0.0" - resolved "https://registry.npmjs.org/define-property/-/define-property-1.0.0.tgz#769ebaaf3f4a63aad3af9e8d304c9bbe79bfb0e6" - integrity sha512-cZTYKFWspt9jZsMscWo8sc/5lbPC9Q0N5nBLgb+Yd915iL3udB1uFgS3B8YCx66UVHq018DAVFoee7x+gxggeA== - dependencies: - is-descriptor "^1.0.0" - -define-property@^2.0.2: - version "2.0.2" - resolved "https://registry.npmjs.org/define-property/-/define-property-2.0.2.tgz#d459689e8d654ba77e02a817f8710d702cb16e9d" - integrity sha512-jwK2UV4cnPpbcG7+VRARKTZPUWowwXA8bzH5NP6ud0oeAxyYPuGZUAC7hMugpCdz4BeSZl2Dl9k66CHJ/46ZYQ== - dependencies: - is-descriptor "^1.0.2" - isobject "^3.0.1" - defined@^1.0.1: version "1.0.1" resolved "https://registry.npmjs.org/defined/-/defined-1.0.1.tgz#c0b9db27bfaffd95d6f61399419b893df0f91ebf" @@ -6033,14 +6020,6 @@ depd@~1.1.2: resolved "https://registry.npmjs.org/depd/-/depd-1.1.2.tgz#9bcd52e14c097763e749b274c4346ed2e560b5a9" integrity sha512-7emPTl6Dpo6JRXOXjLRxck+FlLRX5847cLKEn00PLAgc3g2hTZZgr+e4c2v6QpSmLeFP3n5yUo7ft6avBK/5jQ== -des.js@^1.0.0: - version "1.1.0" - resolved "https://registry.npmjs.org/des.js/-/des.js-1.1.0.tgz#1d37f5766f3bbff4ee9638e871a8768c173b81da" - integrity sha512-r17GxjhUCjSRy8aiJpr8/UadFIzMzJGexI3Nmz4ADi9LYSFx4gTBp80+NaX/YsXWWLhpZ7v/v/ubEc/bCNfKwg== - dependencies: - inherits "^2.0.1" - minimalistic-assert "^1.0.0" - destroy@1.2.0: version "1.2.0" resolved "https://registry.npmjs.org/destroy/-/destroy-1.2.0.tgz#4803735509ad8be552934c67df614f94e66fa015" @@ -6063,6 +6042,11 @@ detect-indent@^6.0.0: resolved "https://registry.npmjs.org/detect-indent/-/detect-indent-6.1.0.tgz#592485ebbbf6b3b1ab2be175c8393d04ca0d57e6" integrity sha512-reYkTUJAZb9gUuZ2RvVCNhVHdg62RHnJ7WJl8ftMi4diZ6NWlciOzQN88pUhSELEwflJht4oQDv0F0BMlwaYtA== +detect-libc@^1.0.3: + version "1.0.3" 
+ resolved "https://registry.npmjs.org/detect-libc/-/detect-libc-1.0.3.tgz#fa137c4bd698edf55cd5cd02ac559f91a4c4ba9b" + integrity sha512-pGjwhsmsp4kL2RTz08wcOlGN83otlqHeD/Z5T8GXZB+/YcpQ/dgo+lbU8ZsGxV0HIvqqxo9l7mqYwyYMD9bKDg== + detect-newline@3.1.0: version "3.1.0" resolved "https://registry.npmjs.org/detect-newline/-/detect-newline-3.1.0.tgz#576f5dfc63ae1a192ff192d8ad3af6308991b651" @@ -6076,16 +6060,6 @@ dezalgo@^1.0.0: asap "^2.0.0" wrappy "1" -dialog-polyfill@^0.5.6: - version "0.5.6" - resolved "https://registry.npmjs.org/dialog-polyfill/-/dialog-polyfill-0.5.6.tgz#7507b4c745a82fcee0fa07ce64d835979719599a" - integrity sha512-ZbVDJI9uvxPAKze6z146rmfUZjBqNEwcnFTVamQzXH+svluiV7swmVIGr7miwADgfgt1G2JQIytypM9fbyhX4w== - -didyoumean@^1.2.2: - version "1.2.2" - resolved "https://registry.npmjs.org/didyoumean/-/didyoumean-1.2.2.tgz#989346ffe9e839b4555ecf5666edea0d3e8ad037" - integrity sha512-gxtyfqMg7GKyhQmb056K7M3xszy/myH8w+B4RT+QXBQsvAOdc3XymqDDPHx1BgPgsdAA5SIifona89YtRATDzw== - diff@^4.0.2: version "4.0.2" resolved "https://registry.npmjs.org/diff/-/diff-4.0.2.tgz#60f3aecb89d5fae520c11aa19efc2bb982aade7d" @@ -6096,15 +6070,6 @@ diff@^5.0.0: resolved "https://registry.npmjs.org/diff/-/diff-5.2.0.tgz#26ded047cd1179b78b9537d5ef725503ce1ae531" integrity sha512-uIFDxqpRZGZ6ThOk84hEfqWoHx2devRFvpTZcTHur85vImfaxUbTW9Ryh4CpCuDnToOP1CEtXKIgytHBPVff5A== -diffie-hellman@^5.0.0: - version "5.0.3" - resolved "https://registry.npmjs.org/diffie-hellman/-/diffie-hellman-5.0.3.tgz#40e8ee98f55a2149607146921c63e1ae5f3d2875" - integrity sha512-kqag/Nl+f3GwyK25fhUMYj81BUOrZ9IuJsjIcDE5icNM9FJHAVm3VcUDxdLPoQtTuUylWm6ZIknYJwwaPxsUzg== - dependencies: - bn.js "^4.1.0" - miller-rabin "^4.0.0" - randombytes "^2.0.0" - dir-glob@^3.0.1: version "3.0.1" resolved "https://registry.npmjs.org/dir-glob/-/dir-glob-3.0.1.tgz#56dbf73d992a4a93ba1584f4534063fd2e41717f" @@ -6112,11 +6077,6 @@ dir-glob@^3.0.1: dependencies: path-type "^4.0.0" -dlv@^1.1.3: - version "1.1.3" - resolved "https://registry.npmjs.org/dlv/-/dlv-1.1.3.tgz#5c198a8a11453596e751494d49874bc7732f2e79" - integrity sha512-+HlytyjlPKnIG8XuRG8WvmBP8xs8P71y+SKKS6ZXWoEgLuePxtDoUEiH7WkdePWrQ5JBpE6aoVqfZfJUQkjXwA== - doctoc@^2.0.0: version "2.2.1" resolved "https://registry.npmjs.org/doctoc/-/doctoc-2.2.1.tgz#83f6a6bf4df97defbe027c9a82d13091a138ffe2" @@ -6145,11 +6105,6 @@ dom-serializer@^1.0.1: domhandler "^4.2.0" entities "^2.0.0" -domain-browser@^1.1.1: - version "1.2.0" - resolved "https://registry.npmjs.org/domain-browser/-/domain-browser-1.2.0.tgz#3d31f50191a6749dd1375a7f522e823d42e54eda" - integrity sha512-jnjyiM6eRyZl2H+W8Q/zLMA481hzi0eszAaBUzIVnmYVDBbnLxVNnfu1HgEBvCbL+71FrxMl3E6lpKH7Ge3OXA== - domelementtype@^2.0.1, domelementtype@^2.2.0: version "2.3.0" resolved "https://registry.npmjs.org/domelementtype/-/domelementtype-2.3.0.tgz#5c45e8e869952626331d7aab326d01daf65d589d" @@ -6200,15 +6155,14 @@ dotignore@^0.1.2: dependencies: minimatch "^3.0.4" -duplexify@^3.4.2, duplexify@^3.6.0: - version "3.7.1" - resolved "https://registry.npmjs.org/duplexify/-/duplexify-3.7.1.tgz#2a4df5317f6ccfd91f86d6fd25d8d8a103b88309" - integrity sha512-07z8uv2wMyS51kKhD1KsdXJg5WQ6t93RneqRxUHnskXVtlYYkLqM0gqStQZ3pj073g687jPCHrqNfCzawLYh5g== +dunder-proto@^1.0.1: + version "1.0.1" + resolved "https://registry.npmjs.org/dunder-proto/-/dunder-proto-1.0.1.tgz#d7ae667e1dc83482f8b70fd0f6eefc50da30f58a" + integrity sha512-KIN/nDJBQRcXw0MLVhZE9iQHmG68qAVIBg9CqmUYjmQIhgij9U5MFvrqkUL5FbtyyzZuOeOt0zdeRe4UY7ct+A== dependencies: - end-of-stream "^1.0.0" - inherits "^2.0.1" - readable-stream 
"^2.0.0" - stream-shift "^1.0.0" + call-bind-apply-helpers "^1.0.1" + es-errors "^1.3.0" + gopd "^1.2.0" editions@^1.1.1: version "1.3.4" @@ -6233,10 +6187,10 @@ electron-to-chromium@^1.3.47, electron-to-chromium@^1.4.796: resolved "https://registry.npmjs.org/electron-to-chromium/-/electron-to-chromium-1.4.818.tgz#7762c8bfd15a07c3833b7f5deed990e9e5a4c24f" integrity sha512-eGvIk2V0dGImV9gWLq8fDfTTsCAeMDwZqEPMr+jMInxZdnp9Us8UpovYpRCf9NQ7VOFgrN2doNSgvISbsbNpxA== -elliptic@^6.5.3, elliptic@^6.5.5: - version "6.5.5" - resolved "https://registry.npmjs.org/elliptic/-/elliptic-6.5.5.tgz#c715e09f78b6923977610d4c2346d6ce22e6dded" - integrity sha512-7EjbcmUm17NQFu4Pmgmq2olYMj8nwMnpcddByChSUjArp8F5DQWcIcpriwO4ZToLNAJig0yiyjswfyGNje/ixw== +elliptic@6.6.1: + version "6.6.1" + resolved "https://registry.npmjs.org/elliptic/-/elliptic-6.6.1.tgz#3b8ffb02670bf69e382c7f65bf524c97c5405c06" + integrity sha512-RaddvvMatK2LJHqFJ+YA4WysVN5Ita9E35botqIYspQ4TkRAlCicdzKOjlyv/1Za5RyTNn7di//eEV0uTAfe3g== dependencies: bn.js "^4.11.9" brorand "^1.1.0" @@ -6246,10 +6200,10 @@ elliptic@^6.5.3, elliptic@^6.5.5: minimalistic-assert "^1.0.1" minimalistic-crypto-utils "^1.0.1" -ember-a11y-refocus@^3.0.2: - version "3.0.2" - resolved "https://registry.npmjs.org/ember-a11y-refocus/-/ember-a11y-refocus-3.0.2.tgz#e648c491d3a8d84cb594679bafc8430cd22b2ed4" - integrity sha512-5T9kAvl0RUBF6SSeaaWpVS2WC8MTktgqiGdLAbxVjT2f2NGrDDPmv7riDVNMsuL5sHRwSKm0EHCIzZ4M3aFMow== +ember-a11y-refocus@^4.1.4: + version "4.1.4" + resolved "https://registry.npmjs.org/ember-a11y-refocus/-/ember-a11y-refocus-4.1.4.tgz#ffcabbc91503379cd2c0124cb5f0bc93178098b5" + integrity sha512-51tGk30bskObL1LsGZRxzqIxgZhIE8ZvvDYcT1OWphxZlq00+Arz57aMLS4Vz4qhSE40BfeN2qFYP/gXtp9qDA== dependencies: ember-cli-babel "^7.26.11" ember-cli-htmlbars "^6.0.1" @@ -6269,6 +6223,21 @@ ember-assign-helper@^0.3.0: ember-cli-babel "^7.19.0" ember-cli-htmlbars "^4.3.1" +ember-assign-helper@^0.5.0: + version "0.5.1" + resolved "https://registry.npmjs.org/ember-assign-helper/-/ember-assign-helper-0.5.1.tgz#5c0dbffe30090df23ad7d6a7595b015beff439a4" + integrity sha512-dXHbwlBTJWVjG7k4dhVrT3Gh4nQt6rC2LjyltuPztIhQ+YcPYHMqAPJRJYLGZu16aPSJbaGF8K+u51i7CLzqlQ== + dependencies: + "@embroider/addon-shim" "^1.8.7" + +ember-async-data@^1.0.1: + version "1.0.3" + resolved "https://registry.npmjs.org/ember-async-data/-/ember-async-data-1.0.3.tgz#4e5afe4c0e05071e02d05724a64af9b909c27c5b" + integrity sha512-54OtoQwNi+/ZvPOVuT4t8fcHR9xL8N7kBydzcZSo6BIEsLYeXPi3+jUR8niWjfjXXhKlJ8EWXR0lTeHleTrxbw== + dependencies: + "@ember/test-waiters" "^3.0.0" + "@embroider/addon-shim" "^1.8.6" + ember-auto-import@^1.10.1, ember-auto-import@^1.11.3, ember-auto-import@^1.5.3: version "1.12.2" resolved "https://registry.npmjs.org/ember-auto-import/-/ember-auto-import-1.12.2.tgz#cc7298ee5c0654b0249267de68fb27a2861c3579" @@ -6304,7 +6273,7 @@ ember-auto-import@^1.10.1, ember-auto-import@^1.11.3, ember-auto-import@^1.5.3: walk-sync "^0.3.3" webpack "^4.43.0" -ember-auto-import@^2.2.3, ember-auto-import@^2.4.2, ember-auto-import@^2.5.0, ember-auto-import@^2.6.3: +ember-auto-import@^2.2.3, ember-auto-import@^2.4.2, ember-auto-import@^2.5.0: version "2.7.4" resolved "https://registry.npmjs.org/ember-auto-import/-/ember-auto-import-2.7.4.tgz#ca99570eb3d6165968df797a4750aa58073852b5" integrity sha512-6CdXSegJJc8nwwK7+1lIcBUnMVrJRNd4ZdMgcKbCAwPvcGxMgRVBddSzrX/+q/UuflvTEO26Dk1g7Z6KHMXUhw== @@ -6344,6 +6313,48 @@ ember-auto-import@^2.2.3, ember-auto-import@^2.4.2, ember-auto-import@^2.5.0, em typescript-memoize "^1.0.0-alpha.3" 
walk-sync "^3.0.0" +ember-auto-import@^2.6.3: + version "2.10.0" + resolved "https://registry.npmjs.org/ember-auto-import/-/ember-auto-import-2.10.0.tgz#2a29b82335eba4375d115570cbe836666ed2e7cc" + integrity sha512-bcBFDYVTFHyqyq8BNvsj6UO3pE6Uqou/cNmee0WaqBgZ+1nQqFz0UE26usrtnFAT+YaFZSkqF2H36QW84k0/cg== + dependencies: + "@babel/core" "^7.16.7" + "@babel/plugin-proposal-class-properties" "^7.16.7" + "@babel/plugin-proposal-decorators" "^7.16.7" + "@babel/plugin-proposal-private-methods" "^7.16.7" + "@babel/plugin-transform-class-static-block" "^7.16.7" + "@babel/preset-env" "^7.16.7" + "@embroider/macros" "^1.0.0" + "@embroider/shared-internals" "^2.0.0" + babel-loader "^8.0.6" + babel-plugin-ember-modules-api-polyfill "^3.5.0" + babel-plugin-ember-template-compilation "^2.0.1" + babel-plugin-htmlbars-inline-precompile "^5.2.1" + babel-plugin-syntax-dynamic-import "^6.18.0" + broccoli-debug "^0.6.4" + broccoli-funnel "^3.0.8" + broccoli-merge-trees "^4.2.0" + broccoli-plugin "^4.0.0" + broccoli-source "^3.0.0" + css-loader "^5.2.0" + debug "^4.3.1" + fs-extra "^10.0.0" + fs-tree-diff "^2.0.0" + handlebars "^4.3.1" + is-subdir "^1.2.0" + js-string-escape "^1.0.1" + lodash "^4.17.19" + mini-css-extract-plugin "^2.5.2" + minimatch "^3.0.0" + parse5 "^6.0.1" + pkg-entry-points "^1.1.0" + resolve "^1.20.0" + resolve-package-path "^4.0.3" + semver "^7.3.4" + style-loader "^2.0.0" + typescript-memoize "^1.0.0-alpha.3" + walk-sync "^3.0.0" + ember-basic-dropdown@3.0.21, ember-basic-dropdown@^3.0.21: version "3.0.21" resolved "https://registry.npmjs.org/ember-basic-dropdown/-/ember-basic-dropdown-3.0.21.tgz#5711d071966919c9578d2d5ac2c6dcadbb5ea0e0" @@ -6427,7 +6438,7 @@ ember-cli-babel-plugin-helpers@^1.0.0, ember-cli-babel-plugin-helpers@^1.1.0, em resolved "https://registry.npmjs.org/ember-cli-babel-plugin-helpers/-/ember-cli-babel-plugin-helpers-1.1.1.tgz#5016b80cdef37036c4282eef2d863e1d73576879" integrity sha512-sKvOiPNHr5F/60NLd7SFzMpYPte/nnGkq/tMIfXejfKHIhaiIkYFqX8Z9UFTKWLLn+V7NOaby6niNPZUdvKCRw== -ember-cli-babel@^6.0.0, ember-cli-babel@^6.0.0-beta.4, ember-cli-babel@^6.6.0, ember-cli-babel@^6.8.1, ember-cli-babel@^6.8.2: +ember-cli-babel@^6.0.0-beta.4, ember-cli-babel@^6.6.0, ember-cli-babel@^6.8.1, ember-cli-babel@^6.8.2: version "6.18.0" resolved "https://registry.npmjs.org/ember-cli-babel/-/ember-cli-babel-6.18.0.tgz#3f6435fd275172edeff2b634ee7b29ce74318957" integrity sha512-7ceC8joNYxY2wES16iIBlbPSxwKDBhYwC8drU3ZEvuPDMwVv1KzxCNu1fvxyFEBWhwaRNTUxSCsEVoTd9nosGA== @@ -6482,39 +6493,6 @@ ember-cli-babel@^7.0.0, ember-cli-babel@^7.1.3, ember-cli-babel@^7.10.0, ember-c rimraf "^3.0.1" semver "^5.5.0" -ember-cli-babel@^8.2.0: - version "8.2.0" - resolved "https://registry.npmjs.org/ember-cli-babel/-/ember-cli-babel-8.2.0.tgz#91e14c22ac22956177002385947724174553d41c" - integrity sha512-8H4+jQElCDo6tA7CamksE66NqBXWs7VNpS3a738L9pZCjg2kXIX4zoyHzkORUqCtr0Au7YsCnrlAMi1v2ALo7A== - dependencies: - "@babel/helper-compilation-targets" "^7.20.7" - "@babel/plugin-proposal-class-properties" "^7.16.5" - "@babel/plugin-proposal-decorators" "^7.20.13" - "@babel/plugin-proposal-private-methods" "^7.16.5" - "@babel/plugin-proposal-private-property-in-object" "^7.20.5" - "@babel/plugin-transform-class-static-block" "^7.22.11" - "@babel/plugin-transform-modules-amd" "^7.20.11" - "@babel/plugin-transform-runtime" "^7.13.9" - "@babel/plugin-transform-typescript" "^7.20.13" - "@babel/preset-env" "^7.20.2" - "@babel/runtime" "7.12.18" - amd-name-resolver "^1.3.1" - babel-plugin-debug-macros "^0.3.4" - 
babel-plugin-ember-data-packages-polyfill "^0.1.2" - babel-plugin-ember-modules-api-polyfill "^3.5.0" - babel-plugin-module-resolver "^5.0.0" - broccoli-babel-transpiler "^8.0.0" - broccoli-debug "^0.6.4" - broccoli-funnel "^3.0.8" - broccoli-source "^3.0.1" - calculate-cache-key-for-tree "^2.0.0" - clone "^2.1.2" - ember-cli-babel-plugin-helpers "^1.1.1" - ember-cli-version-checker "^5.1.2" - ensure-posix-path "^1.0.2" - resolve-package-path "^4.0.3" - semver "^7.3.8" - ember-cli-code-coverage@^1.0.0-beta.4: version "1.0.3" resolved "https://registry.npmjs.org/ember-cli-code-coverage/-/ember-cli-code-coverage-1.0.3.tgz#9a6e5e6350d70761eba749d68ebe2e0d9aa3492f" @@ -6617,7 +6595,7 @@ ember-cli-htmlbars@^5.0.0, ember-cli-htmlbars@^5.1.0, ember-cli-htmlbars@^5.1.2, strip-bom "^4.0.0" walk-sync "^2.2.0" -ember-cli-htmlbars@^6.0.0, ember-cli-htmlbars@^6.0.1, ember-cli-htmlbars@^6.1.1, ember-cli-htmlbars@^6.3.0: +ember-cli-htmlbars@^6.0.0, ember-cli-htmlbars@^6.0.1, ember-cli-htmlbars@^6.1.1, ember-cli-htmlbars@^6.2.0: version "6.3.0" resolved "https://registry.npmjs.org/ember-cli-htmlbars/-/ember-cli-htmlbars-6.3.0.tgz#ac85f2bbd09788992ab7f9ca832cd044fb8e5798" integrity sha512-N9Y80oZfcfWLsqickMfRd9YByVcTGyhYRnYQ2XVPVrp6jyUyOeRWmEAPh7ERSXpp8Ws4hr/JB9QVQrn/yZa+Ag== @@ -6693,17 +6671,6 @@ ember-cli-path-utils@^1.0.0: resolved "https://registry.npmjs.org/ember-cli-path-utils/-/ember-cli-path-utils-1.0.0.tgz#4e39af8b55301cddc5017739b77a804fba2071ed" integrity sha512-Qq0vvquzf4cFHoDZavzkOy3Izc893r/5spspWgyzLCPTaG78fM3HsrjZm7UWEltbXUqwHHYrqZd/R0jS08NqSA== -ember-cli-postcss@^8.1.0: - version "8.2.0" - resolved "https://registry.npmjs.org/ember-cli-postcss/-/ember-cli-postcss-8.2.0.tgz#9cc1fee624d2d13c41633cf32d4e8cb8d5f88eff" - integrity sha512-S2HQqmNtcezmLSt/OPZKCXg+aRV7yFoZp+tn1HCLSbR/eU95xl7MWxTjbj/wOIGMfhggy/hBT2+STDh8mGuVpw== - dependencies: - broccoli-merge-trees "^4.2.0" - broccoli-postcss "^6.0.1" - broccoli-postcss-single "^5.0.1" - ember-cli-babel "^7.26.11" - merge "^2.1.1" - ember-cli-preprocess-registry@^3.3.0: version "3.3.0" resolved "https://registry.npmjs.org/ember-cli-preprocess-registry/-/ember-cli-preprocess-registry-3.3.0.tgz#685837a314fbe57224bd54b189f4b9c23907a2de" @@ -7069,6 +7036,17 @@ ember-concurrency-decorators@^2.0.0: ember-compatibility-helpers "^1.2.0" ember-destroyable-polyfill "^2.0.2" +ember-concurrency@^4.0.4: + version "4.0.4" + resolved "https://registry.npmjs.org/ember-concurrency/-/ember-concurrency-4.0.4.tgz#1d021e652d159e9bbdc97e9071c7142559531b59" + integrity sha512-Y+PwbFE2r3+ANlT0lTBNokLXTRFLV6lnGkZ8u5tDhND5o2wD1wkh9JdP8KZ8aJ+J0dmhncVGQNi+Dbbtc6xTfg== + dependencies: + "@babel/helper-module-imports" "^7.22.15" + "@babel/helper-plugin-utils" "^7.12.13" + "@babel/types" "^7.12.13" + "@embroider/addon-shim" "^1.8.7" + decorator-transforms "^1.0.1" + ember-copy@2.0.1: version "2.0.1" resolved "https://registry.npmjs.org/ember-copy/-/ember-copy-2.0.1.tgz#13192b12a250324bb4a8b4547a680b113f4e3041" @@ -7146,13 +7124,12 @@ ember-element-helper@^0.5.5: ember-cli-babel "^7.17.2" ember-cli-htmlbars "^5.1.0" -ember-element-helper@^0.8.5: - version "0.8.6" - resolved "https://registry.npmjs.org/ember-element-helper/-/ember-element-helper-0.8.6.tgz#564d63dbbb6130e4c69ff06b3bd8fbfb9cb4787a" - integrity sha512-WcbkJKgBZypRGwujeiPrQfZRhETVFLR0wvH2UxDaNBhLWncapt6KK+M/2i/eODoAQwgGxziejhXC6Cbqa9zA8g== +ember-element-helper@^0.8.6: + version "0.8.8" + resolved 
"https://registry.npmjs.org/ember-element-helper/-/ember-element-helper-0.8.8.tgz#b7cb5a6450ec00ae6bc4974a5f6d7224aced8f35" + integrity sha512-3slTltQV5ke53t3YVP2GYoswsQ6y+lhuVzKmt09tbEx91DapG8I/xa8W5OA0StvcQlavL3/vHrz/vCQEFs8bBA== dependencies: "@embroider/addon-shim" "^1.8.3" - "@embroider/util" "^1.0.0" ember-exam@^6.1.0: version "6.1.0" @@ -7186,14 +7163,23 @@ ember-factory-for-polyfill@^1.3.1: dependencies: ember-cli-version-checker "^2.1.0" -ember-focus-trap@^1.1.0: - version "1.1.0" - resolved "https://registry.npmjs.org/ember-focus-trap/-/ember-focus-trap-1.1.0.tgz#e3c47c6e916e838af3884b43e2794e87088d2bac" - integrity sha512-KxbCKpAJaBVZm+bW4tHPoBJAZThmxa6pI+WQusL+bj0RtAnGUNkWsVy6UBMZ5QqTQzf4EvGHkCVACVp5lbAWMQ== +ember-focus-trap@^1.1.1: + version "1.1.1" + resolved "https://registry.npmjs.org/ember-focus-trap/-/ember-focus-trap-1.1.1.tgz#257512ba847a40ea4cf86d83495b4e47c0f9efed" + integrity sha512-5tOWu6eV1UoNZE+P9Gl9lJXNrENZVCoOXi52ePb7JOrOZ3ckOk1OkPsFwR4Jym9VJ7vZ6S3Z3D8BrkFa2aCpYw== dependencies: "@embroider/addon-shim" "^1.0.0" focus-trap "^6.7.1" +ember-functions-as-helper-polyfill@^2.1.2: + version "2.1.3" + resolved "https://registry.npmjs.org/ember-functions-as-helper-polyfill/-/ember-functions-as-helper-polyfill-2.1.3.tgz#5fc78d222f326ebd2b796241dbd8f70741f53952" + integrity sha512-Hte8jfOmSNzrz/vOchf68CGaBWXN2/5qKgFaylqr9omW2i4Wt9JmaBWRkeR0AJ53N57q3DX2TOb166Taq6QjiA== + dependencies: + ember-cli-babel "^7.26.11" + ember-cli-typescript "^5.0.0" + ember-cli-version-checker "^5.1.2" + ember-get-config@^0.3.0: version "0.3.0" resolved "https://registry.npmjs.org/ember-get-config/-/ember-get-config-0.3.0.tgz#a73a1a87b48d9dde4c66a0e52ed5260b8a48cfbd" @@ -7265,15 +7251,12 @@ ember-intl@^5.7.0: mkdirp "^1.0.4" silent-error "^1.1.1" -ember-keyboard@^8.2.1: - version "8.2.1" - resolved "https://registry.npmjs.org/ember-keyboard/-/ember-keyboard-8.2.1.tgz#945a8a71068d81c06ad26851008ef81061db2a59" - integrity sha512-wT9xpt3GKsiodGZoifKU4OyeRjXWlmKV9ZHHsp6wJBwMFpl4wWPjTNdINxivk2qg/WFNIh8nUiwuG4+soWXPdw== +ember-lifeline@^7.0.0: + version "7.0.0" + resolved "https://registry.npmjs.org/ember-lifeline/-/ember-lifeline-7.0.0.tgz#46780c8f832b6c784ee4681b938a1e1437bfa676" + integrity sha512-2l51NzgH5vjN972zgbs+32rnXnnEFKB7qsSpJF+lBI4V5TG6DMy4SfowC72ZEuAtS58OVfwITbOO+RnM21EdpA== dependencies: - "@embroider/addon-shim" "^1.8.4" - ember-destroyable-polyfill "^2.0.3" - ember-modifier "^2.1.2 || ^3.1.0 || ^4.0.0" - ember-modifier-manager-polyfill "^1.2.0" + "@embroider/addon-shim" "^1.6.0" ember-load-initializers@^2.1.2: version "2.1.2" @@ -7333,7 +7316,18 @@ ember-modifier@^2.1.0: ember-destroyable-polyfill "^2.0.2" ember-modifier-manager-polyfill "^1.2.0" -"ember-modifier@^2.1.2 || ^3.1.0 || ^4.0.0", "ember-modifier@^3.2.7 || ^4.0.0", ember-modifier@^4.1.0: +ember-modifier@^3.2.7: + version "3.2.7" + resolved "https://registry.npmjs.org/ember-modifier/-/ember-modifier-3.2.7.tgz#f2d35b7c867cbfc549e1acd8d8903c5ecd02ea4b" + integrity sha512-ezcPQhH8jUfcJQbbHji4/ZG/h0yyj1jRDknfYue/ypQS8fM8LrGcCMo0rjDZLzL1Vd11InjNs3BD7BdxFlzGoA== + dependencies: + ember-cli-babel "^7.26.6" + ember-cli-normalize-entity-name "^1.0.0" + ember-cli-string-utils "^1.1.0" + ember-cli-typescript "^5.0.0" + ember-compatibility-helpers "^1.2.5" + +"ember-modifier@^3.2.7 || ^4.0.0", ember-modifier@^4.1.0: version "4.2.0" resolved "https://registry.npmjs.org/ember-modifier/-/ember-modifier-4.2.0.tgz#f99cb817b9b85c5188c63f853cd06aa62e8dde57" integrity 
sha512-BJ48eTEGxD8J7+lofwVmee7xDgNDgpr5dd6+MSu4gk+I6xb35099RMNorXY5hjjwMJEyi/IRR6Yn3M7iJMz8Zw== @@ -7343,16 +7337,15 @@ ember-modifier@^2.1.0: ember-cli-normalize-entity-name "^1.0.0" ember-cli-string-utils "^1.1.0" -ember-modifier@^3.2.7: - version "3.2.7" - resolved "https://registry.npmjs.org/ember-modifier/-/ember-modifier-3.2.7.tgz#f2d35b7c867cbfc549e1acd8d8903c5ecd02ea4b" - integrity sha512-ezcPQhH8jUfcJQbbHji4/ZG/h0yyj1jRDknfYue/ypQS8fM8LrGcCMo0rjDZLzL1Vd11InjNs3BD7BdxFlzGoA== +ember-modifier@^4.2.0, ember-modifier@^4.2.2: + version "4.2.2" + resolved "https://registry.npmjs.org/ember-modifier/-/ember-modifier-4.2.2.tgz#ad6a638dc6f82c7086031c97c2de9b094331c756" + integrity sha512-pPYBAGyczX0hedGWQFQOEiL9s45KS9efKxJxUQkMLjQyh+1Uef1mcmAGsdw2KmvNupITkE/nXxmVO1kZ9tt3ag== dependencies: - ember-cli-babel "^7.26.6" + "@embroider/addon-shim" "^1.8.7" + decorator-transforms "^2.0.0" ember-cli-normalize-entity-name "^1.0.0" ember-cli-string-utils "^1.1.0" - ember-cli-typescript "^5.0.0" - ember-compatibility-helpers "^1.2.5" ember-named-blocks-polyfill@^0.2.5: version "0.2.5" @@ -7421,6 +7414,19 @@ ember-power-select@^4.0.0, ember-power-select@^4.0.5: ember-text-measurer "^0.6.0" ember-truth-helpers "^2.1.0 || ^3.0.0" +ember-power-select@^8.7.1: + version "8.7.3" + resolved "https://registry.npmjs.org/ember-power-select/-/ember-power-select-8.7.3.tgz#01d2977b9e6d65dae2835bfd07d204050147b4b7" + integrity sha512-jDUmW2Wy+xtn/BkTGIq1d3NVGanZRbP5bSonIJysZoF9GfcD8W0iVs4Wj7q6CnzPZ/fMH8ZD2/ZQ+gOQBj7ggg== + dependencies: + "@embroider/addon-shim" "^1.10.0" + "@embroider/util" "^1.13.2" + decorator-transforms "^2.3.0" + ember-assign-helper "^0.5.0" + ember-lifeline "^7.0.0" + ember-modifier "^4.2.0" + ember-truth-helpers "^4.0.3" + ember-qunit@^5.1.5: version "5.1.5" resolved "https://registry.npmjs.org/ember-qunit/-/ember-qunit-5.1.5.tgz#24a7850f052be24189ff597dfc31b923e684c444" @@ -7480,14 +7486,15 @@ ember-resolver@^8.0.3: ember-cli-version-checker "^5.1.2" resolve "^1.20.0" -ember-resources@^5.0.1: - version "5.6.4" - resolved "https://registry.npmjs.org/ember-resources/-/ember-resources-5.6.4.tgz#1ae05bb5398ab0d8fab8c0925c5bf679ee86e327" - integrity sha512-ShdosnruPm37jPpzPOgPVelymEDJT/27Jz/j5AGPVAfCaUhRIocTxNMtPx13ox890A2babuPF5M3Ur8UFidqtw== +ember-resources@^6.0.0: + version "6.5.2" + resolved "https://registry.npmjs.org/ember-resources/-/ember-resources-6.5.2.tgz#59ebf5bde2dcc6809e4d9820c7cfc0ce0d3487ed" + integrity sha512-8JQ9ebTcKjsmhR5AJ7JNiXziuOiILjrEbGRqcFKkTvodK4QdvvOspDz8yejsf/J/1YUMFe4fjJnjqc2wpORX2Q== dependencies: "@babel/runtime" "^7.17.8" "@embroider/addon-shim" "^1.2.0" - "@embroider/macros" "^1.2.0" + "@embroider/macros" "^1.12.3" + ember-async-data "^1.0.1" ember-rfc176-data@^0.3.13, ember-rfc176-data@^0.3.15, ember-rfc176-data@^0.3.17: version "0.3.18" @@ -7595,15 +7602,15 @@ ember-stargate@^0.2.0: ember-in-element-polyfill "^1.0.0" tracked-maps-and-sets "^2.1.0" -ember-stargate@^0.4.3: - version "0.4.3" - resolved "https://registry.npmjs.org/ember-stargate/-/ember-stargate-0.4.3.tgz#93e92e4928d489557401d70e52b242b38f36f9ab" - integrity sha512-GeT5n+TT3Lfl335f16fx9ms0Jap+v5LTs8otIaQEGtFbSP5Jj/hlT3JPB9Uo8IDLXdjejxJsKRpCEzRD43g5dg== +ember-stargate@^0.5.0: + version "0.5.0" + resolved "https://registry.npmjs.org/ember-stargate/-/ember-stargate-0.5.0.tgz#b50c3831ee11c91518b266c386ff01ecd02967f1" + integrity sha512-HYUww+s1M5X4nmErc3VxsCmGAelBrp8AecObadEvO3u6c9cF8RpsMciWpjfvcD94gy0sneIg61S91S4XJaormQ== dependencies: "@ember/render-modifiers" "^2.0.0" "@embroider/addon-shim" 
"^1.0.0" "@glimmer/component" "^1.1.2" - ember-resources "^5.0.1" + ember-resources "^6.0.0" tracked-maps-and-sets "^3.0.1" ember-string-fns@^1.4.0: @@ -7621,13 +7628,14 @@ ember-style-modifier@^0.6.0: ember-cli-babel "^7.21.0" ember-modifier "^2.1.0" -ember-style-modifier@^3.0.1: - version "3.1.1" - resolved "https://registry.npmjs.org/ember-style-modifier/-/ember-style-modifier-3.1.1.tgz#313269708552c42255806586160411840adc98c5" - integrity sha512-J91YLKVp3/m7LrcLEWNSG2sJlSFhE5Ny75empU048qYJtdJMe788Ks/EpKEi953o1mJujVRg792YGrwbrpTzNA== +ember-style-modifier@^4.4.0: + version "4.4.0" + resolved "https://registry.npmjs.org/ember-style-modifier/-/ember-style-modifier-4.4.0.tgz#2d1fa6a35d41d88612277d7d149f1e569acaf8d3" + integrity sha512-gT1ckbhl1KSj5sWTo/8UChj98eZeE+mUmYoXw8VjwJgWP0wiTCibGZjVbC0WlIUd7umxuG61OQ/ivfF+sAiOEQ== dependencies: - ember-auto-import "^2.5.0" - ember-cli-babel "^7.26.11" + "@embroider/addon-shim" "^1.8.7" + csstype "^3.1.3" + decorator-transforms "^2.0.0" ember-modifier "^3.2.7 || ^4.0.0" ember-template-lint@^2.0.1: @@ -7689,13 +7697,21 @@ ember-tracked-storage-polyfill@1.0.0, ember-tracked-storage-polyfill@^1.0.0: ember-cli-babel "^7.26.3" ember-cli-htmlbars "^5.7.1" -"ember-truth-helpers@^2.1.0 || ^3.0.0", ember-truth-helpers@^3.0.0, ember-truth-helpers@^3.1.1: +"ember-truth-helpers@^2.1.0 || ^3.0.0", ember-truth-helpers@^3.0.0: version "3.1.1" resolved "https://registry.npmjs.org/ember-truth-helpers/-/ember-truth-helpers-3.1.1.tgz#434715926d72bcc63b8a115dec09745fda4474dc" integrity sha512-FHwJAx77aA5q27EhdaaiBFuy9No+8yaWNT5A7zs0sIFCmf14GbcLn69vJEp6mW7vkITezizGAWhw7gL0Wbk7DA== dependencies: ember-cli-babel "^7.22.1" +ember-truth-helpers@^4.0.3: + version "4.0.3" + resolved "https://registry.npmjs.org/ember-truth-helpers/-/ember-truth-helpers-4.0.3.tgz#02705dc36f2d68f1d4cff0d8226396c8ae5dee2e" + integrity sha512-T6Ogd3pk9FxYiZfSxdjgn3Hb3Ksqgw7CD23V9qfig9jktNdkNEHo4+3PA3cSD/+3a2kdH3KmNvKyarVuzdtEkA== + dependencies: + "@embroider/addon-shim" "^1.8.6" + ember-functions-as-helper-polyfill "^2.1.2" + ember-validators@~4.0.0: version "4.0.1" resolved "https://registry.npmjs.org/ember-validators/-/ember-validators-4.0.1.tgz#13beefdf185b00efd1b60e51b21380686d8994ba" @@ -7725,7 +7741,7 @@ encodeurl@~1.0.2: resolved "https://registry.npmjs.org/encodeurl/-/encodeurl-1.0.2.tgz#ad3ff4c86ec2d029322f5a02c3a9a606c95b3f59" integrity sha512-TPJXq8JqFaVYm2CWmPvnP2Iyo4ZSM7/QKcSmuMLDObfpH5fi7RUGmd/rTDf+rut/saiDiQEeVTNgAmJEdAOx0w== -end-of-stream@^1.0.0, end-of-stream@^1.1.0: +end-of-stream@^1.1.0: version "1.4.4" resolved "https://registry.npmjs.org/end-of-stream/-/end-of-stream-1.4.4.tgz#5ae64a5f45057baf3626ec14da0ca5e4b2431eb0" integrity sha512-+uw1inIHVPQoaVuHzRyXd21icM+cnt4CzD5rW+NC1wjOUSTOs+Te7FOv7AhN7vS9x/oIyhLP5PR1H+phQAHu5Q== @@ -7753,7 +7769,7 @@ engine.io@~6.5.2: engine.io-parser "~5.2.1" ws "~8.17.1" -enhanced-resolve@^4.0.0, enhanced-resolve@^4.5.0: +enhanced-resolve@^4.0.0: version "4.5.0" resolved "https://registry.npmjs.org/enhanced-resolve/-/enhanced-resolve-4.5.0.tgz#2f3cfd84dbe3b487f18f2db2ef1e064a571ca5ec" integrity sha512-Nv9m36S/vxpsI+Hc4/ZGRs0n9mXqSWGGq49zxb/cJfPAQMbUtttJAlNPS4AQzaBdw/pKskw5bMbekT/Y7W/Wlg== @@ -7762,10 +7778,10 @@ enhanced-resolve@^4.0.0, enhanced-resolve@^4.5.0: memory-fs "^0.5.0" tapable "^1.0.0" -enhanced-resolve@^5.17.0: - version "5.17.0" - resolved "https://registry.npmjs.org/enhanced-resolve/-/enhanced-resolve-5.17.0.tgz#d037603789dd9555b89aaec7eb78845c49089bc5" - integrity 
sha512-dwDPwZL0dmye8Txp2gzFmA6sxALaSvdRDjPH0viLcKrtlOL3tw62nWWweVD1SdILDTJrbrL6tdWVN58Wo6U3eA== +enhanced-resolve@^5.17.1: + version "5.18.1" + resolved "https://registry.npmjs.org/enhanced-resolve/-/enhanced-resolve-5.18.1.tgz#728ab082f8b7b6836de51f1637aab5d3b9568faf" + integrity sha512-ZSW3ma5GkcQBIpwZTSRAI8N71Uuwgs93IezB7mf7R60tC8ZbJideoDNKjHn2O9KIlx6rkGTTEk1xUCK2E1Y2Yg== dependencies: graceful-fs "^4.2.4" tapable "^2.2.0" @@ -7803,7 +7819,7 @@ errlop@^2.0.0: resolved "https://registry.npmjs.org/errlop/-/errlop-2.2.0.tgz#1ff383f8f917ae328bebb802d6ca69666a42d21b" integrity sha512-e64Qj9+4aZzjzzFpZC7p5kmm/ccCrbLhAJplhsDXQFs87XTsXwOpH4s1Io2s90Tau/8r2j9f4l/thhDevRjzxw== -errno@^0.1.3, errno@~0.1.7: +errno@^0.1.3: version "0.1.8" resolved "https://registry.npmjs.org/errno/-/errno-0.1.8.tgz#8bb3e9c7d463be4976ff888f76b4809ebc2e811f" integrity sha512-dJ6oBr5SQ1VSd9qkk7ByRgb/1SH4JZjCHSW/mr63/QcXO9zLVxvJ6Oy13nio03rxpSnVDDjFor75SjVeZWPW/A== @@ -7883,6 +7899,11 @@ es-define-property@^1.0.0: dependencies: get-intrinsic "^1.2.4" +es-define-property@^1.0.1: + version "1.0.1" + resolved "https://registry.npmjs.org/es-define-property/-/es-define-property-1.0.1.tgz#983eb2f9a6724e9303f61addf011c72e09e0b0fa" + integrity sha512-e3nRfgfUZ4rNGL232gUgX06QNyyez04KdjFrF+LTRoOXmrOgFKDg4BCdsjW8EnT69eqdYGmRpJwiPVYNrCaW3g== + es-errors@^1.2.1, es-errors@^1.3.0: version "1.3.0" resolved "https://registry.npmjs.org/es-errors/-/es-errors-1.3.0.tgz#05f75a25dab98e4fb1dcd5e1472c0546d5057c8f" @@ -7915,14 +7936,31 @@ es-object-atoms@^1.0.0: dependencies: es-errors "^1.3.0" +es-object-atoms@^1.1.1: + version "1.1.1" + resolved "https://registry.npmjs.org/es-object-atoms/-/es-object-atoms-1.1.1.tgz#1c4f2c4837327597ce69d2ca190a7fdd172338c1" + integrity sha512-FGgH2h8zKNim9ljj7dankFPcICIK9Cp5bm+c2gQSYePhpaG5+esrLODihIorn+Pe6FGJzWhXQotPv73jTaldXA== + dependencies: + es-errors "^1.3.0" + es-set-tostringtag@^2.0.3: version "2.0.3" resolved "https://registry.npmjs.org/es-set-tostringtag/-/es-set-tostringtag-2.0.3.tgz#8bb60f0a440c2e4281962428438d58545af39777" integrity sha512-3T8uNMC3OQTHkFUsFq8r/BwAXLHvU/9O9mE0fBc/MY5iq/8H7ncvO947LmYA6ldWw9Uh8Yhf25zu6n7nML5QWQ== dependencies: - get-intrinsic "^1.2.4" + get-intrinsic "^1.2.4" + has-tostringtag "^1.0.2" + hasown "^2.0.1" + +es-set-tostringtag@^2.1.0: + version "2.1.0" + resolved "https://registry.npmjs.org/es-set-tostringtag/-/es-set-tostringtag-2.1.0.tgz#f31dbbe0c183b00a6d26eb6325c810c0fd18bd4d" + integrity sha512-j6vWzfrGVfyXxge+O0x5sh6cvxAog0a/4Rdd2K36zCMV5eJ+/+tOAngRO8cODMNWbVRdVlmGZQL2YS3yR8bIUA== + dependencies: + es-errors "^1.3.0" + get-intrinsic "^1.2.6" has-tostringtag "^1.0.2" - hasown "^2.0.1" + hasown "^2.0.2" es-to-primitive@^1.2.1: version "1.2.1" @@ -8026,14 +8064,6 @@ eslint-scope@5.1.1, eslint-scope@^5.1.1: esrecurse "^4.3.0" estraverse "^4.1.1" -eslint-scope@^4.0.3: - version "4.0.3" - resolved "https://registry.npmjs.org/eslint-scope/-/eslint-scope-4.0.3.tgz#ca03833310f6889a3264781aa82e63eb9cfe7848" - integrity sha512-p7VutNr1O/QrxysMo3E45FjYDTeXBy0iTltPFNSqKAIfjDSXC+4dj+qfyuD8bfAXrW/y6lW3O76VaYNPKfpKrg== - dependencies: - esrecurse "^4.1.0" - estraverse "^4.1.1" - eslint-utils@^2.0.0, eslint-utils@^2.1.0: version "2.1.0" resolved "https://registry.npmjs.org/eslint-utils/-/eslint-utils-2.1.0.tgz#d2de5e03424e707dc10c74068ddedae708741b27" @@ -8135,7 +8165,7 @@ esquery@^1.4.0: dependencies: estraverse "^5.1.0" -esrecurse@^4.1.0, esrecurse@^4.3.0: +esrecurse@^4.3.0: version "4.3.0" resolved 
"https://registry.npmjs.org/esrecurse/-/esrecurse-4.3.0.tgz#7ad7964d679abb28bee72cec63758b1c5d2c9921" integrity sha512-KmfKL3b6G+RXvP8N1vr3Tq1kL/oCFgn2NYXEtqP8/L3pKapUA4G8cFVaoF3SU323CD4XypR/ffioHmkti6/Tag== @@ -8177,19 +8207,11 @@ events-to-array@^1.0.1: resolved "https://registry.npmjs.org/events-to-array/-/events-to-array-1.1.2.tgz#2d41f563e1fe400ed4962fe1a4d5c6a7539df7f6" integrity sha512-inRWzRY7nG+aXZxBzEqYKB3HPgwflZRopAjDCHv0whhRx+MTUr1ei0ICZUypdyE0HRm4L2d5VEcIqLD6yl+BFA== -events@^3.0.0, events@^3.2.0: +events@^3.2.0: version "3.3.0" resolved "https://registry.npmjs.org/events/-/events-3.3.0.tgz#31a95ad0a924e2d2c419a813aeb2c4e878ea7400" integrity sha512-mQw+2fkQbALzQ7V0MY0IqdnXNOeTtP4r0lN9z7AAawCXgqea7bDii20AYrIBrFd/Hx0M2Ocz6S111CaFkUcb0Q== -evp_bytestokey@^1.0.0, evp_bytestokey@^1.0.3: - version "1.0.3" - resolved "https://registry.npmjs.org/evp_bytestokey/-/evp_bytestokey-1.0.3.tgz#7fcbdb198dc71959432efe13842684e0525acb02" - integrity sha512-/f2Go4TognH/KvCISP7OUsHn85hT9nUkxxA9BEWxFn+Oj9o8ZNLm/40hdlgSLyuOimsrTKLUMEorQexp/aPQeA== - dependencies: - md5.js "^1.3.4" - safe-buffer "^5.1.1" - exec-sh@^0.3.2: version "0.3.6" resolved "https://registry.npmjs.org/exec-sh/-/exec-sh-0.3.6.tgz#ff264f9e325519a60cb5e273692943483cca63bc" @@ -8279,19 +8301,6 @@ exit@^0.1.2: resolved "https://registry.npmjs.org/exit/-/exit-0.1.2.tgz#0632638f8d877cc82107d30a0fff1a17cba1cd0c" integrity sha512-Zk/eNKV2zbjpKzrsQ+n1G6poVbErQxJ0LBOJXaKZ1EViLzH+hrLu9cdXI4zw9dBQJslwBEpbQ2P1oS7nDxs6jQ== -expand-brackets@^2.1.4: - version "2.1.4" - resolved "https://registry.npmjs.org/expand-brackets/-/expand-brackets-2.1.4.tgz#b77735e315ce30f6b6eff0f83b04151a22449622" - integrity sha512-w/ozOKR9Obk3qoWeY/WDi6MFta9AoMR+zud60mdnbniMcBxRuFJyDt2LdX/14A1UABeqk+Uk+LDfUpvoGKppZA== - dependencies: - debug "^2.3.3" - define-property "^0.2.5" - extend-shallow "^2.0.1" - posix-character-classes "^0.1.0" - regex-not "^1.0.0" - snapdragon "^0.8.1" - to-regex "^3.0.1" - expand-tilde@^2.0.0, expand-tilde@^2.0.2: version "2.0.2" resolved "https://registry.npmjs.org/expand-tilde/-/expand-tilde-2.0.2.tgz#97e801aa052df02454de46b02bf621642cdc8502" @@ -8336,21 +8345,6 @@ express@^4.10.7, express@^4.17.1: utils-merge "1.0.1" vary "~1.1.2" -extend-shallow@^2.0.1: - version "2.0.1" - resolved "https://registry.npmjs.org/extend-shallow/-/extend-shallow-2.0.1.tgz#51af7d614ad9a9f610ea1bafbb989d6b1c56890f" - integrity sha512-zCnTtlxNoAiDc3gqY2aYAWFx7XWWiasuF2K8Me5WbN8otHKTUKBwjPtNpRs/rbUZm7KxWAaNj7P1a/p52GbVug== - dependencies: - is-extendable "^0.1.0" - -extend-shallow@^3.0.0, extend-shallow@^3.0.2: - version "3.0.2" - resolved "https://registry.npmjs.org/extend-shallow/-/extend-shallow-3.0.2.tgz#26a71aaf073b39fb2127172746131c2704028db8" - integrity sha512-BwY5b5Ql4+qZoefgMj2NUmx+tehVTH/Kf4k1ZEtOHNFcm2wSxMRo992l6X3TIgni2eZVTZ85xMOjF31fwZAj6Q== - dependencies: - assign-symbols "^1.0.0" - is-extendable "^1.0.1" - extend@^3.0.0, extend@^3.0.2: version "3.0.2" resolved "https://registry.npmjs.org/extend/-/extend-3.0.2.tgz#f8b1136b4071fbd8eb140aff858b1019ec2915fa" @@ -8365,20 +8359,6 @@ external-editor@^3.0.3: iconv-lite "^0.4.24" tmp "^0.0.33" -extglob@^2.0.4: - version "2.0.4" - resolved "https://registry.npmjs.org/extglob/-/extglob-2.0.4.tgz#ad00fe4dc612a9232e8718711dc5cb5ab0285543" - integrity sha512-Nmb6QXkELsuBr24CJSkilo6UHHgbekK5UiZgfE6UHD3Eb27YC6oD+bhcT+tJ6cl8dmsgdQxnWlcry8ksBIBLpw== - dependencies: - array-unique "^0.3.2" - define-property "^1.0.0" - expand-brackets "^2.1.4" - extend-shallow "^2.0.1" - fragment-cache "^0.2.1" - regex-not 
"^1.0.0" - snapdragon "^0.8.1" - to-regex "^3.0.1" - extract-stack@^2.0.0: version "2.0.0" resolved "https://registry.npmjs.org/extract-stack/-/extract-stack-2.0.0.tgz#11367bc865bfcd9bc0db3123e5edb57786f11f9b" @@ -8421,7 +8401,7 @@ fast-glob@^2.2.6: merge2 "^1.2.3" micromatch "^3.1.10" -fast-glob@^3.0.3, fast-glob@^3.2.5, fast-glob@^3.2.9, fast-glob@^3.3.0: +fast-glob@^3.0.3, fast-glob@^3.2.5, fast-glob@^3.2.9: version "3.3.2" resolved "https://registry.npmjs.org/fast-glob/-/fast-glob-3.3.2.tgz#a904501e57cfdd2ffcded45e99a54fef55e46129" integrity sha512-oX2ruAFQwf/Orj8m737Y5adxDQO0LAB7/S5MnxCdTNDd4p6BsyIVsv9JQsATbTSq8KHRpLwIHbVlUNatxd+1Ow== @@ -8509,11 +8489,6 @@ fb-watchman@^2.0.0: dependencies: bser "2.1.1" -figgy-pudding@^3.5.1: - version "3.5.2" - resolved "https://registry.npmjs.org/figgy-pudding/-/figgy-pudding-3.5.2.tgz#b4eee8148abb01dcf1d1ac34367d59e12fa61d6e" - integrity sha512-0btnI/H8f2pavGMN8w40mlSKOfTK2SVJmBfBeVIj3kNw0swwgzyRq0d5TJVOwodFmtvpPeWPN/MCcfuWF0Ezbw== - figures@^2.0.0: version "2.0.0" resolved "https://registry.npmjs.org/figures/-/figures-2.0.0.tgz#3ab1a2d2a62c8bfb431a0c94cb797a2fce27c962" @@ -8535,11 +8510,6 @@ file-entry-cache@^6.0.1: dependencies: flat-cache "^3.0.4" -file-uri-to-path@1.0.0: - version "1.0.0" - resolved "https://registry.npmjs.org/file-uri-to-path/-/file-uri-to-path-1.0.0.tgz#553a7b8446ff6f684359c445f1e37a05dacc33dd" - integrity sha512-0Zt+s3L7Vf1biwWZ29aARiVYLx7iMGnEUl9x33fbB/j3jR81u/O2LbqK+Bm1CDSNDKVtJ/YjwY7TUd5SkeLQLw== - filesize@^4.1.2: version "4.2.1" resolved "https://registry.npmjs.org/filesize/-/filesize-4.2.1.tgz#ab1cb2069db5d415911c1a13e144c0e743bc89bc" @@ -8591,23 +8561,6 @@ find-babel-config@^1.1.0, find-babel-config@^1.2.0: json5 "^1.0.2" path-exists "^3.0.0" -find-babel-config@^2.1.1: - version "2.1.1" - resolved "https://registry.npmjs.org/find-babel-config/-/find-babel-config-2.1.1.tgz#93703fc8e068db5e4c57592900c5715dd04b7e5b" - integrity sha512-5Ji+EAysHGe1OipH7GN4qDjok5Z1uw5KAwDCbicU/4wyTZY7CqOCzcWbG7J5ad9mazq67k89fXlbc1MuIfl9uA== - dependencies: - json5 "^2.2.3" - path-exists "^4.0.0" - -find-cache-dir@^2.1.0: - version "2.1.0" - resolved "https://registry.npmjs.org/find-cache-dir/-/find-cache-dir-2.1.0.tgz#8d0f94cd13fe43c6c7c261a0d86115ca918c05f7" - integrity sha512-Tq6PixE0w/VMFfCgbONnkiQIVol/JJL7nRMi20fqzA4NRs9AfeqMGeRdPi3wIhYkxjeBaWh2rxwapn5Tu3IqOQ== - dependencies: - commondir "^1.0.1" - make-dir "^2.0.0" - pkg-dir "^3.0.0" - find-cache-dir@^3.3.1: version "3.3.2" resolved "https://registry.npmjs.org/find-cache-dir/-/find-cache-dir-3.3.2.tgz#b30c5b6eff0730731aea9bbd9dbecbd80256d64b" @@ -8754,14 +8707,6 @@ flatted@^3.2.9: resolved "https://registry.npmjs.org/flatted/-/flatted-3.3.1.tgz#21db470729a6734d4997002f439cb308987f567a" integrity sha512-X8cqMLLie7KsNUDSdzeN8FYK9rEt4Dt67OsG/DNGnYTSDBG4uFAJFBnUeiV+zCVAvwFy56IjM9sH51jVaEhNxw== -flush-write-stream@^1.0.0: - version "1.1.1" - resolved "https://registry.npmjs.org/flush-write-stream/-/flush-write-stream-1.1.1.tgz#8dd7d873a1babc207d94ead0c2e0e44276ebf2e8" - integrity sha512-3Z4XhFZ3992uIq0XOqb9AreonueSYphE6oYbpt5+3u06JWklbsPkNv3ZKkP9Bz/r+1MWCaMoSQ28P85+1Yc77w== - dependencies: - inherits "^2.0.3" - readable-stream "^2.3.6" - focus-trap@^6.7.1: version "6.9.4" resolved "https://registry.npmjs.org/focus-trap/-/focus-trap-6.9.4.tgz#436da1a1d935c48b97da63cd8f361c6f3aa16444" @@ -8786,19 +8731,16 @@ for-each@^0.3.3: dependencies: is-callable "^1.1.3" -for-in@^1.0.2: - version "1.0.2" - resolved 
"https://registry.npmjs.org/for-in/-/for-in-1.0.2.tgz#81068d295a8142ec0ac726c6e2200c30fb6d5e80" - integrity sha512-7EwmXrOjyL+ChxMhmG5lnW9MPt1aIeZEwKhQzoBUdTV0N3zuwWDZYVJatDvZ2OyzPUvdIAZDsCetk3coyMfcnQ== - form-data@^3.0.0: - version "3.0.1" - resolved "https://registry.npmjs.org/form-data/-/form-data-3.0.1.tgz#ebd53791b78356a99af9a300d4282c4d5eb9755f" - integrity sha512-RHkBKtLWUVwd7SqRIvCZMEvAMoGUp0XU+seQiZejj0COz3RI3hWP4sCv3gZWWLjJTd7rGwcsF5eKZGii0r/hbg== + version "3.0.4" + resolved "https://registry.npmjs.org/form-data/-/form-data-3.0.4.tgz#938273171d3f999286a4557528ce022dc2c98df1" + integrity sha512-f0cRzm6dkyVYV3nPoooP8XlccPQukegwhAnpoLcXy+X+A8KfpGOoXwDr9FLZd3wzgLaBGQBE3lY93Zm/i1JvIQ== dependencies: asynckit "^0.4.0" combined-stream "^1.0.8" - mime-types "^2.1.12" + es-set-tostringtag "^2.1.0" + hasown "^2.0.2" + mime-types "^2.1.35" format@^0.2.0: version "0.2.2" @@ -8815,26 +8757,11 @@ fraction.js@^4.3.7: resolved "https://registry.npmjs.org/fraction.js/-/fraction.js-4.3.7.tgz#06ca0085157e42fda7f9e726e79fefc4068840f7" integrity sha512-ZsDfxO51wGAXREY55a7la9LScWpwv9RxIrYABrlvOFBlH/ShPnrtsXeuUIfXKKOVicNxQ+o8JTbJvjS4M89yew== -fragment-cache@^0.2.1: - version "0.2.1" - resolved "https://registry.npmjs.org/fragment-cache/-/fragment-cache-0.2.1.tgz#4290fad27f13e89be7f33799c6bc5a0abfff0d19" - integrity sha512-GMBAbW9antB8iZRHLoGw0b3HANt57diZYFO/HL1JGIC1MjKrdmhxvrJbupnVvpys0zsz7yBApXdQyfepKly2kA== - dependencies: - map-cache "^0.2.2" - fresh@0.5.2: version "0.5.2" resolved "https://registry.npmjs.org/fresh/-/fresh-0.5.2.tgz#3d8cadd90d976569fa835ab1f8e4b23a105605a7" integrity sha512-zJ2mQYM18rEFOudeV4GShTGIQ7RbzA7ozbU9I/XBpm7kqgMywgmylMwXHxZJmkVoYkna9d2pVXVXPdYTP9ej8Q== -from2@^2.1.0: - version "2.3.0" - resolved "https://registry.npmjs.org/from2/-/from2-2.3.0.tgz#8bfb5502bde4a4d36cfdeea007fcca21d7e382af" - integrity sha512-OMcX/4IC/uqEPVgGeyfN22LJk6AZrMkRZHxcHBMBvHScDGgwTm2GT2Wkgtocyd3JfZffjj2kYUDXXII0Fk9W0g== - dependencies: - inherits "^2.0.1" - readable-stream "^2.0.0" - fs-extra@^0.24.0: version "0.24.0" resolved "https://registry.npmjs.org/fs-extra/-/fs-extra-0.24.0.tgz#d4e4342a96675cb7846633a6099249332b539952" @@ -8952,29 +8879,11 @@ fs-updater@^1.0.4: heimdalljs-logger "^0.1.9" rimraf "^2.6.2" -fs-write-stream-atomic@^1.0.8: - version "1.0.10" - resolved "https://registry.npmjs.org/fs-write-stream-atomic/-/fs-write-stream-atomic-1.0.10.tgz#b47df53493ef911df75731e70a9ded0189db40c9" - integrity sha512-gehEzmPn2nAwr39eay+x3X34Ra+M2QlVUTLhkXPjWdeO8RF9kszk116avgBJM3ZyNHgHXBNx+VmPaFC36k0PzA== - dependencies: - graceful-fs "^4.1.2" - iferr "^0.1.5" - imurmurhash "^0.1.4" - readable-stream "1 || 2" - fs.realpath@^1.0.0: version "1.0.0" resolved "https://registry.npmjs.org/fs.realpath/-/fs.realpath-1.0.0.tgz#1504ad2523158caa40db4a2787cb01411994ea4f" integrity sha512-OO0pH2lK6a0hZnAdau5ItzHPI6pUlvI7jMVnxUQRtw4owF2wk8lOSabtGDCTP4Ggrg2MbGnWO9X8K1t4+fGMDw== -fsevents@^1.2.7: - version "1.2.13" - resolved "https://registry.npmjs.org/fsevents/-/fsevents-1.2.13.tgz#f325cb0455592428bcf11b383370ef70e3bfcc38" - integrity sha512-oWb1Z6mkHIskLzEJ/XWX0srkpkTQ7vaopMQkyaEIoq0fmtFVxOthb8cCxeT+p3ynTdkk/RZwbgG4brR5BeWECw== - dependencies: - bindings "^1.5.0" - nan "^2.12.1" - fsevents@~2.3.2: version "2.3.3" resolved "https://registry.npmjs.org/fsevents/-/fsevents-2.3.3.tgz#cac6407785d03675a2a5e1a5305c697b347d90d6" @@ -9059,6 +8968,22 @@ get-intrinsic@^1.1.3, get-intrinsic@^1.2.1, get-intrinsic@^1.2.2, get-intrinsic@ has-symbols "^1.0.3" hasown "^2.0.0" +get-intrinsic@^1.2.6: + version "1.3.0" + resolved 
"https://registry.npmjs.org/get-intrinsic/-/get-intrinsic-1.3.0.tgz#743f0e3b6964a93a5491ed1bffaae054d7f98d01" + integrity sha512-9fSjSaos/fRIVIp+xSJlE6lfwhES7LNtKaCBIamHsjr2na1BiABJPo0mOjjz8GJDURarmCPGqaiVg5mfjb98CQ== + dependencies: + call-bind-apply-helpers "^1.0.2" + es-define-property "^1.0.1" + es-errors "^1.3.0" + es-object-atoms "^1.1.1" + function-bind "^1.1.2" + get-proto "^1.0.1" + gopd "^1.2.0" + has-symbols "^1.1.0" + hasown "^2.0.2" + math-intrinsics "^1.1.0" + get-own-enumerable-property-symbols@^3.0.0: version "3.0.2" resolved "https://registry.npmjs.org/get-own-enumerable-property-symbols/-/get-own-enumerable-property-symbols-3.0.2.tgz#b5fde77f22cbe35f390b4e089922c50bce6ef664" @@ -9069,6 +8994,14 @@ get-package-type@^0.1.0: resolved "https://registry.npmjs.org/get-package-type/-/get-package-type-0.1.0.tgz#8de2d803cff44df3bc6c456e6668b36c3926e11a" integrity sha512-pjzuKtY64GYfWizNAJ0fr9VqttZkNiK2iS430LtIHzjBEr6bX8Am2zm4sW4Ro5wjWW5cAlRL1qAMTcXbjNAO2Q== +get-proto@^1.0.1: + version "1.0.1" + resolved "https://registry.npmjs.org/get-proto/-/get-proto-1.0.1.tgz#150b3f2743869ef3e851ec0c49d15b1d14d00ee1" + integrity sha512-sTSfBjoXBp89JvIKIefqw7U2CCebsc74kiY6awiGogKtoSGbgjYE/G/+l9sF3MWFPNc9IcoOC4ODfKHfxFmp0g== + dependencies: + dunder-proto "^1.0.1" + es-object-atoms "^1.0.0" + get-stdin@^4.0.1: version "4.0.1" resolved "https://registry.npmjs.org/get-stdin/-/get-stdin-4.0.1.tgz#b968c6b0a04384324902e8bf1a5df32579a450fe" @@ -9107,11 +9040,6 @@ get-symbol-description@^1.0.2: es-errors "^1.3.0" get-intrinsic "^1.2.4" -get-value@^2.0.3, get-value@^2.0.6: - version "2.0.6" - resolved "https://registry.npmjs.org/get-value/-/get-value-2.0.6.tgz#dc15ca1c672387ca76bd37ac0a395ba2042a2c28" - integrity sha512-Ln0UQDlxH1BapMu3GPtf7CuYNwRZf2gwCuPqbyG6pB8WfmFpzqcy4xtAaAMUhnNqjMKTiCPZG2oMT3YSx8U2NA== - git-hooks-list@1.0.3: version "1.0.3" resolved "https://registry.npmjs.org/git-hooks-list/-/git-hooks-list-1.0.3.tgz#be5baaf78203ce342f2f844a9d2b03dba1b45156" @@ -9135,20 +9063,13 @@ glob-parent@^3.1.0: is-glob "^3.1.0" path-dirname "^1.0.0" -glob-parent@^5.1.2, glob-parent@~5.1.2: +glob-parent@^5.1.2: version "5.1.2" resolved "https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.2.tgz#869832c58034fe68a4093c17dc15e8340d8401c4" integrity sha512-AOIgSQCepiJYwP3ARnGx+5VnTu2HBYdzbGP45eLw1vr3zB3vZLeyed1sC9hnbcOc9/SrMyM5RPQrkGz4aS9Zow== dependencies: is-glob "^4.0.1" -glob-parent@^6.0.2: - version "6.0.2" - resolved "https://registry.npmjs.org/glob-parent/-/glob-parent-6.0.2.tgz#6d237d99083950c79290f24c7642a3de9a28f9e3" - integrity sha512-XxwI8EOhVQgWp6iDL+3b0r86f4d6AX6zSU55HfB4ydCEuXLXc5FcYeOu+nnGftS4TEju/11rt4KJPTMgbfmv4A== - dependencies: - is-glob "^4.0.3" - glob-to-regexp@^0.3.0: version "0.3.0" resolved "https://registry.npmjs.org/glob-to-regexp/-/glob-to-regexp-0.3.0.tgz#8c5a1494d2066c570cc3bfe4496175acc4d502ab" @@ -9159,7 +9080,7 @@ glob-to-regexp@^0.4.1: resolved "https://registry.npmjs.org/glob-to-regexp/-/glob-to-regexp-0.4.1.tgz#c75297087c851b9a578bd217dd59a92f59fe546e" integrity sha512-lkX1HJXwyMcprw/5YUZc2s7DrpAiHB21/V+E1rHUrVNokkvB6bqMzT0VfV6/86ZNabt1k14YOIaT7nDvOX3Iiw== -glob@7.2.3, glob@^10.3.10, glob@^5.0.10, glob@^7.0.4, glob@^7.1.1, glob@^7.1.2, glob@^7.1.3, glob@^7.1.4, glob@^7.1.6, glob@^7.1.7, glob@^7.2.3, glob@^9.3.3: +glob@7.2.3, glob@^5.0.10, glob@^7.0.4, glob@^7.1.1, glob@^7.1.2, glob@^7.1.3, glob@^7.1.4, glob@^7.1.6, glob@^7.1.7, glob@^7.2.3: version "7.2.3" resolved "https://registry.npmjs.org/glob/-/glob-7.2.3.tgz#b8df0fb802bbfa8e89bd1d938b4e16578ed44f2b" integrity 
sha512-nFR0zLpU2YCaRxwoCJvL6UvCH2JFyFVIvwTLsIf21AuHlMskA1hhTdk+LlYJtOlYt9v6dvszD2BGRqBL+iQK9Q== @@ -9266,7 +9187,12 @@ gopd@^1.0.1: dependencies: get-intrinsic "^1.1.3" -graceful-fs@^4.1.11, graceful-fs@^4.1.15, graceful-fs@^4.1.2, graceful-fs@^4.1.3, graceful-fs@^4.1.6, graceful-fs@^4.2.0, graceful-fs@^4.2.11, graceful-fs@^4.2.4: +gopd@^1.2.0: + version "1.2.0" + resolved "https://registry.npmjs.org/gopd/-/gopd-1.2.0.tgz#89f56b8217bdbc8802bd299df6d7f1081d7e51a1" + integrity sha512-ZUKRh6/kUFoAiTAtTYPZJ3hw9wNxx+BIBOijnlG9PnrJsCcSjs1wyyD6vJpaYtgnzDrKYRSqf3OO6Rfa93xsRg== + +graceful-fs@^4.1.2, graceful-fs@^4.1.3, graceful-fs@^4.1.6, graceful-fs@^4.2.0, graceful-fs@^4.2.11, graceful-fs@^4.2.4: version "4.2.11" resolved "https://registry.npmjs.org/graceful-fs/-/graceful-fs-4.2.11.tgz#4183e4e8bf08bb6e05bbb2f7d2e0c8f712ca40e3" integrity sha512-RbJ5/jmFcNNCcDV5o9eTnBLJ/HszWV0P73bc+Ff4nS/rJj+YaS6IGyiOL0VoBYX+l1Wrl3k63h/KrH+nhJ0XvQ== @@ -9347,6 +9273,11 @@ has-symbols@^1.0.2, has-symbols@^1.0.3: resolved "https://registry.npmjs.org/has-symbols/-/has-symbols-1.0.3.tgz#bb7b2c4349251dce87b125f7bdf874aa7c8b39f8" integrity sha512-l3LCuF6MgDNwTDKkdYGEihYjt5pRPbEg46rtlmnSPlUbgmB8LOIrKJbYYFBSbnPaJexMKtiPO8hmeRjRz2Td+A== +has-symbols@^1.1.0: + version "1.1.0" + resolved "https://registry.npmjs.org/has-symbols/-/has-symbols-1.1.0.tgz#fc9c6a783a084951d0b971fe1018de813707a338" + integrity sha512-1cDNdwJ2Jaohmb3sg4OmKaMBwuC48sYni5HUw2DvsC8LjGTLK9h+eb1X6RyuOHe4hT0ULCW68iomhjUoKUqlPQ== + has-tostringtag@^1.0.0, has-tostringtag@^1.0.2: version "1.0.2" resolved "https://registry.npmjs.org/has-tostringtag/-/has-tostringtag-1.0.2.tgz#2cdc42d40bef2e5b4eeab7c01a73c54ce7ab5abc" @@ -9359,54 +9290,6 @@ has-unicode@^2.0.0, has-unicode@^2.0.1: resolved "https://registry.npmjs.org/has-unicode/-/has-unicode-2.0.1.tgz#e0e6fe6a28cf51138855e086d1691e771de2a8b9" integrity sha512-8Rf9Y83NBReMnx0gFzA8JImQACstCYWUplepDa9xprwwtmgEZUF0h/i5xSA625zB/I37EtrswSST6OXxwaaIJQ== -has-value@^0.3.1: - version "0.3.1" - resolved "https://registry.npmjs.org/has-value/-/has-value-0.3.1.tgz#7b1f58bada62ca827ec0a2078025654845995e1f" - integrity sha512-gpG936j8/MzaeID5Yif+577c17TxaDmhuyVgSwtnL/q8UUTySg8Mecb+8Cf1otgLoD7DDH75axp86ER7LFsf3Q== - dependencies: - get-value "^2.0.3" - has-values "^0.1.4" - isobject "^2.0.0" - -has-value@^1.0.0: - version "1.0.0" - resolved "https://registry.npmjs.org/has-value/-/has-value-1.0.0.tgz#18b281da585b1c5c51def24c930ed29a0be6b177" - integrity sha512-IBXk4GTsLYdQ7Rvt+GRBrFSVEkmuOUy4re0Xjd9kJSUQpnTrWR4/y9RpfexN9vkAPMFuQoeWKwqzPozRTlasGw== - dependencies: - get-value "^2.0.6" - has-values "^1.0.0" - isobject "^3.0.0" - -has-values@^0.1.4: - version "0.1.4" - resolved "https://registry.npmjs.org/has-values/-/has-values-0.1.4.tgz#6d61de95d91dfca9b9a02089ad384bff8f62b771" - integrity sha512-J8S0cEdWuQbqD9//tlZxiMuMNmxB8PlEwvYwuxsTmR1G5RXUePEX/SJn7aD0GMLieuZYSwNH0cQuJGwnYunXRQ== - -has-values@^1.0.0: - version "1.0.0" - resolved "https://registry.npmjs.org/has-values/-/has-values-1.0.0.tgz#95b0b63fec2146619a6fe57fe75628d5a39efe4f" - integrity sha512-ODYZC64uqzmtfGMEAX/FvZiRyWLpAC3vYnNunURUnkGVTS+mI0smVsWaPydRBsE3g+ok7h960jChO8mFcWlHaQ== - dependencies: - is-number "^3.0.0" - kind-of "^4.0.0" - -hash-base@^3.0.0: - version "3.1.0" - resolved "https://registry.npmjs.org/hash-base/-/hash-base-3.1.0.tgz#55c381d9e06e1d2997a883b4a3fddfe7f0d3af33" - integrity sha512-1nmYp/rhMDiE7AYkDw+lLwlAzz0AntGIe51F3RfFfEqyQ3feY2eI/NcwC6umIQVOASPMsWJLJScWKSSvzL9IVA== - dependencies: - inherits "^2.0.4" - readable-stream "^3.6.0" - safe-buffer 
"^5.2.0" - -hash-base@~3.0: - version "3.0.4" - resolved "https://registry.npmjs.org/hash-base/-/hash-base-3.0.4.tgz#5fc8686847ecd73499403319a6b0a3f3f6ae4918" - integrity sha512-EeeoJKjTyt868liAlVmcv2ZsUfGHlE3Q+BICOXcZiwN3osr5Q/zFGYmTJpoIzuaSTAwndFy+GqhEwlU4L3j4Ow== - dependencies: - inherits "^2.0.1" - safe-buffer "^5.0.1" - hash-for-dep@^1.0.2, hash-for-dep@^1.2.3, hash-for-dep@^1.4.7, hash-for-dep@^1.5.0, hash-for-dep@^1.5.1: version "1.5.1" resolved "https://registry.npmjs.org/hash-for-dep/-/hash-for-dep-1.5.1.tgz#497754b39bee2f1c4ade4521bfd2af0a7c1196e3" @@ -9626,11 +9509,6 @@ http-proxy@^1.13.1, http-proxy@^1.18.1: follow-redirects "^1.0.0" requires-port "^1.0.0" -https-browserify@^1.0.0: - version "1.0.0" - resolved "https://registry.npmjs.org/https-browserify/-/https-browserify-1.0.0.tgz#ec06c10e0a34c0f2faf199f7fd7fc78fffd03c73" - integrity sha512-J+FkSdyD+0mA0N+81tMotaRMfSL9SGi+xpD3T6YApKsc3bGSXJlfXri3VyFOeYkfLRQisDk1W+jIFFKBeUBbBg== - https-proxy-agent@^5.0.0: version "5.0.1" resolved "https://registry.npmjs.org/https-proxy-agent/-/https-proxy-agent-5.0.1.tgz#c59ef224a04fe8b754f3db0063a25ea30d0005d6" @@ -9682,16 +9560,11 @@ icss-utils@^5.0.0, icss-utils@^5.1.0: resolved "https://registry.npmjs.org/icss-utils/-/icss-utils-5.1.0.tgz#c6be6858abd013d768e98366ae47e25d5887b1ae" integrity sha512-soFhflCVWLfRNOPU3iv5Z9VUdT44xFRbzjLsEzSr5AQmgqPMTHdU3PMT1Cf1ssx8fLNJDA1juftYl+PUcv3MqA== -ieee754@^1.1.13, ieee754@^1.1.4: +ieee754@^1.1.13: version "1.2.1" resolved "https://registry.npmjs.org/ieee754/-/ieee754-1.2.1.tgz#8eb7a10a63fff25d15a57b001586d177d1b0d352" integrity sha512-dcyqhDvX1C46lXZcVqCpK+FtMRQVdIMN6/Df5js2zouUsqG7I6sFxitIC+7KYK29KdXOLHdu9zL4sFnoVQnqaA== -iferr@^0.1.5: - version "0.1.5" - resolved "https://registry.npmjs.org/iferr/-/iferr-0.1.5.tgz#c60eed69e6d8fdb6b3104a1fcbca1c192dc5b501" - integrity sha512-DUNFN5j7Tln0D+TxzloUjKB+CtVu6myn0JEFak6dG18mNt9YkQ6lzGCdafwofISZ1lLF3xRHJ98VKy9ynkcFaA== - ignore@^4.0.6: version "4.0.6" resolved "https://registry.npmjs.org/ignore/-/ignore-4.0.6.tgz#750e3db5862087b4737ebac8207ffd1ef27b25fc" @@ -9702,10 +9575,10 @@ ignore@^5.1.1, ignore@^5.2.0: resolved "https://registry.npmjs.org/ignore/-/ignore-5.3.1.tgz#5073e554cd42c5b33b394375f538b8593e34d4ef" integrity sha512-5Fytz/IraMjqpwfd34ke28PTVMjZjJG2MPn5t7OE4eUCUNf8BAa7b5WUS9/Qvr6mwOQS7Mk6vdsMno5he+T8Xw== -immutable@^4.0.0: - version "4.3.6" - resolved "https://registry.npmjs.org/immutable/-/immutable-4.3.6.tgz#6a05f7858213238e587fb83586ffa3b4b27f0447" - integrity sha512-Ju0+lEMyzMVZarkTn/gqRpdqd5dOPaz1mCZ0SH3JV6iFw81PldE/PEB1hWVEA288HPt4WXW8O7AWxB10M+03QQ== +immutable@^5.0.2: + version "5.1.3" + resolved "https://registry.npmjs.org/immutable/-/immutable-5.1.3.tgz#e6486694c8b76c37c063cca92399fa64098634d4" + integrity sha512-+chQdDfvscSF1SJqv2gn4SRO2ZyS3xL3r7IW/wWEEzrzLisnOlKiQu5ytC/BVNcS15C39WT2Hg/bjKjDMcu+zg== import-fresh@^3.0.0, import-fresh@^3.2.1: version "3.3.0" @@ -9730,11 +9603,6 @@ indent-string@^4.0.0: resolved "https://registry.npmjs.org/indent-string/-/indent-string-4.0.0.tgz#624f8f4497d619b2d9768531d58f4122854d7251" integrity sha512-EdDDZu4A2OyIK7Lr/2zG+w5jmbuk1DVBnEwREQvBzspBJkCEbRa8GxU1lghYcaGJCnRWibjDXlq779X1/y5xwg== -infer-owner@^1.0.3: - version "1.0.4" - resolved "https://registry.npmjs.org/infer-owner/-/infer-owner-1.0.4.tgz#c4cefcaa8e51051c2a40ba2ce8a3d27295af9467" - integrity sha512-IClj+Xz94+d7irH5qRyfJonOdfTzuDaifE6ZPWfx0N0+/ATZCbuTPq2prFl526urkQd90WyUKIh1DfBQ2hMz9A== - inflection@^1.12.0, inflection@~1.13.1: version "1.13.4" resolved 
"https://registry.npmjs.org/inflection/-/inflection-1.13.4.tgz#65aa696c4e2da6225b148d7a154c449366633a32" @@ -9861,13 +9729,6 @@ ipaddr.js@1.9.1: resolved "https://registry.npmjs.org/ipaddr.js/-/ipaddr.js-1.9.1.tgz#bff38543eeb8984825079ff3a2a8e6cbd46781b3" integrity sha512-0KI/607xoxSToH7GjN1FfSbLoU0+btTicjsQSWQlh/hZykN8KpmMf7uYwPW3R+akZ6R/w18ZlXSHBYXiYUPO3g== -is-accessor-descriptor@^1.0.1: - version "1.0.1" - resolved "https://registry.npmjs.org/is-accessor-descriptor/-/is-accessor-descriptor-1.0.1.tgz#3223b10628354644b86260db29b3e693f5ceedd4" - integrity sha512-YBUanLI8Yoihw923YeFUS5fs0fF2f5TSFTNiYAAzhhDscDa3lEqYuz1pDOEP5KvX94I9ey3vsqjJcLVFVU+3QA== - dependencies: - hasown "^2.0.0" - is-alphabetical@^1.0.0: version "1.0.4" resolved "https://registry.npmjs.org/is-alphabetical/-/is-alphabetical-1.0.4.tgz#9e7d6b94916be22153745d184c298cbf986a686d" @@ -9909,20 +9770,6 @@ is-bigint@^1.0.1: dependencies: has-bigints "^1.0.1" -is-binary-path@^1.0.0: - version "1.0.1" - resolved "https://registry.npmjs.org/is-binary-path/-/is-binary-path-1.0.1.tgz#75f16642b480f187a711c814161fd3a4a7655898" - integrity sha512-9fRVlXc0uCxEDj1nQzaWONSpbTfx0FmJfzHF7pwlI8DkWGoHBBea4Pg5Ky0ojwwxQmnSifgbKkI06Qv0Ljgj+Q== - dependencies: - binary-extensions "^1.0.0" - -is-binary-path@~2.1.0: - version "2.1.0" - resolved "https://registry.npmjs.org/is-binary-path/-/is-binary-path-2.1.0.tgz#ea1f7f3b80f064236e83470f86c09c254fb45b09" - integrity sha512-ZMERYes6pDydyuGidse7OsHxtbI7WVeUEozgR/g7rd0xUimYNlvZRE/K2MgZTjWy725IfelLeVcEM97mmtRGXw== - dependencies: - binary-extensions "^2.0.0" - is-boolean-object@^1.1.0: version "1.1.2" resolved "https://registry.npmjs.org/is-boolean-object/-/is-boolean-object-1.1.2.tgz#5c6dc200246dd9321ae4b885a114bb1f75f63719" @@ -9931,11 +9778,6 @@ is-boolean-object@^1.1.0: call-bind "^1.0.2" has-tostringtag "^1.0.0" -is-buffer@^1.1.5: - version "1.1.6" - resolved "https://registry.npmjs.org/is-buffer/-/is-buffer-1.1.6.tgz#efaa2ea9daa0d7ab2ea13a97b2b8ad51fefbe8be" - integrity sha512-NcdALwpXkTm5Zvvbk7owOUSvVvBKDgKP5/ewfXEznmQFfs4ZRmanOeKBTjRVjka3QFoN6XJ+9F3USqfHqTaU5w== - is-buffer@^2.0.0: version "2.0.5" resolved "https://registry.npmjs.org/is-buffer/-/is-buffer-2.0.5.tgz#ebc252e400d22ff8d77fa09888821a24a658c191" @@ -9953,13 +9795,6 @@ is-core-module@^2.13.0: dependencies: hasown "^2.0.2" -is-data-descriptor@^1.0.1: - version "1.0.1" - resolved "https://registry.npmjs.org/is-data-descriptor/-/is-data-descriptor-1.0.1.tgz#2109164426166d32ea38c405c1e0945d9e6a4eeb" - integrity sha512-bc4NlCDiCr28U4aEsQ3Qs2491gVq4V8G7MQyws968ImqjKuYtTJXrl7Vq7jsN7Ly/C3xj5KWFrY7sHNeDkAzXw== - dependencies: - hasown "^2.0.0" - is-data-view@^1.0.1: version "1.0.1" resolved "https://registry.npmjs.org/is-data-view/-/is-data-view-1.0.1.tgz#4b4d3a511b70f3dc26d42c03ca9ca515d847759f" @@ -9979,39 +9814,11 @@ is-decimal@^1.0.0: resolved "https://registry.npmjs.org/is-decimal/-/is-decimal-1.0.4.tgz#65a3a5958a1c5b63a706e1b333d7cd9f630d3fa5" integrity sha512-RGdriMmQQvZ2aqaQq3awNA6dCGtKpiDFcOzrTWrDAT2MiWrKQVPmxLGHl7Y2nNu6led0kEyoX0enY0qXYsv9zw== -is-descriptor@^0.1.0: - version "0.1.7" - resolved "https://registry.npmjs.org/is-descriptor/-/is-descriptor-0.1.7.tgz#2727eb61fd789dcd5bdf0ed4569f551d2fe3be33" - integrity sha512-C3grZTvObeN1xud4cRWl366OMXZTj0+HGyk4hvfpx4ZHt1Pb60ANSXqCK7pdOTeUQpRzECBSTphqvD7U+l22Eg== - dependencies: - is-accessor-descriptor "^1.0.1" - is-data-descriptor "^1.0.1" - -is-descriptor@^1.0.0, is-descriptor@^1.0.2: - version "1.0.3" - resolved 
"https://registry.npmjs.org/is-descriptor/-/is-descriptor-1.0.3.tgz#92d27cb3cd311c4977a4db47df457234a13cb306" - integrity sha512-JCNNGbwWZEVaSPtS45mdtrneRWJFp07LLmykxeFV5F6oBvNF8vHSfJuJgoT472pSfk+Mf8VnlrspaFBHWM8JAw== - dependencies: - is-accessor-descriptor "^1.0.1" - is-data-descriptor "^1.0.1" - is-docker@^2.0.0: version "2.2.1" resolved "https://registry.npmjs.org/is-docker/-/is-docker-2.2.1.tgz#33eeabe23cfe86f14bde4408a02c0cfb853acdaa" integrity sha512-F+i2BKsFrH66iaUFc0woD8sLy8getkwTwtOBjvs56Cx4CgJDeKQeqfz8wAYiSb8JOprWhHH5p77PbmYCvvUuXQ== -is-extendable@^0.1.0, is-extendable@^0.1.1: - version "0.1.1" - resolved "https://registry.npmjs.org/is-extendable/-/is-extendable-0.1.1.tgz#62b110e289a471418e3ec36a617d472e301dfc89" - integrity sha512-5BMULNob1vgFX6EjQw5izWDxrecWK9AM72rugNr0TFldMOi0fj6Jk+zeKIt0xGj4cEfQIJth4w3OKWOJ4f+AFw== - -is-extendable@^1.0.1: - version "1.0.1" - resolved "https://registry.npmjs.org/is-extendable/-/is-extendable-1.0.1.tgz#a7470f9e426733d81bd81e1155264e3a3507cab4" - integrity sha512-arnXMxT1hhoKo9k1LZdmlNyJdDDfy2v0fXjFlmok4+i8ul/6WlbVge9bhM74OpNPQPMGUToDtz+KXa1PneJxOA== - dependencies: - is-plain-object "^2.0.4" - is-extglob@^2.1.0, is-extglob@^2.1.1: version "2.1.1" resolved "https://registry.npmjs.org/is-extglob/-/is-extglob-2.1.1.tgz#a88c02535791f02ed37c76a1b9ea9773c833f8c2" @@ -10051,7 +9858,7 @@ is-glob@^3.1.0: dependencies: is-extglob "^2.1.0" -is-glob@^4.0.0, is-glob@^4.0.1, is-glob@^4.0.3, is-glob@~4.0.1: +is-glob@^4.0.0, is-glob@^4.0.1, is-glob@^4.0.3: version "4.0.3" resolved "https://registry.npmjs.org/is-glob/-/is-glob-4.0.3.tgz#64f61e42cbbb2eec2071a9dac0b28ba1e65d5084" integrity sha512-xelSayHH36ZgE7ZWhli7pW34hNbNl8Ojv5KVmkJD4hBdD3th8Tfk9vYasLM+mXWOZhFkgZfxhLSnrwRr4elSSg== @@ -10090,13 +9897,6 @@ is-number-object@^1.0.4: dependencies: has-tostringtag "^1.0.0" -is-number@^3.0.0: - version "3.0.0" - resolved "https://registry.npmjs.org/is-number/-/is-number-3.0.0.tgz#24fd6201a4782cf50561c810276afc7d12d71195" - integrity sha512-4cboCqIpliH+mAvFNegjZQ4kgKc3ZUhQVr3HvWbSh5q3WH2v82ct+T2Y1hdU5Gdtorx/cLifQjqCbL7bpznLTg== - dependencies: - kind-of "^3.0.2" - is-number@^7.0.0: version "7.0.0" resolved "https://registry.npmjs.org/is-number/-/is-number-7.0.0.tgz#7535345b896734d5f80c4d06c50955527a14f12b" @@ -10122,13 +9922,6 @@ is-plain-obj@^1.1: resolved "https://registry.npmjs.org/is-plain-obj/-/is-plain-obj-1.1.0.tgz#71a50c8429dfca773c92a390a4a03b39fcd51d3e" integrity sha512-yvkRyxmFKEOQ4pNXCmJG5AEQNlXJS5LaONXo5/cLdTZdWvsZ1ioJEonLGAosKlMWE8lwUy/bJzMjcw8az73+Fg== -is-plain-object@^2.0.3, is-plain-object@^2.0.4: - version "2.0.4" - resolved "https://registry.npmjs.org/is-plain-object/-/is-plain-object-2.0.4.tgz#2c163b3fafb1b606d9d17928f05c2a1c38e07677" - integrity sha512-h5PpgXkWitc38BBMYawTYMWJHFZJVnBquFE57xFpjB8pJFiF6gZ+bU+WyI/yqXiFR5mdLsgYNaPe8uao6Uv9Og== - dependencies: - isobject "^3.0.1" - is-potential-custom-element-name@^1.0.1: version "1.0.1" resolved "https://registry.npmjs.org/is-potential-custom-element-name/-/is-potential-custom-element-name-1.0.1.tgz#171ed6f19e3ac554394edf78caa05784a45bebb5" @@ -10176,6 +9969,13 @@ is-string@^1.0.5, is-string@^1.0.7: dependencies: has-tostringtag "^1.0.0" +is-subdir@^1.2.0: + version "1.2.0" + resolved "https://registry.npmjs.org/is-subdir/-/is-subdir-1.2.0.tgz#b791cd28fab5202e91a08280d51d9d7254fd20d4" + integrity sha512-2AT6j+gXe/1ueqbW6fLZJiIw3F8iXGJtt0yDrZaBhAZEG1raiTxKWU+IPqMCzQAXOUCKdA4UDMgacKH25XG2Cw== + dependencies: + better-path-resolve "1.0.0" + is-symbol@^1.0.2, is-symbol@^1.0.3: version "1.0.4" resolved 
"https://registry.npmjs.org/is-symbol/-/is-symbol-1.0.4.tgz#a6dac93b635b063ca6872236de88910a57af139c" @@ -10227,16 +10027,11 @@ is-weakset@^2.0.3: call-bind "^1.0.7" get-intrinsic "^1.2.4" -is-windows@^1.0.1, is-windows@^1.0.2: +is-windows@^1.0.0, is-windows@^1.0.1: version "1.0.2" resolved "https://registry.npmjs.org/is-windows/-/is-windows-1.0.2.tgz#d1850eb9791ecd18e6182ce12a30f396634bb19d" integrity sha512-eXK1UInq2bPmjyX6e3VHIzMLobc4J94i4AWn+Hpq3OU5KkrRC96OAcR3PRJ/pGu6m8TRnBHP9dkXQVsT/COVIA== -is-wsl@^1.1.0: - version "1.1.0" - resolved "https://registry.npmjs.org/is-wsl/-/is-wsl-1.1.0.tgz#1f16e4aa22b04d1336b66188a66af3c600c3a66d" - integrity sha512-gfygJYZ2gLTDlmbWMI0CE2MwnFzSN/2SZfkMlItC4K/JBlsWVDB0bO6XhqcY13YXE7iMcAJnzTCJjPiTeJJ0Mw== - is-wsl@^2.2.0: version "2.2.0" resolved "https://registry.npmjs.org/is-wsl/-/is-wsl-2.2.0.tgz#74a4c76e77ca9fd3f932f290c17ea326cd157271" @@ -10276,11 +10071,6 @@ isobject@^2.0.0: dependencies: isarray "1.0.0" -isobject@^3.0.0, isobject@^3.0.1: - version "3.0.1" - resolved "https://registry.npmjs.org/isobject/-/isobject-3.0.1.tgz#4e431e92b11a9731636aa1f9c8d1ccbcfdab78df" - integrity sha512-WhB9zCku7EGTj/HQQRz5aUQEUeoQZH2bWcltRErOpymJ4boYE6wL9Tbr23krRPSZ+C5zqNSrSw+Cc7sZZ4b7vg== - istanbul-lib-coverage@^3.0.0, istanbul-lib-coverage@^3.2.0: version "3.2.2" resolved "https://registry.npmjs.org/istanbul-lib-coverage/-/istanbul-lib-coverage-3.2.2.tgz#2d166c4b0644d43a39f04bf6c2edd1e585f31756" @@ -10332,15 +10122,6 @@ istextorbinary@^2.5.1: editions "^2.2.0" textextensions "^2.5.0" -ivy-codemirror@^2.1.0: - version "2.1.0" - resolved "https://registry.npmjs.org/ivy-codemirror/-/ivy-codemirror-2.1.0.tgz#c06f1606c375610bf62b007a21a9e63f5854175e" - integrity sha512-+Ha6Yf39fiK3dfQD5vlanrQ8GMIf/KVRbxzEzG+AsvAgUNSO8VECCfIRzdHQZcBfi9jNCaT+9q6VQd7mSqNalQ== - dependencies: - codemirror "~5.15.0" - ember-cli-babel "^6.0.0" - ember-cli-node-assets "^0.2.2" - jest-worker@^27.4.5: version "27.5.1" resolved "https://registry.npmjs.org/jest-worker/-/jest-worker-27.5.1.tgz#8d146f0900e8973b106b6f73cc1e9a8cb86f8db0" @@ -10350,11 +10131,6 @@ jest-worker@^27.4.5: merge-stream "^2.0.0" supports-color "^8.0.0" -jiti@^1.21.0: - version "1.21.6" - resolved "https://registry.npmjs.org/jiti/-/jiti-1.21.6.tgz#6c7f7398dd4b3142767f9a168af2f317a428d268" - integrity sha512-2yTgeWTWzMWkHu6Jp9NKgePDaYHbntiwvYuuJLbbN9vl7DC9DvXKOB2BC3ZZ92D3cvV/aflH0osDfwpHepQ53w== - jquery@^3.4.1, jquery@^3.5.1: version "3.7.1" resolved "https://registry.npmjs.org/jquery/-/jquery-3.7.1.tgz#083ef98927c9a6a74d05a6af02806566d16274de" @@ -10433,6 +10209,11 @@ jsesc@^2.5.0, jsesc@^2.5.1: resolved "https://registry.npmjs.org/jsesc/-/jsesc-2.5.2.tgz#80564d2e483dacf6e8ef209650a67df3f0c283a4" integrity sha512-OYu7XEzjkCQ3C5Ps3QIZsQfNpqoJyZZA99wd9aWd05NCtC5pWOkShK2mkL6HXQR6/Cy2lbNdPlZBpuQHXE63gA== +jsesc@^3.0.2: + version "3.1.0" + resolved "https://registry.npmjs.org/jsesc/-/jsesc-3.1.0.tgz#74d335a234f67ed19907fdadfac7ccf9d409825d" + integrity sha512-/sM3dO2FOzXjKQhJuo0Q173wf2KOo8t4I8vHy6lF9poUp7bKT0/NHE8fPX23PwfhnykfqnC2xRxOnVw5XuGIaA== + jsesc@~0.3.x: version "0.3.0" resolved "https://registry.npmjs.org/jsesc/-/jsesc-0.3.0.tgz#1bf5ee63b4539fe2e26d0c1e99c240b97a457972" @@ -10448,7 +10229,7 @@ json-buffer@3.0.1: resolved "https://registry.npmjs.org/json-buffer/-/json-buffer-3.0.1.tgz#9338802a30d3b6605fbe0613e094008ca8c05a13" integrity sha512-4bV5BfR2mqfQTJm+V5tPPdf+ZpuhiIvTuAB5g8kcrXOZpTT/QwwVRWBywX1ozr6lEuPdbHxwaJlm9G6mI2sfSQ== -json-parse-better-errors@^1.0.1, json-parse-better-errors@^1.0.2: 
+json-parse-better-errors@^1.0.1: version "1.0.2" resolved "https://registry.npmjs.org/json-parse-better-errors/-/json-parse-better-errors-1.0.2.tgz#bb867cfb3450e69107c131d1c514bab3dc8bcaa9" integrity sha512-mrqyZKfX5EhL7hvqcV6WG1yYjnjeuYDzDhhcAAUrq8Po85NBQBJP+ZDUT75qZQ98IkUoBqdkExkukOU7Ts2wrw== @@ -10483,19 +10264,7 @@ json-stable-stringify@^1.0.0, json-stable-stringify@^1.0.1: jsonify "^0.0.1" object-keys "^1.1.1" -json5@^0.5.1: - version "0.5.1" - resolved "https://registry.npmjs.org/json5/-/json5-0.5.1.tgz#1eade7acc012034ad84e2396767ead9fa5495821" - integrity sha512-4xrs1aW+6N5DalkqSVA8fxh458CXvR99WU8WLKmq4v8eWAL86Xo3BVqyd3SkA9wEVjCMqyvvRRkshAdOnBp5rw== - -json5@^1.0.1, json5@^1.0.2: - version "1.0.2" - resolved "https://registry.npmjs.org/json5/-/json5-1.0.2.tgz#63d98d60f21b313b77c4d6da18bfa69d80e1d593" - integrity sha512-g1MWMLBiz8FKi1e4w0UyVL3w+iJceWAFBAaBnnGKOpNa5f8TLktkbre1+s6oICydWAm+HRUGTmI+//xv2hvXYA== - dependencies: - minimist "^1.2.0" - -json5@^2.1.2, json5@^2.2.3: +json5@^0.5.1, json5@^1.0.2, json5@^2.1.2, json5@^2.2.3: version "2.2.3" resolved "https://registry.npmjs.org/json5/-/json5-2.2.3.tgz#78cd6f1a19bdc12b73db5ad0c61efd66c1e29283" integrity sha512-XmOWe7eyHYH14cLdVPoyg+GOH3rYX++KpzrylJwSW98t3Nk+U8XOl8FWKOgwtzdb8lXGf6zYwDUzeHMWfxasyg== @@ -10540,25 +10309,6 @@ keyv@^4.5.3: dependencies: json-buffer "3.0.1" -kind-of@^3.0.2, kind-of@^3.0.3: - version "3.2.2" - resolved "https://registry.npmjs.org/kind-of/-/kind-of-3.2.2.tgz#31ea21a734bab9bbb0f32466d893aea51e4a3c64" - integrity sha512-NOW9QQXMoZGg/oqnVNoNTTIFEIid1627WCffUBJEdMxYApq7mNE7CpzucIPc+ZQg25Phej7IJSmX3hO+oblOtQ== - dependencies: - is-buffer "^1.1.5" - -kind-of@^4.0.0: - version "4.0.0" - resolved "https://registry.npmjs.org/kind-of/-/kind-of-4.0.0.tgz#20813df3d712928b207378691a45066fae72dd57" - integrity sha512-24XsCxmEbRwEDbz/qz3stgin8TTzZ1ESR56OMCN0ujYg+vRutNSiOj9bHH9u85DKgXguraugV5sFuvbD4FW/hw== - dependencies: - is-buffer "^1.1.5" - -kind-of@^6.0.2: - version "6.0.3" - resolved "https://registry.npmjs.org/kind-of/-/kind-of-6.0.3.tgz#07c05034a6c349fa06e24fa35aa76db4580ce4dd" - integrity sha512-dcS1ul+9tmeD95T+x28/ehLgd9mENa3LsvDTtzm3vyBEO7RPptvAD+t44WVXaUjTBRcrpFeFlC8WCruUR456hw== - layout-bin-packer@^1.4.0: version "1.5.0" resolved "https://registry.npmjs.org/layout-bin-packer/-/layout-bin-packer-1.5.0.tgz#2e950456083621fe01f82007d896294f5e31e89c" @@ -10599,16 +10349,6 @@ license-checker@^25.0.1: spdx-satisfies "^4.0.0" treeify "^1.1.0" -lilconfig@^2.1.0: - version "2.1.0" - resolved "https://registry.npmjs.org/lilconfig/-/lilconfig-2.1.0.tgz#78e23ac89ebb7e1bfbf25b18043de756548e7f52" - integrity sha512-utWOt/GHzuUxnLKxB6dk81RoOeoNeHgbrXiuGk4yyF5qlRz+iIVWu56E2fqGHFrXz0QNUhLB/8nKqvRH66JKGQ== - -lilconfig@^3.0.0: - version "3.1.2" - resolved "https://registry.npmjs.org/lilconfig/-/lilconfig-3.1.2.tgz#e4a7c3cb549e3a606c8dcc32e5ae1005e62c05cb" - integrity sha512-eop+wDAvpItUys0FWkHIKeC9ybYrTGbU41U5K7+bttZZeohvnY7M9dZ5kB21GNWiFT2q1OoPTvncPCgSOVO5ow== - line-column@^1.0.2: version "1.0.2" resolved "https://registry.npmjs.org/line-column/-/line-column-1.0.2.tgz#d25af2936b6f4849172b312e4792d1d987bc34a2" @@ -10679,25 +10419,11 @@ load-json-file@^4.0.0: pify "^3.0.0" strip-bom "^3.0.0" -loader-runner@^2.4.0: - version "2.4.0" - resolved "https://registry.npmjs.org/loader-runner/-/loader-runner-2.4.0.tgz#ed47066bfe534d7e84c4c7b9998c2a75607d9357" - integrity sha512-Jsmr89RcXGIwivFY21FcRrisYZfvLMTWx5kOLc+JTxtpBOG6xML0vzbc6SEQG2FO9/4Fc3wW4LVcB5DmGflaRw== - loader-runner@^4.2.0: version "4.3.0" resolved 
"https://registry.npmjs.org/loader-runner/-/loader-runner-4.3.0.tgz#c1b4a163b99f614830353b16755e7149ac2314e1" integrity sha512-3R/1M+yS3j5ou80Me59j7F9IMs4PXs3VqRrm0TU3AbKPxlmpoY1TNscJV/oGJXo8qCatFGTfDbY6W6ipGOYXfg== -loader-utils@^1.2.3: - version "1.4.2" - resolved "https://registry.npmjs.org/loader-utils/-/loader-utils-1.4.2.tgz#29a957f3a63973883eb684f10ffd3d151fec01a3" - integrity sha512-I5d00Pd/jwMD2QCduo657+YM/6L3KZu++pmX9VFncxaxvHcru9jx1lBaFft+r4Mt2jK0Yhp41XlRAihzPxHNCg== - dependencies: - big.js "^5.2.2" - emojis-list "^3.0.0" - json5 "^1.0.1" - loader-utils@^2.0.0: version "2.0.4" resolved "https://registry.npmjs.org/loader-utils/-/loader-utils-2.0.4.tgz#8b5cb38b5c34a9a018ee1fc0e6a066d1dfcc528c" @@ -11008,6 +10734,11 @@ lru-cache@^6.0.0: dependencies: yallist "^4.0.0" +luxon@^3.4.2: + version "3.6.1" + resolved "https://registry.npmjs.org/luxon/-/luxon-3.6.1.tgz#d283ffc4c0076cb0db7885ec6da1c49ba97e47b0" + integrity sha512-tJLxrKJhO2ukZ5z0gyjY1zPh3Rh88Ej9P7jNrZiHMUXHae1yvI2imgOZtL1TO8TW6biMMKfTtAOoEJANgtWBMQ== + magic-string@^0.25.7: version "0.25.9" resolved "https://registry.npmjs.org/magic-string/-/magic-string-0.25.9.tgz#de7f9faf91ef8a1c91d02c2e5314c8277dbcdd1c" @@ -11015,14 +10746,6 @@ magic-string@^0.25.7: dependencies: sourcemap-codec "^1.4.8" -make-dir@^2.0.0: - version "2.1.0" - resolved "https://registry.npmjs.org/make-dir/-/make-dir-2.1.0.tgz#5f0310e18b8be898cc07009295a30ae41e91e6f5" - integrity sha512-LS9X+dc8KLxXCb8dni79fLIIUA5VyZoyjSMCwTluaXA0o27cCK0bhXkpgw+sTXVpPy/lSO57ilRixqk0vDmtRA== - dependencies: - pify "^4.0.1" - semver "^5.6.0" - make-dir@^3.0.0, make-dir@^3.0.2, make-dir@^3.1.0: version "3.1.0" resolved "https://registry.npmjs.org/make-dir/-/make-dir-3.1.0.tgz#415e967046b3a7f1d185277d84aa58203726a13f" @@ -11044,18 +10767,6 @@ makeerror@1.0.12: dependencies: tmpl "1.0.5" -map-cache@^0.2.2: - version "0.2.2" - resolved "https://registry.npmjs.org/map-cache/-/map-cache-0.2.2.tgz#c32abd0bd6525d9b051645bb4f26ac5dc98a0dbf" - integrity sha512-8y/eV9QQZCiyn1SprXSrCmqJN0yNRATe+PO8ztwqrvrbdRLA3eYJF0yaR0YayLWkMbsQSKWS9N2gPcGEc4UsZg== - -map-visit@^1.0.0: - version "1.0.0" - resolved "https://registry.npmjs.org/map-visit/-/map-visit-1.0.0.tgz#ecdca8f13144e660f1b5bd41f12f3479d98dfb8f" - integrity sha512-4y7uGv8bd2WdM9vpQsiQNo41Ln1NvhvDRuVt0k2JZQ+ezN2uaQes7lZeZ+QQUHOLQAtDaBJ+7wCbi+ab/KFs+w== - dependencies: - object-visit "^1.0.0" - markdown-it-terminal@0.2.1: version "0.2.1" resolved "https://registry.npmjs.org/markdown-it-terminal/-/markdown-it-terminal-0.2.1.tgz#670fd5ea824a7dcaa1591dcbeef28bf70aff1705" @@ -11100,6 +10811,11 @@ matcher-collection@^2.0.0, matcher-collection@^2.0.1: "@types/minimatch" "^3.0.3" minimatch "^3.0.2" +math-intrinsics@^1.1.0: + version "1.1.0" + resolved "https://registry.npmjs.org/math-intrinsics/-/math-intrinsics-1.1.0.tgz#a0dd74be81e2aa5c2f27e65ce283605ee4e2b7f9" + integrity sha512-/IXtbwEk5HTPyEwyKX6hGkYXxM9nbj64B+ilVJnC/R6B0pH5G4V3b0pVbL7DBj4tkhBAppbQUlf6F6Xl9LHu1g== + md5-hex@^3.0.1: version "3.0.1" resolved "https://registry.npmjs.org/md5-hex/-/md5-hex-3.0.1.tgz#be3741b510591434b2784d79e556eefc2c9a8e5c" @@ -11107,15 +10823,6 @@ md5-hex@^3.0.1: dependencies: blueimp-md5 "^2.10.0" -md5.js@^1.3.4: - version "1.3.5" - resolved "https://registry.npmjs.org/md5.js/-/md5.js-1.3.5.tgz#b5d07b8e3216e3e27cd728d72f70d1e6a342005f" - integrity sha512-xitP+WxNPcTTOgnTJcrhM0xvdPepipPSf3I8EIpGKeFLjt3PlJLIDG3u8EX53ZIubkb+5U2+3rELYpEhHhzdkg== - dependencies: - hash-base "^3.0.0" - inherits "^2.0.1" - safe-buffer "^5.1.2" - mdast-normalize-headings@^2.0.0: 
version "2.0.0" resolved "https://registry.npmjs.org/mdast-normalize-headings/-/mdast-normalize-headings-2.0.0.tgz#378c8161a9f57fcf52a6fd5628507af370c7f8c5" @@ -11271,14 +10978,6 @@ media-typer@0.3.0: resolved "https://registry.npmjs.org/media-typer/-/media-typer-0.3.0.tgz#8710d7af0aa626f8fffa1ce00168545263255748" integrity sha512-dq+qelQ9akHpcOl/gUVRTxVIOkAJ1wR3QAvb4RsVjS8oVoFjDGTc679wJYmUmknUF5HwMLOgb5O+a3KxfWapPQ== -memory-fs@^0.4.1: - version "0.4.1" - resolved "https://registry.npmjs.org/memory-fs/-/memory-fs-0.4.1.tgz#3a9a20b8462523e447cfbc7e8bb80ed667bfc552" - integrity sha512-cda4JKCxReDXFXRqOHPQscuIYg1PvxbE2S2GP45rnwfEK+vZaXC8C1OFvdHIbgw0DLzowXGVoxLaAmlgRy14GQ== - dependencies: - errno "^0.1.3" - readable-stream "^2.0.1" - memory-fs@^0.5.0: version "0.5.0" resolved "https://registry.npmjs.org/memory-fs/-/memory-fs-0.5.0.tgz#324c01288b88652966d161db77838720845a8e3c" @@ -11341,11 +11040,6 @@ merge2@^1.2.3, merge2@^1.3.0, merge2@^1.4.1: resolved "https://registry.npmjs.org/merge2/-/merge2-1.4.1.tgz#4368892f885e907455a6fd7dc55c0c9d404990ae" integrity sha512-8q7VEgMJW4J8tcfVPy8g09NcQwZdbwFEqhe/WZkoIzjn/3TGDwtOCYtXGxA3O8tPzpczCCDgv+P2P5y00ZJOOg== -merge@^2.1.1: - version "2.1.1" - resolved "https://registry.npmjs.org/merge/-/merge-2.1.1.tgz#59ef4bf7e0b3e879186436e8481c06a6c162ca98" - integrity sha512-jz+Cfrg9GWOZbQAnDQ4hlVnQky+341Yk5ru8bZSe6sIDTCIg8n9i/u7hSQGSVOF3C7lH6mGtqjkiT9G4wFLL0w== - methods@~1.1.2: version "1.1.2" resolved "https://registry.npmjs.org/methods/-/methods-1.1.2.tgz#5529a4d67654134edcc5266656835b0f851afcee" @@ -11418,47 +11112,20 @@ micromark@^2.11.3, micromark@~2.11.0, micromark@~2.11.3: debug "^4.0.0" parse-entities "^2.0.0" -micromatch@^3.1.10, micromatch@^3.1.4: - version "3.1.10" - resolved "https://registry.npmjs.org/micromatch/-/micromatch-3.1.10.tgz#70859bc95c9840952f359a068a3fc49f9ecfac23" - integrity sha512-MWikgl9n9M3w+bpsY3He8L+w9eF9338xRl8IAO5viDizwSzziFEyUzo2xrrloB64ADbTf8uA8vRqqttDTOmccg== - dependencies: - arr-diff "^4.0.0" - array-unique "^0.3.2" - braces "^2.3.1" - define-property "^2.0.2" - extend-shallow "^3.0.2" - extglob "^2.0.4" - fragment-cache "^0.2.1" - kind-of "^6.0.2" - nanomatch "^1.2.9" - object.pick "^1.3.0" - regex-not "^1.0.0" - snapdragon "^0.8.1" - to-regex "^3.0.2" - -micromatch@^4.0.2, micromatch@^4.0.4, micromatch@^4.0.5: - version "4.0.7" - resolved "https://registry.npmjs.org/micromatch/-/micromatch-4.0.7.tgz#33e8190d9fe474a9895525f5618eee136d46c2e5" - integrity sha512-LPP/3KorzCwBxfeUuZmaR6bG2kdeHSbe0P2tY3FLRU4vYrjYz5hI4QZwV0njUx3jeuKe67YukQ1LSPZBKDqO/Q== +micromatch@4.0.8, micromatch@^3.1.10, micromatch@^3.1.4, micromatch@^4.0.2, micromatch@^4.0.4, micromatch@^4.0.5: + version "4.0.8" + resolved "https://registry.npmjs.org/micromatch/-/micromatch-4.0.8.tgz#d66fa18f3a47076789320b9b1af32bd86d9fa202" + integrity sha512-PXwfBhYu0hBCPw8Dn0E+WDYb7af3dSLVWKi3HGv84IdF4TyFoC0ysxFd0Goxw7nSv4T/PzEJQxsYsEiFCKo2BA== dependencies: braces "^3.0.3" picomatch "^2.3.1" -miller-rabin@^4.0.0: - version "4.0.1" - resolved "https://registry.npmjs.org/miller-rabin/-/miller-rabin-4.0.1.tgz#f080351c865b0dc562a8462966daa53543c78a4d" - integrity sha512-115fLhvZVqWwHPbClyntxEVfVDfl9DLLTuJvq3g2O/Oxi8AiNouAHvDSzHS0viUJc+V5vm3eq91Xwqn9dp4jRA== - dependencies: - bn.js "^4.0.0" - brorand "^1.0.1" - mime-db@1.52.0, "mime-db@>= 1.43.0 < 2": version "1.52.0" resolved "https://registry.npmjs.org/mime-db/-/mime-db-1.52.0.tgz#bbabcdc02859f4987301c856e3387ce5ec43bf70" integrity 
sha512-sPU4uV7dYlvtWJxwwxHD0PuihVNiE7TyAbQ5SWxDCB9mUYvOgroQOwYQQOKPJ8CIbE+1ETVlOoK1UC2nU3gYvg== -mime-types@^2.1.12, mime-types@^2.1.18, mime-types@^2.1.26, mime-types@^2.1.27, mime-types@~2.1.24, mime-types@~2.1.34: +mime-types@^2.1.18, mime-types@^2.1.26, mime-types@^2.1.27, mime-types@^2.1.35, mime-types@~2.1.24, mime-types@~2.1.34: version "2.1.35" resolved "https://registry.npmjs.org/mime-types/-/mime-types-2.1.35.tgz#381a871b62a734450660ae3deee44813f70d959a" integrity sha512-ZDY+bPm5zTTF+YpCrAU9nK0UgICYPT0QtT1NZWFv4s++TNkcgVaT0g6+4R2uI4MjQjzysHB1zxuWL50hzaeXiw== @@ -11505,16 +11172,16 @@ minimatch@^3.0.0, minimatch@^3.0.2, minimatch@^3.0.4, minimatch@^3.1.1: dependencies: brace-expansion "^1.1.7" -minimist@>=1.2.5, minimist@^1.1.1, minimist@^1.2.0, minimist@^1.2.5, minimist@^1.2.6, minimist@^1.2.8: - version "1.2.8" - resolved "https://registry.npmjs.org/minimist/-/minimist-1.2.8.tgz#c1a464e7693302e082a075cee0c057741ac4772c" - integrity sha512-2yyAR8qBkN3YuheJanUpWC5U3bb5osDywNB8RzDVlDwDHbocAJveqqj1u8+SVD7jkWT4yvsHCpWqqWqAxb0zCA== - minimist@^0.2.1: version "0.2.4" resolved "https://registry.npmjs.org/minimist/-/minimist-0.2.4.tgz#0085d5501e29033748a2f2a4da0180142697a475" integrity sha512-Pkrrm8NjyQ8yVt8Am9M+yUt74zE3iokhzbG1bFVNjLB92vwM71hf40RkEsryg98BujhVOncKm/C1xROxZ030LQ== +minimist@^1.1.1, minimist@^1.2.0, minimist@^1.2.5, minimist@^1.2.6, minimist@^1.2.8: + version "1.2.8" + resolved "https://registry.npmjs.org/minimist/-/minimist-1.2.8.tgz#c1a464e7693302e082a075cee0c057741ac4772c" + integrity sha512-2yyAR8qBkN3YuheJanUpWC5U3bb5osDywNB8RzDVlDwDHbocAJveqqj1u8+SVD7jkWT4yvsHCpWqqWqAxb0zCA== + minipass@^2.2.0: version "2.9.0" resolved "https://registry.npmjs.org/minipass/-/minipass-2.9.0.tgz#e713762e7d3e32fed803115cf93e04bca9fcc9a6" @@ -11523,38 +11190,14 @@ minipass@^2.2.0: safe-buffer "^5.1.2" yallist "^3.0.0" -mississippi@^3.0.0: - version "3.0.0" - resolved "https://registry.npmjs.org/mississippi/-/mississippi-3.0.0.tgz#ea0a3291f97e0b5e8776b363d5f0a12d94c67022" - integrity sha512-x471SsVjUtBRtcvd4BzKE9kFC+/2TeWgKCgw0bZcw1b9l2X3QX5vCWgF+KaZaYm87Ss//rHnWryupDrgLvmSkA== - dependencies: - concat-stream "^1.5.0" - duplexify "^3.4.2" - end-of-stream "^1.1.0" - flush-write-stream "^1.0.0" - from2 "^2.1.0" - parallel-transform "^1.1.0" - pump "^3.0.0" - pumpify "^1.3.3" - stream-each "^1.1.0" - through2 "^2.0.0" - -mixin-deep@^1.2.0: - version "1.3.2" - resolved "https://registry.npmjs.org/mixin-deep/-/mixin-deep-1.3.2.tgz#1120b43dc359a785dce65b55b82e257ccf479566" - integrity sha512-WRoDn//mXBiJ1H40rqa3vH0toePwSsGb45iInWlTySa+Uu4k3tYUSxa2v1KqAiLtvlrSzaExqS1gtk96A9zvEA== - dependencies: - for-in "^1.0.2" - is-extendable "^1.0.1" - -mkdirp@^0.5.0, mkdirp@^0.5.1, mkdirp@^0.5.3, mkdirp@^0.5.5, mkdirp@^0.5.6: +mkdirp@^0.5.0, mkdirp@^0.5.1, mkdirp@^0.5.5, mkdirp@^0.5.6: version "0.5.6" resolved "https://registry.npmjs.org/mkdirp/-/mkdirp-0.5.6.tgz#7def03d2432dcae4ba1d611445c48396062255f6" integrity sha512-FP+p8RB8OWpF3YZBCrP5gtADmtXApB5AMLn+vdyA+PyxCjrCs00mjyUozssO33cwDeT3wNGdLxJ5M//YqtHAJw== dependencies: minimist "^1.2.6" -mkdirp@^1.0.3, mkdirp@^1.0.4: +mkdirp@^1.0.4: version "1.0.4" resolved "https://registry.npmjs.org/mkdirp/-/mkdirp-1.0.4.tgz#3eb5ed62622756d79a5f0e2a221dfebad75c2f7e" integrity sha512-vVqVZQyf3WLx2Shd0qJ9xuvqgAyKPLAiqITEtqW0oIUjzo3PePDd6fW9iFz30ef7Ysp/oiWqbhszeGWW2T6Gzw== @@ -11604,18 +11247,6 @@ mout@^1.0.0: resolved "https://registry.npmjs.org/mout/-/mout-1.2.4.tgz#9ffd261c4d6509e7ebcbf6b641a89b36ecdf8155" integrity 
sha512-mZb9uOruMWgn/fw28DG4/yE3Kehfk1zKCLhuDU2O3vlKdnBBr4XaOCqVTflJ5aODavGUPqFHZgrFX3NJVuxGhQ== -move-concurrently@^1.0.1: - version "1.0.1" - resolved "https://registry.npmjs.org/move-concurrently/-/move-concurrently-1.0.1.tgz#be2c005fda32e0b29af1f05d7c4b33214c701f92" - integrity sha512-hdrFxZOycD/g6A6SoI2bB5NA/5NEqD0569+S47WZhPvm46sD50ZHdYaFmnua5lndde9rCHGjmfK7Z8BuCt/PcQ== - dependencies: - aproba "^1.1.1" - copy-concurrently "^1.0.0" - fs-write-stream-atomic "^1.0.8" - mkdirp "^0.5.1" - rimraf "^2.5.4" - run-queue "^1.0.3" - ms@2.0.0: version "2.0.0" resolved "https://registry.npmjs.org/ms/-/ms-2.0.0.tgz#5608aeadfc00be6c2901df5f9861788de0d597c8" @@ -11646,46 +11277,15 @@ mute-stream@0.0.8: resolved "https://registry.npmjs.org/mute-stream/-/mute-stream-0.0.8.tgz#1630c42b2251ff81e2a283de96a5497ea92e5e0d" integrity sha512-nnbWWOkoWyUsTjKrhgD0dcz22mdkSnpYqbEjIm2nhwhuxlSkpywJmBo8h0ZqJdkp73mb90SssHkN4rsRaBAfAA== -mz@^2.7.0: - version "2.7.0" - resolved "https://registry.npmjs.org/mz/-/mz-2.7.0.tgz#95008057a56cafadc2bc63dde7f9ff6955948e32" - integrity sha512-z81GNO7nnYMEhrGh9LeymoE4+Yr0Wn5McHIZMK5cfQCl+NDX08sCZgUc9/6MHni9IWuFLm1Z3HTCXu2z9fN62Q== - dependencies: - any-promise "^1.0.0" - object-assign "^4.0.1" - thenify-all "^1.0.0" - -nan@^2.12.1: - version "2.20.0" - resolved "https://registry.npmjs.org/nan/-/nan-2.20.0.tgz#08c5ea813dd54ed16e5bd6505bf42af4f7838ca3" - integrity sha512-bk3gXBZDGILuuo/6sKtr0DQmSThYHLtNCdSdXk9YkxD/jK6X2vmCyyXBBxyqZ4XcnzTyYEAThfX3DCEnLf6igw== - nanoassert@^1.1.0: version "1.1.0" resolved "https://registry.npmjs.org/nanoassert/-/nanoassert-1.1.0.tgz#4f3152e09540fde28c76f44b19bbcd1d5a42478d" integrity sha512-C40jQ3NzfkP53NsO8kEOFd79p4b9kDXQMwgiY1z8ZwrDZgUyom0AHwGegF4Dm99L+YoYhuaB0ceerUcXmqr1rQ== -nanoid@^3.3.7: - version "3.3.7" - resolved "https://registry.npmjs.org/nanoid/-/nanoid-3.3.7.tgz#d0c301a691bc8d54efa0a2226ccf3fe2fd656bd8" - integrity sha512-eSRppjcPIatRIMC1U6UngP8XFcz8MQWGQdt1MTBQ7NaAmvXDfvNxbvWV3x2y6CdEUciCSsDHDQZbhYaB8QEo2g== - -nanomatch@^1.2.9: - version "1.2.13" - resolved "https://registry.npmjs.org/nanomatch/-/nanomatch-1.2.13.tgz#b87a8aa4fc0de8fe6be88895b38983ff265bd119" - integrity sha512-fpoe2T0RbHwBTBUOftAfBPaDEi06ufaUai0mE6Yn1kacc3SnTErfb/h+X94VXzI64rKFHYImXSvdwGGCmwOqCA== - dependencies: - arr-diff "^4.0.0" - array-unique "^0.3.2" - define-property "^2.0.2" - extend-shallow "^3.0.2" - fragment-cache "^0.2.1" - is-windows "^1.0.2" - kind-of "^6.0.2" - object.pick "^1.3.0" - regex-not "^1.0.0" - snapdragon "^0.8.1" - to-regex "^3.0.1" +nanoid@3.3.8, nanoid@^3.3.7: + version "3.3.8" + resolved "https://registry.npmjs.org/nanoid/-/nanoid-3.3.8.tgz#b1be3030bee36aaff18bacb375e5cce521684baf" + integrity sha512-WNLf5Sd8oZxOm+TzppcYk8gVOgP+l58xNy58D0nbUnOxOWRWvlcCV4kUF7ltmI6PsrLl/BgKEyS4mqsGChFN0w== natural-compare@^1.4.0: version "1.4.0" @@ -11697,7 +11297,7 @@ negotiator@0.6.3: resolved "https://registry.npmjs.org/negotiator/-/negotiator-0.6.3.tgz#58e323a72fedc0d6f9cd4d31fe49f51479590ccd" integrity sha512-+EUsqGPLsM+j/zdChZjsnX51g4XrHFOIXwfnCVPGlQk/k5giakcKsuxCObBRu6DSm9opw/O6slWbJdghQM4bBg== -neo-async@^2.5.0, neo-async@^2.6.1, neo-async@^2.6.2: +neo-async@^2.6.2: version "2.6.2" resolved "https://registry.npmjs.org/neo-async/-/neo-async-2.6.2.tgz#b4aafb93e3aeb2d8174ca53cf163ab7d7308305f" integrity sha512-Yd3UES5mWCSqR+qNT93S3UoYUkqAZ9lLg8a7g9rimsWmYGK8cVToA4/sF3RrshdyV3sAGMXVUmpMYOw+dLpOuw== @@ -11738,6 +11338,11 @@ no-case@^3.0.4: lower-case "^2.0.2" tslib "^2.0.3" +node-addon-api@^7.0.0: + version "7.1.1" + resolved 
"https://registry.npmjs.org/node-addon-api/-/node-addon-api-7.1.1.tgz#1aba6693b0f255258a049d621329329322aad558" + integrity sha512-5m3bsyrjFWE1xf7nz7YXdN4udnVtXK6/Yfgn5qnahL6bCkf2yKt4k3nuTKAtT4r3IG8JNR2ncsIMdZuAzJjHQQ== + node-dir@^0.1.17: version "0.1.17" resolved "https://registry.npmjs.org/node-dir/-/node-dir-0.1.17.tgz#5f5665d93351335caabef8f1c554516cf5f1e4e5" @@ -11757,35 +11362,6 @@ node-int64@^0.4.0: resolved "https://registry.npmjs.org/node-int64/-/node-int64-0.4.0.tgz#87a9065cdb355d3182d8f94ce11188b825c68a3b" integrity sha512-O5lz91xSOeoXP6DulyHfllpq+Eg00MWitZIbtPfoSEvqIHdl5gfcY6hYzDWnj0qD5tz52PI08u9qUvSVeUBeHw== -node-libs-browser@^2.2.1: - version "2.2.1" - resolved "https://registry.npmjs.org/node-libs-browser/-/node-libs-browser-2.2.1.tgz#b64f513d18338625f90346d27b0d235e631f6425" - integrity sha512-h/zcD8H9kaDZ9ALUWwlBUDo6TKF8a7qBSCSEGfjTVIYeqsioSKaAX+BN7NgiMGp6iSIXZ3PxgCu8KS3b71YK5Q== - dependencies: - assert "^1.1.1" - browserify-zlib "^0.2.0" - buffer "^4.3.0" - console-browserify "^1.1.0" - constants-browserify "^1.0.0" - crypto-browserify "^3.11.0" - domain-browser "^1.1.1" - events "^3.0.0" - https-browserify "^1.0.0" - os-browserify "^0.3.0" - path-browserify "0.0.1" - process "^0.11.10" - punycode "^1.2.4" - querystring-es3 "^0.2.0" - readable-stream "^2.3.3" - stream-browserify "^2.0.1" - stream-http "^2.7.2" - string_decoder "^1.0.0" - timers-browserify "^2.0.4" - tty-browserify "0.0.0" - url "^0.11.0" - util "^0.11.0" - vm-browserify "^1.0.1" - node-modules-path@^1.0.0, node-modules-path@^1.0.1: version "1.0.2" resolved "https://registry.npmjs.org/node-modules-path/-/node-modules-path-1.0.2.tgz#e3acede9b7baf4bc336e3496b58e5b40d517056e" @@ -11845,7 +11421,7 @@ normalize-path@^2.1.1: dependencies: remove-trailing-separator "^1.0.1" -normalize-path@^3.0.0, normalize-path@~3.0.0: +normalize-path@^3.0.0: version "3.0.0" resolved "https://registry.npmjs.org/normalize-path/-/normalize-path-3.0.0.tgz#0dcd69ff23a1c9b11fd0978316644a0388216a65" integrity sha512-6eZs5Ls3WtCisHWp9S2GUy8dqkpGi4BVSz3GaqiE6ezub0512ESztXUwUB6C6IKbQkY2Pnb/mD4WYojCRwcwLA== @@ -11940,30 +11516,16 @@ nwsapi@^2.2.0: resolved "https://registry.npmjs.org/nwsapi/-/nwsapi-2.2.10.tgz#0b77a68e21a0b483db70b11fad055906e867cda8" integrity sha512-QK0sRs7MKv0tKe1+5uZIQk/C8XGza4DAnztJG8iD+TpJIORARrCxczA738awHrZoHeTjSSoHqao2teO0dC/gFQ== -object-assign@4.1.1, object-assign@^4, object-assign@^4.0.1, object-assign@^4.1.0, object-assign@^4.1.1: +object-assign@4.1.1, object-assign@^4, object-assign@^4.1.0: version "4.1.1" resolved "https://registry.npmjs.org/object-assign/-/object-assign-4.1.1.tgz#2109adc7965887cfc05cbbd442cac8bfbb360863" integrity sha512-rJgTQnkUnH1sFw8yT6VSU3zD3sWmu6sZhIseY8VX+GRu3P6F7Fu+JNDoXfklElbLJSnc3FUQHVe4cU5hj+BcUg== -object-copy@^0.1.0: - version "0.1.0" - resolved "https://registry.npmjs.org/object-copy/-/object-copy-0.1.0.tgz#7e7d858b781bd7c991a41ba975ed3812754e998c" - integrity sha512-79LYn6VAb63zgtmAteVOWo9Vdj71ZVBy3Pbse+VqxDpEP83XuujMrGqHIwAXJ5I/aM0zU7dIyIAhifVTPrNItQ== - dependencies: - copy-descriptor "^0.1.0" - define-property "^0.2.5" - kind-of "^3.0.3" - object-hash@^1.3.1: version "1.3.1" resolved "https://registry.npmjs.org/object-hash/-/object-hash-1.3.1.tgz#fde452098a951cb145f039bb7d455449ddc126df" integrity sha512-OSuu/pU4ENM9kmREg0BdNrUDIl1heYa4mBZacJc+vVWz4GtAwu7jO8s4AIt2aGRUTqxykpWzI3Oqnsm13tTMDA== -object-hash@^3.0.0: - version "3.0.0" - resolved "https://registry.npmjs.org/object-hash/-/object-hash-3.0.0.tgz#73f97f753e7baffc0e2cc9d6e079079744ac82e9" - integrity 
sha512-RSn9F68PjH9HqtltsSnqYC1XXoWe9Bju5+213R98cNGttag9q9yAOTzdbsqvIa7aNm5WffBZFpWYr2aWrklWAw== - object-inspect@^1.13.1: version "1.13.2" resolved "https://registry.npmjs.org/object-inspect/-/object-inspect-1.13.2.tgz#dea0088467fb991e67af4058147a24824a3043ff" @@ -11982,13 +11544,6 @@ object-keys@^1.1.1: resolved "https://registry.npmjs.org/object-keys/-/object-keys-1.1.1.tgz#1c47f272df277f3b1daf061677d9c82e2322c60e" integrity sha512-NuAESUOUMrlIXOfHKzD6bpPu3tYt3xvjNdRIQ+FeT0lNb4K8WR70CaDxhuNguS2XG+GjkyMwOzsN5ZktImfhLA== -object-visit@^1.0.0: - version "1.0.1" - resolved "https://registry.npmjs.org/object-visit/-/object-visit-1.0.1.tgz#f79c4493af0c5377b59fe39d395e41042dd045bb" - integrity sha512-GBaMwwAVK9qbQN3Scdo0OyvgPW7l3lnaVMj84uTOZlswkX0KpF6fyDBJhtTthf7pymztoN36/KEr1DyhF96zEA== - dependencies: - isobject "^3.0.0" - object.assign@^4.1.4, object.assign@^4.1.5: version "4.1.5" resolved "https://registry.npmjs.org/object.assign/-/object.assign-4.1.5.tgz#3a833f9ab7fdb80fc9e8d2300c803d216d8fdbb0" @@ -11999,13 +11554,6 @@ object.assign@^4.1.4, object.assign@^4.1.5: has-symbols "^1.0.3" object-keys "^1.1.1" -object.pick@^1.3.0: - version "1.3.0" - resolved "https://registry.npmjs.org/object.pick/-/object.pick-1.3.0.tgz#87a10ac4c1694bd2e1cbf53591a66141fb5dd747" - integrity sha512-tqa/UMy/CCoYmj+H5qc07qvSL9dqcs/WZENZ1JbtWBlATP+iVOe778gE6MSijnyCnORzDuX6hU+LA4SZ09YjFQ== - dependencies: - isobject "^3.0.1" - obliterator@^2.0.0: version "2.0.4" resolved "https://registry.npmjs.org/obliterator/-/obliterator-2.0.4.tgz#fa650e019b2d075d745e44f1effeb13a2adbe816" @@ -12095,11 +11643,6 @@ ora@^5.4.0: strip-ansi "^6.0.0" wcwidth "^1.0.1" -os-browserify@^0.3.0: - version "0.3.0" - resolved "https://registry.npmjs.org/os-browserify/-/os-browserify-0.3.0.tgz#854373c7f5c2315914fc9bfc6bd8238fdda1ec27" - integrity sha512-gjcpUc3clBf9+210TRaDWbf+rZZZEshZ+DlXMRCeAjp0xhTrnQsKHypIy1J3d5hKdUzj69t708EHtU8P6bUn0A== - os-homedir@^1.0.0: version "1.0.2" resolved "https://registry.npmjs.org/os-homedir/-/os-homedir-1.0.2.tgz#ffbc4988336e0e833de0c168c7ef152121aa7fb3" @@ -12213,20 +11756,6 @@ p-try@^2.0.0: resolved "https://registry.npmjs.org/p-try/-/p-try-2.2.0.tgz#cb2868540e313d61de58fafbe35ce9004d5540e6" integrity sha512-R4nPAVTAU0B9D35/Gk3uJf/7XYbQcyohSKdvAxIRSNghFl4e71hVoGnBNQz9cWaXxO2I10KTC+3jMdvvoKw6dQ== -pako@~1.0.5: - version "1.0.11" - resolved "https://registry.npmjs.org/pako/-/pako-1.0.11.tgz#6c9599d340d54dfd3946380252a35705a6b992bf" - integrity sha512-4hLB8Py4zZce5s4yd9XzopqwVv/yGNhV1Bl8NTmCq1763HeK2+EwVTv+leGeL13Dnh2wfbqowVPXCIO0z4taYw== - -parallel-transform@^1.1.0: - version "1.2.0" - resolved "https://registry.npmjs.org/parallel-transform/-/parallel-transform-1.2.0.tgz#9049ca37d6cb2182c3b1d2c720be94d14a5814fc" - integrity sha512-P2vSmIu38uIlvdcU7fDkyrxj33gTUy/ABO5ZUbGowxNCopBq/OoD42bP4UmMrJoPyk4Uqf0mu3mtWBhHCZD8yg== - dependencies: - cyclist "^1.0.1" - inherits "^2.0.3" - readable-stream "^2.1.5" - parent-module@^1.0.0: version "1.0.1" resolved "https://registry.npmjs.org/parent-module/-/parent-module-1.0.1.tgz#691d2709e78c79fae3a156622452d00762caaaa2" @@ -12234,22 +11763,10 @@ parent-module@^1.0.0: dependencies: callsites "^3.0.0" -parse-asn1@^5.0.0, parse-asn1@^5.1.7: - version "5.1.7" - resolved "https://registry.npmjs.org/parse-asn1/-/parse-asn1-5.1.7.tgz#73cdaaa822125f9647165625eb45f8a051d2df06" - integrity sha512-CTM5kuWR3sx9IFamcl5ErfPl6ea/N8IYwiJ+vpeB2g+1iknv7zBl5uPwbMbRVznRVbrNY6lGuDoE5b30grmbqg== - dependencies: - asn1.js "^4.10.1" - browserify-aes "^1.2.0" - evp_bytestokey "^1.0.3" - hash-base 
"~3.0" - pbkdf2 "^3.1.2" - safe-buffer "^5.2.1" - -parse-duration@^1.0.0: - version "1.1.0" - resolved "https://registry.npmjs.org/parse-duration/-/parse-duration-1.1.0.tgz#5192084c5d8f2a3fd676d04a451dbd2e05a1819c" - integrity sha512-z6t9dvSJYaPoQq7quMzdEagSFtpGu+utzHqqxmpVWNNZRIXnvqyCvn9XsTdh7c/w0Bqmdz3RB3YnRaKtpRtEXQ== +parse-duration@^2.1.3: + version "2.1.4" + resolved "https://registry.npmjs.org/parse-duration/-/parse-duration-2.1.4.tgz#02918736726f657eaf70b52bb8da7910316df51d" + integrity sha512-b98m6MsCh+akxfyoz9w9dt0AlH2dfYLOBss5SdDsr9pkhKNvkWBXU/r8A4ahmIGByBOLV2+4YwfCuFxbDDaGyg== parse-entities@^2.0.0: version "2.0.0" @@ -12306,16 +11823,6 @@ parseurl@~1.3.3: resolved "https://registry.npmjs.org/parseurl/-/parseurl-1.3.3.tgz#9da19e7bee8d12dff0513ed5b76957793bc2e8d4" integrity sha512-CiyeOxFT/JZyN5m0z9PfXw4SCBJ6Sygz1Dpl0wqjlhDEGGBP1GnsUVEL0p63hoG1fcj3fHynXi9NYO4nWOL+qQ== -pascalcase@^0.1.1: - version "0.1.1" - resolved "https://registry.npmjs.org/pascalcase/-/pascalcase-0.1.1.tgz#b363e55e8006ca6fe21784d2db22bd15d7917f14" - integrity sha512-XHXfu/yOQRy9vYOtUDVMN60OEJjW013GoObG1o+xwQTpB9eYJX/BjXMsdW13ZDPruFhYYn0AG22w0xgQMwl3Nw== - -path-browserify@0.0.1: - version "0.0.1" - resolved "https://registry.npmjs.org/path-browserify/-/path-browserify-0.0.1.tgz#e6c4ddd7ed3aa27c68a20cc4e50e1a4ee83bbc4a" - integrity sha512-BapA40NHICOS+USX9SN4tyhq+A2RrN/Ws5F0Z5aMHDp98Fl86lX8Oti8B7uN93L4Ifv4fHOEA+pQw87gmMO/lQ== - path-dirname@^1.0.0: version "1.0.2" resolved "https://registry.npmjs.org/path-dirname/-/path-dirname-1.0.2.tgz#cc33d24d525e099a5388c0336c6e32b9160609e0" @@ -12368,15 +11875,15 @@ path-root@^0.1.1: dependencies: path-root-regex "^0.1.0" -path-to-regexp@0.1.7: - version "0.1.7" - resolved "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-0.1.7.tgz#df604178005f522f15eb4490e7247a1bfaa67f8c" - integrity sha512-5DFkuoqlv1uYQKxy8omFBeJPQcdoE07Kv2sferDCrAq1ohOU+MSDswDIbnx3YAM60qIOnYa53wBhXW0EbMonrQ== +path-to-regexp@0.1.12, path-to-regexp@0.1.7: + version "0.1.12" + resolved "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-0.1.12.tgz#d5e1a12e478a976d432ef3c58d534b9923164bb7" + integrity sha512-RA1GjUVMnvYFxuqovrEqZoxxW5NUZqbwKtYz/Tt7nXerk0LbLblQmrsgdeOxV5SFHf0UDggjS/bSeOZwt1pmEQ== -path-to-regexp@^1.7.0: - version "1.8.0" - resolved "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-1.8.0.tgz#887b3ba9d84393e87a0a0b9f4cb756198b53548a" - integrity sha512-n43JRhlUKUAlibEJhPeir1ncUID16QnEjNpwzNdO3Lm4ywrBpBZ5oLD0I6br9evr1Y9JTqwRtAh7JLoOzAQdVA== +path-to-regexp@1.9.0, path-to-regexp@^1.7.0: + version "1.9.0" + resolved "https://registry.npmjs.org/path-to-regexp/-/path-to-regexp-1.9.0.tgz#5dc0753acbf8521ca2e0f137b4578b917b10cf24" + integrity sha512-xIp7/apCFJuUHdDLWe8O1HIkb0kQrOMb/0u6FXQjemHn/ii5LrIzU6bdECnsiTF/GjZkMEKg1xdiZwNqDYlZ6g== dependencies: isarray "0.0.1" @@ -12392,23 +11899,17 @@ path-type@^4.0.0: resolved "https://registry.npmjs.org/path-type/-/path-type-4.0.0.tgz#84ed01c0a7ba380afe09d90a8c180dcd9d03043b" integrity sha512-gDKb8aZMDeD/tZWs9P6+q0J9Mwkdl6xMV8TjnGP3qJVJ06bdMgkbBlLU8IdfOsIsFz2BW1rNVT3XuNEl8zPAvw== -pbkdf2@^3.0.3, pbkdf2@^3.1.2: - version "3.1.2" - resolved "https://registry.npmjs.org/pbkdf2/-/pbkdf2-3.1.2.tgz#dd822aa0887580e52f1a039dc3eda108efae3075" - integrity sha512-iuh7L6jA7JEGu2WxDwtQP1ddOpaJNC4KlDEFfdQajSGgGPNi4OyDc2R7QnbY2bR9QjBVGwgvTdNJZoE7RaxUMA== - dependencies: - create-hash "^1.1.2" - create-hmac "^1.1.4" - ripemd160 "^2.0.1" - safe-buffer "^5.0.1" - sha.js "^2.4.8" - picocolors@^1.0.0, picocolors@^1.0.1: version "1.0.1" resolved 
"https://registry.npmjs.org/picocolors/-/picocolors-1.0.1.tgz#a8ad579b571952f0e5d25892de5445bcfe25aaa1" integrity sha512-anP1Z8qwhkbmu7MFP5iTt+wQKXgwzf7zTyGlcdzabySa9vd0Xt392U0rVmz9poOaBj0uHJKyyo9/upk0HrEQew== -picomatch@^2.0.4, picomatch@^2.2.1, picomatch@^2.3.1: +picocolors@^1.1.1: + version "1.1.1" + resolved "https://registry.npmjs.org/picocolors/-/picocolors-1.1.1.tgz#3d321af3eab939b083c8f929a1d12cda81c26b6b" + integrity sha512-xceH2snhtb5M9liqDsmEw56le376mTZkEX/jEb/RxNFyegNul7eNslCXP9FDj/Lcu0X8KEyMceP2ntpaHrDEVA== + +picomatch@^2.3.1: version "2.3.1" resolved "https://registry.npmjs.org/picomatch/-/picomatch-2.3.1.tgz#3ba3833733646d9d3e4995946c1365a67fb07a42" integrity sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA== @@ -12418,21 +11919,11 @@ pidtree@^0.3.0: resolved "https://registry.npmjs.org/pidtree/-/pidtree-0.3.1.tgz#ef09ac2cc0533df1f3250ccf2c4d366b0d12114a" integrity sha512-qQbW94hLHEqCg7nhby4yRC7G2+jYHY4Rguc2bjw7Uug4GIJuu1tvf2uHaZv5Q8zdt+WKJ6qK1FOI6amaWUo5FA== -pify@^2.3.0: - version "2.3.0" - resolved "https://registry.npmjs.org/pify/-/pify-2.3.0.tgz#ed141a6ac043a849ea588498e7dca8b15330e90c" - integrity sha512-udgsAY+fTnvv7kI7aaxbqwWNb0AHiB0qBO89PZKPkoTmGOgdbrHDKD+0B2X4uTfJ/FT1R09r9gTsjUjNJotuog== - pify@^3.0.0: version "3.0.0" resolved "https://registry.npmjs.org/pify/-/pify-3.0.0.tgz#e5a4acd2c101fdf3d9a4d07f0dbc4db49dd28176" integrity sha512-C3FsVNH1udSEX48gGX1xfvwTWfsYWj5U+8/uK15BGzIGrKoUpghX8hWZwa/OFnakBiiVNmBvemTJR5mcy7iPcg== -pify@^4.0.1: - version "4.0.1" - resolved "https://registry.npmjs.org/pify/-/pify-4.0.1.tgz#4b2cd25c50d598735c50292224fd8c6df41e3231" - integrity sha512-uB80kBFb/tfd68bVleG9T5GGsGPjJrLAUpR5PZIrhBnIaRTQRjqdJSsIKkOP6OAIFbj7GOrcudc5pNjZ+geV2g== - pinkie-promise@^2.0.0: version "2.0.1" resolved "https://registry.npmjs.org/pinkie-promise/-/pinkie-promise-2.0.1.tgz#2135d6dfa7a358c069ac9b178776288228450ffa" @@ -12445,18 +11936,6 @@ pinkie@^2.0.0: resolved "https://registry.npmjs.org/pinkie/-/pinkie-2.0.4.tgz#72556b80cfa0d48a974e80e77248e80ed4f7f870" integrity sha512-MnUuEycAemtSaeFSjXKW/aroV7akBbY+Sv+RkyqFjgAe73F+MR0TBWKBRDkmfWq/HiFmdavfZ1G7h4SPZXaCSg== -pirates@^4.0.1: - version "4.0.6" - resolved "https://registry.npmjs.org/pirates/-/pirates-4.0.6.tgz#3018ae32ecfcff6c29ba2267cbf21166ac1f36b9" - integrity sha512-saLsH7WeYYPiD25LDuLRRY/i+6HaPYr6G1OUlN39otzkSTxKnubR9RTxS3/Kk50s1g2JTgFwWQDQyplC5/SHZg== - -pkg-dir@^3.0.0: - version "3.0.0" - resolved "https://registry.npmjs.org/pkg-dir/-/pkg-dir-3.0.0.tgz#2749020f239ed990881b1f71210d51eb6523bea3" - integrity sha512-/E57AYkoeQ25qkxMj5PBOVgF8Kiu/h7cYS30Z5+R7WaiCCBfLq58ZI/dSeaEKb9WVJV5n/03QwrN3IeWIFllvw== - dependencies: - find-up "^3.0.0" - pkg-dir@^4.1.0: version "4.2.0" resolved "https://registry.npmjs.org/pkg-dir/-/pkg-dir-4.2.0.tgz#f099133df7ede422e81d1d8448270eeb3e4261f3" @@ -12471,6 +11950,11 @@ pkg-dir@^5.0.0: dependencies: find-up "^5.0.0" +pkg-entry-points@^1.1.0: + version "1.1.1" + resolved "https://registry.npmjs.org/pkg-entry-points/-/pkg-entry-points-1.1.1.tgz#d5cd87f934e873bf73143ed1d0baf637e5f8fda4" + integrity sha512-BhZa7iaPmB4b3vKIACoppyUoYn8/sFs17VJJtzrzPZvEnN2nqrgg911tdL65lA2m1ml6UI3iPeYbZQ4VXpn1mA== + pkg-up@^2.0.0: version "2.0.0" resolved "https://registry.npmjs.org/pkg-up/-/pkg-up-2.0.0.tgz#c819ac728059a461cab1c3889a2be3c49a004d7f" @@ -12501,40 +11985,11 @@ portfinder@^1.0.28: debug "^3.2.7" mkdirp "^0.5.6" -posix-character-classes@^0.1.0: - version "0.1.1" - resolved 
"https://registry.npmjs.org/posix-character-classes/-/posix-character-classes-0.1.1.tgz#01eac0fe3b5af71a2a6c02feabb8c1fef7e00eab" - integrity sha512-xTgYBc3fuo7Yt7JbiuFxSYGToMoz8fLoE6TC9Wx1P/u+LfeThMOAqmuyECnlBaaJb+u1m9hHiXUEtwW4OzfUJg== - possible-typed-array-names@^1.0.0: version "1.0.0" resolved "https://registry.npmjs.org/possible-typed-array-names/-/possible-typed-array-names-1.0.0.tgz#89bb63c6fada2c3e90adc4a647beeeb39cc7bf8f" integrity sha512-d7Uw+eZoloe0EHDIYoe+bQ5WXnGMOpmiZFTuMWCwpjzzkL2nTjcKiAk4hh8TjnGye2TwWOk3UXucZ+3rbmBa8Q== -postcss-import@^15.1.0: - version "15.1.0" - resolved "https://registry.npmjs.org/postcss-import/-/postcss-import-15.1.0.tgz#41c64ed8cc0e23735a9698b3249ffdbf704adc70" - integrity sha512-hpr+J05B2FVYUAXHeK1YyI267J/dDDhMU6B6civm8hSY1jYJnBXxzKDKDswzJmtLHryrjhnDjqqp/49t8FALew== - dependencies: - postcss-value-parser "^4.0.0" - read-cache "^1.0.0" - resolve "^1.1.7" - -postcss-js@^4.0.1: - version "4.0.1" - resolved "https://registry.npmjs.org/postcss-js/-/postcss-js-4.0.1.tgz#61598186f3703bab052f1c4f7d805f3991bee9d2" - integrity sha512-dDLF8pEO191hJMtlHFPRa8xsizHaM82MLfNkUHdUtVEV3tgTp5oj+8qbEqYM57SLfc74KSbw//4SeJma2LRVIw== - dependencies: - camelcase-css "^2.0.1" - -postcss-load-config@^4.0.1: - version "4.0.2" - resolved "https://registry.npmjs.org/postcss-load-config/-/postcss-load-config-4.0.2.tgz#7159dcf626118d33e299f485d6afe4aff7c4a3e3" - integrity sha512-bSVhyJGL00wMVoPUzAVAnbEoWyqRxkjv64tUl427SKnPrENtq6hJwUojroMz2VB+Q1edmi4IfrAPpami5VVgMQ== - dependencies: - lilconfig "^3.0.0" - yaml "^2.3.4" - postcss-modules-extract-imports@^3.0.0: version "3.1.0" resolved "https://registry.npmjs.org/postcss-modules-extract-imports/-/postcss-modules-extract-imports-3.1.0.tgz#b4497cb85a9c0c4b5aabeb759bb25e8d89f15002" @@ -12563,14 +12018,7 @@ postcss-modules-values@^4.0.0: dependencies: icss-utils "^5.0.0" -postcss-nested@^6.0.1: - version "6.0.1" - resolved "https://registry.npmjs.org/postcss-nested/-/postcss-nested-6.0.1.tgz#f83dc9846ca16d2f4fa864f16e9d9f7d0961662c" - integrity sha512-mEp4xPMi5bSWiMbsgoPfcP74lsWLHkQbZc3sY+jWYd65CUwXrUaTp0fmNpa01ZcETKlIgUdFN/MpS2xZtqL9dQ== - dependencies: - postcss-selector-parser "^6.0.11" - -postcss-selector-parser@^6.0.11, postcss-selector-parser@^6.0.2, postcss-selector-parser@^6.0.4: +postcss-selector-parser@^6.0.2, postcss-selector-parser@^6.0.4: version "6.1.0" resolved "https://registry.npmjs.org/postcss-selector-parser/-/postcss-selector-parser-6.1.0.tgz#49694cb4e7c649299fea510a29fa6577104bcf53" integrity sha512-UMz42UD0UY0EApS0ZL9o1XnLhSTtvvvLe5Dc2H2O56fvRZi+KulDyf5ctDhhtYJBGKStV2FL1fy6253cmLgqVQ== @@ -12578,12 +12026,12 @@ postcss-selector-parser@^6.0.11, postcss-selector-parser@^6.0.2, postcss-selecto cssesc "^3.0.0" util-deprecate "^1.0.2" -postcss-value-parser@^4.0.0, postcss-value-parser@^4.1.0, postcss-value-parser@^4.2.0: +postcss-value-parser@^4.1.0, postcss-value-parser@^4.2.0: version "4.2.0" resolved "https://registry.npmjs.org/postcss-value-parser/-/postcss-value-parser-4.2.0.tgz#723c09920836ba6d3e5af019f92bc0971c02e514" integrity sha512-1NNCs6uurfkVbeXG4S8JFT9t19m45ICnif8zWLd5oPSZ50QnwMfK+H3jv408d4jw/7Bttv5axS5IiHoLaVNHeQ== -postcss@^8.1.4, postcss@^8.2.15, postcss@^8.4.23: +postcss@^8.2.15: version "8.4.39" resolved "https://registry.npmjs.org/postcss/-/postcss-8.4.39.tgz#aa3c94998b61d3a9c259efa51db4b392e1bde0e3" integrity sha512-0vzE+lAiG7hZl1/9I8yzKLx3aR9Xbof3fBHKunvMfOCYAtMhrsnccJY2iTURb9EZd5+pLuiNV9/c/GZJOHsgIw== @@ -12629,15 +12077,10 @@ printf@^0.6.1: resolved 
"https://registry.npmjs.org/printf/-/printf-0.6.1.tgz#b9afa3d3b55b7f2e8b1715272479fc756ed88650" integrity sha512-is0ctgGdPJ5951KulgfzvHGwJtZ5ck8l042vRkV6jrkpBzTmb/lueTqguWHy2JfVA+RY6gFVlaZgUS0j7S/dsw== -prismjs@^1.29.0: - version "1.29.0" - resolved "https://registry.npmjs.org/prismjs/-/prismjs-1.29.0.tgz#f113555a8fa9b57c35e637bba27509dcf802dd12" - integrity sha512-Kx/1w86q/epKcmte75LNrEoT+lX8pBpavuAbvJWRXar7Hz8jrtF+e3vY751p0R8H9HdArwaCTNDDzHg/ScJK1Q== - -prismjs@~1.27.0: - version "1.27.0" - resolved "https://registry.npmjs.org/prismjs/-/prismjs-1.27.0.tgz#bb6ee3138a0b438a3653dd4d6ce0cc6510a45057" - integrity sha512-t13BGPUlFDR7wRB5kQDG4jjl7XeuH6jbJGt11JHPL96qwsEHNX2+68tFXqc1/k+/jALsbSWJKUOT/hcYAZ5LkA== +prismjs@1.30.0, prismjs@^1.30.0, prismjs@~1.27.0: + version "1.30.0" + resolved "https://registry.npmjs.org/prismjs/-/prismjs-1.30.0.tgz#d9709969d9d4e16403f6f348c63553b19f0975a9" + integrity sha512-DEvV2ZF2r2/63V+tK8hQvrR2ZGn10srHbXviTlcv7Kpzw8jWiNTqbVgjO3IY8RxrrOUF8VPMQQFysYYYv0YZxw== private@^0.1.6, private@^0.1.8: version "0.1.8" @@ -12656,21 +12099,11 @@ process-relative-require@^1.0.0: dependencies: node-modules-path "^1.0.0" -process@^0.11.10: - version "0.11.10" - resolved "https://registry.npmjs.org/process/-/process-0.11.10.tgz#7332300e840161bda3e69a1d1d91a7d4bc16f182" - integrity sha512-cdGef/drWFoydD1JsMzuFf8100nZl+GT+yacc2bEced5f9Rjk4z+WtFUTBu9PhOi9j/jfmBPu0mMEY4wIdAF8A== - progress@^2.0.0: version "2.0.3" resolved "https://registry.npmjs.org/progress/-/progress-2.0.3.tgz#7e8cf8d8f5b8f239c1bc68beb4eb78567d572ef8" integrity sha512-7PiHtLll5LdnKIMw100I+8xJXR5gW2QwWYkT6iJva0bXitZKa/XMrSbdmg3r2Xnaidz9Qumd0VPaMrZlF9V9sA== -promise-inflight@^1.0.1: - version "1.0.1" - resolved "https://registry.npmjs.org/promise-inflight/-/promise-inflight-1.0.1.tgz#98472870bf228132fcbdd868129bad12c3c029e3" - integrity sha512-6zWPyEOFaQBJYcGMHBKTKJ3u6TBsnMFOIZSa6ce1e/ZrrsOlnHRHbabMjLiBYKp+n44X9eUI6VUPaukCXHuG4g== - promise-map-series@^0.2.1: version "0.2.3" resolved "https://registry.npmjs.org/promise-map-series/-/promise-map-series-0.2.3.tgz#c2d377afc93253f6bd03dbb77755eb88ab20a847" @@ -12713,26 +12146,6 @@ psl@^1.1.33: resolved "https://registry.npmjs.org/psl/-/psl-1.9.0.tgz#d0df2a137f00794565fcaf3b2c00cd09f8d5a5a7" integrity sha512-E/ZsdU4HLs/68gYzgGTkMicWTLPdAftJLfJFlLUAAKZGkStNU72sZjT66SnMDVOfOWY/YAoiD7Jxa9iHvngcag== -public-encrypt@^4.0.0: - version "4.0.3" - resolved "https://registry.npmjs.org/public-encrypt/-/public-encrypt-4.0.3.tgz#4fcc9d77a07e48ba7527e7cbe0de33d0701331e0" - integrity sha512-zVpa8oKZSz5bTMTFClc1fQOnyyEzpl5ozpi1B5YcvBrdohMjH2rfsBtyXcuNuwjsDIXmBYlF2N5FlJYhR29t8Q== - dependencies: - bn.js "^4.1.0" - browserify-rsa "^4.0.0" - create-hash "^1.1.0" - parse-asn1 "^5.0.0" - randombytes "^2.0.1" - safe-buffer "^5.1.2" - -pump@^2.0.0: - version "2.0.1" - resolved "https://registry.npmjs.org/pump/-/pump-2.0.1.tgz#12399add6e4cf7526d973cbc8b5ce2e2908b3909" - integrity sha512-ruPMNRkN3MHP1cWJc9OWr+T/xDP0jhXYCLfJcBuX54hhfIBnaQmAUMfDcG4DM5UMWByBbJY69QSphm3jtDKIkA== - dependencies: - end-of-stream "^1.1.0" - once "^1.3.1" - pump@^3.0.0: version "3.0.0" resolved "https://registry.npmjs.org/pump/-/pump-3.0.0.tgz#b4a2116815bde2f4e1ea602354e8c75565107a64" @@ -12741,20 +12154,6 @@ pump@^3.0.0: end-of-stream "^1.1.0" once "^1.3.1" -pumpify@^1.3.3: - version "1.5.1" - resolved "https://registry.npmjs.org/pumpify/-/pumpify-1.5.1.tgz#36513be246ab27570b1a374a5ce278bfd74370ce" - integrity sha512-oClZI37HvuUJJxSKKrC17bZ9Cu0ZYhEAGPsPUy9KlMUmv9dKX2o77RUmq7f3XjIxbwyGwYzbzQ1L2Ks8sIradQ== - dependencies: 
- duplexify "^3.6.0" - inherits "^2.0.3" - pump "^2.0.0" - -punycode@^1.2.4, punycode@^1.4.1: - version "1.4.1" - resolved "https://registry.npmjs.org/punycode/-/punycode-1.4.1.tgz#c0d5a63b2718800ad8e1eb0fa5269c84dd41845e" - integrity sha512-jmYNElW7yvO7TV33CjSmvSiE2yco3bV2czu/OzDKdMNVZQWfxCblURLhf+47syQRBntjfLdd/H0egrzIG+oaFQ== - punycode@^2.1.0, punycode@^2.1.1: version "2.3.1" resolved "https://registry.npmjs.org/punycode/-/punycode-2.3.1.tgz#027422e2faec0b25e1549c3e1bd8309b9133b6e5" @@ -12767,18 +12166,20 @@ qs@6.11.0: dependencies: side-channel "^1.0.4" -qs@^6.11.2, qs@^6.4.0: +qs@6.13.0: + version "6.13.0" + resolved "https://registry.npmjs.org/qs/-/qs-6.13.0.tgz#6ca3bd58439f7e245655798997787b0d88a51906" + integrity sha512-+38qI9SOr8tfZ4QmJNplMUxqjbe7LKvvZgWdExBOmd+egZTtjLB67Gu0HRX3u/XOq7UU2Nx6nsjvS16Z9uwfpg== + dependencies: + side-channel "^1.0.6" + +qs@^6.4.0: version "6.12.2" resolved "https://registry.npmjs.org/qs/-/qs-6.12.2.tgz#5443b587f3bf73ac68968de491e5b25bafe04478" integrity sha512-x+NLUpx9SYrcwXtX7ob1gnkSems4i/mGZX5SlYxwIau6RrUSODO89TR/XDGGpn5RPWSYIB+aSfuSlV5+CmbTBg== dependencies: side-channel "^1.0.6" -querystring-es3@^0.2.0: - version "0.2.1" - resolved "https://registry.npmjs.org/querystring-es3/-/querystring-es3-0.2.1.tgz#9ec61f79049875707d69414596fd907a4d711e73" - integrity sha512-773xhDQnZBMFobEiztv8LIl70ch5MSF/jUQVlhwFyBILqq96anmoctVIYz+ZRp0qbCKATTn6ev02M3r7Ga5vqA== - querystringify@^2.1.1: version "2.2.0" resolved "https://registry.npmjs.org/querystringify/-/querystringify-2.2.0.tgz#3345941b4153cb9d082d8eee4cda2016a9aef7f6" @@ -12817,21 +12218,13 @@ qunit@^2.16.0, qunit@^2.17.2: node-watch "0.7.3" tiny-glob "0.2.9" -randombytes@^2.0.0, randombytes@^2.0.1, randombytes@^2.0.5, randombytes@^2.1.0: +randombytes@^2.1.0: version "2.1.0" resolved "https://registry.npmjs.org/randombytes/-/randombytes-2.1.0.tgz#df6f84372f0270dc65cdf6291349ab7a473d4f2a" integrity sha512-vYl3iOX+4CKUWuxGi9Ukhie6fsqXqS9FE2Zaic4tNFD2N2QQaXOMFbuKK4QmDHC0JO6B1Zp41J0LpT0oR68amQ== dependencies: safe-buffer "^5.1.0" -randomfill@^1.0.3: - version "1.0.4" - resolved "https://registry.npmjs.org/randomfill/-/randomfill-1.0.4.tgz#c92196fc86ab42be983f1bf31778224931d61458" - integrity sha512-87lcbR8+MhcWcUiQ+9e+Rwx8MyR2P7qnt15ynUlbm3TU/fjbgz4GsvfSUDTemtCCtVCqb4ZcEFlyPNTh9bBTLw== - dependencies: - randombytes "^2.0.5" - safe-buffer "^5.1.0" - range-parser@~1.2.1: version "1.2.1" resolved "https://registry.npmjs.org/range-parser/-/range-parser-1.2.1.tgz#3cf37023d199e1c24d1a55b84800c2f3e6468031" @@ -12860,13 +12253,6 @@ react-is@^17.0.1: resolved "https://registry.npmjs.org/react-is/-/react-is-17.0.2.tgz#e691d4a8e9c789365655539ab372762b0efb54f0" integrity sha512-w2GsyukL62IJnlaff/nRegPQR94C/XXamvMWmSHRJ4y7Ts/4ocGRmTHvOs8PSE6pB3dWOrD/nueuU5sduBsQ4w== -read-cache@^1.0.0: - version "1.0.0" - resolved "https://registry.npmjs.org/read-cache/-/read-cache-1.0.0.tgz#e664ef31161166c9751cdbe8dbcf86b5fb58f774" - integrity sha512-Owdv/Ft7IjOgm/i0xvNDZ1LrRANRfew4b2prF3OWMQLxLfu3bS8FVhCsrSCMK4lR56Y9ya+AThoTpDCTxCmpRA== - dependencies: - pify "^2.3.0" - read-installed@~4.0.3: version "4.0.3" resolved "https://registry.npmjs.org/read-installed/-/read-installed-4.0.3.tgz#ff9b8b67f187d1e4c29b9feb31f6b223acd19067" @@ -12900,7 +12286,16 @@ read-pkg@^3.0.0: normalize-package-data "^2.3.2" path-type "^3.0.0" -"readable-stream@1 || 2", readable-stream@^2.0.0, readable-stream@^2.0.1, readable-stream@^2.0.2, readable-stream@^2.0.6, readable-stream@^2.1.5, readable-stream@^2.2.2, readable-stream@^2.3.3, readable-stream@^2.3.6, 
readable-stream@^2.3.8, readable-stream@~2.3.6: +"readable-stream@2 || 3", readable-stream@^3.4.0, readable-stream@^3.6.0: + version "3.6.2" + resolved "https://registry.npmjs.org/readable-stream/-/readable-stream-3.6.2.tgz#56a9b36ea965c00c5a93ef31eb111a0f11056967" + integrity sha512-9u/sniCrY3D5WdsERHzHE4G2YCXqoG5FTHUiCC4SIbr6XcLZBY05ya9EKjYek9O5xOAwjGq+1JdGBAS7Q9ScoA== + dependencies: + inherits "^2.0.3" + string_decoder "^1.1.1" + util-deprecate "^1.0.1" + +readable-stream@^2.0.1, readable-stream@^2.0.6: version "2.3.8" resolved "https://registry.npmjs.org/readable-stream/-/readable-stream-2.3.8.tgz#91125e8042bba1b9887f49345f6277027ce8be9b" integrity sha512-8p0AUk4XODgIewSi0l8Epjs+EVnWiK7NoDIEGU0HhE7+ZyY8D1IMY7odu5lRrFXGg71L15KG8QrPmum45RTtdA== @@ -12913,15 +12308,6 @@ read-pkg@^3.0.0: string_decoder "~1.1.1" util-deprecate "~1.0.1" -"readable-stream@2 || 3", readable-stream@^3.4.0, readable-stream@^3.6.0: - version "3.6.2" - resolved "https://registry.npmjs.org/readable-stream/-/readable-stream-3.6.2.tgz#56a9b36ea965c00c5a93ef31eb111a0f11056967" - integrity sha512-9u/sniCrY3D5WdsERHzHE4G2YCXqoG5FTHUiCC4SIbr6XcLZBY05ya9EKjYek9O5xOAwjGq+1JdGBAS7Q9ScoA== - dependencies: - inherits "^2.0.3" - string_decoder "^1.1.1" - util-deprecate "^1.0.1" - readable-stream@~1.0.2: version "1.0.34" resolved "https://registry.npmjs.org/readable-stream/-/readable-stream-1.0.34.tgz#125820e34bc842d2f2aaafafe4c2916ee32c157c" @@ -12942,21 +12328,10 @@ readdir-scoped-modules@^1.0.0: graceful-fs "^4.1.2" once "^1.3.0" -readdirp@^2.2.1: - version "2.2.1" - resolved "https://registry.npmjs.org/readdirp/-/readdirp-2.2.1.tgz#0e87622a3325aa33e892285caf8b4e846529a525" - integrity sha512-1JU/8q+VgFZyxwrJ+SVIOsh+KywWGpds3NTqikiKpDMZWScmAYyKIgqkO+ARvNWJfXeXR1zxz7aHF4u4CyH6vQ== - dependencies: - graceful-fs "^4.1.11" - micromatch "^3.1.10" - readable-stream "^2.0.2" - -readdirp@~3.6.0: - version "3.6.0" - resolved "https://registry.npmjs.org/readdirp/-/readdirp-3.6.0.tgz#74a370bd857116e245b29cc97340cd431a02a6c7" - integrity sha512-hOS089on8RduqdbhvQ5Z37A0ESjsqz6qnRcffsMU3495FuTdqSm+7bhJ29JvIOsBDEEnan5DPu9t3To9VRlMzA== - dependencies: - picomatch "^2.2.1" +readdirp@^4.0.1: + version "4.1.2" + resolved "https://registry.npmjs.org/readdirp/-/readdirp-4.1.2.tgz#eb85801435fbf2a7ee58f19e0921b068fc69948d" + integrity sha512-GDhwkLfywWL2s6vEjyhri+eXmfH6j1L7JE27WhqLeYzoh/A3DBaYGEj2H/HFZCn/kMfim73FXxEJTw06WtxQwg== recast@^0.18.1: version "0.18.10" @@ -13042,14 +12417,6 @@ regenerator-transform@^0.15.2: dependencies: "@babel/runtime" "^7.8.4" -regex-not@^1.0.0, regex-not@^1.0.2: - version "1.0.2" - resolved "https://registry.npmjs.org/regex-not/-/regex-not-1.0.2.tgz#1f4ece27e00b0b65e0247a6810e6a85d83a5752c" - integrity sha512-J6SDjUgDxQj5NusnOtdFxDwN/+HWykR8GELwctJ7mdqhcyy1xEc4SRFHUXvxTp661YaVKAjfRLZ9cCqS6tn32A== - dependencies: - extend-shallow "^3.0.2" - safe-regex "^1.1.0" - regexp.prototype.flags@^1.5.1, regexp.prototype.flags@^1.5.2: version "1.5.2" resolved "https://registry.npmjs.org/regexp.prototype.flags/-/regexp.prototype.flags-1.5.2.tgz#138f644a3350f981a858c44f6bb1a61ff59be334" @@ -13231,7 +12598,7 @@ reselect@^3.0.1: resolved "https://registry.npmjs.org/reselect/-/reselect-3.0.1.tgz#efdaa98ea7451324d092b2b2163a6a1d7a9a2147" integrity sha512-b/6tFZCmRhtBMa4xGqiiRp9jh9Aqi2A687Lo265cN0/QohJQEBPiQ52f4QB6i0eF3yp3hmLL21LSGBcML2dlxA== -reselect@^4.0.0, reselect@^4.1.7: +reselect@^4.0.0: version "4.1.8" resolved "https://registry.npmjs.org/reselect/-/reselect-4.1.8.tgz#3f5dc671ea168dccdeb3e141236f69f02eaec524" integrity 
sha512-ab9EmR80F/zQTMNeneUr4cv+jSwPJgIlvEmVwLerwrWVbpLlBuls9XHzIeTFy4cegU2NHBp3va0LKOzU5qFEYQ== @@ -13293,12 +12660,12 @@ resolve-path@^1.4.0: http-errors "~1.6.2" path-is-absolute "1.0.1" -resolve-url@^0.2.1: - version "0.2.1" - resolved "https://registry.npmjs.org/resolve-url/-/resolve-url-0.2.1.tgz#2c637fe77c893afd2a663fe21aa9080068e2052a" - integrity sha512-ZuF55hVUQaaczgOIwqWzkEcEidmlD/xl44x1UZnhOXcYuFN2S6+rcxpG+C1N3So0wvNI3DmJICUFfu2SxhBmvg== +resolve.exports@^2.0.2: + version "2.0.3" + resolved "https://registry.npmjs.org/resolve.exports/-/resolve.exports-2.0.3.tgz#41955e6f1b4013b7586f873749a635dea07ebe3f" + integrity sha512-OcXjMsGdhL4XnbShKpAcSqPMzQoYkYyhbEaeSko47MjRP9NfEQMhZkXL1DoFlt9LWQn4YttrdnV6X2OiyzBi+A== -resolve@^1.1.7, resolve@^1.10.0, resolve@^1.10.1, resolve@^1.11.1, resolve@^1.12.0, resolve@^1.13.1, resolve@^1.14.2, resolve@^1.17.0, resolve@^1.20.0, resolve@^1.22.0, resolve@^1.22.2, resolve@^1.22.8, resolve@^1.3.3, resolve@^1.4.0, resolve@^1.5.0, resolve@^1.8.1: +resolve@^1.1.7, resolve@^1.10.0, resolve@^1.10.1, resolve@^1.11.1, resolve@^1.12.0, resolve@^1.13.1, resolve@^1.14.2, resolve@^1.17.0, resolve@^1.20.0, resolve@^1.22.0, resolve@^1.3.3, resolve@^1.4.0, resolve@^1.5.0, resolve@^1.8.1: version "1.22.8" resolved "https://registry.npmjs.org/resolve/-/resolve-1.22.8.tgz#b6c87a9f2aa06dfab52e3d70ac8cde321fa5a48d" integrity sha512-oKWePCxqpd6FlLvGV1VU0x7bkPmmCNolxzjMf4NczoDnQcIWrAF+cPtZn5i6n+RfD2d9i0tzpKnG6Yk168yIyw== @@ -13332,11 +12699,6 @@ restore-cursor@^3.1.0: onetime "^5.1.0" signal-exit "^3.0.2" -ret@~0.1.10: - version "0.1.15" - resolved "https://registry.npmjs.org/ret/-/ret-0.1.15.tgz#b8a4825d5bdb1fc3f6f53c2bc33f81388681c7bc" - integrity sha512-TTlYpa+OL+vMMNG24xSlQGEJ3B/RzEfUlLct7b5G/ytav+wPrplCpVMFuwzXbkecJrb6IYo1iFb0S9v37754mg== - reusify@^1.0.4: version "1.0.4" resolved "https://registry.npmjs.org/reusify/-/reusify-1.0.4.tgz#90da382b1e126efc02146e90845a88db12925d76" @@ -13368,14 +12730,6 @@ rimraf@~2.6.2: dependencies: glob "^7.1.3" -ripemd160@^2.0.0, ripemd160@^2.0.1: - version "2.0.2" - resolved "https://registry.npmjs.org/ripemd160/-/ripemd160-2.0.2.tgz#a1c1a6f624751577ba5d07914cbc92850585890c" - integrity sha512-ii4iagi25WusVoiC4B4lq7pbXfAp3D9v5CwfkY33vffw2+pkDjY1D8GaN7spsxvCSx8dkPqOZCEZyfxcmJG2IA== - dependencies: - hash-base "^3.0.0" - inherits "^2.0.1" - rollup-pluginutils@^2.8.1: version "2.8.2" resolved "https://registry.npmjs.org/rollup-pluginutils/-/rollup-pluginutils-2.8.2.tgz#72f2af0748b592364dbd3389e600e5a9444a351e" @@ -13383,10 +12737,10 @@ rollup-pluginutils@^2.8.1: dependencies: estree-walker "^0.6.1" -rollup@^2.50.0: - version "2.79.1" - resolved "https://registry.npmjs.org/rollup/-/rollup-2.79.1.tgz#bedee8faef7c9f93a2647ac0108748f497f081c7" - integrity sha512-uKxbd0IhMZOhjAiD5oAFp7BqvkA4Dv47qpOCtaNvng4HBwdbWtdOh8f5nZNuk2rp51PMGk3bzfWu5oayNEuYnw== +rollup@2.79.2, rollup@^2.50.0: + version "2.79.2" + resolved "https://registry.npmjs.org/rollup/-/rollup-2.79.2.tgz#f150e4a5db4b121a21a747d762f701e5e9f49090" + integrity sha512-fS6iqSPZDs3dr/y7Od6y5nha8dW1YnbgtsyotCVvoFGKbERG++CVRFv1meyGDE1SNItQA8BrnCw7ScdAhRJ3XQ== optionalDependencies: fsevents "~2.3.2" @@ -13422,13 +12776,6 @@ run-parallel@^1.1.9: dependencies: queue-microtask "^1.2.2" -run-queue@^1.0.0, run-queue@^1.0.3: - version "1.0.3" - resolved "https://registry.npmjs.org/run-queue/-/run-queue-1.0.3.tgz#e848396f057d223f24386924618e25694161ec47" - integrity sha512-ntymy489o0/QQplUDnpYAYUsO50K9SBrIVaKCWDOJzYJts0f9WH9RFJkyagebkw5+y1oi00R7ynNW/d12GBumg== - dependencies: - aproba "^1.1.1" - 
rxjs@^6.4.0, rxjs@^6.6.0: version "6.6.7" resolved "https://registry.npmjs.org/rxjs/-/rxjs-6.6.7.tgz#90ac018acabf491bf65044235d5863c4dab804c9" @@ -13458,7 +12805,7 @@ safe-buffer@5.1.2, safe-buffer@~5.1.0, safe-buffer@~5.1.1: resolved "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.1.2.tgz#991ec69d296e0313747d59bdfd2b745c35f8828d" integrity sha512-Gd2UZBJDkXlY7GbJxfsE8/nvKkUEU1G38c1siN6QP6a9PT9MmHB8GnpscSmMJSoF8LOIrt8ud/wPtojys4G6+g== -safe-buffer@5.2.1, safe-buffer@>=5.1.0, safe-buffer@^5.0.1, safe-buffer@^5.1.0, safe-buffer@^5.1.1, safe-buffer@^5.1.2, safe-buffer@^5.2.0, safe-buffer@^5.2.1, safe-buffer@~5.2.0: +safe-buffer@5.2.1, safe-buffer@>=5.1.0, safe-buffer@^5.1.0, safe-buffer@^5.1.2, safe-buffer@~5.2.0: version "5.2.1" resolved "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.2.1.tgz#1eaf9fa9bdb1fdd4ec75f58f9cdb4e6b7827eec6" integrity sha512-rp3So07KcdmmKbGvgaNxQSJr7bGVSVk5S9Eq1F+ppbRo70+YeaDxkw5Dd8NPN+GD6bjnYm2VuPuCXmpuYvmCXQ== @@ -13477,13 +12824,6 @@ safe-regex-test@^1.0.3: es-errors "^1.3.0" is-regex "^1.1.4" -safe-regex@^1.1.0: - version "1.1.0" - resolved "https://registry.npmjs.org/safe-regex/-/safe-regex-1.1.0.tgz#40a3669f3b077d1e943d44629e157dd48023bf2e" - integrity sha512-aJXcif4xnaNUzvUuC5gcb46oTS7zvg4jpMTnuqtrEPlR3vFr4pxtdTwaF1Qs3Enjn9HK+ZlwQui+a7z0SywIzg== - dependencies: - ret "~0.1.10" - "safer-buffer@>= 2.1.2 < 3": version "2.1.2" resolved "https://registry.npmjs.org/safer-buffer/-/safer-buffer-2.1.2.tgz#44fa161b0187b9549dd84bb91802f9bd8385cd6a" @@ -13504,14 +12844,16 @@ sane@^4.0.0, sane@^4.1.0: minimist "^1.1.1" walker "~1.0.5" -sass@^1.28.0, sass@^1.69.5: - version "1.77.6" - resolved "https://registry.npmjs.org/sass/-/sass-1.77.6.tgz#898845c1348078c2e6d1b64f9ee06b3f8bd489e4" - integrity sha512-ByXE1oLD79GVq9Ht1PeHWCPMPB8XHpBuz1r85oByKHjZY6qV6rWnQovQzXJXuQ/XyE1Oj3iPk3lo28uzaRA2/Q== +sass@^1.83.0, sass@^1.89.2: + version "1.89.2" + resolved "https://registry.npmjs.org/sass/-/sass-1.89.2.tgz#a771716aeae774e2b529f72c0ff2dfd46c9de10e" + integrity sha512-xCmtksBKd/jdJ9Bt9p7nPKiuqrlBMBuuGkQlkhZjjQk3Ty48lv93k5Dq6OPkKt4XwxDJ7tvlfrTa1MPA9bf+QA== dependencies: - chokidar ">=3.0.0 <4.0.0" - immutable "^4.0.0" + chokidar "^4.0.0" + immutable "^5.0.2" source-map-js ">=0.6.2 <2.0.0" + optionalDependencies: + "@parcel/watcher" "^2.4.1" saxes@^5.0.1: version "5.0.1" @@ -13520,15 +12862,6 @@ saxes@^5.0.1: dependencies: xmlchars "^2.2.0" -schema-utils@^1.0.0: - version "1.0.0" - resolved "https://registry.npmjs.org/schema-utils/-/schema-utils-1.0.0.tgz#0b79a93204d7b600d4b2850d1f66c2a34951c770" - integrity sha512-i27Mic4KovM/lnGsy8whRCHhc7VicJajAjTrYg11K9zfZXnYIt4k5F+kZkwjnrhKzLic/HLU4j11mjsz2G/75g== - dependencies: - ajv "^6.1.0" - ajv-errors "^1.0.0" - ajv-keywords "^3.1.0" - schema-utils@^2.6.5: version "2.7.1" resolved "https://registry.npmjs.org/schema-utils/-/schema-utils-2.7.1.tgz#1ca4f32d1b24c590c203b8e7a50bf0ea4cd394d7" @@ -13606,13 +12939,6 @@ send@0.18.0: range-parser "~1.2.1" statuses "2.0.1" -serialize-javascript@^4.0.0: - version "4.0.0" - resolved "https://registry.npmjs.org/serialize-javascript/-/serialize-javascript-4.0.0.tgz#b525e1238489a5ecfc42afacc3fe99e666f4b1aa" - integrity sha512-GaNA54380uFefWghODBWEGisLZFj00nS5ACs6yHa9nLqlLpVLO8ChDGeKRjZnV4Nh4n0Qi7nhYZD/9fCPzEqkw== - dependencies: - randombytes "^2.1.0" - serialize-javascript@^6.0.1: version "6.0.2" resolved "https://registry.npmjs.org/serialize-javascript/-/serialize-javascript-6.0.2.tgz#defa1e055c83bf6d59ea805d8da862254eb6a6c2" @@ -13657,21 +12983,6 @@ set-function-name@^2.0.1, 
set-function-name@^2.0.2: functions-have-names "^1.2.3" has-property-descriptors "^1.0.2" -set-value@^2.0.0, set-value@^2.0.1: - version "2.0.1" - resolved "https://registry.npmjs.org/set-value/-/set-value-2.0.1.tgz#a18d40530e6f07de4228c7defe4227af8cad005b" - integrity sha512-JxHc1weCN68wRY0fhCoXpyK55m/XPHafOmK4UWD7m2CI14GMcFypt4w/0+NV5f/ZMby2F6S2wwA7fgynh9gWSw== - dependencies: - extend-shallow "^2.0.1" - is-extendable "^0.1.1" - is-plain-object "^2.0.3" - split-string "^3.0.1" - -setimmediate@^1.0.4: - version "1.0.5" - resolved "https://registry.npmjs.org/setimmediate/-/setimmediate-1.0.5.tgz#290cbb232e306942d7d7ea9b83732ab7856f8285" - integrity sha512-MATJdZp8sLqDl/68LfQmbP8zKPLQNV6BIZoIgrscFDQ+RsvK/BxeDQOgyxKKoh0y/8h3BqVFnCqQ/gd+reiIXA== - setprototypeof@1.1.0: version "1.1.0" resolved "https://registry.npmjs.org/setprototypeof/-/setprototypeof-1.1.0.tgz#d0bd85536887b6fe7c0d818cb962d9d91c54e656" @@ -13682,14 +12993,6 @@ setprototypeof@1.2.0: resolved "https://registry.npmjs.org/setprototypeof/-/setprototypeof-1.2.0.tgz#66c9a24a73f9fc28cbe66b09fed3d33dcaf1b424" integrity sha512-E5LDX7Wrp85Kil5bhZv46j8jOeboKq5JMmYM3gVGdGH8xFpPWXUMsNrlODCrkoxMEeNi/XZIwuRvY4XNwYMJpw== -sha.js@^2.4.0, sha.js@^2.4.8: - version "2.4.11" - resolved "https://registry.npmjs.org/sha.js/-/sha.js-2.4.11.tgz#37a5cf0b81ecbc6943de109ba2960d1b26584ae7" - integrity sha512-QMEp5B7cftE7APOjk5Y6xgrbWu+WkLVQwk8JNjZ8nKRciZaByEW6MubieAiToS7+dwvrjGhH8jRXz3MVd0AYqQ== - dependencies: - inherits "^2.0.1" - safe-buffer "^5.0.1" - shebang-command@^1.2.0: version "1.2.0" resolved "https://registry.npmjs.org/shebang-command/-/shebang-command-1.2.0.tgz#44aac65b695b03398968c39f363fee5deafdf1ea" @@ -13804,20 +13107,6 @@ snake-case@^3.0.3: dot-case "^3.0.4" tslib "^2.0.3" -snapdragon@^0.8.1: - version "0.8.2" - resolved "https://registry.npmjs.org/snapdragon/-/snapdragon-0.8.2.tgz#64922e7c565b0e14204ba1aa7d6964278d25182d" - integrity sha512-FtyOnWN/wCHTVXOMwvSv26d+ko5vWlIDD6zoUJ7LW8vh+ZBC8QdljveRP+crNrtBwioEUWy/4dMtbBjA4ioNlg== - dependencies: - base "^0.11.1" - debug "^2.2.0" - define-property "^0.2.5" - extend-shallow "^2.0.1" - map-cache "^0.2.2" - source-map "^0.5.6" - source-map-resolve "^0.5.0" - use "^3.1.0" - socket.io-adapter@~2.5.2: version "2.5.5" resolved "https://registry.npmjs.org/socket.io-adapter/-/socket.io-adapter-2.5.5.tgz#c7a1f9c703d7756844751b6ff9abfc1780664082" @@ -13864,27 +13153,11 @@ sort-package-json@^1.49.0: is-plain-obj "2.1.0" sort-object-keys "^1.1.3" -source-list-map@^2.0.0: - version "2.0.1" - resolved "https://registry.npmjs.org/source-list-map/-/source-list-map-2.0.1.tgz#3993bd873bfc48479cca9ea3a547835c7c154b34" - integrity sha512-qnQ7gVMxGNxsiL4lEuJwe/To8UnK7fAnmbGEEH8RpLouuKbeEm0lhbQVFIrNSuB+G7tVrAlVsZgETT5nljf+Iw== - "source-map-js@>=0.6.2 <2.0.0", source-map-js@^1.0.1, source-map-js@^1.2.0: version "1.2.0" resolved "https://registry.npmjs.org/source-map-js/-/source-map-js-1.2.0.tgz#16b809c162517b5b8c3e7dcd315a2a5c2612b2af" integrity sha512-itJW8lvSA0TXEphiRoawsCksnlf8SyvmFzIhltqAHluXd88pkCd+cXJVHTDwdCr0IzwptSm035IHQktUu1QUMg== -source-map-resolve@^0.5.0: - version "0.5.3" - resolved "https://registry.npmjs.org/source-map-resolve/-/source-map-resolve-0.5.3.tgz#190866bece7553e1f8f267a2ee82c606b5509a1a" - integrity sha512-Htz+RnsXWk5+P2slx5Jh3Q66vhQj1Cllm0zvnaY98+NFx+Dv2CF/f5O/t8x+KaNdrdIAsruNzoh/KpialbqAnw== - dependencies: - atob "^2.1.2" - decode-uri-component "^0.2.0" - resolve-url "^0.2.1" - source-map-url "^0.4.0" - urix "^0.1.0" - source-map-resolve@^0.6.0: version "0.6.0" resolved 
"https://registry.npmjs.org/source-map-resolve/-/source-map-resolve-0.6.0.tgz#3d9df87e236b53f16d01e58150fc7711138e5ed2" @@ -13900,7 +13173,7 @@ source-map-support@^0.4.15: dependencies: source-map "^0.5.6" -source-map-support@~0.5.12, source-map-support@~0.5.20: +source-map-support@~0.5.20: version "0.5.21" resolved "https://registry.npmjs.org/source-map-support/-/source-map-support-0.5.21.tgz#04fe7c7f9e1ed2d662233c28cb2b35b9f63f6e4f" integrity sha512-uBHU3L3czsIyYXKX88fdrGovxdSCoTGDRZ6SYXtSRxLZUzHg5P/66Ht6uoUlHu9EZod+inXhKo3qQgwXUT/y1w== @@ -13913,11 +13186,6 @@ source-map-url@^0.3.0: resolved "https://registry.npmjs.org/source-map-url/-/source-map-url-0.3.0.tgz#7ecaf13b57bcd09da8a40c5d269db33799d4aaf9" integrity sha512-QU4fa0D6aSOmrT+7OHpUXw+jS84T0MLaQNtFs8xzLNe6Arj44Magd7WEbyVW5LNYoAPVV35aKs4azxIfVJrToQ== -source-map-url@^0.4.0: - version "0.4.1" - resolved "https://registry.npmjs.org/source-map-url/-/source-map-url-0.4.1.tgz#0af66605a745a5a2f91cf1bbf8a7afbc283dec56" - integrity sha512-cPiFOTLUKvJFIg4SKVScy4ilPPW6rFgMgfuZJPNoDuMs3nC1HbMUycBoJw77xFIp6z1UJQJOfx6C9GMH80DiTw== - source-map@0.4.x, source-map@^0.4.2: version "0.4.4" resolved "https://registry.npmjs.org/source-map/-/source-map-0.4.4.tgz#eba4f5da9c0dc999de68032d8b4f76173652036b" @@ -13942,11 +13210,6 @@ source-map@~0.1.x: dependencies: amdefine ">=0.0.4" -source-map@~0.7.4: - version "0.7.4" - resolved "https://registry.npmjs.org/source-map/-/source-map-0.7.4.tgz#a9bbe705c9d8846f4e08ff6765acf0f1b0898656" - integrity sha512-l3BikUxvPOcn5E74dZiq5BGsTb5yEwhaTSzccU6t4sDOH8NWJCstKO5QT2CvtFoK6F0saL7p9xHAqHOlCPJygA== - sourcemap-codec@^1.4.8: version "1.4.8" resolved "https://registry.npmjs.org/sourcemap-codec/-/sourcemap-codec-1.4.8.tgz#ea804bd94857402e6992d05a38ef1ae35a9ab4c4" @@ -14021,13 +13284,6 @@ spdx-satisfies@^4.0.0: spdx-expression-parse "^3.0.0" spdx-ranges "^2.0.0" -split-string@^3.0.1: - version "3.1.0" - resolved "https://registry.npmjs.org/split-string/-/split-string-3.1.0.tgz#7cb09dda3a86585705c64b39a6466038682e8fe2" - integrity sha512-NzNVhJDYpwceVVii8/Hu6DKfD2G+NrQHlS/V/qgv763EYudVwEcMQNxd2lh+0VrUByXN/oJkl5grOhYWvQUYiw== - dependencies: - extend-shallow "^3.0.0" - sprintf-js@^1.1.1: version "1.1.3" resolved "https://registry.npmjs.org/sprintf-js/-/sprintf-js-1.1.3.tgz#4914b903a2f8b685d17fdf78a70e917e872e444a" @@ -14043,13 +13299,6 @@ sri-toolbox@^0.2.0: resolved "https://registry.npmjs.org/sri-toolbox/-/sri-toolbox-0.2.0.tgz#a7fea5c3fde55e675cf1c8c06f3ebb5c2935835e" integrity sha512-DQIMWCAr/M7phwo+d3bEfXwSBEwuaJL+SJx9cuqt1Ty7K96ZFoHpYnSbhrQZEr0+0/GtmpKECP8X/R4RyeTAfw== -ssri@^6.0.1: - version "6.0.2" - resolved "https://registry.npmjs.org/ssri/-/ssri-6.0.2.tgz#157939134f20464e7301ddba3e90ffa8f7728ac5" - integrity sha512-cepbSq/neFK7xB6A50KHN0xHDotYzq58wWCa5LeWqnPrHG8GzfEjO/4O8kpmcGW+oaxkvhEJCWgbgNk4/ZV93Q== - dependencies: - figgy-pudding "^3.5.1" - stagehand@^1.0.0: version "1.0.1" resolved "https://registry.npmjs.org/stagehand/-/stagehand-1.0.1.tgz#0cbca6f906e4a7be36c5830dc31d9cc7091a827e" @@ -14057,14 +13306,6 @@ stagehand@^1.0.0: dependencies: debug "^4.1.0" -static-extend@^0.1.1: - version "0.1.2" - resolved "https://registry.npmjs.org/static-extend/-/static-extend-0.1.2.tgz#60809c39cbff55337226fd5e0b520f341f1fb5c6" - integrity sha512-72E9+uLc27Mt718pMHt9VMNiAL4LMsmDbBva8mxWUCkT07fSzEGMYUCk0XWY6lp0j6RBAG4cJ3mWuZv2OE3s0g== - dependencies: - define-property "^0.2.5" - object-copy "^0.1.0" - statuses@2.0.1: version "2.0.1" resolved 
"https://registry.npmjs.org/statuses/-/statuses-2.0.1.tgz#55cb000ccf1d48728bd23c685a063998cf1a1b63" @@ -14082,38 +13323,6 @@ stop-iteration-iterator@^1.0.0: dependencies: internal-slot "^1.0.4" -stream-browserify@^2.0.1: - version "2.0.2" - resolved "https://registry.npmjs.org/stream-browserify/-/stream-browserify-2.0.2.tgz#87521d38a44aa7ee91ce1cd2a47df0cb49dd660b" - integrity sha512-nX6hmklHs/gr2FuxYDltq8fJA1GDlxKQCz8O/IM4atRqBH8OORmBNgfvW5gG10GT/qQ9u0CzIvr2X5Pkt6ntqg== - dependencies: - inherits "~2.0.1" - readable-stream "^2.0.2" - -stream-each@^1.1.0: - version "1.2.3" - resolved "https://registry.npmjs.org/stream-each/-/stream-each-1.2.3.tgz#ebe27a0c389b04fbcc233642952e10731afa9bae" - integrity sha512-vlMC2f8I2u/bZGqkdfLQW/13Zihpej/7PmSiMQsbYddxuTsJp8vRe2x2FvVExZg7FaOds43ROAuFJwPR4MTZLw== - dependencies: - end-of-stream "^1.1.0" - stream-shift "^1.0.0" - -stream-http@^2.7.2: - version "2.8.3" - resolved "https://registry.npmjs.org/stream-http/-/stream-http-2.8.3.tgz#b2d242469288a5a27ec4fe8933acf623de6514fc" - integrity sha512-+TSkfINHDo4J+ZobQLWiMouQYB+UVYFttRA94FpEzzJ7ZdqcL4uUUQ7WkdkI4DSozGmgBUE/a47L+38PenXhUw== - dependencies: - builtin-status-codes "^3.0.0" - inherits "^2.0.1" - readable-stream "^2.3.6" - to-arraybuffer "^1.0.0" - xtend "^4.0.0" - -stream-shift@^1.0.0: - version "1.0.3" - resolved "https://registry.npmjs.org/stream-shift/-/stream-shift-1.0.3.tgz#85b8fab4d71010fc3ba8772e8046cc49b8a3864b" - integrity sha512-76ORR0DO1o1hlKwTbi/DM3EXWGf3ZJYO8cXX5RJwnul2DEg2oyoZyjLNoQM8WsvZiFKCRfC1O0J7iCvie3RZmQ== - string-argv@0.3.1: version "0.3.1" resolved "https://registry.npmjs.org/string-argv/-/string-argv-0.3.1.tgz#95e2fbec0427ae19184935f816d74aaa4c5c19da" @@ -14211,7 +13420,7 @@ string_decoder@0.10, string_decoder@~0.10.x: resolved "https://registry.npmjs.org/string_decoder/-/string_decoder-0.10.31.tgz#62e203bc41766c6c28c9fc84301dab1c5310fa94" integrity sha512-ev2QzSzWPYmy9GuqfIVildA4OdcGLeFZQrq5ys6RtiuF+RQQiZWr8TZNyAcuVXyQRYfEO+MsoB/1BuQVhOJuoQ== -string_decoder@^1.0.0, string_decoder@^1.1.1: +string_decoder@^1.1.1: version "1.3.0" resolved "https://registry.npmjs.org/string_decoder/-/string_decoder-1.3.0.tgz#42f114594a46cf1a8e30b0a84f56c78c3edac21e" integrity sha512-hkRX8U1WjJFd8LsDJ2yQ/wWWxaopEsABU1XfkM8A+j0+85JAGppt16cr1Whg6KIbb4okU6Mql6BOj+uup/wKeA== @@ -14304,24 +13513,16 @@ style-loader@^2.0.0: loader-utils "^2.0.0" schema-utils "^3.0.0" +style-mod@^4.0.0, style-mod@^4.1.0: + version "4.1.2" + resolved "https://registry.npmjs.org/style-mod/-/style-mod-4.1.2.tgz#ca238a1ad4786520f7515a8539d5a63691d7bf67" + integrity sha512-wnD1HyVqpJUI2+eKZ+eo1UwghftP6yuFheBqqe+bWCotBjC2K1YnteJILRMs3SM4V/0dLEW1SC27MWP5y+mwmw== + styled_string@0.0.1: version "0.0.1" resolved "https://registry.npmjs.org/styled_string/-/styled_string-0.0.1.tgz#d22782bd81295459bc4f1df18c4bad8e94dd124a" integrity sha512-DU2KZiB6VbPkO2tGSqQ9n96ZstUPjW7X4sGO6V2m1myIQluX0p1Ol8BrA/l6/EesqhMqXOIXs3cJNOy1UuU2BA== -sucrase@^3.32.0: - version "3.35.0" - resolved "https://registry.npmjs.org/sucrase/-/sucrase-3.35.0.tgz#57f17a3d7e19b36d8995f06679d121be914ae263" - integrity sha512-8EbVDiu9iN/nESwxeSxDKe0dunta1GOlHufmSSXxMD2z2/tMZpDMpvXQGsc+ajGo8y2uYUmixaSRUc/QPoQ0GA== - dependencies: - "@jridgewell/gen-mapping" "^0.3.2" - commander "^4.0.0" - glob "^10.3.10" - lines-and-columns "^1.1.6" - mz "^2.7.0" - pirates "^4.0.1" - ts-interface-checker "^0.1.9" - sum-up@^1.0.1: version "1.0.3" resolved "https://registry.npmjs.org/sum-up/-/sum-up-1.0.3.tgz#1c661f667057f63bcb7875aa1438bc162525156e" @@ -14397,6 +13598,11 @@ 
tabbable@^5.3.3: resolved "https://registry.npmjs.org/tabbable/-/tabbable-5.3.3.tgz#aac0ff88c73b22d6c3c5a50b1586310006b47fbf" integrity sha512-QD9qKY3StfbZqWOPLp0++pOrAVb/HbUi5xCc8cUo4XjP19808oaMiDzn0leBY5mCespIBM0CIZePzZjgzR83kA== +tabbable@^6.2.0: + version "6.2.0" + resolved "https://registry.npmjs.org/tabbable/-/tabbable-6.2.0.tgz#732fb62bc0175cfcec257330be187dcfba1f3b97" + integrity sha512-Cat63mxsVJlzYvN51JmVXIgNoUokrIaT2zLclCXjRd8boZ0004U4KCs/sToJ75C6sdlByWxpYnb5Boif1VSFew== + table@^6.0.9: version "6.8.2" resolved "https://registry.npmjs.org/table/-/table-6.8.2.tgz#c5504ccf201213fa227248bdc8c5569716ac6c58" @@ -14408,34 +13614,6 @@ table@^6.0.9: string-width "^4.2.3" strip-ansi "^6.0.1" -tailwindcss@^3.1.8: - version "3.4.4" - resolved "https://registry.npmjs.org/tailwindcss/-/tailwindcss-3.4.4.tgz#351d932273e6abfa75ce7d226b5bf3a6cb257c05" - integrity sha512-ZoyXOdJjISB7/BcLTR6SEsLgKtDStYyYZVLsUtWChO4Ps20CBad7lfJKVDiejocV4ME1hLmyY0WJE3hSDcmQ2A== - dependencies: - "@alloc/quick-lru" "^5.2.0" - arg "^5.0.2" - chokidar "^3.5.3" - didyoumean "^1.2.2" - dlv "^1.1.3" - fast-glob "^3.3.0" - glob-parent "^6.0.2" - is-glob "^4.0.3" - jiti "^1.21.0" - lilconfig "^2.1.0" - micromatch "^4.0.5" - normalize-path "^3.0.0" - object-hash "^3.0.0" - picocolors "^1.0.0" - postcss "^8.4.23" - postcss-import "^15.1.0" - postcss-js "^4.0.1" - postcss-load-config "^4.0.1" - postcss-nested "^6.0.1" - postcss-selector-parser "^6.0.11" - resolve "^1.22.2" - sucrase "^3.32.0" - tap-parser@^7.0.0: version "7.0.0" resolved "https://registry.npmjs.org/tap-parser/-/tap-parser-7.0.0.tgz#54db35302fda2c2ccc21954ad3be22b2cba42721" @@ -14445,7 +13623,7 @@ tap-parser@^7.0.0: js-yaml "^3.2.7" minipass "^2.2.0" -tapable@^1.0.0, tapable@^1.1.3: +tapable@^1.0.0: version "1.1.3" resolved "https://registry.npmjs.org/tapable/-/tapable-1.1.3.tgz#a1fccc06b58db61fd7a45da2da44f5f3a3e67ba2" integrity sha512-4WK/bYZmj8xLr+HUCODHGF1ZFzsYffasLUgEiMBY4fgtltdO6B4WJtlSbPaDTLpYTcGVwM2qLnFTICEcNxs3kA== @@ -14491,21 +13669,6 @@ temp@0.9.4: mkdirp "^0.5.1" rimraf "~2.6.2" -terser-webpack-plugin@^1.4.3: - version "1.4.5" - resolved "https://registry.npmjs.org/terser-webpack-plugin/-/terser-webpack-plugin-1.4.5.tgz#a217aefaea330e734ffacb6120ec1fa312d6040b" - integrity sha512-04Rfe496lN8EYruwi6oPQkG0vo8C+HT49X687FZnpPF0qMAIHONI6HEXYPKDOE8e5HjXTyKfqRd/agHtH0kOtw== - dependencies: - cacache "^12.0.2" - find-cache-dir "^2.1.0" - is-wsl "^1.1.0" - schema-utils "^1.0.0" - serialize-javascript "^4.0.0" - source-map "^0.6.1" - terser "^4.1.2" - webpack-sources "^1.4.0" - worker-farm "^1.7.0" - terser-webpack-plugin@^5.3.10: version "5.3.10" resolved "https://registry.npmjs.org/terser-webpack-plugin/-/terser-webpack-plugin-5.3.10.tgz#904f4c9193c6fd2a03f693a2150c62a92f40d199" @@ -14517,15 +13680,6 @@ terser-webpack-plugin@^5.3.10: serialize-javascript "^6.0.1" terser "^5.26.0" -terser@^4.1.2: - version "4.8.1" - resolved "https://registry.npmjs.org/terser/-/terser-4.8.1.tgz#a00e5634562de2239fd404c649051bf6fc21144f" - integrity sha512-4GnLC0x667eJG0ewJTa6z/yXrbLGv80D9Ru6HIpCQmO+Q4PfEtBFi0ObSckqwL6VyQv/7ENJieXHo2ANmdQwgw== - dependencies: - commander "^2.20.0" - source-map "~0.6.1" - source-map-support "~0.5.12" - terser@^5.26.0, terser@^5.7.0: version "5.31.1" resolved "https://registry.npmjs.org/terser/-/terser-5.31.1.tgz#735de3c987dd671e95190e6b98cfe2f07f3cf0d4" @@ -14595,28 +13749,6 @@ text-table@^0.2.0: resolved "https://registry.npmjs.org/textextensions/-/textextensions-2.6.0.tgz#d7e4ab13fe54e32e08873be40d51b74229b00fc4" integrity 
sha512-49WtAWS+tcsy93dRt6P0P3AMD2m5PvXRhuEA0kaXos5ZLlujtYmpmFsB+QvWUSxE1ZsstmYXfQ7L40+EcQgpAQ== -thenify-all@^1.0.0: - version "1.6.0" - resolved "https://registry.npmjs.org/thenify-all/-/thenify-all-1.6.0.tgz#1a1918d402d8fc3f98fbf234db0bcc8cc10e9726" - integrity sha512-RNxQH/qI8/t3thXJDwcstUO4zeqo64+Uy/+sNVRBx4Xn2OX+OZ9oP+iJnNFqplFra2ZUVeKCSa2oVWi3T4uVmA== - dependencies: - thenify ">= 3.1.0 < 4" - -"thenify@>= 3.1.0 < 4": - version "3.3.1" - resolved "https://registry.npmjs.org/thenify/-/thenify-3.3.1.tgz#8932e686a4066038a016dd9e2ca46add9838a95f" - integrity sha512-RVZSIV5IG10Hk3enotrhvz0T9em6cyHBLkH/YAZuKqd8hRkKhSfCGIcP2KUY0EPxndzANBmNllzWPwak+bheSw== - dependencies: - any-promise "^1.0.0" - -through2@^2.0.0: - version "2.0.5" - resolved "https://registry.npmjs.org/through2/-/through2-2.0.5.tgz#01c1e39eb31d07cb7d03a96a70823260b23132cd" - integrity sha512-/mrRod8xqpA+IHSLyGCQ2s8SPHiCDEeQJSep1jqLYeEUClOFG2Qsh+4FU6G9VeqpZnGW/Su8LQGc4YKni5rYSQ== - dependencies: - readable-stream "~2.3.6" - xtend "~4.0.1" - through2@^3.0.1: version "3.0.2" resolved "https://registry.npmjs.org/through2/-/through2-3.0.2.tgz#99f88931cfc761ec7678b41d5d7336b5b6a07bf4" @@ -14630,13 +13762,6 @@ through@^2.3.6, through@^2.3.8: resolved "https://registry.npmjs.org/through/-/through-2.3.8.tgz#0dd4c9ffaabc357960b1b724115d7e0e86a2e1f5" integrity sha512-w89qg7PI8wAdvX60bMDP+bFoD5Dvhm9oLheFp5O4a2QF0cSBGsBX4qZmadPMvVqlLJBBci+WqGGOAPvcDeNSVg== -timers-browserify@^2.0.4: - version "2.0.12" - resolved "https://registry.npmjs.org/timers-browserify/-/timers-browserify-2.0.12.tgz#44a45c11fbf407f34f97bccd1577c652361b00ee" - integrity sha512-9phl76Cqm6FhSX9Xe1ZUAMLtm1BLkKj2Qd5ApyWkXzsMRaA7dgr81kf4wJmQf/hAvg8EEyJxDo3du/0KlhPiKQ== - dependencies: - setimmediate "^1.0.4" - tiny-emitter@^2.0.0: version "2.1.0" resolved "https://registry.npmjs.org/tiny-emitter/-/tiny-emitter-2.1.0.tgz#1d1a56edfc51c43e863cbb5382a72330e3555423" @@ -14700,11 +13825,6 @@ tmpl@1.0.5: resolved "https://registry.npmjs.org/tmpl/-/tmpl-1.0.5.tgz#8683e0b902bb9c20c4f726e3c0b69f36518c07cc" integrity sha512-3f0uOEAQwIqGuWW2MVzYg8fV/QNnc/IpuJNG837rLuczAaLVHslWHZQj4IGiEl5Hs3kkbhwL9Ab7Hrsmuj+Smw== -to-arraybuffer@^1.0.0: - version "1.0.1" - resolved "https://registry.npmjs.org/to-arraybuffer/-/to-arraybuffer-1.0.1.tgz#7d229b1fcc637e466ca081180836a7aabff83f43" - integrity sha512-okFlQcoGTi4LQBG/PgSYblw9VOyptsz2KJZqc6qtgGdes8VktzUQkj4BI2blit072iS8VODNcMA+tvnS9dnuMA== - to-fast-properties@^1.0.3: version "1.0.3" resolved "https://registry.npmjs.org/to-fast-properties/-/to-fast-properties-1.0.3.tgz#b83571fa4d8c25b82e231b06e3a3055de4ca1a47" @@ -14715,13 +13835,6 @@ to-fast-properties@^2.0.0: resolved "https://registry.npmjs.org/to-fast-properties/-/to-fast-properties-2.0.0.tgz#dc5e698cbd079265bc73e0377681a4e4e83f616e" integrity sha512-/OaKK0xYrs3DmxRYqL/yDc+FxFUVYhDlXMhRmv3z915w2HF1tnN1omB354j8VUGO/hbRzyD6Y3sA7v7GS/ceog== -to-object-path@^0.3.0: - version "0.3.0" - resolved "https://registry.npmjs.org/to-object-path/-/to-object-path-0.3.0.tgz#297588b7b0e7e0ac08e04e672f85c1f4999e17af" - integrity sha512-9mWHdnGRuh3onocaHzukyvCZhzvr6tiflAy/JRFXcJX0TjgfWA9pk9t8CMbzmBE4Jfw58pXbkngtBtqYxzNEyg== - dependencies: - kind-of "^3.0.2" - to-regex-range@^5.0.1: version "5.0.1" resolved "https://registry.npmjs.org/to-regex-range/-/to-regex-range-5.0.1.tgz#1648c44aae7c8d988a326018ed72f5b4dd0392e4" @@ -14729,16 +13842,6 @@ to-regex-range@^5.0.1: dependencies: is-number "^7.0.0" -to-regex@^3.0.1, to-regex@^3.0.2: - version "3.0.2" - resolved 
"https://registry.npmjs.org/to-regex/-/to-regex-3.0.2.tgz#13cfdd9b336552f30b51f33a8ae1b42a7a7599ce" - integrity sha512-FWtleNAtZ/Ki2qtqej2CXTOayOH9bHDQF+Q48VpWyDXjbYxA4Yz8iDB31zXOBUlOHHKidDbqGVrTUvQMPmBGBw== - dependencies: - define-property "^2.0.2" - extend-shallow "^3.0.2" - regex-not "^1.0.2" - safe-regex "^1.1.0" - to-vfile@^6.1.0: version "6.1.0" resolved "https://registry.npmjs.org/to-vfile/-/to-vfile-6.1.0.tgz#5f7a3f65813c2c4e34ee1f7643a5646344627699" @@ -14854,11 +13957,6 @@ trough@^1.0.0, trough@^1.0.5: resolved "https://registry.npmjs.org/trough/-/trough-1.0.5.tgz#b8b639cefad7d0bb2abd37d433ff8293efa5f406" integrity sha512-rvuRbTarPXmMb79SmzEp8aqXNKcK+y0XaB298IXueQ8I2PsrATcPBCSPyK/dDNa2iWOhKlfNnOjdAOTBU/nkFA== -ts-interface-checker@^0.1.9: - version "0.1.13" - resolved "https://registry.npmjs.org/ts-interface-checker/-/ts-interface-checker-0.1.13.tgz#784fd3d679722bc103b1b4b8030bcddb5db2a699" - integrity sha512-Y/arvbn+rrz3JCKl9C4kVNfTfSm2/mEp5FSz5EsZSANGPSlQrpRI5M4PKF+mJnE52jOO90PnPSc3Ur3bTQw0gA== - tslib@^1.9.0: version "1.14.1" resolved "https://registry.npmjs.org/tslib/-/tslib-1.14.1.tgz#cf2d38bdc34a134bcaf1091c41f6619e2f672d00" @@ -14869,11 +13967,6 @@ tslib@^2.0.3, tslib@^2.1.0: resolved "https://registry.npmjs.org/tslib/-/tslib-2.6.3.tgz#0438f810ad7a9edcde7a241c3d80db693c8cbfe0" integrity sha512-xNvxJEOUiWPGhUuUdQgAJPKOOJfGnIyKySOc09XkKsgdUV/3E2zvwZYdejjmRgPCgcym1juLH3226yA7sEFJKQ== -tty-browserify@0.0.0: - version "0.0.0" - resolved "https://registry.npmjs.org/tty-browserify/-/tty-browserify-0.0.0.tgz#a157ba402da24e9bf957f9aa69d524eed42901a6" - integrity sha512-JVa5ijo+j/sOoHGjw0sxw734b1LhBkQ3bvUGNdxnVXDCX81Yx7TFgnZygxrIIWn23hbfTaMYLwRmAxFyDuFmIw== - type-check@^0.4.0, type-check@~0.4.0: version "0.4.0" resolved "https://registry.npmjs.org/type-check/-/type-check-0.4.0.tgz#07b8203bfa7056c0657050e3ccd2c37730bab8f1" @@ -14972,11 +14065,6 @@ typedarray.prototype.slice@^1.0.3: typed-array-buffer "^1.0.2" typed-array-byte-offset "^1.0.2" -typedarray@^0.0.6: - version "0.0.6" - resolved "https://registry.npmjs.org/typedarray/-/typedarray-0.0.6.tgz#867ac74e3864187b1d3d47d996a78ec5c8830777" - integrity sha512-/aCDEGatGvZ2BIk+HmLf4ifCJFwvKFNb9/JeZPMulfgFracn9QFcAf5GO8B/mweUjSoblS5In0cWhqpfs/5PQA== - typescript-memoize@^1.0.0-alpha.3, typescript-memoize@^1.0.1: version "1.1.1" resolved "https://registry.npmjs.org/typescript-memoize/-/typescript-memoize-1.1.1.tgz#02737495d5df6ebf72c07ba0d002e8f4cf5ccfa0" @@ -15055,30 +14143,6 @@ unified@^9.0.0, unified@^9.2.2: trough "^1.0.0" vfile "^4.0.0" -union-value@^1.0.0: - version "1.0.1" - resolved "https://registry.npmjs.org/union-value/-/union-value-1.0.1.tgz#0b6fe7b835aecda61c6ea4d4f02c14221e109847" - integrity sha512-tJfXmxMeWYnczCVs7XAEvIV7ieppALdyepWMkHkwciRpZraG/xwT+s2JN8+pr1+8jCRf80FFzvr+MpQeeoF4Xg== - dependencies: - arr-union "^3.1.0" - get-value "^2.0.6" - is-extendable "^0.1.1" - set-value "^2.0.1" - -unique-filename@^1.1.1: - version "1.1.1" - resolved "https://registry.npmjs.org/unique-filename/-/unique-filename-1.1.1.tgz#1d69769369ada0583103a1e6ae87681b56573230" - integrity sha512-Vmp0jIp2ln35UTXuryvjzkjGdRyf9b2lTXuSYUiPmzRcl3FDtYqAwOnTJkAngD9SWhnoJzDbTKwaOrZ+STtxNQ== - dependencies: - unique-slug "^2.0.0" - -unique-slug@^2.0.0: - version "2.0.2" - resolved "https://registry.npmjs.org/unique-slug/-/unique-slug-2.0.2.tgz#baabce91083fc64e945b0f3ad613e264f7cd4e6c" - integrity sha512-zoWr9ObaxALD3DOPfjPSqxt4fnZiWblxHIgeWqW8x7UqDzEtHEQLzji2cuJYQFCU6KmoJikOYAZlrTHHebjx2w== - dependencies: - imurmurhash "^0.1.4" - 
unique-string@^2.0.0: version "2.0.0" resolved "https://registry.npmjs.org/unique-string/-/unique-string-2.0.0.tgz#39c6451f81afb2749de2b233e3f7c5e8843bd89d" @@ -15158,14 +14222,6 @@ unpipe@1.0.0, unpipe@~1.0.0: resolved "https://registry.npmjs.org/unpipe/-/unpipe-1.0.0.tgz#b2bf4ee8514aae6165b4817829d21b2ef49904ec" integrity sha512-pjy2bYhSsufwWlKwPc+l3cN7+wuJlK6uz0YdJEOlQDbl6jo/YlPi4mb8agUkVC8BF7V8NuzeyPNqRksA3hztKQ== -unset-value@^1.0.0: - version "1.0.0" - resolved "https://registry.npmjs.org/unset-value/-/unset-value-1.0.0.tgz#8376873f7d2335179ffb1e6fc3a8ed0dfc8ab559" - integrity sha512-PcA2tsuGSF9cnySLHTLSh2qrQiJ70mn+r+Glzxv2TWZblxsxCC52BDlZoPCsz7STd9pN7EZetkWZBAvk4cgZdQ== - dependencies: - has-value "^0.3.1" - isobject "^3.0.0" - untildify@^2.1.0: version "2.1.0" resolved "https://registry.npmjs.org/untildify/-/untildify-2.1.0.tgz#17eb2807987f76952e9c0485fc311d06a826a2e0" @@ -15173,11 +14229,6 @@ untildify@^2.1.0: dependencies: os-homedir "^1.0.0" -upath@^1.1.1: - version "1.2.0" - resolved "https://registry.npmjs.org/upath/-/upath-1.2.0.tgz#8f66dbcd55a883acdae4408af8b035a5044c1894" - integrity sha512-aZwGpamFO61g3OlfT7OQCHqhGnW43ieH9WZeP7QxN/G/jS4jfqUkZxoryvJgVPEcrl5NL/ggHsSmLMHuH64Lhg== - update-browserslist-db@^1.0.16: version "1.1.0" resolved "https://registry.npmjs.org/update-browserslist-db/-/update-browserslist-db-1.1.0.tgz#7ca61c0d8650766090728046e416a8cde682859e" @@ -15198,11 +14249,6 @@ uri-js@^4.2.2, uri-js@^4.4.1: dependencies: punycode "^2.1.0" -urix@^0.1.0: - version "0.1.0" - resolved "https://registry.npmjs.org/urix/-/urix-0.1.0.tgz#da937f7a62e21fec1fd18d49b35c2935067a6c72" - integrity sha512-Am1ousAhSLBeB9cG/7k7r2R0zj50uDRlZHPGbazid5s9rlF1F/QKYObEKSIunSjIOkJZqwRRLpvewjEkM7pSqg== - url-parse@^1.5.3: version "1.5.10" resolved "https://registry.npmjs.org/url-parse/-/url-parse-1.5.10.tgz#9d3c2f736c1d75dd3bd2be507dcc111f1e2ea9c1" @@ -15211,19 +14257,6 @@ url-parse@^1.5.3: querystringify "^2.1.1" requires-port "^1.0.0" -url@^0.11.0: - version "0.11.3" - resolved "https://registry.npmjs.org/url/-/url-0.11.3.tgz#6f495f4b935de40ce4a0a52faee8954244f3d3ad" - integrity sha512-6hxOLGfZASQK/cijlZnZJTq8OXAkt/3YGfQX45vvMYXpZoo8NdWZcY73K108Jf759lS1Bv/8wXnHDTSz17dSRw== - dependencies: - punycode "^1.4.1" - qs "^6.11.2" - -use@^3.1.0: - version "3.1.1" - resolved "https://registry.npmjs.org/use/-/use-3.1.1.tgz#d50c8cac79a19fbc20f2911f56eb973f4e10070f" - integrity sha512-cwESVXlO3url9YWlFW/TA9cshCEhtu7IKJ/p5soJ/gGpj7vbvFrAY/eIioQ6Dw23KjZhYgiIo8HOs1nQ2vr/oQ== - username-sync@^1.0.2: version "1.0.3" resolved "https://registry.npmjs.org/username-sync/-/username-sync-1.0.3.tgz#ae41c5c8a4c8c2ecc1443a7d0742742bd7e36732" @@ -15239,20 +14272,6 @@ util-extend@^1.0.1: resolved "https://registry.npmjs.org/util-extend/-/util-extend-1.0.3.tgz#a7c216d267545169637b3b6edc6ca9119e2ff93f" integrity sha512-mLs5zAK+ctllYBj+iAQvlDCwoxU/WDOUaJkcFudeiAX6OajC6BKXJUa9a+tbtkC11dz2Ufb7h0lyvIOVn4LADA== -util@^0.10.4: - version "0.10.4" - resolved "https://registry.npmjs.org/util/-/util-0.10.4.tgz#3aa0125bfe668a4672de58857d3ace27ecb76901" - integrity sha512-0Pm9hTQ3se5ll1XihRic3FDIku70C+iHUdT/W926rSgHV5QgXsYbKZN8MSC3tJtSkhuROzvsQjAaFENRXr+19A== - dependencies: - inherits "2.0.3" - -util@^0.11.0: - version "0.11.1" - resolved "https://registry.npmjs.org/util/-/util-0.11.1.tgz#3236733720ec64bb27f6e26f421aaa2e1b588d61" - integrity sha512-HShAsny+zS2TZfaXxD9tYj4HQGlBezXZMZuM/S5PKLLoZkShZiGk9o5CzukI1LVHZvjdvZ2Sj1aW/Ndn2NB/HQ== - dependencies: - inherits "2.0.3" - utils-merge@1.0.1: version "1.0.1" resolved 
"https://registry.npmjs.org/utils-merge/-/utils-merge-1.0.1.tgz#9f95710f50a267947b2ccc124741c1028427e713" @@ -15319,11 +14338,6 @@ vfile@^4.0.0: unist-util-stringify-position "^2.0.0" vfile-message "^2.0.0" -vm-browserify@^1.0.1: - version "1.1.2" - resolved "https://registry.npmjs.org/vm-browserify/-/vm-browserify-1.1.2.tgz#78641c488b8e6ca91a75f511e7a3b32a86e5dda0" - integrity sha512-2ham8XPWTONajOR0ohOKOHXkm3+gaBmGut3SRuu75xLd/RRaY6vqgh8NBYYk7+RW3u5AtzPQZG8F10LHkl0lAQ== - w3c-hr-time@^1.0.2: version "1.0.2" resolved "https://registry.npmjs.org/w3c-hr-time/-/w3c-hr-time-1.0.2.tgz#0a89cdf5cc15822df9c360543676963e0cc308cd" @@ -15331,6 +14345,11 @@ w3c-hr-time@^1.0.2: dependencies: browser-process-hrtime "^1.0.0" +w3c-keyname@^2.2.4: + version "2.2.8" + resolved "https://registry.npmjs.org/w3c-keyname/-/w3c-keyname-2.2.8.tgz#7b17c8c6883d4e8b86ac8aba79d39e880f8869c5" + integrity sha512-dpojBhNsCNN7T82Tm7k26A6G9ML3NkhDsnw9n/eoxSRlVBB4CEtIQ/KTCLI2Fwf3ataSXRhYFkQi3SlnFwPvPQ== + w3c-xmlserializer@^2.0.0: version "2.0.0" resolved "https://registry.npmjs.org/w3c-xmlserializer/-/w3c-xmlserializer-2.0.0.tgz#3e7104a05b75146cc60f564380b7f683acf1020a" @@ -15399,24 +14418,6 @@ watch-detector@^1.0.0: silent-error "^1.1.1" tmp "^0.1.0" -watchpack-chokidar2@^2.0.1: - version "2.0.1" - resolved "https://registry.npmjs.org/watchpack-chokidar2/-/watchpack-chokidar2-2.0.1.tgz#38500072ee6ece66f3769936950ea1771be1c957" - integrity sha512-nCFfBIPKr5Sh61s4LPpy1Wtfi0HE8isJ3d2Yb5/Ppw2P2B/3eVSEBjKfN0fmHJSK14+31KwMKmcrzs2GM4P0Ww== - dependencies: - chokidar "^2.1.8" - -watchpack@^1.7.4: - version "1.7.5" - resolved "https://registry.npmjs.org/watchpack/-/watchpack-1.7.5.tgz#1267e6c55e0b9b5be44c2023aed5437a2c26c453" - integrity sha512-9P3MWk6SrKjHsGkLT2KHXdQ/9SNkyoJbabxnKOoJepsvJjJG8uYTR3yTPxPQvNDI3w4Nz1xnE0TLHK4RIVe/MQ== - dependencies: - graceful-fs "^4.1.2" - neo-async "^2.5.0" - optionalDependencies: - chokidar "^3.4.1" - watchpack-chokidar2 "^2.0.1" - watchpack@^2.4.1: version "2.4.1" resolved "https://registry.npmjs.org/watchpack/-/watchpack-2.4.1.tgz#29308f2cac150fa8e4c92f90e0ec954a9fed7fff" @@ -15454,54 +14455,16 @@ webidl-conversions@^6.1.0: resolved "https://registry.npmjs.org/webidl-conversions/-/webidl-conversions-6.1.0.tgz#9111b4d7ea80acd40f5270d666621afa78b69514" integrity sha512-qBIvFLGiBpLjfwmYAaHPXsn+ho5xZnGvyGvsarywGNc8VyQJUMHJ8OBKGGrPER0okBeMDaan4mNBlgBROxuI8w== -webpack-sources@^1.4.0, webpack-sources@^1.4.1: - version "1.4.3" - resolved "https://registry.npmjs.org/webpack-sources/-/webpack-sources-1.4.3.tgz#eedd8ec0b928fbf1cbfe994e22d2d890f330a933" - integrity sha512-lgTS3Xhv1lCOKo7SA5TjKXMjpSM4sBjNV5+q2bqesbSPs5FjGmU6jjtBSkX9b4qW87vDIsCIlUPOEhbZrMdjeQ== - dependencies: - source-list-map "^2.0.0" - source-map "~0.6.1" - webpack-sources@^3.2.3: version "3.2.3" resolved "https://registry.npmjs.org/webpack-sources/-/webpack-sources-3.2.3.tgz#2d4daab8451fd4b240cc27055ff6a0c2ccea0cde" integrity sha512-/DyMEOrDgLKKIG0fmvtz+4dUX/3Ghozwgm6iPp8KRhvn+eQf9+Q7GWxVNMk3+uCPWfdXYC4ExGBckIXdFEfH1w== -webpack@^4.43.0: - version "4.47.0" - resolved "https://registry.npmjs.org/webpack/-/webpack-4.47.0.tgz#8b8a02152d7076aeb03b61b47dad2eeed9810ebc" - integrity sha512-td7fYwgLSrky3fI1EuU5cneU4+pbH6GgOfuKNS1tNPcfdGinGELAqsb/BP4nnvZyKSG2i/xFGU7+n2PvZA8HJQ== - dependencies: - "@webassemblyjs/ast" "1.9.0" - "@webassemblyjs/helper-module-context" "1.9.0" - "@webassemblyjs/wasm-edit" "1.9.0" - "@webassemblyjs/wasm-parser" "1.9.0" - acorn "^6.4.1" - ajv "^6.10.2" - ajv-keywords "^3.4.1" - chrome-trace-event "^1.0.2" - 
enhanced-resolve "^4.5.0" - eslint-scope "^4.0.3" - json-parse-better-errors "^1.0.2" - loader-runner "^2.4.0" - loader-utils "^1.2.3" - memory-fs "^0.4.1" - micromatch "^3.1.10" - mkdirp "^0.5.3" - neo-async "^2.6.1" - node-libs-browser "^2.2.1" - schema-utils "^1.0.0" - tapable "^1.1.3" - terser-webpack-plugin "^1.4.3" - watchpack "^1.7.4" - webpack-sources "^1.4.1" - -webpack@^5.74.0: - version "5.92.1" - resolved "https://registry.npmjs.org/webpack/-/webpack-5.92.1.tgz#eca5c1725b9e189cffbd86e8b6c3c7400efc5788" - integrity sha512-JECQ7IwJb+7fgUFBlrJzbyu3GEuNBcdqr1LD7IbSzwkSmIevTm8PF+wej3Oxuz/JFBUZ6O1o43zsPkwm1C4TmA== - dependencies: - "@types/eslint-scope" "^3.7.3" +webpack@5.94.0, webpack@^4.43.0, webpack@^5.74.0: + version "5.94.0" + resolved "https://registry.npmjs.org/webpack/-/webpack-5.94.0.tgz#77a6089c716e7ab90c1c67574a28da518a20970f" + integrity sha512-KcsGn50VT+06JH/iunZJedYGUJS5FGjow8wb9c0v5n1Om8O1g4L6LjtfxwlXIATopoQu+vOXXa7gYisWxCoPyg== + dependencies: "@types/estree" "^1.0.5" "@webassemblyjs/ast" "^1.12.1" "@webassemblyjs/wasm-edit" "^1.12.1" @@ -15510,7 +14473,7 @@ webpack@^5.74.0: acorn-import-attributes "^1.9.5" browserslist "^4.21.10" chrome-trace-event "^1.0.2" - enhanced-resolve "^5.17.0" + enhanced-resolve "^5.17.1" es-module-lexer "^1.2.1" eslint-scope "5.1.1" events "^3.2.0" @@ -15642,13 +14605,6 @@ wordwrap@^1.0.0: resolved "https://registry.npmjs.org/wordwrap/-/wordwrap-1.0.0.tgz#27584810891456a4171c8d0226441ade90cbcaeb" integrity sha512-gvVzJFlPycKc5dZN4yPkP8w7Dc37BtP1yczEneOb4uq34pXZcvrtRTmWV8W+Ume+XCxKgbjM+nevkyFPMybd4Q== -worker-farm@^1.7.0: - version "1.7.0" - resolved "https://registry.npmjs.org/worker-farm/-/worker-farm-1.7.0.tgz#26a94c5391bbca926152002f69b84a4bf772e5a8" - integrity sha512-rvw3QTZc8lAxyVrqcSGVm5yP/IJ2UcB3U0graE3LCFoZ0Yn2x4EoVSqJKdB/T5M+FLcRPjz4TDacRf3OCfNUzw== - dependencies: - errno "~0.1.7" - workerpool@^2.3.0: version "2.3.4" resolved "https://registry.npmjs.org/workerpool/-/workerpool-2.3.4.tgz#661335ded59a08c01ca009e30cc96929a7b4b0aa" @@ -15665,7 +14621,7 @@ workerpool@^3.1.1: object-assign "4.1.1" rsvp "^4.8.4" -workerpool@^6.0.2, workerpool@^6.1.4, workerpool@^6.1.5: +workerpool@^6.1.4, workerpool@^6.1.5: version "6.5.1" resolved "https://registry.npmjs.org/workerpool/-/workerpool-6.5.1.tgz#060f73b39d0caf97c6db64da004cd01b4c099544" integrity sha512-Fs4dNYcsdpYSAfVxhnl1L5zTksjvOJxtC5hzMNl+1t9B8hTJTdKDyZ5ju7ztgPy+ft9tBFXoOlDNiOT9WUXZlA== @@ -15743,16 +14699,11 @@ xmlhttprequest-ssl@^1.6.3: resolved "https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.6.3.tgz#03b713873b01659dfa2c1c5d056065b27ddc2de6" integrity sha512-3XfeQE/wNkvrIktn2Kf0869fC0BN6UpydVasGIeSm2B1Llihf7/0UfZM+eCkOw3P7bP4+qPgqhm7ZoxuJtFU0Q== -xtend@^4.0.0, xtend@~4.0.1: +xtend@^4.0.0: version "4.0.2" resolved "https://registry.npmjs.org/xtend/-/xtend-4.0.2.tgz#bb72779f5fa465186b1f438f674fa347fdb5db54" integrity sha512-LKYU1iAXJXUgAXn9URjiu+MWhyUXHsvfp7mcuYm9dSUKK0/CjtrUwFAxD82/mCWbtLsGjFIad0wIsod4zrTAEQ== -y18n@^4.0.0: - version "4.0.3" - resolved "https://registry.npmjs.org/y18n/-/y18n-4.0.3.tgz#b5f259c82cd6e336921efd7bfd8bf560de9eeedf" - integrity sha512-JKhqTOwSrqNA1NY5lSztJ1GrBiUodLMmIZuLiDaMRJ+itFd+ABVE8XBjOvIWL+rSqNDC74LCSFmlb/U4UZ4hJQ== - y18n@^5.0.5: version "5.0.8" resolved "https://registry.npmjs.org/y18n/-/y18n-5.0.8.tgz#7f4934d0f7ca8c56f95314939ddcd2dd91ce1d55" @@ -15786,11 +14737,6 @@ yaml@^1.10.0, yaml@^1.9.2: resolved "https://registry.npmjs.org/yaml/-/yaml-1.10.2.tgz#2301c5ffbf12b467de8da2333a459e29e7920e4b" integrity 
sha512-r3vXyErRCYJ7wg28yvBY5VSoAF8ZvlcW9/BwUzEtUsjvX/DKs24dIkuwjtuprwJJHsbyUbLApepYTR1BN4uHrg== -yaml@^2.3.4: - version "2.4.5" - resolved "https://registry.npmjs.org/yaml/-/yaml-2.4.5.tgz#60630b206dd6d84df97003d33fc1ddf6296cca5e" - integrity sha512-aBx2bnqDzVOyNKfsysjA2ms5ZlnjSAW2eG3/L5G/CSujfjLJTJsEw1bGw8kCf04KodQWk1pxlGnZ56CRxiawmg== - yargs-parser@^20.2.2: version "20.2.9" resolved "https://registry.npmjs.org/yargs-parser/-/yargs-parser-20.2.9.tgz#2eb7dc3b0289718fc295f362753845c41a0c94ee" diff --git a/version/VERSION b/version/VERSION index b148ac3829b7..0b996293dc4c 100644 --- a/version/VERSION +++ b/version/VERSION @@ -1 +1 @@ -1.21.0-dev \ No newline at end of file +1.21.4-dev \ No newline at end of file diff --git a/website/.gitignore b/website/.gitignore index 7d809dab7e15..c16b381f857c 100644 --- a/website/.gitignore +++ b/website/.gitignore @@ -8,3 +8,4 @@ out .env*.local website-preview +.vercel diff --git a/website/content/README.md b/website/content/README.md new file mode 100644 index 000000000000..21f6fcf83af4 --- /dev/null +++ b/website/content/README.md @@ -0,0 +1,607 @@ +# Information architecture and content strategy for Consul documentation + +The `website/content` directory in the `hashicorp/consul` repository contains [the Consul documentation on developer.hashicorp.com](https://developer.hashicorp.com/consul). This `README` describes the directory structure and design principles for this documentation set. + +`README` table of contents: + +- [Content directory overview](#content-directory-overview) +- [North star principles for content design](#north-star-principles) +- [Consul content strategy](#content-strategy), including user persona and jobs-to-be-done +- [Consul taxonomy](#taxonomy) +- [Path syntax](#path-syntax) for directory name and nesting guidelines +- [Controlled vocabularies](#controlled-vocabularies) for Consul terms and labeling standards +- [Guide to partials](#guide-to-partials) +- [How to document new Consul features](#how-to-document-new-consul-features) +- [Maintaining and deprecating content](#maintaining-and-deprecating-content) + +To update the contents of this document, create a PR against the `main` branch of the `hashicorp/consul` GitHub repository. Apply the label `type/docs` to the PR to request review from an approver in the `consul-docs` group. + +## Content directory overview + +The `website/content` directory in the `hashicorp/consul` GitHub repo contains the following sub-directories: + +``` +. +├── api-docs +├── commands +├── docs +└── partials +``` + +After you merge a PR into a numbered release branch, changes to these folders appear at the following URLs: + +- Changes to `api-docs` appear at [https://developer.hashicorp.com/consul/api-docs](https://developer.hashicorp.com/consul/api-docs). +- Changes to `commands` appear at [https://developer.hashicorp.com/consul/commands](https://developer.hashicorp.com/consul/commands). +- Changes to `docs` appear at [https://developer.hashicorp.com/consul/docs](https://developer.hashicorp.com/consul/docs). + +URLs follow the directory structure for each file and omit the the `.mdx` file extension. Pages named `index.mdx` adopt their directory's name. For example, the file `docs/reference/agent/configuration-file/index.mdx` appears at the URL [https://developer.hashicorp.com/consul/docs/reference/agent/configuration-file](https://developer.hashicorp.com/consul/docs/reference/agent/configuration-file). 
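+For example, the following sketch shows how two hypothetical file paths would map to their published URLs under this rule:
+
+```plaintext
+# Hypothetical paths shown only to illustrate the path-to-URL mapping
+docs/deploy/index.mdx        -> https://developer.hashicorp.com/consul/docs/deploy
+docs/fundamentals/agent.mdx  -> https://developer.hashicorp.com/consul/docs/fundamentals/agent
+```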
+ +The `partials` folder includes content that you can reuse across pages in any of the three folders. Refer to [Guide to Partials](#guide-to-partials) for more information. + +Tutorials that appear at [https://developer.hashicorp.com/consul/tutorials](https://developer.hashicorp.com/consul/tutorials) are located in a different repository. This content exists in the [hashicorp/tutorials GitHub repo](https://github.com/hashicorp/tutorials), which is internal to the HashiCorp organization. + +### Other directories of note + +The `website/data` directory contains `.json` files that populate the navigation sidebar on [developer.hashicorp.com](https://developer.hashicorp.com). + +The `website/public/img` directory contains the images used in the documentation. + +Instructions on editing these files, including instructions on running local builds of the documentation, are in the `README` for the `website` directory, one level above this one. + +## North Star principles + +The design of the content in the `docs/` directory, including structure, file paths, and labels, is governed by the following _north star principles_. + +1. **Users are humans**. Design for humans first. For example, file paths become URLs; create human-readable descriptions of the content and avoid unnecessary repetition. +1. **Less is always more**. Prefer single words for folder and file names; add a hyphen and a second word to disambiguate from existing content. +1. **Document what currently exists**. Do not create speculative folders and files to "reserve space" for future updates and releases. Do not describe Consul as it will exist in the future; describe it as it exists right now, in the latest release. +1. **Beauty works better**. When creating new files and directories, strive for consistency with the existing structure. For example, use parallel structures across directories and flatten directories that run too deep. Tip: If it doesn't look right, it's probably not right. +1. **Prefer partials over `ctrl+v`**. Spread content out, but document unique information in one place. When you need to repeat content across multiple pages, use partials to maintain content. + +These principles exist to help you navigate ambiguity when making changes to the underlying content. If you add new content and you're not quite sure where to place it or how to name it, use these "north stars" to help you make an informed decision about what to do. + +Over time, Consul may change in ways that require significant edits to this information architecture. The IA and content strategy were designed with this possibility in mind. Use these north star principles to help you make informed (and preferably incremental) changes over time. + +## Content strategy + +Consul's content strategy centers on three main considerations: + +- **User persona** considers the roles of Consul users in DevOps workflows, which may be either broad or narrowly defined. +- **Jobs-to-be-done** includes the practical outcomes users want to achieve when using Consul to address a latent concern. +- **Content type** asks what kind of content exists on the page, and follows the principles of Diataxis. + +You should keep all three of the considerations in mind when creating new content or updating existing content in the documentation and tutorials. Ask yourself the following questions to help you determine your content needs: + +- Who will use this documentation? +- What concerns will that person have? +- What goals are they trying to accomplish? 
+- What kind of content would help the user achieve their goal? + +For more information about recommended workflow patterns, refer to [How to document new Consul features](#how-to-document-new-consul-features) and [Maintaining and deprecating content](#maintaining-and-deprecating-content). + +### User personas, jobs-to-be-done, and critical user journeys + +Consul is a flexible service networking tool, with applications across DevOps workflows. The following table lists Consul's user personas, examples of their major concerns, and typical jobs that this user wants to complete using Consul's features. + +| User persona | Jobs-to-be-done | Critical user journeys | +| :-------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Application developer | • Application is discoverable across on-prem and cloud environments.
• Applications can reliably find and connect to dependent upstream services. | • I want to use Consul to register the locations of my applications across infrastructure environments so that they can be discovered by downstream applications.
• I want to define intelligent failover policies for my applications so that my services are highly available and fault tolerant.
• I want to use Consul's service catalog to find and connect to healthy upstream services across multiple clouds and runtime environments. | +| Platform engineer | • Architect a global service registry that makes services available regardless of infrastructure.
• Reliability and availability of the service registry so that I can meet service level objectives. | • I want to implement monitoring and alerting for Consul in order to quickly identify and resolve cluster instability or service connectivity issues.
• I want to automate recovery procedures and ensure resilience in order to meet SLO and SLA objectives.
• I want to interconnect Consul clusters to enable unified service discovery across cloud and on-premises environments.
• I want to provide guidance and guardrails for enabling application developers to register services into Consul, and ensure their services are highly available. | +| Security engineer | • Ensure that critical infrastructure and services adhere to corporate security policies. | • I want to ensure that communication between critical infrastructure services is encrypted in transit, and access to systems is appropriately controlled and audited.
• I want to integrate Consul authentication with existing corporate identity and access management (IAM) systems.
• I want to ensure that users have least privileged, but sufficient access to Consul for managing and discovering services. | + +### Content types + +The content we create and host on developer.hashicorp.com follows the principles of the [Diátaxis method for structured documentation](https://diataxis.fr/), which use the following basic content types: + +- Explanation +- Usage +- Reference +- Tutorials + +Because tutorials are hosted in a separate repository, this README focuses on the first three content types. + +Within the "Explanation" category, we use three different types of pages, each of which has a distinct purpose. + +- **Index** pages provide lists of links to supporting documentation on a subject. [Example: Deploy Consul](https://developer.hashicorp.com/consul/docs/deploy) +- **Overview** pages provide an introduction to a subject and serve as a central information point. [Example: Expand service network east/west](https://developer.hashicorp.com/consul/docs/east-west) +- **Concept** pages provide discursive explanations of Consul's underlying systems and their operations. [Example: Consul catalog](https://developer.hashicorp.com/consul/docs/concept/catalog) + +## Taxonomy + +There are three main categories in the Consul docs information architecture. This division of categories is _not literal_ to the directory structure, even though the **Reference** category includes the repository's `reference` folder. + +- Intro +- Usage +- Reference + +These categories intentionally align with [Diataxis](https://diataxis.fr/). + +The following diagram summarizes the taxonomy of the Consul documentation: + +![Diagram of Consul information architecture's taxonomy](../public/img/taxonomy.png) + +In this image, the blue boxes represent literal directories. The salmon and purple boxes around them are figurative categories that are not literally represented in the file structure. + +### Intro + +The **Intro** category includes the following folders in `website/content/docs/`: + +- `architecture` +- `concept` +- `enterprise` +- `fundamentals` +- `use-case` + +The following table lists each term and a definition to help you decide where to place new content. + +| Term | Directory | What it includes | +| :----------- | :------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Architecture | `architecture` | The product's components and their “maps” in cloud networking contexts. | +| Concepts | `concept` | Describes the complex behavior of technical systems in a non-literal manner. For example, computers do not literally “gossip” when they use the gossip protocol. | +| Enterprise | `enterprise` | Consul Enterprise license offerings and how to implement them. | +| Fundamentals | `fundamentals` | The knowledge, connection and authorization methods, interactions, configurations, and operations most users require to use the product. | +| Use cases | `use-case` | The highest level goals practitioners have; a product function that “solves” enterprise concerns and usually competes with other products. 
| + +#### User persona indexed to intro topic + +This table indexes each intro directory and its contents with the typical concerns of the user persona based on their jobs-to-be-done and critical user journeys: + +| Intro topic | Platform engineer | Security engineer | Application developer | +| :----------- | :---------------: | :---------------: | :-------------------: | +| Architecture | ✅ | ✅ | ❌ | +| Concepts | ✅ | ✅ | ✅ | +| Enterprise | ✅ | ❌ | ❌ | +| Fundamentals | ✅ | ✅ | ✅ | +| Use cases | ✅ | ✅ | ✅ | + +The purpose of this table is to validate the relationship between the information architecture and the content strategy by indexing them to one another. Potential applications for this table include curricular learning paths and targeted content expansion. + +### Usage + +The **Usage** category includes the following folders in `website/content/docs/`: + +- `automate` +- `connect` +- `deploy` +- `discover` +- `east-west` +- `envoy-extension` +- `integrate` +- `manage` +- `manage-traffic` +- `monitor` +- `multi-tenant` +- `north-south` +- `observe` +- `register` +- `release-notes` +- `secure` +- `secure-mesh` +- `upgrade` + +These folders are organized into two groups that are _not literal_ to the directory structure, but are reflected in the navigation bar. + +- **Operations**. User actions, workflows, and goals related to installing and operating Consul as a long-running daemon on multiple nodes in a network. +- **Service networking**. User actions, workflows, and goals related to networking solutions for application workloads. + +Each folder is named after a corresponding _phase_, which have a set order in the group. + +#### Operations + +Operations consists of the following phases, intentionally ordered to reflect the full lifecycle of a Consul agent. + +| Phase | Directory | Description | +| :------------------- | :-------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Deploy Consul | `deploy` | The processes to install and start Consul server agents, client agents, and dataplanes. | +| Secure Consul | `secure` | The processes to set up and maintain secure communications with Consul agents, including ACLs, TLS, and gossip. | +| Manage multi-tenancy | `multi-tenant` | The processes to use one Consul datacenter for multiple tenants, including admin partitions, namespaces, network segments, and sameness groups. | +| Manage Consul | `manage` | The processes to manage and customize Consul's behavior, including DNS forwarding on nodes, server disaster recovery, rate limiting, and scaling options. | +| Monitor Consul | `monitor` | The processes to export Consul logs and telemetry for insight into agent behavior. | +| Upgrade Consul | `upgrade` | The processes to update the Consul version running in datacenters. | +| Release Notes | `release-notes` | Describes new, changed, and deprecated features for each release of Consul and major associated binaries. | + +#### Service networking + +Service Networking consists of the following phases, intentionally ordered to reflect a recommended “order of operations.” Although these phases do not need to be completed in order, their order reflects a general path for increasing complexity in Consul’s service networking capabilities as you develop your network. 
+ +| Phase | Directory | Description | +| :--------------------- | :--------------- | :--------------------------------------------------------------------------------------------------------------------------------------- | +| Register services | `register` | How to define services and health checks and then register them with Consul. | +| Discover services | `discover` | How to use Consul's service discovery features, including Consul DNS, service lookups, load balancing. | +| Connect service mesh | `connect` | How to set up and use sidecar proxies in a Consul service mesh. | +| Secure north/south | `north-south` | How to configure, deploy, and use the Consul API gateway to secure network ingress. | +| Expand east/west | `east-west` | How to connect Consul datacenters across regions, runtimes, and providers with WAN federation and cluster peering. | +| Secure mesh traffic | `secure-mesh` | How to secure service-to-service communication with service intentions and TLS certificates. | +| Manage service traffic | `manage-traffic` | How to route traffic between services in a service mesh, including service failover and progressive rollouts. | +| Observe service mesh | `observe` | How to observe service mesh telemetry and application performance, including Grafana. | +| Automate applications | `automate` | How to automate Consul and applications to update dynamically, including the KV store, Consul-Terraform-Sync (CTS), and Consul template. | + +#### User persona indexed to usage phase + +This table indexes each usage directory and its contents with the typical concerns of the user persona based on their jobs-to-be-done and critical user journeys: + +| Usage phase | Platform engineer | Security engineer | Application developer | +| :--------------------- | :---------------: | :---------------: | :-------------------: | +| Deploy Consul | ✅ | ✅ | ❌ | +| Secure Consul | ✅ | ✅ | ❌ | +| Manage multi-tenancy | ✅ | ✅ | ❌ | +| Manage Consul | ✅ | ❌ | ❌ | +| Monitor Consul | ✅ | ✅ | ❌ | +| Upgrade Consul | ✅ | ❌ | ❌ | +| Release Notes | ✅ | ❌ | ❌ | +| Register services | ✅ | ❌ | ✅ | +| Discover services | ✅ | ❌ | ✅ | +| Connect service mesh | ✅ | ❌ | ❌ | +| Secure north/south | ✅ | ✅ | ❌ | +| Expand east/west | ✅ | ✅ | ❌ | +| Secure mesh traffic | ✅ | ✅ | ❌ | +| Manage service traffic | ✅ | ❌ | ❌ | +| Observe service mesh | ✅ | ❌ | ❌ | +| Automate applications | ✅ | ❌ | ✅ | + +The purpose of this table is to validate the relationship between the information architecture and the content strategy by indexing them to one another. Potential applications for this table include curricular learning paths and targeted content expansion. + +### Reference + +The **Reference** category includes the following folders in `website/content/docs/`: + +- `error-messages` +- `reference` +- `troubleshoot` + +The following table lists each term and a definition to help you decide where to place new content. + +| Term | Directory | What it includes | +| :------------- | :---------------- | :--------------------------------------------------------------------------------------------------- | +| Error Messages | `error-messsages` | Error messages and their causes, organized by runtime and Consul release binary. | +| Reference | `reference` | All reference information for configuring Consul, its components, and the infrastructure it runs on. | +| Troubleshoot | `troubleshoot` | Instructions and guidance about how to figure out what's wrong with a Consul deployment. 
| + +### User persona indexed to reference subject + +This table indexes each reference and its contents with the typical concerns of the user persona based on their jobs-to-be-done and critical user journeys. + +| Reference subject | Platform engineer | Security engineer | Application developer | +| :----------------------- | :---------------: | :---------------: | :-------------------: | +| Error messages | ✅ | ❌ | ❌ | +| Reference specifications | ✅ | ✅ | ✅ | +| Troubleshoot | ✅ | ❌ | ✅ | + +The purpose of this table is to validate the relationship between the information architecture and the content strategy by indexing them to one another. Potential applications for this table include curricular learning paths and targeted content expansion. + +## Path syntax + +A major advantage of this information architecture is the filepath structure. This structure "tags" documentation with keywords that describe the page's content to optimize the documentation for Google SEO while also helping human users build a "mental model" of Consul. + +Our syntax creates human-readable names for file paths using a controlled vocabulary and intentional permutations. In general, the syntax follows a repeating pattern of `Verb / Noun / Adjective` to describe increasingly specific content and user goals. + +For **Consul operations**, filepaths assume the following syntax: + + + +```plaintext +Phase / Component / Runtime / Action / Provider +``` + + + +Examples: + +- `secure/encryption/tls/rotate/vm` contains instructions for rotating TLS certificates Consul agents use to secure their communications when running on virtual machines. +- `deploy/server/k8s/platform/openshift` contains instructions on deploying a Consul server agent when running OpenShift. +- `upgrade/k8s/compatibility` contains information about compatible software versions to help you upgrade the version of Consul running on Kubernetes. + +For **service networking**, filepaths tend to have the following order: + + + +```plaintext +Phase / Feature / Action / Runtime / Interface +``` + + + +Examples: + +- `discover/load-balancer/nginx` contains instructions for using NGINX as a load balancer based on Consul service discovery results. +- `east-west/cluster-peering/establish/k8s` contains instructions for creating new connections between Consul clusters running on Kubernetes in different regions or cloud providers. +- `register/service/k8s/external` contains information about how to register services running on external nodes to Consul on Kubernetes by configuring them to join the Consul datacenter. +- `register/external/esm/k8s` contains information about registering services running on external nodes to Consul on Kubernetes using the External Services Manager (ESM). + +## Controlled vocabularies + +This section lists the standard names for files and directories, divided into sub-groups based on the syntax guide in this `README`. 
The following list provides links to specific vocabulary groups: + +- [Architecture vocabulary](#architecture-vocabulary) +- [Concepts vocabulary](#concepts-vocabulary) +- [Use case vocabulary](#use-case-vocabulary) +- [Components vocabulary](#components-vocabulary) +- [Features vocabulary](#features-vocabulary) +- [Runtimes vocabularly](#runtimes-vocabulary) +- [Actions vocabulary](#actions-vocabulary) +- [Providers vocabulary](#providers-vocabulary) +- [Interfaces vocabulary](#interfaces-vocabulary) +- [Configuration entry vocabulary](#configuration-entry-vocabulary) +- [Envoy extension vocabulary](#envoy-extension-vocabulary) + +### Architecture vocabulary + +Consul's _architecture_ vocabulary is structured according to where components run: + +- `control-plane`: The _control plane_ is the network infrastructure that maintains a central registry to track services and their respective IP addresses. Both server and client agents operate as part of the control plane. Consul dataplanes, despite the name, are also part of the Consul control plane. +- `data-plane`: Use two words, _data plane_, to refer to the application layer and components involved in service-to-service communication. + +Common architecture terms and where they run: + +| Control plane | Data plane | +| :------------- | :--------- | +| `agent` | `gateway` | +| `server agent` | `mesh` | +| `client agent` | `proxy` | +| `dataplane` | `service` | + +The **Reference** category also includes an `architecture` sub-directory. This "Reference architecture" includes information such as port requirements, server requirements, and AWS ECS architecture. + +### Concepts vocabulary + +Consul's _concepts_ vocabulary collects terms that describe how internal systems operate through human actions. + +| Concept | Label | Description | +| :-------------------------- | :------------ | :------------------------------------------------------------------------------------------------ | +| Consul catalog | `catalog` | Covers Consul's running service registry, which includes node addresses and health check results. | +| Consensus protocol (Raft) | `consensus` | Covers the server agent elections governed by the Raft protocol. | +| Cluster consistency | `consistency` | Covers Consul's anti-entropy features, consistency modes, and Jepsen testing. | +| Gossip communication (Serf) | `gossip` | Covers Serf communication between Consul agents in a datacenter. | +| Datacenter reliability | `reliability` | Covers fault tolerance, quorum size, and server redundancy. | + +### Use case vocabulary + +Consul's _use case_ vocabulary collects terms that describe the highest-level goals users have that would lead them to choose Consul as their networking solution. + +| Use case | Label | +| :-------------------------------- | :------------------ | +| Service discovery | `service-discovery` | +| Service mesh | `service-mesh` | +| API gateway security | `api-gateway` | +| Configuration management tooling | `config-management` | +| Domain Name Service (DNS) tooling | `dns` | + +### Components vocabulary + +Consul's _components_ vocabulary collects terms that describe Consul's built-in components, enterprise offerings, and other offerings that impact the operations of Consul agent clusters. 
+ +| Component | Label | +| :--------------------------- | :------------------ | +| Access Control List (ACL) | `acl` | +| Admin partition | `admin-partition` | +| Audit logs | `audit-log` | +| Automated backups | `automated-backup` | +| Automated upgrades | `automated-upgrade` | +| Auth methods | `auth-method` | +| Cloud auto-join | `cloud-auto-join` | +| Consul-Terraform-Sync | `cts` | +| DNS | `dns` | +| FIPS | `fips` | +| JSON Web Token Authorization | `jwt-auth` | +| Consul Enterprise License | `license` | +| Long term support (LTS) | `lts` | +| Namespaces | `namespace` | +| Network areas | `network-area` | +| Network segments | `network-segment` | +| OIDC Authorization | `oidc-auth` | +| Agent rate limits | `rate-limit` | +| Read repilicas | `read-replica` | +| Redundancy zones | `redundancy-zone` | +| Sentinel policies | `sentinel` | +| Agent snapshots | `snapshot` | +| Single sign on (SSO) | `sso` | + +### Features vocabulary + +Consul's _features_ vocabulary collects terms that describe Consul product offerings related to service networking for application workloads. + +| Feature | Label | +| :------------------------------------------------ | :--------------------- | +| Cluster peering | `cluster-peering` | +| Consul template | `consul-template` | +| Consul DNS | `dns` | +| Discovery chain | `discovery-chain` | +| Distributed tracing | `distributed-tracing` | +| External services manager (ESM) | `esm` | +| Failover | `failover` | +| Health checks | `health-check` | +| Service intentions | `intention` | +| Ingress gateway (deprecated) | `ingress-gateway` | +| Key/value store | `kv` | +| Load balancing | `load-balancer` | +| Logs | `log` | +| Mesh gateways | `mesh-gateway` | +| Mutual Transport Layer Security (mTLS) encryption | `mtls` | +| Prepared queries (dynamic service lookup) | `dynamic` | +| Progressive application rollouts | `progressive-rollouts` | +| Services | `service` | +| Sessions | `session` | +| Static DNS queries (static service lookup) | `static` | +| Service mesh telemetry | `telemetry` | +| Transparent proxy | `transparent-proxy` | +| Virtual services | `virtual-service` | +| Consul DNS views | `views` | +| Wide area network (WAN) federation | `wan-federation` | +| Watches | `watch` | + +### Runtimes vocabulary + +Consul's _runtimes_ vocabulary collects the underlying runtimes where Consul supports operations. + +| Runtimes | Label | +| :----------------------- | :------- | +| Virtual machines (VMs) | `vm` | +| Kubernetes | `k8s` | +| Nomad | `nomad` | +| Docker | `docker` | +| HashiCorp Cloud Platform | `hcp` | + +#### Provider-specific runtimes + +This sub-group includes provider-specific runtimes, such as EKS and AKS. + +| Provider-specific runtimes | Label | +| :----------------------------------- | :---------- | +| AWS Elastic Container Service (ECS) | `ecs` | +| AWS Elastic Kubernetes Service (EKS) | `eks` | +| AWS Lambda (serverless) | `lambda` | +| Azure Kubernetes Service (AKS) | `aks` | +| Google Kubernetes Service (GKS) | `gks` | +| OpenShift | `openshift` | +| Argo | `argo` | + +### Actions vocabulary + +Consul's _actions_ vocabulary collects the actions user take to operate Consul and enact service networking states. 
+ +| Action | Label | +| :------------------------- | :--------------- | +| Backup and restore | `backup-restore` | +| Bootstrap | `bootstrap` | +| Configure | `configure` | +| Deploy | `deploy` | +| Enable | `enable` | +| Encrypt | `encrypt` | +| Forward | `forwarding` | +| Initialize a system | `initialize` | +| Install a software package | `install` | +| Deploy a listener | `listener` | +| Manually take an action | `manual` | +| Migrate | `migrate` | +| Create a module | `module` | +| Monitor | `monitor` | +| Peer | `peer` | +| Render | `render` | +| Requirements | `requirements` | +| Reroute traffic | `reroute` | +| Rotate a certificate | `rotate` | +| Route traffic | `route` | +| Run | `run` | +| Source | `source` | +| Store | `store` | +| Technical specifications | `tech-specs` | + +### Providers vocabulary + +Consul's _providers_ vocabulary collects the cloud providers and server locations that Consul runs on. + +| Provider | Label | +| :-------------------------- | :--------- | +| Amazon Web Services (AWS) | `aws` | +| Microsoft Azure | `azure` | +| Google Cloud Platform (GCP) | `gcp` | +| External cloud provider | `external` | +| Custom cloud provider | `custom` | + +### Interfaces vocabulary + +Consul's _interfaces_ vocabulary includes the methods for interacting with Consul agents. + +| Interface | Label | +| :------------------------------------------- | :---- | +| Command Line Interface (CLI) | `cli` | +| HTTP Application Programming Interface (API) | `api` | +| Browser-based user interface (UI) | `ui` | + +### Configuration entry vocabulary + +Consul's _configuration entry_ vocabulary collects the names of the configuration entries and custom resource definitions (CRDs) that you must define to control service mesh state. + +| Name | Label | Configuration entry | CRD | +| :-------------------------- | :---------------------------- | :-----------------: | :------: | +| API gateway | `api-gateway` | ✅ | ❌ | +| Control plane request limit | `control-plane-request-limit` | ✅ | ✅ | +| Exported services | `exported-services` | ✅ | ✅ | +| File system certificates | `file-system-certificate` | ✅ | ❌ | +| HTTP route | `http-route` | ✅ | ❌ | +| Ingress gateway | `ingress-gateway` | ✅ | ✅ | +| Inline certificate | `inline-certificate` | ✅ | ❌ | +| JWT provider | `jwt-provider` | ✅ | ✅ | +| Mesh | `mesh` | ✅ | ✅ | +| Proxy defaults | `proxy-defaults` | ✅ | ✅ | +| Registration | `registration` | ❌ | ✅ | +| Sameness group | `sameness-group` | ✅ | ✅ | +| Service defaults | `service-defaults` | ✅ | ✅ | +| Service intentions | `service-intentions` | ✅ | ✅ | +| Service resolver | `service-resolver` | ✅ | ✅ | +| Service router | `service-router` | ✅ | ✅ | +| Service splitter | `service-splitter` | ✅ | ✅ | +| TCP route | `tcp-route` | ✅ | ❌ | +| Terminating gateway | `terminating-gateway` | ✅ | ✅ | + +### Envoy extension vocabulary + +Consul's _Envoy extension_ vocabulary collects names of supported extensions that run on Envoy proxies. + +| Envoy extension | Label | +| :------------------------------ | :------- | +| Apigee authorization | `apigee` | +| External service authorization | `ext` | +| AWS Lambda functions | `lambda` | +| Lua scripts | `lua` | +| OpenTelemetry collector service | `otel` | +| WebAssembly (WASM) plugins | `wasm` | + +## Guide to partials + +Partials have file paths that begin by describing the type of content. Then, the filepath mirrors existing structures in the main docs folder. 
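+For example, a docs page can pull one of these partials into its body with the `@include` directive used throughout this repository. The following snippet is an illustrative sketch only; it assumes a partial exists at `text/descriptions/agent.mdx`, and the exact path and file extension depend on the partials directory layout:
+
+```mdx
+## Consul agents
+
+<!-- Illustrative example: this partial path is an assumption based on the path conventions described in this guide -->
+@include 'text/descriptions/agent.mdx'
+```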
There are two syntaxes used for the partial filepaths: + + + ```plaintext +Format / Type / Phase / Feature / Runtime +Examples / Component / Action / Filetype +``` + + + +Examples: + +- `text/descriptions/agent` contains a reusable description of the Consul agent that includes a link to additional information +- `text/guidance/discover` contains all links to documentation and tutorials for the "Discover services" phase of usage +- `tables/compatibility/k8s/lts` and `tables/compatibility/k8s/version` contain version compatibility tables that must be updated for each Consul release +- `examples/agent/server/encrypted` contains an example of a server agent configuration file in a datacenter with encrypted TLS communication +- `examples/agent/client/register-service` contains an example of a client agent configuration file that also registers a service to the node + +Reasons to use partials: + +- You need to repeat the same information, such as steps or requirements, across runtimes or cloud providers +- You need to reference tables, especially ones that contain version numbers that are updated for each Consul release +- You need to include a configuration example that can be reused in both reference and usage contexts +- You need to add an entry to the glossary of terms + +## How to document new Consul features + +1. Create a file `name.mdx` that serves as an overview combining explanation, usage, and reference information. +2. When you need more pages, move the file to a folder named `name` and change the filename to `index.mdx`. +3. Create redirects as required. + +For example, "DNS views" was introduced for Kubernetes in Consul v1.20. We created a file, `manage/dns/views.mdx`, then expanded it to `manage/dns/views/index.mdx` and `manage/dns/views/enable` when the content was substantial enough to warrant separate pages. The first file is _always_ reachable at the URL `manage/dns/views`, despite the directory and filename change. The `k8s` label is not used because Kubernetes is the only runtime it supports. Hypothetically, if ECS support for DNS views became available, the directory structure for `content/docs/manage/dns` would become: + +``` +. +├── forwarding.mdx +└── views + ├── enable + | ├── ecs.mdx + | └── k8s.mdx + └── index.mdx +``` + +## Maintaining and deprecating content + +Documentation is considered "maintained" when the usage instructions work when running the oldest supported LTS release. + +When components and features are no longer maintained, they may be "deprecated" by R&D. To deprecate content: + +1. Add a deprecation callout to the page. List the date or version when the deprecation occurred. +1. On deprecation date, delete the content from the repository. Versioned docs preserves the information in older versions. If necessary, keep a single page in the documentation for announcement links and redirects. +1. Add redirects for deprecated content. +1. Move partials and images into a "legacy" folder if they are no longer used in the documentation. + +If it is possible to migrate existing data from a deprecated component to a replacement, document the migration steps. diff --git a/website/content/api-docs/acl/auth-methods.mdx b/website/content/api-docs/acl/auth-methods.mdx index 34f59cb866c5..4657f3b4da6c 100644 --- a/website/content/api-docs/acl/auth-methods.mdx +++ b/website/content/api-docs/acl/auth-methods.mdx @@ -15,7 +15,7 @@ ACL auth methods in Consul. 
For more information on how to setup ACLs, refer to the following resources: -- [Access control list (ACL) overview](/consul/docs/security/acl) +- [Access control list (ACL) overview](/consul/docs/secure/acl) - [ACL tutorial](/consul/tutorials/security/access-control-setup-production) ## Create an Auth Method @@ -51,7 +51,7 @@ The corresponding CLI command is [`consul acl auth-method create`](/consul/comma - `Type` `(string: )` - The type of auth method being configured. This field is immutable. For allowed values see the [auth method - documentation](/consul/docs/security/acl/auth-methods). + documentation](/consul/docs/secure/acl/auth-method). - `Description` `(string: "")` - Free form human readable description of the auth method. @@ -76,7 +76,7 @@ The corresponding CLI command is [`consul acl auth-method create`](/consul/comma - `Config` `(map[string]string: )` - The raw configuration to use for the chosen auth method. Contents will vary depending upon the type chosen. For more information on configuring specific auth method types, see the [auth - method documentation](/consul/docs/security/acl/auth-methods). + method documentation](/consul/docs/secure/acl/auth-method). - `Namespace` `(string: "")` - Specifies the namespace of the auth method you create. This field takes precedence over the `ns` query parameter, @@ -107,7 +107,7 @@ The corresponding CLI command is [`consul acl auth-method create`](/consul/comma prefixed-${serviceaccount.name} ``` -@include 'http-api-body-options-partition.mdx' +@include 'legacy/http-api-body-options-partition.mdx' ### Sample Payload @@ -180,7 +180,7 @@ The corresponding CLI command is [`consul acl auth-method read`](/consul/command - `ns` `(string: "")` - Specifies the namespace of the auth method you look up. You can also [specify the namespace through other methods](#methods-to-specify-namespace). -@include 'http-api-query-parms-partition.mdx' +@include 'legacy/http-api-query-parms-partition.mdx' ### Sample Request @@ -267,7 +267,7 @@ The corresponding CLI command is [`consul acl auth-method update`](/consul/comma - `Config` `(map[string]string: )` - The raw configuration to use for the chosen auth method. Contents will vary depending upon the type chosen. For more information on configuring specific auth method types, see the [auth - method documentation](/consul/docs/security/acl/auth-methods). + method documentation](/consul/docs/secure/acl/auth-method). - `Namespace` `(string: "")` - Specifies the namespace of the auth method you update. This field takes precedence over the `ns` query parameter, @@ -298,7 +298,7 @@ The corresponding CLI command is [`consul acl auth-method update`](/consul/comma prefixed-${serviceaccount.name} ``` -@include 'http-api-body-options-partition.mdx' +@include 'legacy/http-api-body-options-partition.mdx' ### Sample Payload @@ -375,7 +375,7 @@ The corresponding CLI command is [`consul acl auth-method delete`](/consul/comma - `ns` `(string: "")` - Specifies the namespace of the auth method you delete. You can also [specify the namespace through other methods](#methods-to-specify-namespace). -@include 'http-api-query-parms-partition.mdx' +@include 'legacy/http-api-query-parms-partition.mdx' ### Sample Request @@ -418,7 +418,7 @@ The corresponding CLI command is [`consul acl auth-method list`](/consul/command The namespace may be specified as '\*' and then results are returned for all namespaces. 
-@include 'http-api-query-parms-partition.mdx' +@include 'legacy/http-api-query-parms-partition.mdx' ## Sample Request diff --git a/website/content/api-docs/acl/binding-rules.mdx b/website/content/api-docs/acl/binding-rules.mdx index 34edeadc4a95..a8697b94a82f 100644 --- a/website/content/api-docs/acl/binding-rules.mdx +++ b/website/content/api-docs/acl/binding-rules.mdx @@ -15,7 +15,7 @@ rules in Consul. For more information on how to setup ACLs, refer to the following resources: -- [Access control list (ACL) overview](/consul/docs/security/acl) +- [Access control list (ACL) overview](/consul/docs/secure/acl) - [ACL tutorial](/consul/tutorials/security/access-control-setup-production) ## Create a Binding Rule @@ -172,7 +172,7 @@ The corresponding CLI command is [`consul acl binding-rule create`](/consul/comm This field takes precedence over the `ns` query parameter, one of several [other methods to specify the namespace](#methods-to-specify-namespace). -@include 'http-api-body-options-partition.mdx' +@include 'legacy/http-api-body-options-partition.mdx' ### Sample Payload @@ -240,7 +240,7 @@ The corresponding CLI command is [`consul acl binding-rule read`](/consul/comman - `ns` `(string: "")` - Specifies the namespace of the binding rule you lookup. You can also [specify the namespace through other methods](#methods-to-specify-namespace). -@include 'http-api-query-parms-partition.mdx' +@include 'legacy/http-api-query-parms-partition.mdx' ### Sample Request @@ -427,7 +427,7 @@ The corresponding CLI command is [`consul acl binding-rule update`](/consul/comm This field takes precedence over the `ns` query parameter, one of several [other methods to specify the namespace](#methods-to-specify-namespace). -@include 'http-api-body-options-partition.mdx' +@include 'legacy/http-api-body-options-partition.mdx' ### Sample Payload @@ -495,7 +495,7 @@ The corresponding CLI command is [`consul acl binding-rule delete`](/consul/comm - `ns` `(string: "")` - Specifies the namespace of the binding rule you delete. You can also [specify the namespace through other methods](#methods-to-specify-namespace). -@include 'http-api-query-parms-partition.mdx' +@include 'legacy/http-api-query-parms-partition.mdx' ### Sample Request @@ -539,7 +539,7 @@ The corresponding CLI command is [`consul acl binding-rule list`](/consul/comman The namespace may be specified as '\*' to return results for all namespaces. You can also [specify the namespace through other methods](#methods-to-specify-namespace). -@include 'http-api-query-parms-partition.mdx' +@include 'legacy/http-api-query-parms-partition.mdx' ### Sample Request diff --git a/website/content/api-docs/acl/index.mdx b/website/content/api-docs/acl/index.mdx index c45368370bf8..bf6a044a21c8 100644 --- a/website/content/api-docs/acl/index.mdx +++ b/website/content/api-docs/acl/index.mdx @@ -8,15 +8,15 @@ description: The /acl endpoints manage the Consul's ACL system. The `/acl` endpoints are used to manage ACL tokens and policies in Consul, [bootstrap the ACL system](#bootstrap-acls) and [check ACL replication status](#check-acl-replication). There are additional pages for managing [tokens](/consul/api-docs/acl/tokens) and [policies](/consul/api-docs/acl/policies) with the `/acl` endpoints. 
-For more information on how to setup ACLs, refer to the following resources: +For more information on how to setup ACLs, refer to the following resources: -- [Access control list (ACL) overview](/consul/docs/security/acl) +- [Access control list (ACL) overview](/consul/docs/secure/acl) - [ACL tutorial](/consul/tutorials/security/access-control-setup-production) ## Bootstrap ACLs This endpoint does a special one-time bootstrap of the ACL system, making the first -management token if the [`acl.tokens.initial_management`](/consul/docs/agent/config/config-files#acl_tokens_initial_management) +management token if the [`acl.tokens.initial_management`](/consul/docs/reference/agent/configuration-file/acl#acl_tokens_initial_management) configuration entry is not specified in the Consul server configuration and if the cluster has not been bootstrapped previously. An operator created token can be provided in the body of the request to bootstrap the cluster if required. The provided token should be presented in a UUID format. @@ -155,7 +155,7 @@ $ curl \ - `SourceDatacenter` - The authoritative ACL datacenter that ACLs are being replicated from and will match the - [`primary_datacenter`](/consul/docs/agent/config/config-files#primary_datacenter) configuration. + [`primary_datacenter`](/consul/docs/reference/agent/configuration-file/general#primary_datacenter) configuration. - `ReplicationType` - The type of replication that is currently in use. @@ -195,7 +195,7 @@ $ curl \ ## Login to Auth Method This endpoint was added in Consul 1.5.0 and is used to exchange an [auth -method](/consul/docs/security/acl/auth-methods) bearer token for a newly-created +method](/consul/docs/secure/acl/auth-method) bearer token for a newly-created Consul ACL token. | Method | Path | Produces | @@ -214,7 +214,7 @@ The table below shows this endpoint's support for -> **Note** - To use the login process to create tokens in any connected secondary datacenter, [ACL -replication](/consul/docs/agent/config/config-files#acl_enable_token_replication) must be +replication](/consul/docs/reference/agent/configuration-file/acl#acl_enable_token_replication) must be enabled. Login requires the ability to create local tokens which is restricted to the primary datacenter and any secondary datacenters with ACL token replication enabled. @@ -329,7 +329,7 @@ $ curl \ This endpoint was added in Consul 1.8.0 and is used to obtain an authorization -URL from Consul to start an [OIDC login flow](/consul/docs/security/acl/auth-methods/oidc). +URL from Consul to start an [OIDC login flow](/consul/docs/secure/acl/auth-method/oidc). | Method | Path | Produces | | ------ | -------------------- | ------------------ | @@ -347,7 +347,7 @@ The table below shows this endpoint's support for -> **Note** - To use the login process to create tokens in any connected secondary datacenter, [ACL -replication](/consul/docs/agent/config/config-files#acl_enable_token_replication) must be +replication](/consul/docs/reference/agent/configuration-file/acl#acl_enable_token_replication) must be enabled. Login requires the ability to create local tokens which is restricted to the primary datacenter and any secondary datacenters with ACL token replication enabled. @@ -360,7 +360,7 @@ replication enabled. ### JSON Request Body Schema - `AuthMethod` `(string: )` - The name of the auth method to use for - login. This must be of type [`oidc`](/consul/docs/security/acl/auth-methods/oidc). + login. This must be of type [`oidc`](/consul/docs/secure/acl/auth-method/oidc). 
- `RedirectURI` `(string: )` - See [Redirect URIs](/consul/docs/security/acl/auth-methods/oidc#redirect-uris) for more information. @@ -430,7 +430,7 @@ The table below shows this endpoint's support for -> **Note** - To use the login process to create tokens in any connected secondary datacenter, [ACL -replication](/consul/docs/agent/config/config-files#acl_enable_token_replication) must be +replication](/consul/docs/reference/agent/configuration-file/acl#acl_enable_token_replication) must be enabled. Login requires the ability to create local tokens which is restricted to the primary datacenter and any secondary datacenters with ACL token replication enabled. @@ -443,7 +443,7 @@ replication enabled. ### JSON Request Body Schema - `AuthMethod` `(string: )` - The name of the auth method to use for - login. This must be of type [`oidc`](/consul/docs/security/acl/auth-methods/oidc). + login. This must be of type [`oidc`](/consul/docs/secure/acl/auth-method/oidc). - `State` `(string: )` - Opaque state ID that is part of the Authorization URL and will be included in the redirect following diff --git a/website/content/api-docs/acl/policies.mdx b/website/content/api-docs/acl/policies.mdx index 1d46fa1b5125..220038fea9a1 100644 --- a/website/content/api-docs/acl/policies.mdx +++ b/website/content/api-docs/acl/policies.mdx @@ -47,7 +47,7 @@ The corresponding CLI command is [`consul acl policy create`](/consul/commands/a - `Description` `(string: "")` - Free form human readable description of the policy. - `Rules` `(string: "")` - Specifies rules for the ACL policy. The format of the - `Rules` property is detailed in the [ACL Rules documentation](/consul/docs/security/acl/acl-rules). + `Rules` property is detailed in the [ACL Rules documentation](/consul/docs/reference/acl/rule). - `Datacenters` `(array)` - Specifies the datacenters the policy is valid within. When no datacenters are provided the policy is valid in all datacenters including @@ -57,7 +57,7 @@ The corresponding CLI command is [`consul acl policy create`](/consul/commands/a This field takes precedence over the `ns` query parameter, one of several [other methods to specify the namespace](#methods-to-specify-namespace). -@include 'http-api-body-options-partition.mdx' +@include 'legacy/http-api-body-options-partition.mdx' ### Sample Payload @@ -172,7 +172,7 @@ The corresponding CLI command is [`consul acl policy read -name=`](/cons - `ns` `(string: "")` - Specifies the namespace of the policy you lookup. You can also [specify the namespace through other methods](#methods-to-specify-namespace). -@include 'http-api-query-parms-partition.mdx' +@include 'legacy/http-api-query-parms-partition.mdx' ### Sample Request @@ -236,7 +236,7 @@ The corresponding CLI command is [`consul acl policy update`](/consul/commands/a - `Description` `(string: "")` - Free form human readable description of this policy. - `Rules` `(string: "")` - Specifies rules for this ACL policy. The format of the - `Rules` property is detailed in the [ACL Rules documentation](/consul/docs/security/acl/acl-rules). + `Rules` property is detailed in the [ACL Rules documentation](/consul/docs/reference/acl/rule). - `Datacenters` `(array)` - Specifies the datacenters this policy is valid within. 
When no datacenters are provided the policy is valid in all datacenters including @@ -246,7 +246,7 @@ The corresponding CLI command is [`consul acl policy update`](/consul/commands/a This field takes precedence over the `ns` query parameter, one of several [other methods to specify the namespace](#methods-to-specify-namespace). -@include 'http-api-body-options-partition.mdx' +@include 'legacy/http-api-body-options-partition.mdx' ### Sample Payload @@ -313,7 +313,7 @@ The corresponding CLI command is [`consul acl policy delete`](/consul/commands/a - `ns` `(string: "")` - Specifies the namespace of the policy you delete. You can also [specify the namespace through other methods](#methods-to-specify-namespace). -@include 'http-api-query-parms-partition.mdx' +@include 'legacy/http-api-query-parms-partition.mdx' ### Sample Request @@ -354,7 +354,7 @@ The corresponding CLI command is [`consul acl policy list`](/consul/commands/acl The namespace may be specified as '\*' to return results for all namespaces. You can also [specify the namespace through other methods](#methods-to-specify-namespace). -@include 'http-api-query-parms-partition.mdx' +@include 'legacy/http-api-query-parms-partition.mdx' ### Sample Request diff --git a/website/content/api-docs/acl/roles.mdx b/website/content/api-docs/acl/roles.mdx index e54e9a54b07b..607055bd0e11 100644 --- a/website/content/api-docs/acl/roles.mdx +++ b/website/content/api-docs/acl/roles.mdx @@ -13,7 +13,7 @@ The `/acl/role` endpoints [create](#create-a-role), [read](#read-a-role), For more information on how to setup ACLs, refer to the following resources: -- [Access control list (ACL) overview](/consul/docs/security/acl) +- [Access control list (ACL) overview](/consul/docs/secure/acl) - [ACL tutorial](/consul/tutorials/security/access-control-setup-production) ## Create a Role @@ -95,7 +95,7 @@ The corresponding CLI command is [`consul acl role create`](/consul/commands/acl This field takes precedence over the `ns` query parameter, one of several [other methods to specify the namespace](#methods-to-specify-namespace). -@include 'http-api-body-options-partition.mdx' +@include 'legacy/http-api-body-options-partition.mdx' ### Sample Payload @@ -225,7 +225,7 @@ The corresponding CLI command is [`consul acl role read`](/consul/commands/acl/r - `ns` `(string: "")` - Specifies the namespace of the role you lookup. You can also [specify the namespace through other methods](#methods-to-specify-namespace). -@include 'http-api-query-parms-partition.mdx' +@include 'legacy/http-api-query-parms-partition.mdx' ### Sample Request @@ -310,7 +310,7 @@ The corresponding CLI command is [`consul acl role read -name=`](/consul - `ns` `(string: "")` - Specifies the namespace of the role you lookup. You can also [specify the namespace through other methods](#methods-to-specify-namespace). -@include 'http-api-query-parms-partition.mdx' +@include 'legacy/http-api-query-parms-partition.mdx' ### Sample Request @@ -424,7 +424,7 @@ The corresponding CLI command is [`consul acl role update`](/consul/commands/acl This field takes precedence over the `ns` query parameter, one of several [other methods to specify the namespace](#methods-to-specify-namespace). -@include 'http-api-body-options-partition.mdx' +@include 'legacy/http-api-body-options-partition.mdx' ### Sample Payload @@ -531,7 +531,7 @@ The corresponding CLI command is [`consul acl role delete`](/consul/commands/acl - `ns` `(string: "")` - Specifies the namespace of the role you delete. 
You can also [specify the namespace through other methods](#methods-to-specify-namespace). -@include 'http-api-query-parms-partition.mdx' +@include 'legacy/http-api-query-parms-partition.mdx' ### Sample Request @@ -575,7 +575,7 @@ The corresponding CLI command is [`consul acl role list`](/consul/commands/acl/r The namespace may be specified as '\*' to return results for all namespaces. You can also [specify the namespace through other methods](#methods-to-specify-namespace). -@include 'http-api-query-parms-partition.mdx' +@include 'legacy/http-api-query-parms-partition.mdx' ## Sample Request diff --git a/website/content/api-docs/acl/templated-policies.mdx b/website/content/api-docs/acl/templated-policies.mdx index de3a6b59aef8..b980b4efc9f9 100644 --- a/website/content/api-docs/acl/templated-policies.mdx +++ b/website/content/api-docs/acl/templated-policies.mdx @@ -11,7 +11,7 @@ The `/acl/templated-policy` endpoints [read](#read-a-templated-policy-by-name), For more information on how to setup ACLs, refer to the following resources: -- [Access control list (ACL) overview](/consul/docs/security/acl) +- [Access control list (ACL) overview](/consul/docs/secure/acl) - [ACL tutorial](/consul/tutorials/security/access-control-setup-production) ## Read a templated policy by name @@ -87,7 +87,7 @@ The corresponding CLI command is [`consul acl templated-policy preview`](/consul - `Name` `(string: )` - Specifies the value of the `name` variable in the templated policy variables. -@include 'http-api-body-options-partition.mdx' +@include 'legacy/http-api-body-options-partition.mdx' ### Sample payload diff --git a/website/content/api-docs/acl/tokens.mdx b/website/content/api-docs/acl/tokens.mdx index af629a3500cf..19bb2eab8ba9 100644 --- a/website/content/api-docs/acl/tokens.mdx +++ b/website/content/api-docs/acl/tokens.mdx @@ -11,7 +11,7 @@ The `/acl/token` endpoints [create](#create-a-token), [read](#read-a-token), For more information on how to setup ACLs, refer to the following resources: -- [Access control list (ACL) overview](/consul/docs/security/acl) +- [Access control list (ACL) overview](/consul/docs/secure/acl) - [ACL tutorial](/consul/tutorials/security/access-control-setup-production) ## Create a Token @@ -118,7 +118,7 @@ The corresponding CLI command is [`consul acl token create`](/consul/commands/ac This field takes precedence over the `ns` query parameter, one of several [other methods to specify the namespace](#methods-to-specify-namespace). -@include 'http-api-body-options-partition.mdx' +@include 'legacy/http-api-body-options-partition.mdx' ### Sample Payload @@ -219,7 +219,7 @@ The corresponding CLI command is [`consul acl token read`](/consul/commands/acl/ - `expanded` `(bool: false)` - If this field is set, the contents of all policies and roles affecting the token will also be returned. -@include 'http-api-query-parms-partition.mdx' +@include 'legacy/http-api-query-parms-partition.mdx' ### Sample Request @@ -501,7 +501,7 @@ The corresponding CLI command is [`consul acl token update`](/consul/commands/ac This field takes precedence over the `ns` query parameter, one of several [other methods to specify the namespace](#methods-to-specify-namespace). 
-@include 'http-api-body-options-partition.mdx' +@include 'legacy/http-api-body-options-partition.mdx' ### Sample Payload @@ -597,7 +597,7 @@ The corresponding CLI command is [`consul acl token clone`](/consul/commands/acl This field takes precedence over the `ns` query parameter, one of several [other methods to specify the namespace](#methods-to-specify-namespace). -@include 'http-api-body-options-partition.mdx' +@include 'legacy/http-api-body-options-partition.mdx' ### Sample Payload @@ -676,7 +676,7 @@ The corresponding CLI command is [`consul acl token delete`](/consul/commands/ac - `ns` `(string: "")` - Specifies the namespace of the token you delete. You can also [specify the namespace through other methods](#methods-to-specify-namespace). -@include 'http-api-query-parms-partition.mdx' +@include 'legacy/http-api-query-parms-partition.mdx' ### Sample Request @@ -733,7 +733,7 @@ The corresponding CLI command is [`consul acl token list`](/consul/commands/acl/ The namespace may be specified as '\*' to return results for all namespaces. You can also [specify the namespace through other methods](#methods-to-specify-namespace). -@include 'http-api-query-parms-partition.mdx' +@include 'legacy/http-api-query-parms-partition.mdx' ### Sample Request diff --git a/website/content/api-docs/admin-partitions.mdx b/website/content/api-docs/admin-partitions.mdx index f71fff17256e..7d66a04ca03b 100644 --- a/website/content/api-docs/admin-partitions.mdx +++ b/website/content/api-docs/admin-partitions.mdx @@ -4,7 +4,7 @@ page_title: Admin Partition - HTTP API description: The /partition endpoints allow for managing Consul Enterprise Admin Partitions. --- -@include 'http_api_and_cli_characteristics_links.mdx' +@include 'legacy/http_api_and_cli_characteristics_links.mdx' # Admin Partition - HTTP API diff --git a/website/content/api-docs/agent/check.mdx b/website/content/api-docs/agent/check.mdx index 9418d0668463..3d3c9e365983 100644 --- a/website/content/api-docs/agent/check.mdx +++ b/website/content/api-docs/agent/check.mdx @@ -6,7 +6,7 @@ description: The /agent/check endpoints interact with checks on the local agent # Check - Agent HTTP API -Refer to [Define Health Checks](/consul/docs/services/usage/checks) for information about Consul health check capabilities. +Refer to [Define Health Checks](/consul/docs/register/health-check/vm) for information about Consul health check capabilities. The `/agent/check` endpoints interact with health checks managed by the local agent in Consul. These should not be confused with checks in the catalog. @@ -20,10 +20,10 @@ using the HTTP API. It is important to note that the checks known by the agent may be different from those reported by the catalog. This is usually due to changes being made while there is no leader elected. The agent performs active -[anti-entropy](/consul/docs/architecture/anti-entropy), so in most situations +[anti-entropy](/consul/docs/concept/consistency), so in most situations everything will be in sync within a few seconds. -@include 'http_api_results_filtered_by_acls.mdx' +@include 'legacy/http_api_results_filtered_by_acls.mdx' | Method | Path | Produces | | ------ | --------------- | ------------------ | diff --git a/website/content/api-docs/agent/connect.mdx b/website/content/api-docs/agent/connect.mdx index edef8125b336..b5fe00d9d9f5 100644 --- a/website/content/api-docs/agent/connect.mdx +++ b/website/content/api-docs/agent/connect.mdx @@ -27,8 +27,8 @@ them in the native configuration of the proxy itself (such as RBAC for Envoy). 
This endpoint tests whether a connection attempt is authorized between two services. This is the primary API that must be implemented by -[proxies](/consul/docs/connect/proxies) or -[native integrations](/consul/docs/connect/native) +[proxies](/consul/docs/connect/proxy) or +[native integrations](/consul/docs/automate/native) that wish to integrate with the service mesh. Prior to calling this API, it is expected that the client TLS certificate has been properly verified against the current CA roots. @@ -104,8 +104,8 @@ $ curl \ ## Certificate Authority (CA) Roots This endpoint returns the trusted certificate authority (CA) root certificates. -This is used by [proxies](/consul/docs/connect/proxies) or -[native integrations](/consul/docs/connect/native) to verify served client +This is used by [proxies](/consul/docs/connect/proxy) or +[native integrations](/consul/docs/automate/native) to verify served client or server certificates are valid. This is equivalent to the [non-Agent service mesh endpoint](/consul/api-docs/connect), diff --git a/website/content/api-docs/agent/index.mdx b/website/content/api-docs/agent/index.mdx index 749ea80f291d..29aaccb3a865 100644 --- a/website/content/api-docs/agent/index.mdx +++ b/website/content/api-docs/agent/index.mdx @@ -12,7 +12,7 @@ The `/agent` endpoints are used to interact with the local Consul agent. Usually, services and checks are registered with an agent which then takes on the burden of keeping that data synchronized with the cluster. For example, the agent registers services and checks with the Catalog and performs -[anti-entropy](/consul/docs/architecture/anti-entropy) to recover from outages. +[anti-entropy](/consul/docs/concept/consistency) to recover from outages. In addition to these endpoints, additional endpoints are grouped in the navigation for `Checks` and `Services`. @@ -231,7 +231,7 @@ to the nature of gossip, this is eventually consistent: the results may differ by agent. The strongly consistent view of nodes is instead provided by `/v1/catalog/nodes`. -@include 'http_api_results_filtered_by_acls.mdx' +@include 'legacy/http_api_results_filtered_by_acls.mdx' | Method | Path | Produces | | ------ | ---------------- | ------------------ | @@ -261,7 +261,7 @@ The corresponding CLI command is [`consul members`](/consul/commands/members). network segment). When querying a server, setting this to the special string `_all` will show members in all segments. -@include 'http-api-query-parms-partition.mdx' +@include 'legacy/http-api-query-parms-partition.mdx' ### Sample Request @@ -453,12 +453,12 @@ $ curl \ ## View Metrics This endpoint will dump the metrics for the most recent finished interval. -For more information about metrics, see the [telemetry](/consul/docs/agent/telemetry) +For more information about metrics, see the [telemetry](/consul/docs/monitor/telemetry/agent) page. In order to enable [Prometheus](https://prometheus.io/) support, you need to use the configuration directive -[`prometheus_retention_time`](/consul/docs/agent/config/config-files#telemetry-prometheus_retention_time). +[`prometheus_retention_time`](/consul/docs/reference/agent/configuration-file/telemetry#telemetry-prometheus_retention_time). Since Consul 1.7.2 this endpoint will also automatically switch output format if the request contains an `Accept` header with a compatible MIME type such as @@ -765,7 +765,7 @@ $ curl \ This endpoint updates the ACL tokens currently in use by the agent. 
It can be used to introduce ACL tokens to the agent for the first time, or to update tokens that were initially loaded from the agent's configuration. Tokens will be persisted -only if the [`acl.enable_token_persistence`](/consul/docs/agent/config/config-files#acl_enable_token_persistence) +only if the [`acl.enable_token_persistence`](/consul/docs/reference/agent/configuration-file/acl#acl_enable_token_persistence) configuration is `true`. When not being persisted, they will need to be reset if the agent is restarted. @@ -778,10 +778,10 @@ is restarted. | `PUT` | `/agent/token/replication` | `application/json` | The paths above correspond to the token names as found in the agent configuration: -[`default`](/consul/docs/agent/config/config-files#acl_tokens_default), [`agent`](/consul/docs/agent/config/config-files#acl_tokens_agent), -[`agent_recovery`](/consul/docs/agent/config/config-files#acl_tokens_agent_recovery), -[`config_file_service_registration`](/consul/docs/agent/config/config-files#acl_tokens_config_file_service_registration), -and [`replication`](/consul/docs/agent/config/config-files#acl_tokens_replication). +[`default`](/consul/docs/reference/agent/configuration-file/acl#acl_tokens_default), [`agent`](/consul/docs/reference/agent/configuration-file/acl#acl_tokens_agent), +[`agent_recovery`](/consul/docs/reference/agent/configuration-file/acl#acl_tokens_agent_recovery), +[`config_file_service_registration`](/consul/docs/reference/agent/configuration-file/acl#acl_tokens_config_file_service_registration), +and [`replication`](/consul/docs/reference/agent/configuration-file/acl#acl_tokens_replication). -> **Deprecation Note:** The following paths were deprecated in version 1.11 @@ -790,7 +790,7 @@ and [`replication`](/consul/docs/agent/config/config-files#acl_tokens_replicatio | `PUT` | `/agent/token/agent_master` | `application/json` | The paths above correspond to the token names as found in the agent configuration: -[`agent_master`](/consul/docs/agent/config/config-files#acl_tokens_agent_master). +[`agent_master`](/consul/docs/reference/agent/configuration-file/acl#acl_tokens_agent_master). -> **Deprecation Note:** The following paths were deprecated in version 1.4.3 @@ -802,9 +802,9 @@ The paths above correspond to the token names as found in the agent configuratio | `PUT` | `/agent/token/acl_replication_token` | `application/json` | The paths above correspond to the token names as found in the agent configuration: -[`acl_token`](/consul/docs/agent/config/config-files#acl_token_legacy), [`acl_agent_token`](/consul/docs/agent/config/config-files#acl_agent_token_legacy), -[`acl_agent_master_token`](/consul/docs/agent/config/config-files#acl_agent_master_token_legacy), and -[`acl_replication_token`](/consul/docs/agent/config/config-files#acl_replication_token_legacy). +[`acl_token`](/consul/docs/reference/agent/configuration-file/acl#acl_token_legacy), [`acl_agent_token`](/consul/docs/reference/agent/configuration-file/acl#acl_agent_token_legacy), +[`acl_agent_master_token`](/consul/docs/reference/agent/configuration-file/acl#acl_agent_master_token_legacy), and +[`acl_replication_token`](/consul/docs/reference/agent/configuration-file/acl#acl_replication_token_legacy). 
The table below shows this endpoint's support for [blocking queries](/consul/api-docs/features/blocking), diff --git a/website/content/api-docs/agent/service.mdx b/website/content/api-docs/agent/service.mdx index 301827418270..0212586047db 100644 --- a/website/content/api-docs/agent/service.mdx +++ b/website/content/api-docs/agent/service.mdx @@ -20,10 +20,10 @@ or added dynamically using the HTTP API. It is important to note that the services known by the agent may be different from those reported by the catalog. This is usually due to changes being made while there is no leader elected. The agent performs active -[anti-entropy](/consul/docs/architecture/anti-entropy), so in most situations +[anti-entropy](/consul/docs/concept/consistency), so in most situations everything will be in sync within a few seconds. -@include 'http_api_results_filtered_by_acls.mdx' +@include 'legacy/http_api_results_filtered_by_acls.mdx' | Method | Path | Produces | | ------ | ----------------- | ------------------ | @@ -130,13 +130,13 @@ following selectors and filter operations being supported: This endpoint was added in Consul 1.3.0 and returns the full service definition for a single service instance registered on the local agent. It is used by -[service mesh proxies](/consul/docs/connect/proxies) to discover the embedded proxy +[service mesh proxies](/consul/docs/connect/proxy) to discover the embedded proxy configuration that was registered with the instance. It is important to note that the services known by the agent may be different from those reported by the catalog. This is usually due to changes being made while there is no leader elected. The agent performs active -[anti-entropy](/consul/docs/architecture/anti-entropy), so in most situations +[anti-entropy](/consul/docs/concept/consistency), so in most situations everything will be in sync within a few seconds. | Method | Path | Produces | @@ -180,7 +180,7 @@ $ curl \ ### Sample Response The response contains the fields specified in the [service -definition](/consul/docs/services/configuration/services-configuration-reference), but it includes an extra `ContentHash` field that contains the [hash-based blocking +definition](/consul/docs/reference/service), but it includes an extra `ContentHash` field that contains the [hash-based blocking query](/consul/api-docs/features/blocking#hash-based-blocking-queries) hash for the result. The same hash is also present in `X-Consul-ContentHash`. @@ -604,14 +604,14 @@ The `/agent/service/register` endpoint supports camel case and _snake case_ for - `Name` `(string: )` - Specifies the logical name of the service. Many service instances may share the same logical service name. We recommend using - valid DNS labels for service definition names. Refer to the Internet Engineering Task Force's [RFC 1123](https://datatracker.ietf.org/doc/html/rfc1123#page-72) for additional information. Service names that conform to standard usage ensures compatibility with external DNSs. Refer to [Services Configuration Reference](/consul/docs/services/configuration/services-configuration-reference#name) for additional information. + valid DNS labels for service definition names. Refer to the Internet Engineering Task Force's [RFC 1123](https://datatracker.ietf.org/doc/html/rfc1123#page-72) for additional information. Service names that conform to standard usage ensures compatibility with external DNSs. Refer to [Services Configuration Reference](/consul/docs/reference/service#name) for additional information. 
- `ID` `(string: "")` - Specifies a unique ID for this service. This must be unique per _agent_. This defaults to the `Name` parameter if not provided. - `Tags` `(array: nil)` - Specifies a list of tags to assign to the service. Tags enable you to filter when querying for the services and are exposed in Consul APIs. We recommend using - valid DNS labels for tags. Refer to the Internet Engineering Task Force's [RFC 1123](https://datatracker.ietf.org/doc/html/rfc1123#page-72) for additional information. Tags that conform to standard usage ensures compatibility with external DNSs. Refer to [Services Configuration Reference](/consul/docs/services/configuration/services-configuration-reference#tags) for additional information. + valid DNS labels for tags. Refer to the Internet Engineering Task Force's [RFC 1123](https://datatracker.ietf.org/doc/html/rfc1123#page-72) for additional information. Tags that conform to standard usage ensures compatibility with external DNSs. Refer to [Services Configuration Reference](/consul/docs/reference/service#tags) for additional information. - `Address` `(string: "")` - Specifies the address of the service. If not provided, the agent's address is used as the address for the service during @@ -634,16 +634,16 @@ The `/agent/service/register` endpoint supports camel case and _snake case_ for typical Consul service. You can specify the following values: - `"connect-proxy"` for [service mesh](/consul/docs/connect) proxies representing another service - `"mesh-gateway"` for instances of a [mesh gateway](/consul/docs/connect/gateways/mesh-gateway#service-mesh-proxy-configuration) - - `"terminating-gateway"` for instances of a [terminating gateway](/consul/docs/connect/gateways/terminating-gateway) - - `"ingress-gateway"` for instances of an [ingress gateway](/consul/docs/connect/gateways/ingress-gateway) + - `"terminating-gateway"` for instances of a [terminating gateway](/consul/docs/north-south/terminating-gateway) + - `"ingress-gateway"` for instances of an [ingress gateway](/consul/docs/north-south/ingress-gateway) - `Proxy` `(Proxy: nil)` - From 1.2.3 on, specifies the configuration for a service mesh proxy instance. This is only valid if `Kind` defines a proxy or gateway. - Refer to the [Service mesh proxy configuration reference](/consul/docs/connect/proxies/proxy-config-reference) + Refer to the [Service mesh proxy configuration reference](/consul/docs/reference/proxy/connect-proxy) for full details. - `Connect` `(Connect: nil)` - Specifies the - [configuration for service mesh](/consul/docs/connect/configuration). The connect subsystem provides Consul's service mesh capabilities. Refer to the + [configuration for service mesh](/consul/docs/connect). The connect subsystem provides Consul's service mesh capabilities. Refer to the [Connect Structure](#connect-structure) section below for supported fields. - `Check` `(Check: nil)` - Specifies a check. Please see the @@ -681,27 +681,27 @@ The `/agent/service/register` endpoint supports camel case and _snake case_ for Weights only apply to the locally registered service. If multiple nodes register the same service, each node implements `EnableTagOverride` and other service configuration items independently. Updating the tags for the service registered on one node does not necessarily update the same tags on services with the same name registered on another node. If `EnableTagOverride` is not specified the default value is - `false`. See [anti-entropy syncs](/consul/docs/architecture/anti-entropy) for + `false`. 
See [anti-entropy syncs](/consul/docs/concept/consistency) for additional information. -@include 'http-api-body-options-partition.mdx' +@include 'legacy/http-api-body-options-partition.mdx' #### Connect Structure For the `Connect` field, the parameters are: - `Native` `(bool: false)` - Specifies whether this service supports - the [Consul service mesh](/consul/docs/connect) protocol [natively](/consul/docs/connect/native). + the [Consul service mesh](/consul/docs/connect) protocol [natively](/consul/docs/automate/native). If this is true, then service mesh proxies, DNS queries, etc. will be able to service discover this service. - `Proxy` `(Proxy: nil)` - **Deprecated** Specifies that a managed service mesh proxy should be started for this service instance, and optionally provides configuration for the proxy. Managed proxies (which have been deprecated since Consul v1.3.0) have been - [removed](/consul/docs/connect/proxies) since v1.6.0. + [removed](/consul/docs/connect/proxy) since v1.6.0. - `SidecarService` `(ServiceDefinition: nil)` - Specifies an optional nested service definition to register. Refer to - [Deploy sidecar services](/consul/docs/connect/proxies/deploy-sidecar-services) for additional information. + [Deploy sidecar services](/consul/docs/connect/proxy/sidecar) for additional information. ### Sample Payload @@ -771,7 +771,7 @@ The corresponding CLI command is [`consul services deregister`](/consul/commands - `ns` `(string: "")` - Specifies the namespace of the service you deregister. You can also [specify the namespace through other methods](#methods-to-specify-namespace). -@include 'http-api-query-parms-partition.mdx' +@include 'legacy/http-api-query-parms-partition.mdx' ### Sample Request diff --git a/website/content/api-docs/api-structure.mdx b/website/content/api-docs/api-structure.mdx index edfbcf32589e..48b9ec207f41 100644 --- a/website/content/api-docs/api-structure.mdx +++ b/website/content/api-docs/api-structure.mdx @@ -44,7 +44,7 @@ $ curl \ this method is highly discouraged because the token can show up in access logs as part of the URL. The `?token=` query parameter is deprecated and will be removed in a future Consul version. -To learn more about the ACL system read the [documentation](/consul/docs/security/acl). +To learn more about the ACL system read the [documentation](/consul/docs/secure/acl). ## Version Prefix @@ -89,7 +89,7 @@ However, we generally recommend using resource names that don't require URL-enco Depending on the validation that Consul applies to a resource name, Consul may still reject a request if it considers the resource name invalid for that endpoint. And even if Consul considers the resource name valid, it may degrade other functionality, -such as failed [DNS lookups](/consul/docs/services/discovery/dns-overview) +such as failed [DNS lookups](/consul/docs/discover/dns) for nodes or services with names containing invalid DNS characters. This HTTP API capability also allows the @@ -103,7 +103,7 @@ The linefeed character (`%0a`) will cause a request to be rejected even if it is Consul 0.7 added the ability to translate addresses in HTTP response based on the configuration setting for -[`translate_wan_addrs`](/consul/docs/agent/config/config-files#translate_wan_addrs). In order +[`translate_wan_addrs`](/consul/docs/reference/agent/configuration-file/general#translate_wan_addrs). 
In order to allow clients to know if address translation is in effect, the `X-Consul-Translate-Addresses` header will be added if translation is enabled, and will have a value of `true`. If translation is not enabled then this header @@ -114,9 +114,9 @@ will not be present. All API responses for Consul versions after 1.9 will include an HTTP response header `X-Consul-Default-ACL-Policy` set to either "allow" or "deny" which mirrors the current value of the agent's -[`acl.default_policy`](/consul/docs/agent/config/config-files#acl_default_policy) option. +[`acl.default_policy`](/consul/docs/reference/agent/configuration-file/acl#acl_default_policy) option. -This is also the default [intention](/consul/docs/connect/intentions) enforcement +This is also the default [intention](/consul/docs/secure-mesh/intention) enforcement action if no intention matches. This is returned even if ACLs are disabled. diff --git a/website/content/api-docs/catalog.mdx b/website/content/api-docs/catalog.mdx index f773353483a2..33cb1e21d8d0 100644 --- a/website/content/api-docs/catalog.mdx +++ b/website/content/api-docs/catalog.mdx @@ -17,7 +17,7 @@ API methods look similar. This endpoint is a low-level mechanism for registering or updating entries in the catalog. It is usually preferable to instead use the [agent endpoints](/consul/api-docs/agent) for registration as they are simpler and -perform [anti-entropy](/consul/docs/architecture/anti-entropy). +perform [anti-entropy](/consul/docs/concept/consistency). | Method | Path | Produces | | ------ | ------------------- | ------------------ | @@ -57,7 +57,7 @@ The table below shows this endpoint's support for - `Service` `(Service: nil)` - Contains an object the specifies the service to register. The `Service.Service` field is required. If `Service.ID` is not provided, the default is the `Service.Service`. You can only specify one service with a given `ID` per node. We recommend using - valid DNS labels for service definition names. Refer to the Internet Engineering Task Force's [RFC 1123](https://datatracker.ietf.org/doc/html/rfc1123#page-72) for additional information. Service names that conform to standard usage ensures compatibility with external DNSs. Refer to [Services Configuration Reference](/consul/docs/services/configuration/services-configuration-reference#name) for additional information. + valid DNS labels for service definition names. Refer to the Internet Engineering Task Force's [RFC 1123](https://datatracker.ietf.org/doc/html/rfc1123#page-72) for additional information. Service names that conform to standard usage ensures compatibility with external DNSs. Refer to [Services Configuration Reference](/consul/docs/reference/service#name) for additional information. The following fields are optional: - `Tags` - `Address` @@ -79,7 +79,7 @@ The table below shows this endpoint's support for treated as a service level health check, instead of a node level health check. The `Status` must be one of `passing`, `warning`, or `critical`. - You can provide defaults for TCP and HTTP health checks to the `Definition` field. Refer to [Health Checks](/consul/docs/services/usage/checks) for additional information. + You can provide defaults for TCP and HTTP health checks to the `Definition` field. Refer to [Health Checks](/consul/docs/register/health-check/vm) for additional information. Multiple checks can be provided by replacing `Check` with `Checks` and sending an array of `Check` objects. 
@@ -169,7 +169,7 @@ $ curl \ This endpoint is a low-level mechanism for directly removing entries from the Catalog. It is usually preferable to instead use the [agent endpoints](/consul/api-docs/agent) for deregistration as they are simpler and -perform [anti-entropy](/consul/docs/architecture/anti-entropy). +perform [anti-entropy](/consul/docs/concept/consistency). | Method | Path | Produces | | ------ | --------------------- | ------------------ | @@ -301,7 +301,7 @@ $ curl \ This endpoint and returns the nodes registered in a given datacenter. -@include 'http_api_results_filtered_by_acls.mdx' +@include 'legacy/http_api_results_filtered_by_acls.mdx' | Method | Path | Produces | | ------ | ---------------- | ------------------ | @@ -337,7 +337,7 @@ The corresponding CLI command is [`consul catalog nodes`](/consul/commands/catal - `filter` `(string: "")` - Specifies the expression used to filter the queries results prior to returning the data. -@include 'http-api-query-parms-partition.mdx' +@include 'legacy/http-api-query-parms-partition.mdx' ### Sample Request @@ -399,7 +399,7 @@ the following selectors and filter operations being supported: This endpoint returns the services registered in a given datacenter. -@include 'http_api_results_filtered_by_acls.mdx' +@include 'legacy/http_api_results_filtered_by_acls.mdx' | Method | Path | Produces | | ------ | ------------------- | ------------------ | @@ -434,7 +434,7 @@ The corresponding CLI command is [`consul catalog services`](/consul/commands/ca - `filter` `(string: "")` - Specifies the expression used to filter the queries results prior to returning the data. -@include 'http-api-query-parms-partition.mdx' +@include 'legacy/http-api-query-parms-partition.mdx' ### Filtering @@ -506,7 +506,7 @@ a given service. This endpoint returns the nodes providing a service in a given datacenter. -@include 'http_api_results_filtered_by_acls.mdx' +@include 'legacy/http_api_results_filtered_by_acls.mdx' | Method | Path | Produces | | ------ | -------------------------------- | ------------------ | @@ -551,10 +551,12 @@ The table below shows this endpoint's support for - `filter` `(string: "")` - Specifies the expression used to filter the queries results prior to returning the data. +- `peer` `(string: "")` - Specifies the name of the peer that exported the service. Does not apply when no cluster peering connections exist. + - `merge-central-config` - Include this flag in a request for `connect-proxy` kind or `*-gateway` kind services to return a fully resolved service definition that includes merged values from the - [proxy-defaults/global](/consul/docs/connect/config-entries/proxy-defaults) and - [service-defaults/:service](/consul/docs/connect/config-entries/service-defaults) config entries. + [proxy-defaults/global](/consul/docs/reference/config-entry/proxy-defaults) and + [service-defaults/:service](/consul/docs/reference/config-entry/service-defaults) config entries. Returning a fully resolved service definition is useful when a service was registered using the [/catalog/register](/consul/api-docs/catalog#register_entity) endpoint, which does not automatically merge config entries. @@ -664,7 +666,7 @@ $ curl \ [service registration API](/consul/api-docs/agent/service#kind) for more information. - `ServiceProxy` is the proxy config as specified in - [service mesh Proxies](/consul/docs/connect/proxies). + [service mesh Proxies](/consul/docs/connect/proxy). - `ServiceConnect` are the [service mesh](/consul/docs/connect) settings. 
The value of this struct is equivalent to the `Connect` field for service @@ -726,7 +728,7 @@ This will include both proxies and native integrations. A service may register both mesh-capable and incapable services at the same time, so this endpoint may be used to filter only the mesh-capable endpoints. -@include 'http_api_results_filtered_by_acls.mdx' +@include 'legacy/http_api_results_filtered_by_acls.mdx' | Method | Path | Produces | | ------ | --------------------------- | ------------------ | @@ -739,7 +741,7 @@ Parameters and response format are the same as This endpoint returns the node's registered services. -@include 'http_api_results_filtered_by_acls.mdx' +@include 'legacy/http_api_results_filtered_by_acls.mdx' | Method | Path | Produces | | ------ | -------------------------- | ------------------ | @@ -868,7 +870,7 @@ top level Node object. The following selectors and filter operations are support This endpoint returns the node's registered services. -@include 'http_api_results_filtered_by_acls.mdx' +@include 'legacy/http_api_results_filtered_by_acls.mdx' | Method | Path | Produces | | ------ | ----------------------------------- | ------------------ | @@ -898,8 +900,8 @@ The table below shows this endpoint's support for - `merge-central-config` - Include this flag in a request for `connect-proxy` kind or `*-gateway` kind services to return a fully resolved service definition that includes merged values from the - [proxy-defaults/global](/consul/docs/connect/config-entries/proxy-defaults) and - [service-defaults/:service](/consul/docs/connect/config-entries/service-defaults) config entries. + [proxy-defaults/global](/consul/docs/reference/config-entry/proxy-defaults) and + [service-defaults/:service](/consul/docs/reference/config-entry/service-defaults) config entries. Returning a fully resolved service definition is useful when a service was registered using the [/catalog/register](/consul/api-docs/catalog#register_entity) endpoint, which does not automatically merge config entries. @@ -1009,7 +1011,7 @@ top level object. The following selectors and filter operations are supported: This endpoint returns the services associated with an ingress gateway or terminating gateway. -@include 'http_api_results_filtered_by_acls.mdx' +@include 'legacy/http_api_results_filtered_by_acls.mdx' | Method | Path | Produces | | ------ | ------------------------------------ | ------------------ | diff --git a/website/content/api-docs/config.mdx b/website/content/api-docs/config.mdx index 49243da09456..05ffe8506e91 100644 --- a/website/content/api-docs/config.mdx +++ b/website/content/api-docs/config.mdx @@ -10,9 +10,9 @@ description: |- The `/config` endpoints create, update, delete and query central configuration entries registered with Consul. See the -[agent configuration](/consul/docs/agent/config/config-files#enable_central_service_config) +[agent configuration](/consul/docs/reference/agent/configuration-file/general#enable_central_service_config) for more information on how to enable this functionality for centrally -configuring services and [configuration entries docs](/consul/docs/agent/config-entries) for a description +configuring services and [configuration entries docs](/consul/docs/fundamentals/config-entry) for a description of the configuration entries content. ## Apply Configuration @@ -71,7 +71,7 @@ The ACL required depends on the config entry being written: - `ns` `(string: "")` - Specifies the namespace of the config entry you apply. 
You can also [specify the namespace through other methods](#methods-to-specify-namespace). -@include 'http-api-query-parms-partition.mdx' +@include 'legacy/http-api-query-parms-partition.mdx' ### Sample Payload @@ -152,7 +152,7 @@ The ACL required depends on the config entry kind being read: - `ns` `(string: "")` - Specifies the namespace of the config entry you lookup You can also [specify the namespace through other methods](#methods-to-specify-namespace). -@include 'http-api-query-parms-partition.mdx' +@include 'legacy/http-api-query-parms-partition.mdx' ### Sample Request @@ -178,7 +178,7 @@ $ curl \ This endpoint returns all config entries of the given kind. -@include 'http_api_results_filtered_by_acls.mdx' +@include 'legacy/http_api_results_filtered_by_acls.mdx' | Method | Path | Produces | | ------ | --------------- | ------------------ | @@ -232,7 +232,7 @@ The corresponding CLI command is [`consul config list`](/consul/commands/config/ - `ns` `(string: "")` - Specifies the namespace of the config entries you lookup. You can also [specify the namespace through other methods](#methods-to-specify-namespace). -@include 'http-api-query-parms-partition.mdx' +@include 'legacy/http-api-query-parms-partition.mdx' ### Sample Request @@ -327,7 +327,7 @@ The corresponding CLI command is [`consul config delete`](/consul/commands/confi - `ns` `(string: "")` - Specifies the namespace of the config entry you delete. You can also [specify the namespace through other methods](#methods-to-specify-namespace). -@include 'http-api-query-parms-partition.mdx' +@include 'legacy/http-api-query-parms-partition.mdx' ### Sample Request diff --git a/website/content/api-docs/connect/ca.mdx b/website/content/api-docs/connect/ca.mdx index ad808c13df44..538fd7620640 100644 --- a/website/content/api-docs/connect/ca.mdx +++ b/website/content/api-docs/connect/ca.mdx @@ -182,7 +182,7 @@ The corresponding CLI command is [`consul connect ca set-config`](/consul/comman - `Config` `(map[string]string: )` - The raw configuration to use for the chosen provider. For more information on configuring the service mesh CA - providers, see [Provider Config](/consul/docs/connect/ca). + providers, see [Provider Config](/consul/docs/secure-mesh/certificate). - `ForceWithoutCrossSigning` `(bool: false)` - Indicates that the CA change should be forced to complete even if the current CA doesn't support root cross-signing. diff --git a/website/content/api-docs/connect/intentions.mdx b/website/content/api-docs/connect/intentions.mdx index 95a5cfca60a4..db59da3528b3 100644 --- a/website/content/api-docs/connect/intentions.mdx +++ b/website/content/api-docs/connect/intentions.mdx @@ -9,11 +9,11 @@ description: |- # Intentions - Connect HTTP API The `/connect/intentions` endpoint provide tools for managing -[intentions](/consul/docs/connect/intentions). +[intentions](/consul/docs/secure-mesh/intention). -> **1.9.0 and later:** Reading and writing intentions has been migrated to the -[`service-intentions`](/consul/docs/connect/config-entries/service-intentions) +[`service-intentions`](/consul/docs/reference/config-entry/service-intentions) config entry kind. ## Upsert Intention by Name ((#upsert-intention-by-name)) @@ -60,7 +60,7 @@ The corresponding CLI command is [`consul intention create -replace`](/consul/co as shown in the [source and destination naming conventions](/consul/commands/intention#source-and-destination-naming). You can also [specify the namespace through other methods](#methods-to-specify-namespace). 
-@include 'http-api-query-parms-partition.mdx' +@include 'legacy/http-api-query-parms-partition.mdx' ### JSON Request Body Schema @@ -76,7 +76,7 @@ The corresponding CLI command is [`consul intention create -replace`](/consul/co the `Permissions` field. - `Permissions` `(array)` - The list of all [additional L7 - attributes](/consul/docs/connect/config-entries/service-intentions#intentionpermission) + attributes](/consul/docs/reference/config-entry/service-intentions#intentionpermission) that extend the intention match criteria. Permission precedence is applied top to bottom. For any given request the @@ -84,7 +84,7 @@ The corresponding CLI command is [`consul intention create -replace`](/consul/co evaluation. As with L4 intentions, traffic that fails to match any of the provided permissions in this intention will be subject to the default intention behavior is defined by the default [ACL - policy](/consul/docs/agent/config/config-files#acl_default_policy). + policy](/consul/docs/reference/agent/configuration-file/acl#acl_default_policy). This should be omitted for an L4 intention as it is mutually exclusive with the `Action` field. @@ -120,7 +120,7 @@ true -> **Deprecated** - This endpoint is deprecated in Consul 1.9.0 in favor of [upserting by name](#upsert-intention-by-name) or editing the -[`service-intentions`](/consul/docs/connect/config-entries/service-intentions) config +[`service-intentions`](/consul/docs/reference/config-entry/service-intentions) config entry for the destination. This endpoint creates a new intention and returns its ID if it was created @@ -153,7 +153,7 @@ The corresponding CLI command is [`consul intention create`](/consul/commands/in as shown in the [source and destination naming conventions](/consul/commands/intention#source-and-destination-naming). You can also [specify the namespace through other methods](#methods-to-specify-namespace). -@include 'http-api-query-parms-partition.mdx' +@include 'legacy/http-api-query-parms-partition.mdx' ### JSON Request Body Schema @@ -215,7 +215,7 @@ $ curl \ -> **Deprecated** - This endpoint is deprecated in Consul 1.9.0 in favor of [upserting by name](#upsert-intention-by-name) or editing the -[`service-intentions`](/consul/docs/connect/config-entries/service-intentions) config +[`service-intentions`](/consul/docs/reference/config-entry/service-intentions) config entry for the destination. This endpoint updates an intention with the given values. @@ -294,7 +294,7 @@ The corresponding CLI command is [`consul intention get`](/consul/commands/inten as shown in the [source and destination naming conventions](/consul/commands/intention#source-and-destination-naming). You can also [specify the namespace through other methods](#methods-to-specify-namespace). -@include 'http-api-query-parms-partition.mdx' +@include 'legacy/http-api-query-parms-partition.mdx' ### Sample Request @@ -325,7 +325,7 @@ $ curl \ -> **Deprecated** - This endpoint is deprecated in Consul 1.9.0 in favor of [reading by name](#read-specific-intention-by-name) or by viewing the -[`service-intentions`](/consul/docs/connect/config-entries/service-intentions) +[`service-intentions`](/consul/docs/reference/config-entry/service-intentions) config entry for the destination. This endpoint reads a specific intention. @@ -382,7 +382,7 @@ $ curl \ This endpoint lists all intentions. 
-@include 'http_api_results_filtered_by_acls.mdx' +@include 'legacy/http_api_results_filtered_by_acls.mdx' | Method | Path | Produces | | ------ | --------------------- | ------------------ | @@ -410,7 +410,7 @@ The corresponding CLI command is [`consul intention list`](/consul/commands/inte The `*` wildcard may be used to list intentions from all namespaces. You can also [specify the namespace through other methods](#methods-to-specify-namespace). -@include 'http-api-query-parms-partition.mdx' +@include 'legacy/http-api-query-parms-partition.mdx' ### Sample Request @@ -493,7 +493,7 @@ The corresponding CLI command is [`consul intention delete`](/consul/commands/in as shown in the [source and destination naming conventions](/consul/commands/intention#source-and-destination-naming). You can also [specify the namespace through other methods](#methods-to-specify-namespace). -@include 'http-api-query-parms-partition.mdx' +@include 'legacy/http-api-query-parms-partition.mdx' ### Sample Request @@ -507,7 +507,7 @@ $ curl \ -> **Deprecated** - This endpoint is deprecated in Consul 1.9.0 in favor of [deleting by name](#delete-intention-by-name) or editing the -[`service-intentions`](/consul/docs/connect/config-entries/service-intentions) config +[`service-intentions`](/consul/docs/reference/config-entry/service-intentions) config entry for the destination. This endpoint deletes a specific intention. @@ -638,7 +638,7 @@ The corresponding CLI command is [`consul intention match`](/consul/commands/int as shown in the [source and destination naming conventions](/consul/commands/intention#source-and-destination-naming). You can also [specify the namespace through other methods](#methods-to-specify-namespace). -@include 'http-api-query-parms-partition.mdx' +@include 'legacy/http-api-query-parms-partition.mdx' ### Sample Request diff --git a/website/content/api-docs/coordinate.mdx b/website/content/api-docs/coordinate.mdx index ab00205e6e1f..16df8a201719 100644 --- a/website/content/api-docs/coordinate.mdx +++ b/website/content/api-docs/coordinate.mdx @@ -78,7 +78,7 @@ within the same area. This endpoint returns the LAN network coordinates for all nodes in a given datacenter. -@include 'http_api_results_filtered_by_acls.mdx' +@include 'legacy/http_api_results_filtered_by_acls.mdx' | Method | Path | Produces | | ------ | ------------------- | ------------------ | diff --git a/website/content/api-docs/discovery-chain.mdx b/website/content/api-docs/discovery-chain.mdx index d84bd293c4e4..5ff117b11478 100644 --- a/website/content/api-docs/discovery-chain.mdx +++ b/website/content/api-docs/discovery-chain.mdx @@ -9,17 +9,17 @@ description: The /discovery-chain endpoints are for interacting with the discove -> **1.6.0+:** The discovery chain API is available in Consul versions 1.6.0 and newer. ~> This is a low-level API primarily targeted at developers building external -[service mesh proxy integrations](/consul/docs/connect/proxies/integrate). Future +[service mesh proxy integrations](/consul/docs/connect/proxy/custom). Future high-level proxy integration APIs may obviate the need for this API over time. The `/discovery-chain` endpoint returns the compiled [discovery -chain](/consul/docs/connect/manage-traffic/discovery-chain) for a service. +chain](/consul/docs/manage-traffic/discovery-chain) for a service. 
This will fetch all related [configuration -entries](/consul/docs/agent/config-entries) and render them into a form suitable -for use by a [service mesh proxy](/consul/docs/connect/proxies) implementation. This +entries](/consul/docs/fundamentals/config-entry) and render them into a form suitable +for use by a [service mesh proxy](/consul/docs/connect/proxy) implementation. This is a key component of [L7 Traffic -Management](/consul/docs/connect/manage-traffic). +Management](/consul/docs/manage-traffic). ## Read Compiled Discovery Chain @@ -66,14 +66,14 @@ The table below shows this endpoint's support for ### JSON Request Body Schema - `OverrideConnectTimeout` `(duration: 0s)` - Overrides the final [connect - timeout](/consul/docs/connect/config-entries/service-resolver#connecttimeout) for + timeout](/consul/docs/reference/config-entry/service-resolver#connecttimeout) for any service resolved in the compiled chain. This value comes from the `connect_timeout_ms` key in the opaque `config` field of the [upstream configuration](/consul/docs/connect/proxies/proxy-config-reference#upstream-configuration-reference). - `OverrideProtocol` `(string: "")` - Overrides the final - [protocol](/consul/docs/connect/config-entries/service-defaults#protocol) used in + [protocol](/consul/docs/reference/config-entry/service-defaults#protocol) used in the compiled discovery chain. If the chain ordinarily would be TCP and an L7 protocol is passed here the diff --git a/website/content/api-docs/event.mdx b/website/content/api-docs/event.mdx index 2e283fc46072..179397ee55f4 100644 --- a/website/content/api-docs/event.mdx +++ b/website/content/api-docs/event.mdx @@ -88,10 +88,10 @@ $ curl \ This endpoint returns the most recent events (up to 256) known by the agent. As a consequence of how the [event command](/consul/commands/event) works, each agent may have a different view of the events. Events are broadcast using the -[gossip protocol](/consul/docs/architecture/gossip), so they have no global ordering +[gossip protocol](/consul/docs/concept/gossip), so they have no global ordering nor do they make a promise of delivery. -@include 'http_api_results_filtered_by_acls.mdx' +@include 'legacy/http_api_results_filtered_by_acls.mdx' | Method | Path | Produces | | ------ | ------------- | ------------------ | @@ -146,7 +146,7 @@ $ curl \ The semantics of this endpoint's blocking queries are slightly different. Most blocking queries provide a monotonic index and block until a newer index is available. This can be supported as a consequence of the total ordering of the -[consensus protocol](/consul/docs/architecture/consensus). With gossip, there is no +[consensus protocol](/consul/docs/concept/consensus). With gossip, there is no ordering, and instead `X-Consul-Index` maps to the newest event that matches the query. diff --git a/website/content/api-docs/exported-services.mdx b/website/content/api-docs/exported-services.mdx index f7c89d4999a5..6224655264e9 100644 --- a/website/content/api-docs/exported-services.mdx +++ b/website/content/api-docs/exported-services.mdx @@ -12,7 +12,7 @@ description: The /exported-services endpoint lists exported services and their c The `/exported-services` endpoint returns a list of exported services, as well as the admin partitions and cluster peers that consume the services. -This list consists of the services that were exported using an [`exported-services` configuration entry](/consul/docs/connect/config-entries/exported-services). 
Sameness groups and wildcards in the configuration entry are expanded in the response. +This list consists of the services that were exported using an [`exported-services` configuration entry](/consul/docs/reference/config-entry/exported-services). Sameness groups and wildcards in the configuration entry are expanded in the response. ## List Exported Services @@ -36,7 +36,7 @@ The table below shows this endpoint's support for ### Query Parameters -@include 'http-api-query-parms-partition.mdx' +@include 'legacy/http-api-query-parms-partition.mdx' ### Sample Request diff --git a/website/content/api-docs/features/consistency.mdx b/website/content/api-docs/features/consistency.mdx index 746b062ab4c6..e0dd185e8a30 100644 --- a/website/content/api-docs/features/consistency.mdx +++ b/website/content/api-docs/features/consistency.mdx @@ -22,7 +22,7 @@ to fine-tune these trade-offs for their own use case at two levels: Consul servers are responsible for maintaining state information like the registration and health status of services and nodes. To protect this state against the potential failure of Consul servers, this state is replicated across three or more Consul servers in production using a -[consensus protocol](/consul/docs/architecture/consensus). +[consensus protocol](/consul/docs/concept/consensus). One Consul server is elected leader and acts as the ultimate authority on Consul's state. If a majority of Consul servers agree to a state change, the leader is responsible for recording @@ -74,8 +74,8 @@ Each HTTP API endpoint documents its support for the three read consistency mode ~> **Scaling read requests**: The most effective way to increase read scalability is to convert non-`stale` reads to `stale` reads. If most requests are already `stale` reads and additional load reduction is desired, use Consul Enterprise -[redundancy zones](/consul/docs/enterprise/redundancy) or -[read replicas](/consul/docs/enterprise/read-scale) +[redundancy zones](/consul/docs/manage/scale/redundancy-zone) or +[read replicas](/consul/docs/manage/scale/read-replica) to spread `stale` reads across additional, _non-voting_ Consul servers. Non-voting servers enhance read scalability without increasing the number of voting servers; adding more then 5 voting servers is not recommended because @@ -111,7 +111,7 @@ When making a request across federated Consul datacenters, requests are forwarde a local server to any remote server. Once in the remote datacenter, the request path is the same as a [local request with the same consistency mode](#intra-datacenter-request-behavior). The following diagrams show the cross-datacenter request paths when Consul servers in datacenters are -[federated either directly or via mesh gateways](/consul/docs/connect/gateways/mesh-gateway/wan-federation-via-mesh-gateways). +[federated either directly or via mesh gateways](/consul/docs/east-west/mesh-gateway/enable). @@ -131,7 +131,7 @@ The following diagrams show the cross-datacenter request paths when Consul serve ### Consul DNS Queries -When DNS queries are issued to [Consul's DNS interface](/consul/docs/services/discovery/dns-overview), +When DNS queries are issued to [Consul's DNS interface](/consul/docs/discover/dns), Consul uses the `stale` consistency mode by default when interfacing with its underlying Consul service discovery HTTP APIs ([Catalog](/consul/api-docs/catalog), [Health](/consul/api-docs/health), and [Prepared Query](/consul/api-docs/query)). 
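As a concrete illustration (assuming the default DNS port of 8600, a local agent, and a hypothetical service named `web`), such a lookup is answered under that default `stale` mode unless the agent's DNS configuration overrides it:

```shell-session
$ dig @127.0.0.1 -p 8600 web.service.consul

$ dig @127.0.0.1 -p 8600 web.service.consul SRV
```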
@@ -272,7 +272,7 @@ Note that some HTTP API endpoints support a `cached` parameter which has some of semantics as `stale` consistency mode but different trade offs. This behavior is described in [agent caching feature documentation](/consul/api-docs/features/caching) - -[`dns_config.allow_stale`]: /consul/docs/agent/config/config-files#allow_stale -[`dns_config.max_stale`]: /consul/docs/agent/config/config-files#max_stale -[`discovery_max_stale`]: /consul/docs/agent/config/config-files#discovery_max_stale + +[`dns_config.allow_stale`]: /consul/docs/reference/agent/configuration-file/dns#allow_stale +[`dns_config.max_stale`]: /consul/docs/reference/agent/configuration-file/dns#max_stale +[`discovery_max_stale`]: /consul/docs/reference/agent/configuration-file/general#discovery_max_stale diff --git a/website/content/api-docs/health.mdx b/website/content/api-docs/health.mdx index c6182f441b5d..adc4ec93e32d 100644 --- a/website/content/api-docs/health.mdx +++ b/website/content/api-docs/health.mdx @@ -21,7 +21,7 @@ use the [`/agent/check`](/consul/api-docs/agent/check) endpoints. This endpoint returns the checks specific to the node provided on the path. -@include 'http_api_results_filtered_by_acls.mdx' +@include 'legacy/http_api_results_filtered_by_acls.mdx' | Method | Path | Produces | | ------ | -------------------- | ------------------ | @@ -115,7 +115,7 @@ the following selectors and filter operations being supported: This endpoint returns the checks associated with the service provided on the path. -@include 'http_api_results_filtered_by_acls.mdx' +@include 'legacy/http_api_results_filtered_by_acls.mdx' | Method | Path | Produces | | ------ | ------------------------- | ------------------ | @@ -205,7 +205,7 @@ This endpoint returns the service instances providing the service indicated on t Users can also build in support for dynamic load balancing and other features by incorporating the use of health checks. -@include 'http_api_results_filtered_by_acls.mdx' +@include 'legacy/http_api_results_filtered_by_acls.mdx' | Method | Path | Produces | | ------ | -------------------------- | ------------------ | @@ -222,7 +222,7 @@ The table below shows this endpoint's support for | `YES` 1 | `all` | `background refresh` | `node:read,service:read` |

- 1some query parameters will use the streaming backend for blocking queries. + 1some query parameters will use the streaming backend for blocking queries.

### Path Parameters @@ -238,7 +238,7 @@ The table below shows this endpoint's support for ascending order based on the estimated round trip time from that node. Passing `?near=_agent` uses the agent's node for the sort. ~> **Note:** Using `near` will ignore - [`use_streaming_backend`](/consul/docs/agent/config/config-files#use_streaming_backend) and always + [`use_streaming_backend`](/consul/docs/reference/agent/configuration-file/general#use_streaming_backend) and always use blocking queries, because the data required to sort the results is not available to the streaming backend. @@ -265,16 +265,16 @@ The table below shows this endpoint's support for - `merge-central-config` - Include this flag in a request for `connect-proxy` kind or `*-gateway` kind services to return a fully resolved service definition that includes merged values from the - [proxy-defaults/global](/consul/docs/connect/config-entries/proxy-defaults) and - [service-defaults/:service](/consul/docs/connect/config-entries/service-defaults) config entries. - Returning a fully resolved service definition is useful when a service was registered using the + [proxy-defaults/global](/consul/docs/reference/config-entry/proxy-defaults) and + [service-defaults/:service](/consul/docs/reference/config-entry/service-defaults) config entries. + Returning a fully resolved service definition is useful when a service was registered using the [/catalog/register](/consul/api-docs/catalog#register_entity) endpoint, which does not automatically merge config entries. - `ns` `(string: "")` - Specifies the namespace of the service. You can also [specify the namespace through other methods](#methods-to-specify-namespace). - `sg` `(string: "")` - Specifies the sameness group the service is a member of to - facilitate requests to identical services in other peers or partitions. + facilitate requests to identical services in other peers or partitions. ### Sample Request @@ -420,7 +420,7 @@ This will include both proxies and native integrations. A service may register both mesh-capable and incapable services at the same time, so this endpoint may be used to filter only the mesh-capable endpoints. -@include 'http_api_results_filtered_by_acls.mdx' +@include 'legacy/http_api_results_filtered_by_acls.mdx' | Method | Path | Produces | | ------ | -------------------------- | ------------------ | @@ -434,9 +434,9 @@ Parameters and response format are the same as -> **1.8.0+:** This API is available in Consul versions 1.8.0 and later. This endpoint returns the service instances providing an [ingress -gateway](/consul/docs/connect/gateways/ingress-gateway) for a service in a given datacenter. +gateway](/consul/docs/north-south/ingress-gateway) for a service in a given datacenter. -@include 'http_api_results_filtered_by_acls.mdx' +@include 'legacy/http_api_results_filtered_by_acls.mdx' | Method | Path | Produces | | ------ | -------------------------- | ------------------ | @@ -452,7 +452,7 @@ endpoint does not support the `peer` query parameter and the [streaming backend] This endpoint returns the checks in the state provided on the path. 
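For example (assuming a local agent on the default HTTP port), a request for all checks currently in the `passing` state might look like the following; other supported states can be substituted in the path:

```shell-session
$ curl \
    http://127.0.0.1:8500/v1/health/state/passing
```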
-@include 'http_api_results_filtered_by_acls.mdx' +@include 'legacy/http_api_results_filtered_by_acls.mdx' | Method | Path | Produces | | ------ | ---------------------- | ------------------ | diff --git a/website/content/api-docs/index.mdx b/website/content/api-docs/index.mdx index 20d0021e30cd..9ce832702d12 100644 --- a/website/content/api-docs/index.mdx +++ b/website/content/api-docs/index.mdx @@ -37,7 +37,7 @@ The following API endpoints give you control over access to services in your net Use the following API endpoints enable network observability. - [`/status`](/consul/api-docs/status): Debug your Consul datacenter by returning low-level Raft information about Consul server peers. -- [`/agent/metrics`](/consul/api-docs/agent#view-metrics): Retrieve metrics for the most recent intervals that have finished. For more information about metrics, refer to [Telemetry](/consul/docs/agent/telemetry). +- [`/agent/metrics`](/consul/api-docs/agent#view-metrics): Retrieve metrics for the most recent intervals that have finished. For more information about metrics, refer to [Telemetry](/consul/docs/monitor/telemetry/agent). ## Manage Consul @@ -56,4 +56,4 @@ The following API endpoints enable you to dynamically configure your services. - [`/event`](/consul/api-docs/event): Start a custom event that you can use to build scripts and automations. - [`/kv`](/consul/api-docs/kv): Add, remove, and update metadata stored in the Consul KV store. -- [`/session`](/consul/api-docs/session): Create and manage [sessions](/consul/docs/dynamic-app-config/sessions) in Consul. You can use sessions to build distributed and granular locks to ensure nodes are properly writing to the Consul KV store. +- [`/session`](/consul/api-docs/session): Create and manage [sessions](/consul/docs/automate/session) in Consul. You can use sessions to build distributed and granular locks to ensure nodes are properly writing to the Consul KV store. diff --git a/website/content/api-docs/kv.mdx b/website/content/api-docs/kv.mdx index 3eef6eb27f71..8ed1d16572a5 100644 --- a/website/content/api-docs/kv.mdx +++ b/website/content/api-docs/kv.mdx @@ -79,7 +79,7 @@ The corresponding CLI command is [`consul kv get`](/consul/commands/kv/get). For recursive lookups, the namespace may be specified as '\*' to return results for all namespaces. -@include 'http-api-query-parms-partition.mdx' +@include 'legacy/http-api-query-parms-partition.mdx' ### Sample Request @@ -212,7 +212,7 @@ The corresponding CLI command is [`consul kv put`](/consul/commands/kv/put). session has locked the key.** For an example of how to use the lock feature, check the - [Leader Election tutorial](/consul/docs/dynamic-app-config/sessions/application-leader-election). + [Leader Election tutorial](/consul/docs/automate/application-leader-election). - `release` `(string: "")` - Supply a session ID to use in a release operation. This is useful when paired with `?acquire=` as it allows clients to yield a lock. This @@ -222,7 +222,7 @@ The corresponding CLI command is [`consul kv put`](/consul/commands/kv/put). - `ns` `(string: "")` - Specifies the namespace to query. You can also [specify the namespace through other methods](#methods-to-specify-namespace). -@include 'http-api-query-parms-partition.mdx' +@include 'legacy/http-api-query-parms-partition.mdx' ### Sample Payload @@ -293,7 +293,7 @@ The corresponding CLI command is [`consul kv delete`](/consul/commands/kv/delete - `ns` `(string: "")` - Specifies the namespace to query. 
You can also [specify the namespace through other methods](#methods-to-specify-namespace). -@include 'http-api-query-parms-partition.mdx' +@include 'legacy/http-api-query-parms-partition.mdx' ### Sample Request diff --git a/website/content/api-docs/namespaces.mdx b/website/content/api-docs/namespaces.mdx index 8c963d80a9f0..b6371a80298b 100644 --- a/website/content/api-docs/namespaces.mdx +++ b/website/content/api-docs/namespaces.mdx @@ -67,7 +67,7 @@ The corresponding CLI command is [`consul namespace create`](/consul/commands/na - `Meta` `(map: )` - Specifies arbitrary KV metadata to associate with the namespace. -@include 'http-api-body-options-partition.mdx' +@include 'legacy/http-api-body-options-partition.mdx' ### Sample Payload @@ -173,7 +173,7 @@ The corresponding CLI command is [`consul namespace read`](/consul/commands/name ### Query Parameters -@include 'http-api-query-parms-partition.mdx' +@include 'legacy/http-api-query-parms-partition.mdx' ### Sample Request @@ -278,7 +278,7 @@ The corresponding CLI command is [`consul namespace update`](/consul/commands/na - `Meta` `(map: )` - Specifies arbitrary KV metadata to associate with the namespace. -@include 'http-api-body-options-partition.mdx' +@include 'legacy/http-api-body-options-partition.mdx' ### Sample Payload @@ -385,7 +385,7 @@ The corresponding CLI command is [`consul namespace delete`](/consul/commands/na ### Query Parameters -@include 'http-api-query-parms-partition.mdx' +@include 'legacy/http-api-query-parms-partition.mdx' ### Sample Request @@ -437,7 +437,7 @@ $ curl --request DELETE \ This endpoint lists all the Namespaces. The output will be filtered based on the privileges of the ACL token used for the request. -@include 'http_api_results_filtered_by_acls.mdx' +@include 'legacy/http_api_results_filtered_by_acls.mdx' | Method | Path | Produces | | ------ | ------------- | ------------------ | @@ -460,7 +460,7 @@ The corresponding CLI command is [`consul namespace list`](/consul/commands/name ### Query Parameters -@include 'http-api-query-parms-partition.mdx' +@include 'legacy/http-api-query-parms-partition.mdx' ### Sample Request diff --git a/website/content/api-docs/operator/autopilot.mdx b/website/content/api-docs/operator/autopilot.mdx index e59898862a26..e71cd09eac62 100644 --- a/website/content/api-docs/operator/autopilot.mdx +++ b/website/content/api-docs/operator/autopilot.mdx @@ -68,7 +68,7 @@ $ curl \ ``` For more information about the Autopilot configuration options, see the -[agent configuration section](/consul/docs/agent/config/config-files#autopilot). +[agent configuration section](/consul/docs/reference/agent/configuration-file/general#autopilot). ## Update Configuration @@ -327,7 +327,7 @@ $ curl \ - `OptimisticFailuretolerance` is the maximum number of servers that could fail in the right order over the right period of time without causing an outage. This value is only useful when using the [Redundancy - Zones feature](/consul/docs/enterprise/redundancy) with autopilot. + Zones feature](/consul/docs/manage/scale/redundancy-zone) with autopilot. - `Servers` is a mapping of server ID to an object holding detailed information about that server. The format of the detailed info is documented in its own section. 
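As a rough sketch of how this information is typically retrieved (assuming a local agent on the default HTTP port and, if ACLs are enabled, a token granting `operator:read`), the state described above can be read with a request along these lines:

```shell-session
$ curl \
    --header "X-Consul-Token: <operator token>" \
    http://127.0.0.1:8500/v1/operator/autopilot/state
```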
diff --git a/website/content/api-docs/operator/keyring.mdx b/website/content/api-docs/operator/keyring.mdx index ec0b14fc8db7..582caf9e265b 100644 --- a/website/content/api-docs/operator/keyring.mdx +++ b/website/content/api-docs/operator/keyring.mdx @@ -9,7 +9,7 @@ description: |- # Keyring Operator HTTP API The `/operator/keyring` endpoints allow for management of the gossip encryption -keyring. Please see the [Gossip Protocol Guide](/consul/docs/architecture/gossip) for +keyring. Please see the [Gossip Protocol Guide](/consul/docs/concept/gossip) for more details on the gossip protocol and its use. ## List Gossip Encryption Keys diff --git a/website/content/api-docs/operator/raft.mdx b/website/content/api-docs/operator/raft.mdx index 7b55284644c5..c5f8f9696b85 100644 --- a/website/content/api-docs/operator/raft.mdx +++ b/website/content/api-docs/operator/raft.mdx @@ -11,7 +11,7 @@ description: |- The `/operator/raft` endpoints provide tools for management of Raft the consensus subsystem and cluster quorum. -Please see the [Consensus Protocol Guide](/consul/docs/architecture/consensus) for +Please see the [Consensus Protocol Guide](/consul/docs/concept/consensus) for more information about Raft consensus protocol and its use. ## Read Configuration diff --git a/website/content/api-docs/operator/segment.mdx b/website/content/api-docs/operator/segment.mdx index e65268e2ad91..78214e95480d 100644 --- a/website/content/api-docs/operator/segment.mdx +++ b/website/content/api-docs/operator/segment.mdx @@ -18,7 +18,7 @@ The network area functionality described here is available only in later. Network segments are operator-defined sections of agents on the LAN, typically isolated from other segments by network configuration. -Please check the [Network Segments documentation](/consul/docs/enterprise/network-segments/network-segments-overview) for more details. +Please check the [Network Segments documentation](/consul/docs/multi-tenant/network-segment) for more details. ## List Network Segments diff --git a/website/content/api-docs/peering.mdx b/website/content/api-docs/peering.mdx index f0e4d7767710..1dca013430ac 100644 --- a/website/content/api-docs/peering.mdx +++ b/website/content/api-docs/peering.mdx @@ -34,7 +34,7 @@ The table below shows this endpoint's support for and configuration entries such as `service-intentions`. This field must be a valid DNS hostname label. -@include 'http-api-body-options-partition.mdx' +@include 'legacy/http-api-body-options-partition.mdx' - `ServerExternalAddresses` `([]string: )` - The addresses for the cluster that generates the peering token. Addresses take the form `{host or IP}:port`. You can specify one or more load balancers or external IPs that route external traffic to this cluster's Consul servers. @@ -99,7 +99,7 @@ The table below shows this endpoint's support for and configuration entries such as `service-intentions`. This field must be a valid DNS hostname label. -@include 'http-api-body-options-partition.mdx' +@include 'legacy/http-api-body-options-partition.mdx' - `PeeringToken` `(string: )` - The peering token fetched from the peer cluster. 
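To sketch how these fields fit together (assuming a local agent, a hypothetical peer name of `cluster-01`, and a peering token previously generated by the other cluster), an establishment request might look roughly like this:

```shell-session
$ curl \
    --request POST \
    --data '{"PeerName": "cluster-01", "PeeringToken": "<token from the peer cluster>"}' \
    http://127.0.0.1:8500/v1/peering/establish
```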
@@ -159,7 +159,7 @@ The table below shows this endpoint's support for ### Query Parameters -@include 'http-api-query-parms-partition.mdx' +@include 'legacy/http-api-query-parms-partition.mdx' ### Sample Request @@ -231,7 +231,7 @@ The table below shows this endpoint's support for ### Query Parameters -@include 'http-api-query-parms-partition.mdx' +@include 'legacy/http-api-query-parms-partition.mdx' ### Sample Request @@ -263,7 +263,7 @@ $ curl --request DELETE \ This endpoint lists all the peerings. -@include 'http_api_results_filtered_by_acls.mdx' +@include 'legacy/http_api_results_filtered_by_acls.mdx' | Method | Path | Produces | | ------ | ------------- | ------------------ | @@ -281,7 +281,7 @@ The table below shows this endpoint's support for ### Query Parameters -@include 'http-api-query-parms-partition.mdx' +@include 'legacy/http-api-query-parms-partition.mdx' ### Sample Request diff --git a/website/content/api-docs/query.mdx b/website/content/api-docs/query.mdx index f8c71bfb065a..a42c16ff6c2a 100644 --- a/website/content/api-docs/query.mdx +++ b/website/content/api-docs/query.mdx @@ -11,7 +11,7 @@ The `/query` endpoints create, update, destroy, and execute prepared queries. Prepared queries allow you to register a complex service query and then execute it later by specifying the query ID or name. Consul returns a set of healthy nodes that provide a given service. Refer to -[Enable Dynamic DNS Queries](/consul/docs/services/discovery/dns-dynamic-lookups) for additional information. +[Enable Dynamic DNS Queries](/consul/docs/discover/service/dynamic) for additional information. Check the [Geo Failover tutorial](/consul/tutorials/developer-discovery/automate-geo-failover) for details and examples for using prepared queries to implement geo failover for services. @@ -215,7 +215,7 @@ The table below shows this endpoint's support for service instances in the local datacenter. This option cannot be used with `NearestN` or `Datacenters`. - - `Peer` `(string: "")` - Specifies a [cluster peer](/consul/docs/connect/cluster-peering) to use for + - `Peer` `(string: "")` - Specifies a [cluster peer](/consul/docs/east-west/cluster-peering) to use for failover. - `Datacenter` `(string: "")` - Specifies a WAN federated datacenter to forward the @@ -325,7 +325,7 @@ $ curl \ This endpoint returns a list of all prepared queries. -@include 'http_api_results_filtered_by_acls.mdx' +@include 'legacy/http_api_results_filtered_by_acls.mdx' | Method | Path | Produces | | ------ | -------- | ------------------ | diff --git a/website/content/api-docs/session.mdx b/website/content/api-docs/session.mdx index 3f13f178d711..21d5b2faa8da 100644 --- a/website/content/api-docs/session.mdx +++ b/website/content/api-docs/session.mdx @@ -77,7 +77,7 @@ The table below shows this endpoint's support for 86400s). If provided, the session is invalidated if it is not renewed before the TTL expires. The lowest practical TTL should be used to keep the number of managed sessions low. When locks are forcibly expired, such as when following - the [leader election pattern](/consul/docs/dynamic-app-config/sessions/application-leader-election) in an application, + the [leader election pattern](/consul/docs/automate/application-leader-election) in an application, sessions may not be reaped for up to double this TTL, so long TTL values (> 1 hour) should be avoided. Valid time units include "s", "m" and "h". @@ -230,7 +230,7 @@ If the session does not exist, an empty JSON list `[]` is returned. 
This endpoint returns the active sessions for a given node. -@include 'http_api_results_filtered_by_acls.mdx' +@include 'legacy/http_api_results_filtered_by_acls.mdx' | Method | Path | Produces | | :----- | :-------------------- | ------------------ | @@ -293,7 +293,7 @@ $ curl \ This endpoint returns the list of active sessions. -@include 'http_api_results_filtered_by_acls.mdx' +@include 'legacy/http_api_results_filtered_by_acls.mdx' | Method | Path | Produces | | :----- | :-------------- | ------------------ | diff --git a/website/content/api-docs/snapshot.mdx b/website/content/api-docs/snapshot.mdx index 69460e9e3f70..a4cb8a244d1f 100644 --- a/website/content/api-docs/snapshot.mdx +++ b/website/content/api-docs/snapshot.mdx @@ -10,7 +10,7 @@ description: |- The `/snapshot` endpoints save and restore the state of the Consul servers for disaster recovery. Snapshots include all state managed by Consul's -Raft [consensus protocol](/consul/docs/architecture/consensus). +Raft [consensus protocol](/consul/docs/concept/consensus). ## Generate Snapshot diff --git a/website/content/commands/acl/auth-method/create.mdx b/website/content/commands/acl/auth-method/create.mdx index 4f1015cfb61d..61a02008df1b 100644 --- a/website/content/commands/acl/auth-method/create.mdx +++ b/website/content/commands/acl/auth-method/create.mdx @@ -67,9 +67,9 @@ Usage: `consul acl auth-method create [options] [args]` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' - `-namespace-rule-bind-namespace=` - Namespace to bind on match. Can use `${var}` interpolation. Added in Consul 1.8.0. @@ -80,9 +80,9 @@ Usage: `consul acl auth-method create [options] [args]` #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Examples diff --git a/website/content/commands/acl/auth-method/delete.mdx b/website/content/commands/acl/auth-method/delete.mdx index acb095c5872b..3a54675a07c4 100644 --- a/website/content/commands/acl/auth-method/delete.mdx +++ b/website/content/commands/acl/auth-method/delete.mdx @@ -31,15 +31,15 @@ Usage: `consul acl auth-method delete [options]` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Examples diff --git a/website/content/commands/acl/auth-method/list.mdx b/website/content/commands/acl/auth-method/list.mdx index 42d7285ca76d..8e9a2d2b0be2 100644 --- a/website/content/commands/acl/auth-method/list.mdx +++ b/website/content/commands/acl/auth-method/list.mdx @@ -34,15 +34,15 @@ Usage: `consul acl auth-method list` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Examples diff --git 
a/website/content/commands/acl/auth-method/read.mdx b/website/content/commands/acl/auth-method/read.mdx index 015852f9684c..877d240a9f98 100644 --- a/website/content/commands/acl/auth-method/read.mdx +++ b/website/content/commands/acl/auth-method/read.mdx @@ -36,15 +36,15 @@ Usage: `consul acl auth-method read [options] [args]` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Examples diff --git a/website/content/commands/acl/auth-method/update.mdx b/website/content/commands/acl/auth-method/update.mdx index d328ea999c2b..35889b811ebe 100644 --- a/website/content/commands/acl/auth-method/update.mdx +++ b/website/content/commands/acl/auth-method/update.mdx @@ -72,9 +72,9 @@ Usage: `consul acl auth-method update [options] [args]` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' - `-namespace-rule-bind-namespace=` - Namespace to bind on match. Can use `${var}` interpolation. Added in Consul 1.8.0. @@ -85,9 +85,9 @@ Usage: `consul acl auth-method update [options] [args]` #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Examples diff --git a/website/content/commands/acl/binding-rule/create.mdx b/website/content/commands/acl/binding-rule/create.mdx index b32b2ce934e5..e94bbb545ccb 100644 --- a/website/content/commands/acl/binding-rule/create.mdx +++ b/website/content/commands/acl/binding-rule/create.mdx @@ -47,15 +47,15 @@ Usage: `consul acl binding-rule create [options] [args]` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Examples diff --git a/website/content/commands/acl/binding-rule/delete.mdx b/website/content/commands/acl/binding-rule/delete.mdx index 9164f62a7c0e..cf82d3cf6099 100644 --- a/website/content/commands/acl/binding-rule/delete.mdx +++ b/website/content/commands/acl/binding-rule/delete.mdx @@ -32,15 +32,15 @@ Usage: `consul acl binding-rule delete [options]` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Examples diff --git a/website/content/commands/acl/binding-rule/list.mdx b/website/content/commands/acl/binding-rule/list.mdx index cbb9c184c821..8e5052640a36 100644 --- a/website/content/commands/acl/binding-rule/list.mdx +++ b/website/content/commands/acl/binding-rule/list.mdx @@ -34,15 +34,15 @@ 
Usage: `consul acl binding-rule list` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Examples diff --git a/website/content/commands/acl/binding-rule/read.mdx b/website/content/commands/acl/binding-rule/read.mdx index 50f1e8112f88..5809a721c9e3 100644 --- a/website/content/commands/acl/binding-rule/read.mdx +++ b/website/content/commands/acl/binding-rule/read.mdx @@ -37,15 +37,15 @@ Usage: `consul acl binding-rule read [options] [args]` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Examples diff --git a/website/content/commands/acl/binding-rule/update.mdx b/website/content/commands/acl/binding-rule/update.mdx index 22ad67e0cfc9..06dc4cf5fa00 100644 --- a/website/content/commands/acl/binding-rule/update.mdx +++ b/website/content/commands/acl/binding-rule/update.mdx @@ -54,15 +54,15 @@ Usage: `consul acl binding-rule update [options] [args]` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Examples diff --git a/website/content/commands/acl/bootstrap.mdx b/website/content/commands/acl/bootstrap.mdx index e5b43a3ec6b4..626609be86fb 100644 --- a/website/content/commands/acl/bootstrap.mdx +++ b/website/content/commands/acl/bootstrap.mdx @@ -49,6 +49,6 @@ Policies: #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' diff --git a/website/content/commands/acl/policy/create.mdx b/website/content/commands/acl/policy/create.mdx index e178536515e0..f952e811d201 100644 --- a/website/content/commands/acl/policy/create.mdx +++ b/website/content/commands/acl/policy/create.mdx @@ -49,15 +49,15 @@ Usage: `consul acl policy create [options] [args]` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Examples diff --git a/website/content/commands/acl/policy/delete.mdx b/website/content/commands/acl/policy/delete.mdx index 2b611f5a78dc..bc6b2b15f706 100644 --- a/website/content/commands/acl/policy/delete.mdx +++ b/website/content/commands/acl/policy/delete.mdx @@ -34,15 +34,15 @@ Usage: `consul acl policy delete [options]` #### Enterprise Options 
-@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Examples diff --git a/website/content/commands/acl/policy/list.mdx b/website/content/commands/acl/policy/list.mdx index ad3b114b9793..3d19ad8240c4 100644 --- a/website/content/commands/acl/policy/list.mdx +++ b/website/content/commands/acl/policy/list.mdx @@ -34,15 +34,15 @@ Usage: `consul acl policy list` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Examples diff --git a/website/content/commands/acl/policy/read.mdx b/website/content/commands/acl/policy/read.mdx index 28e2f51a79d0..1e07142a8abb 100644 --- a/website/content/commands/acl/policy/read.mdx +++ b/website/content/commands/acl/policy/read.mdx @@ -39,15 +39,15 @@ Usage: `consul acl policy read [options] [args]` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Examples diff --git a/website/content/commands/acl/policy/update.mdx b/website/content/commands/acl/policy/update.mdx index edc529a5da9f..dab0f2a51d57 100644 --- a/website/content/commands/acl/policy/update.mdx +++ b/website/content/commands/acl/policy/update.mdx @@ -58,15 +58,15 @@ Usage: `consul acl policy update [options] [args]` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Examples diff --git a/website/content/commands/acl/role/create.mdx b/website/content/commands/acl/role/create.mdx index 82989a2697a1..c1d83df84f9e 100644 --- a/website/content/commands/acl/role/create.mdx +++ b/website/content/commands/acl/role/create.mdx @@ -52,15 +52,15 @@ Usage: `consul acl role create [options] [args]` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Examples diff --git a/website/content/commands/acl/role/delete.mdx b/website/content/commands/acl/role/delete.mdx index 9a39e97ab843..61b721edbb29 100644 --- a/website/content/commands/acl/role/delete.mdx +++ 
b/website/content/commands/acl/role/delete.mdx @@ -34,15 +34,15 @@ Usage: `consul acl role delete [options]` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Examples diff --git a/website/content/commands/acl/role/list.mdx b/website/content/commands/acl/role/list.mdx index afd8c75a41a9..fe75b99cfcc4 100644 --- a/website/content/commands/acl/role/list.mdx +++ b/website/content/commands/acl/role/list.mdx @@ -34,15 +34,15 @@ Usage: `consul acl role list` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Examples diff --git a/website/content/commands/acl/role/read.mdx b/website/content/commands/acl/role/read.mdx index aa5008f57d5c..3f04c96ce2ba 100644 --- a/website/content/commands/acl/role/read.mdx +++ b/website/content/commands/acl/role/read.mdx @@ -39,15 +39,15 @@ Usage: `consul acl role read [options] [args]` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Examples diff --git a/website/content/commands/acl/role/update.mdx b/website/content/commands/acl/role/update.mdx index 1308e445ba7f..6b27cd5778cf 100644 --- a/website/content/commands/acl/role/update.mdx +++ b/website/content/commands/acl/role/update.mdx @@ -63,15 +63,15 @@ Usage: `consul acl role update [options] [args]` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Examples diff --git a/website/content/commands/acl/set-agent-token.mdx b/website/content/commands/acl/set-agent-token.mdx index c56b706871b2..f77b3317405c 100644 --- a/website/content/commands/acl/set-agent-token.mdx +++ b/website/content/commands/acl/set-agent-token.mdx @@ -14,7 +14,7 @@ Corresponding HTTP API Endpoint: [\[PUT\] /v1/agent/token/:type](/consul/api-doc This command updates the ACL tokens currently in use by the agent. It can be used to introduce ACL tokens to the agent for the first time, or to update tokens that were initially loaded from the agent's configuration. 
Tokens are not persisted unless -[`acl.enable_token_persistence`](/consul/docs/agent/config/config-files#acl_enable_token_persistence) +[`acl.enable_token_persistence`](/consul/docs/reference/agent/configuration-file/acl#acl_enable_token_persistence) is `true`, so tokens will need to be updated again if that option is `false` and the agent is restarted. @@ -38,7 +38,7 @@ The token types are: - `dns` - Specifies the token that agents use to request information needed to respond to DNS queries. If the `dns` token is not set, Consul uses the `default` token by default. Because the `default` token allows unauthenticated HTTP API access to list nodes and services, we - strongly recommend using the `dns` token. Create DNS tokens using the [templated policy](/consul/docs/security/acl/tokens/create/create-a-dns-token#create_a_dns_token) option + strongly recommend using the `dns` token. Create DNS tokens using the [templated policy](/consul/docs/secure/acl/token/dns) option to ensure that the token has the permissions needed to respond to all DNS queries. - `config_file_service_registration` - This is the token that the agent uses to @@ -61,9 +61,9 @@ The token types are: ### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Examples diff --git a/website/content/commands/acl/templated-policy/list.mdx b/website/content/commands/acl/templated-policy/list.mdx index bc77f4c3f526..008b291f8ed5 100644 --- a/website/content/commands/acl/templated-policy/list.mdx +++ b/website/content/commands/acl/templated-policy/list.mdx @@ -27,9 +27,9 @@ Usage: `consul acl templated-policy list` ### API options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Example diff --git a/website/content/commands/acl/templated-policy/preview.mdx b/website/content/commands/acl/templated-policy/preview.mdx index cfffa6db7e55..08608b8b1b7d 100644 --- a/website/content/commands/acl/templated-policy/preview.mdx +++ b/website/content/commands/acl/templated-policy/preview.mdx @@ -36,15 +36,15 @@ Usage: `consul acl templated-policy preview [options] [args]` ### Enterprise options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' ### API options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Examples diff --git a/website/content/commands/acl/templated-policy/read.mdx b/website/content/commands/acl/templated-policy/read.mdx index 94e461e4ea94..cedebe8718d4 100644 --- a/website/content/commands/acl/templated-policy/read.mdx +++ b/website/content/commands/acl/templated-policy/read.mdx @@ -28,9 +28,9 @@ Usage: `consul acl templated-policy read [options] [args]` ### API options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Examples diff --git a/website/content/commands/acl/token/clone.mdx b/website/content/commands/acl/token/clone.mdx index 937097fb284d..45420e0ee80a 100644 --- a/website/content/commands/acl/token/clone.mdx +++ b/website/content/commands/acl/token/clone.mdx @@ 
-38,15 +38,15 @@ Usage: `consul acl token clone [options]` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Examples diff --git a/website/content/commands/acl/token/create.mdx b/website/content/commands/acl/token/create.mdx index 8cd5fb6b948b..92907226dd87 100644 --- a/website/content/commands/acl/token/create.mdx +++ b/website/content/commands/acl/token/create.mdx @@ -66,15 +66,15 @@ Usage: `consul acl token create [options] [args]` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Examples diff --git a/website/content/commands/acl/token/delete.mdx b/website/content/commands/acl/token/delete.mdx index 13a3ca26e6e3..6d727ddcf112 100644 --- a/website/content/commands/acl/token/delete.mdx +++ b/website/content/commands/acl/token/delete.mdx @@ -32,15 +32,15 @@ Usage: `consul acl token delete [options]` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Examples diff --git a/website/content/commands/acl/token/list.mdx b/website/content/commands/acl/token/list.mdx index fd3b6c215114..de9c05a740e0 100644 --- a/website/content/commands/acl/token/list.mdx +++ b/website/content/commands/acl/token/list.mdx @@ -34,15 +34,15 @@ Usage: `consul acl token list` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Examples diff --git a/website/content/commands/acl/token/read.mdx b/website/content/commands/acl/token/read.mdx index dfcbea12d8e1..934e6718673f 100644 --- a/website/content/commands/acl/token/read.mdx +++ b/website/content/commands/acl/token/read.mdx @@ -43,15 +43,15 @@ Usage: `consul acl token read [options] [args]` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Examples diff --git a/website/content/commands/acl/token/update.mdx b/website/content/commands/acl/token/update.mdx index 1f89d828641c..5a72b36d5368 100644 --- 
a/website/content/commands/acl/token/update.mdx +++ b/website/content/commands/acl/token/update.mdx @@ -93,15 +93,15 @@ instead.

#### Enterprise Options

-@include 'cli-http-api-partition-options.mdx'
+@include 'legacy/cli-http-api-partition-options.mdx'

-@include 'http_api_namespace_options.mdx'
+@include 'legacy/http_api_namespace_options.mdx'

#### API Options

-@include 'http_api_options_client.mdx'
+@include 'legacy/http_api_options_client.mdx'

-@include 'http_api_options_server.mdx'
+@include 'legacy/http_api_options_server.mdx'

## Examples

diff --git a/website/content/commands/agent.mdx b/website/content/commands/agent.mdx index 6e2a7524119e..2029327b5240 100644 --- a/website/content/commands/agent.mdx +++ b/website/content/commands/agent.mdx @@ -9,11 +9,565 @@ description: >-

# Consul Agent

-The `consul agent` command is the heart of Consul: it runs the agent that
-performs the important task of maintaining membership information,
-running checks, announcing services, handling queries, etc.
-
-Due to the power and flexibility of this command, the Consul agent
-is documented in its own section. See the [Consul Agent](/consul/docs/agent)
-section for more information on how to use this command and the
-options it has.
+This page describes the available command-line options for the Consul agent.
+
+## Usage
+
+```shell-session
+consul agent
+```
+
+## Environment Variables
+
+Environment variables **cannot** be used to configure the Consul client. They
+_can_ be used when running other `consul` CLI commands that connect with a
+running agent, e.g. `CONSUL_HTTP_ADDR=192.168.0.1:8500 consul members`.
+
+Refer to [Consul Commands](/consul/commands#environment-variables) for more
+information.
+
+## Options
+
+- `-auto-reload-config` ((#\_auto_reload_config)) - This option directs Consul to automatically reload the [reloadable configuration options](/consul/docs/agent/config#reloadable-configuration) when configuration files change.
+  Consul also watches the certificate and key files specified with the `cert_file` and `key_file` parameters and reloads the configuration if the files are updated.
+
+- `-check_output_max_size` - Overrides the default 4KB limit on the maximum
+  size of check output; this must be a positive value. Limiting this size puts
+  less pressure on Consul servers when many checks produce very large output.
+  To completely disable check output capture, use
+  [`discard_check_output`](/consul/docs/reference/agent/configuration-file/general#discard_check_output).
+
+- `-client` ((#\_client)) - The address to which Consul will bind client
+  interfaces, including the HTTP and DNS servers. By default, this is "127.0.0.1",
+  allowing only loopback connections. In Consul 1.0 and later this can be set to
+  a space-separated list of addresses to bind to, or a [go-sockaddr]
+  template that can potentially resolve to multiple addresses.
+
+  ```shell
+  $ consul agent -dev -client '{{ GetPrivateInterfaces | exclude "type" "ipv6" | join "address" " " }}'
+  ```
+
+  ```shell
+  $ consul agent -dev -client '{{ GetPrivateInterfaces | join "address" " " }} {{ GetAllInterfaces | include "flags" "loopback" | join "address" " " }}'
+  ```
+
+  ```shell
+  $ consul agent -dev -client '{{ GetPrivateInterfaces | exclude "name" "br.*" | join "address" " " }}'
+  ```
+
+- `-data-dir` ((#\_data_dir)) - This flag provides a data directory for
+  the agent to store state. This is required for all agents.
The directory should + be durable across reboots. This is especially critical for agents that are running + in server mode as they must be able to persist cluster state. Additionally, the + directory must support the use of filesystem locking, meaning some types of mounted + folders (e.g. VirtualBox shared folders) may not be suitable. + + **Note:** both server and non-server agents may store ACL tokens in the state in this directory so read access may grant access to any tokens on servers and to any tokens used during service registration on non-servers. On Unix-based platforms the files are written with 0600 permissions so you should ensure only trusted processes can execute as the same user as Consul. On Windows, you should ensure the directory has suitable permissions configured as these will be inherited. + +- `-datacenter` ((#\_datacenter)) - This flag controls the datacenter in + which the agent is running. If not provided, it defaults to "dc1". Consul has first-class + support for multiple datacenters, but it relies on proper configuration. Nodes + in the same datacenter should be on a single LAN. + +~> **Warning:** This `datacenter` string must conform to [RFC 1035 DNS label requirements](https://datatracker.ietf.org/doc/html/rfc1035#section-2.3.1), + consisting solely of letters, digits, and hyphens, with a maximum + length of 63 characters, and no hyphens at the beginning or end of the label. + Non-compliant names create Consul DNS entries incompatible with PKI X.509 certificate generation. + +- `-dev` ((#\_dev)) - Enable development server mode. This is useful for + quickly starting a Consul agent with all persistence options turned off, enabling + an in-memory server which can be used for rapid prototyping or developing against + the API. In this mode, [service mesh is enabled](/consul/docs/fundamentals/config-entry) and + will by default create a new root CA certificate on startup. This mode is **not** + intended for production use as it does not write any data to disk. The gRPC port + is also defaulted to `8502` in this mode. + +- `-disable-keyring-file` ((#\_disable_keyring_file)) - If set, the keyring + will not be persisted to a file. Any installed keys will be lost on shutdown, and + only the given `-encrypt` key will be available on startup. This defaults to false. + +- `-enable-script-checks` ((#\_enable_script_checks)) This controls whether + [health checks that execute scripts](/consul/docs/register/health-check/vm) are enabled on this + agent, and defaults to `false` so operators must opt-in to allowing these. This + was added in Consul 0.9.0. + + ~> **Security Warning:** Enabling script checks in some configurations may + introduce a remote execution vulnerability which is known to be targeted by + malware. We strongly recommend `-enable-local-script-checks` instead. See [this + blog post](https://www.hashicorp.com/blog/protecting-consul-from-rce-risk-in-specific-configurations) + for more details. + +- `-enable-local-script-checks` ((#\_enable_local_script_checks)) + Like [`-enable-script-checks`](#_enable_script_checks), but only enable them when + they are defined in the local configuration files. Script checks defined in HTTP + API registrations will still not be allowed. + +- `-encrypt` ((#\_encrypt)) - Specifies the secret key to use for encryption + of Consul network traffic. This key must be 32-bytes that are Base64-encoded. The + easiest way to create an encryption key is to use [`consul keygen`](/consul/commands/keygen). 
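+
+  As a minimal sketch (the data directory path here is only an example), you can
+  generate a key and pass it to the agent in one step:
+
+  ```shell-session
+  $ consul agent -data-dir=/opt/consul -encrypt="$(consul keygen)"
+  ```
+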
+ All nodes within a cluster must share the same encryption key to communicate. The + provided key is automatically persisted to the data directory and loaded automatically + whenever the agent is restarted. This means that to encrypt Consul's gossip protocol, + this option only needs to be provided once on each agent's initial startup sequence. + If it is provided after Consul has been initialized with an encryption key, then + the provided key is ignored and a warning will be displayed. + +- `-grpc-port` ((#\_grpc_port)) - the gRPC API port to listen on. Default + -1 (gRPC disabled). See [ports](/consul/docs/agent/config#ports-used) documentation for more detail. + +- `-hcl` ((#\_hcl)) - A HCL configuration fragment. This HCL configuration + fragment is appended to the configuration and allows to specify the full range + of options of a config file on the command line. This option can be specified multiple + times. This was added in Consul 1.0. + +- `-http-port` ((#\_http_port)) - the HTTP API port to listen on. This overrides + the default port 8500. This option is very useful when deploying Consul to an environment + which communicates the HTTP port through the environment e.g. PaaS like CloudFoundry, + allowing you to set the port directly via a Procfile. + +- `-https-port` ((#\_https_port)) - the HTTPS API port to listen on. Default + -1 (https disabled). See [ports](/consul/docs/agent/config#ports-used) documentation for more detail. + +- `-default-query-time` ((#\_default_query_time)) - This flag controls the + amount of time a blocking query will wait before Consul will force a response. + This value can be overridden by the `wait` query parameter. Note that Consul applies + some jitter on top of this time. Defaults to 300s. + +- `-max-query-time` ((#\_max_query_time)) - this flag controls the maximum + amount of time a blocking query can wait before Consul will force a response. Consul + applies jitter to the wait time. The jittered time will be capped to this time. + Defaults to 600s. + +- `-pid-file` ((#\_pid_file)) - This flag provides the file path for the + agent to store its PID. This is useful for sending signals (for example, `SIGINT` + to close the agent or `SIGHUP` to update check definitions) to the agent. + +- `-protocol` ((#\_protocol)) - The Consul protocol version to use. Consul + agents speak protocol 2 by default, however agents will automatically use protocol > 2 when speaking to compatible agents. This should be set only when [upgrading](/consul/docs/upgrade). You can view the protocol versions supported by Consul by running `consul version`. + +- `-raft-protocol` ((#\_raft_protocol)) - This controls the internal version + of the Raft consensus protocol used for server communications. This must be set + to 3 in order to gain access to Autopilot features, with the exception of [`cleanup_dead_servers`](/consul/docs/reference/agent/configuration-file/general#cleanup_dead_servers). Defaults to 3 in Consul 1.0.0 and later (defaulted to 2 previously). See [Raft Protocol Version Compatibility](/consul/docs/upgrade/version-specific#raft-protocol-version-compatibility) for more details. + +- `-segment` ((#\_segment)) - This flag is used to set + the name of the network segment the agent belongs to. An agent can only join and + communicate with other agents within its network segment. Ensure the [join + operation uses the correct port for this segment](/consul/docs/multi-tenant/network-segment/vm#configure-clients-to-join-segments). 
+  Review the [Network Segments documentation](/consul/docs/multi-tenant/network-segment/vm)
+  for more details. By default, this is an empty string, which is the `<default>`
+  network segment.
+
+  ~> **Warning:** The `segment` flag cannot be used with the [`partition`](/consul/docs/reference/agent/configuration-file/general#partition) option.
+
+## Advertise Address Options
+
+- `-advertise` ((#\_advertise)) - The advertise address is used to change
+  the address that we advertise to other nodes in the cluster. By default, the [`-bind`](#_bind)
+  address is advertised. However, in some cases, there may be a routable address
+  that cannot be bound. This flag enables gossiping a different address to support
+  this. If this address is not routable, the node will be in a constant flapping
+  state as other nodes will treat the non-routability as a failure. In Consul 1.1.0 and later this can be dynamically defined with a [go-sockaddr]
+  template that is resolved at runtime.
+
+  ```shell-session
+  $ consul agent -advertise '{{ GetInterfaceIP "eth0" }}'
+  ```
+
+- `-advertise-wan` ((#\_advertise-wan)) - The advertise WAN address is used
+  to change the address that we advertise to server nodes joining through the WAN.
+  This can also be set on client agents when used in combination with the [`translate_wan_addrs`](/consul/docs/reference/agent/configuration-file/general#translate_wan_addrs) configuration option. By default, the [`-advertise`](#_advertise) address
+  is advertised. However, in some cases all members of all datacenters cannot be
+  on the same physical or virtual network, especially on hybrid setups mixing cloud
+  and private datacenters. This flag enables server nodes gossiping through the public
+  network for the WAN while using private VLANs for gossiping to each other and their
+  client agents, and it allows client agents to be reached at this address when being
+  accessed from a remote datacenter if the remote datacenter is configured with [`translate_wan_addrs`](/consul/docs/reference/agent/configuration-file/general#translate_wan_addrs). In Consul 1.1.0 and later this can be dynamically defined with a [go-sockaddr]
+  template that is resolved at runtime.
+
+## Address Bind Options
+
+- `-bind` ((#\_bind)) - The address that should be bound to for internal
+  cluster communications. This is an IP address that should be reachable by all other
+  nodes in the cluster. By default, this is "0.0.0.0", meaning Consul will bind to
+  all addresses on the local machine and will [advertise](#_advertise)
+  the private IPv4 address to the rest of the cluster. If there are multiple private
+  IPv4 addresses available, Consul will exit with an error at startup. If you specify
+  `"[::]"`, Consul will [advertise](#_advertise) the public
+  IPv6 address. If there are multiple public IPv6 addresses available, Consul will
+  exit with an error at startup. Consul uses both TCP and UDP and the same port for
+  both. If you have any firewalls, be sure to allow both protocols. In Consul 1.1.0 and later this can be dynamically defined with a [go-sockaddr]
+  template that must resolve at runtime to a single address.
Some example templates: + + + + ```shell-session + $ consul agent -bind '{{ GetPrivateInterfaces | include "network" "10.0.0.0/8" | attr "address" }}' + ``` + + + + + + ```shell-session + $ consul agent -bind '{{ GetInterfaceIP "eth0" }}' + ``` + + + + + + ```shell-session + $ consul agent -bind '{{ GetAllInterfaces | include "name" "^eth" | include "flags" "forwardable|up" | attr "address" }}' + ``` + + + +- `-serf-wan-bind` ((#\_serf_wan_bind)) - The address that should be bound + to for Serf WAN gossip communications. By default, the value follows the same rules + as [`-bind` command-line flag](#_bind), and if this is not specified, the `-bind` + option is used. This is available in Consul 0.7.1 and later. In Consul 1.1.0 and later this can be dynamically defined with a [go-sockaddr] + template that is resolved at runtime. + +- `-serf-lan-bind` ((#\_serf_lan_bind)) - The address that should be bound + to for Serf LAN gossip communications. This is an IP address that should be reachable + by all other LAN nodes in the cluster. By default, the value follows the same rules + as [`-bind` command-line flag](#_bind), and if this is not specified, the `-bind` + option is used. This is available in Consul 0.7.1 and later. In Consul 1.1.0 and later this can be dynamically defined with a [go-sockaddr] + template that is resolved at runtime. + +## Bootstrap Options + +- `-bootstrap` ((#\_bootstrap)) - This flag is used to control if a server + is in "bootstrap" mode. It is important that no more than one server **per** datacenter + be running in this mode. Technically, a server in bootstrap mode is allowed to + self-elect as the Raft leader. It is important that only a single node is in this + mode; otherwise, consistency cannot be guaranteed as multiple nodes are able to + self-elect. It is not recommended to use this flag after a cluster has been bootstrapped. + +- `-bootstrap-expect` ((#\_bootstrap_expect)) - This flag provides the number + of expected servers in the datacenter. Either this value should not be provided + or the value must agree with other servers in the cluster. When provided, Consul + waits until the specified number of servers are available and then bootstraps the + cluster. This allows an initial leader to be elected automatically. This cannot + be used in conjunction with the legacy [`-bootstrap`](#_bootstrap) flag. This flag + requires [`-server`](#_server) mode. + +## Configuration File Options + +- `-config-file` ((#\_config_file)) - A configuration file to load. For + more information on the format of this file, read the [Configuration Files](/consul/docs/reference/agent) + section. This option can be specified multiple times to load multiple configuration + files. If it is specified multiple times, configuration files loaded later will + merge with configuration files loaded earlier. During a config merge, single-value + keys (string, int, bool) will simply have their values replaced while list types + will be appended together. + +- `-config-dir` ((#\_config_dir)) - A directory of configuration files to + load. Consul will load all files in this directory with the suffix ".json" or ".hcl". + The load order is alphabetical, and the same merge routine is used as with + the [`config-file`](#_config_file) option above. This option can be specified multiple + times to load multiple directories. Sub-directories of the config directory are + not loaded. 
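+
+  As a sketch of how these flags are commonly combined (the paths shown are
+  only examples, not defaults):
+
+  ```shell-session
+  $ consul agent -server -data-dir=/opt/consul -config-file=/etc/consul.d/server.hcl -config-dir=/etc/consul.d/extra
+  ```
+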
For more information on the format of the configuration files, refer to + the [Configuration Files](/consul/docs/reference/agent) section. + +- `-config-format` ((#\_config_format)) - The format of the configuration + files to load. Normally, Consul detects the format of the config files from the + ".json" or ".hcl" extension. Setting this option to either "json" or "hcl" forces + Consul to interpret any file with or without extension to be interpreted in that + format. + +## DNS and Domain Options + +- `-dns-port` ((#\_dns_port)) - the DNS port to listen on. This overrides + the default port 8600. This is available in Consul 0.7 and later. + +- `-domain` ((#\_domain)) - By default, Consul responds to DNS queries in + the "consul." domain. This flag can be used to change that domain. All queries + in this domain are assumed to be handled by Consul and will not be recursively + resolved. + +- `-alt-domain` ((#\_alt_domain)) - This flag allows Consul to respond to + DNS queries in an alternate domain, in addition to the primary domain. If unset, + no alternate domain is used. + + In Consul 1.10.4 and later, Consul DNS responses will use the same domain as in the query (`-domain` or `-alt-domain`) where applicable. + PTR query responses will always use `-domain`, since the desired domain cannot be included in the query. + +- `-recursor` ((#\_recursor)) - Specifies the address of an upstream DNS + server. This option may be provided multiple times, and is functionally equivalent + to the [`recursors` configuration option](/consul/docs/reference/agent/configuration-file/general#recursors). + +- `-join` ((#\_join)) - **Deprecated in Consul 1.15. This flag will be removed in a future version of Consul. Use the `-retry-join` flag instead.** + This is an alias of [`-retry-join`](#_retry_join). + +- `-retry-join` ((#\_retry_join)) - Address of another agent to join upon starting up. Joining is + retried until success. Once the agent joins successfully as a member, it will not attempt to join + again. After joining, the agent solely maintains its membership via gossip. This option can be + specified multiple times to specify multiple agents to join. By default, the agent won't join any + nodes when it starts up. The value can contain IPv4, IPv6, or DNS addresses. Literal IPv6 + addresses must be enclosed in square brackets. If multiple values are given, they are tried and + retried in the order listed until the first succeeds. + + This supports [Cloud Auto-Joining](#cloud-auto-joining). + + This can be dynamically defined with a [go-sockaddr] template that is resolved at runtime. + + If Consul is running on a non-default Serf LAN port, you must specify the port number in the address when using the `-retry-join` flag. Alternatively, you can specify the custom port number as the default in the agent's [`ports.serf_lan`](/consul/docs/reference/agent/configuration-file/general#serf_lan_port) configuration or with the [`-serf-lan-port`](#_serf_lan_port) command line flag when starting the agent. + + If your network contains network segments, refer to the [network segments documentation](/consul/docs/multi-tenant/network-segment/vm) for additional information. 
+ + Here are some examples of using `-retry-join`: + + + + ```shell-session + $ consul agent -retry-join "consul.domain.internal" + ``` + + + + + + ```shell-session + $ consul agent -retry-join "10.0.4.67" + ``` + + + + + + ```shell-session + $ consul agent -retry-join "192.0.2.10:8304" + ``` + + + + + + ```shell-session + $ consul agent -retry-join "[::1]:8301" + ``` + + + + + + ```shell-session + $ consul agent -retry-join "consul.domain.internal" -retry-join "10.0.4.67" + ``` + + + + ### Cloud Auto-Joining + + The `-retry-join` option accepts a unified interface using the + [go-discover](https://github.com/hashicorp/go-discover) library for doing + automatic cluster joining using cloud metadata. For more information, see + the [Cloud Auto-join page](/consul/docs/deploy/server/cloud-auto-join). + + + + ```shell-session + $ consul agent -retry-join "provider=aws tag_key=..." + ``` + + + +- `-retry-interval` ((#\_retry_interval)) - Time to wait between join attempts. + Defaults to 30s. + +- `-retry-max` ((#\_retry_max)) - The maximum number of join attempts if using + [`-retry-join`](#_retry_join) before exiting with return code 1. By default, this is set + to 0 which is interpreted as infinite retries. + +- `-join-wan` ((#\_join_wan)) - **Deprecated in Consul 1.15. This flag will be removed in a future version of Consul. Use the `-retry-join-wan` flag instead.** + This is an alias of [`-retry-join-wan`](#_retry_join_wan) + +- `-retry-join-wan` ((#\_retry_join_wan)) - Address of another WAN agent to join upon starting up. + WAN joining is retried until success. This can be specified multiple times to specify multiple WAN + agents to join. If multiple values are given, they are tried and retried in the order listed + until the first succeeds. By default, the agent won't WAN join any nodes when it starts up. + + This supports [Cloud Auto-Joining](#cloud-auto-joining). + + This can be dynamically defined with a [go-sockaddr] template that is resolved at runtime. + +- `-primary-gateway` ((#\_primary_gateway)) - Similar to [`-retry-join-wan`](#_retry_join_wan) + but allows retrying discovery of fallback addresses for the mesh gateways in the + primary datacenter if the first attempt fails. This is useful for cases where we + know the address will become available eventually. [Cloud Auto-Joining](#cloud-auto-joining) + is supported as well as [go-sockaddr] + templates. This was added in Consul 1.8.0. + +- `-retry-interval-wan` ((#\_retry_interval_wan)) - Time to wait between + [`-retry-join-wan`](#_retry_join_wan) attempts. Defaults to 30s. + +- `-retry-max-wan` ((#\_retry_max_wan)) - The maximum number of [`-retry-join-wan`](#_join_wan) + attempts to be made before exiting with return code 1. By default, this is set + to 0 which is interpreted as infinite retries. + +- `-rejoin` ((#\_rejoin)) - When provided, Consul will ignore a previous + leave and attempt to rejoin the cluster when starting. By default, Consul treats + leave as a permanent intent and does not attempt to join the cluster again when + starting. This flag allows the previous state to be used to rejoin the cluster. + +## Log Options + +- `-log-file` ((#\_log_file)) - writes all the Consul agent log messages + to a file at the path indicated by this flag. The filename defaults to `consul.log`. + When the log file rotates, this value is used as a prefix for the path to the log and the current timestamp is + appended to the file name. If the value ends in a path separator, `consul-` + will be appended to the value. 
If the file name is missing an extension, `.log` + is appended. For example, setting `log-file` to `/var/log/` would result in a log + file path of `/var/log/consul.log`. `log-file` can be combined with + [`-log-rotate-bytes`](#_log_rotate_bytes) and [`-log-rotate-duration`](#_log_rotate_duration) + for a fine-grained log rotation experience. After rotation, the path and filename take the following form: + `/var/log/consul-{timestamp}.log` + +- `-log-rotate-bytes` ((#\_log_rotate_bytes)) - to specify the number of + bytes that should be written to a log before it needs to be rotated. Unless specified, + there is no limit to the number of bytes that can be written to a log file. + +- `-log-rotate-duration` ((#\_log_rotate_duration)) - to specify the maximum + duration a log should be written to before it needs to be rotated. Must be a duration + value such as 30s. Defaults to 24h. + +- `-log-rotate-max-files` ((#\_log_rotate_max_files)) - to specify the maximum + number of older log file archives to keep. Defaults to 0 (no files are ever deleted). + Set to -1 to discard old log files when a new one is created. + +- `-log-level` ((#\_log_level)) - The level of logging to show after the + Consul agent has started. This defaults to "info". The available log levels are + "trace", "debug", "info", "warn", and "error". You can always connect to an agent + via [`consul monitor`](/consul/commands/monitor) and use any log level. Also, + the log level can be changed during a config reload. + +- `-log-json` ((#\_log_json)) - This flag enables the agent to output logs + in a JSON format. By default this is false. + +- `-syslog` ((#\_syslog)) - This flag enables logging to syslog. This is + only supported on Linux and macOS. It will result in an error if provided on Windows. + +## Node Options + +- `-node` ((#\_node)) - The name of this node in the cluster. This must + be unique within the cluster. By default this is the hostname of the machine. + The node name cannot contain whitespace or quotation marks. To query the node from DNS, the name must only contain alphanumeric characters and hyphens (`-`). + +- `-node-id` ((#\_node_id)) - Available in Consul 0.7.3 and later, this + is a unique identifier for this node across all time, even if the name of the node + or address changes. This must be in the form of a hex string, 36 characters long, + such as `adf4238a-882b-9ddc-4a9d-5b6758e4159e`. If this isn't supplied, which is + the most common case, then the agent will generate an identifier at startup and + persist it in the [data directory](#_data_dir) so that it will remain the same + across agent restarts. Information from the host will be used to generate a deterministic + node ID if possible, unless [`-disable-host-node-id`](#_disable_host_node_id) is + set to true. + +- `-node-meta` ((#\_node_meta)) - Available in Consul 0.7.3 and later, this + specifies an arbitrary metadata key/value pair to associate with the node, of the + form `key:value`. This can be specified multiple times. Node metadata pairs have + the following restrictions: + + - A maximum of 64 key/value pairs can be registered per node. + - Metadata keys must be between 1 and 128 characters (inclusive) in length + - Metadata keys must contain only alphanumeric, `-`, and `_` characters. + - Metadata keys must not begin with the `consul-` prefix; that is reserved for internal use by Consul. + - Metadata values must be between 0 and 512 (inclusive) characters in length. 
+  - Metadata values for keys beginning with `rfc1035-` are encoded verbatim in DNS TXT requests, otherwise
+    the metadata kv-pair is encoded according to [RFC1464](https://www.ietf.org/rfc/rfc1464.txt).
+
+- `-disable-host-node-id` ((#\_disable_host_node_id)) - Setting this to
+  true will prevent Consul from using information from the host to generate a deterministic
+  node ID, and will instead generate a random node ID which will be persisted in
+  the data directory. This is useful when running multiple Consul agents on the same
+  host for testing. This defaults to false in Consul prior to version 0.8.5; in
+  0.8.5 and later it defaults to true, so you must opt in to host-based IDs. Host-based
+  IDs are generated using [gopsutil](https://github.com/shirou/gopsutil/), which
+  is shared with HashiCorp's [Nomad](https://www.nomadproject.io/), so if you opt in
+  to host-based IDs then Consul and Nomad will use information on the host to automatically
+  assign the same ID in both systems.
+
+## Serf Options
+
+- `-serf-lan-allowed-cidrs` ((#\_serf_lan_allowed_cidrs)) - Restricts incoming
+  Serf LAN connections to the specified networks (multiple values are supported).
+  Specify networks in CIDR notation (for example, 192.168.1.0/24).
+  This is available in Consul 1.8 and later.
+
+- `-serf-lan-port` ((#\_serf_lan_port)) - the Serf LAN port to listen on.
+  This overrides the default Serf LAN port 8301. This is available in Consul 1.2.2
+  and later.
+
+- `-serf-wan-allowed-cidrs` ((#\_serf_wan_allowed_cidrs)) - Restricts incoming
+  Serf WAN connections to the specified networks (multiple values are supported).
+  Specify networks in CIDR notation (for example, 192.168.1.0/24).
+  This is available in Consul 1.8 and later.
+
+- `-serf-wan-port` ((#\_serf_wan_port)) - the Serf WAN port to listen on.
+  This overrides the default Serf WAN port 8302. This is available in Consul 1.2.2
+  and later.
+
+## Server Options
+
+- `-server` ((#\_server)) - This flag is used to control if an agent is
+  in server or client mode. When provided, an agent will act as a Consul server.
+  Each Consul cluster must have at least one server and ideally no more than 5 per
+  datacenter. All servers participate in the Raft consensus algorithm to ensure that
+  transactions occur in a consistent, linearizable manner. Transactions modify cluster
+  state, which is maintained on all server nodes to ensure availability in the case
+  of node failure. Server nodes also participate in a WAN gossip pool with server
+  nodes in other datacenters. Servers act as gateways to other datacenters and forward
+  RPC traffic as appropriate.
+
+- `-server-port` ((#\_server_port)) - the server RPC port to listen on.
+  This overrides the default server RPC port 8300. This is available in Consul 1.2.2
+  and later.
+
+- `-non-voting-server` ((#\_non_voting_server)) - **This field
+  is deprecated in Consul 1.9.1. See the [`-read-replica`](#_read_replica) flag instead.**
+
+- `-read-replica` ((#\_read_replica)) - This
+  flag is used to make the server not participate in the Raft quorum, and have it
+  only receive the data replication stream. This can be used to add read scalability
+  to a cluster in cases where a high volume of reads to servers is needed.
+
+## UI Options
+
+- `-ui` ((#\_ui)) - Enables the built-in web UI server and the required
+  HTTP routes. This eliminates the need to maintain the Consul web UI files separately
+  from the binary.
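+
+  For example, a single-server agent with the web UI enabled (the data directory
+  and server flags are shown only as an illustration):
+
+  ```shell-session
+  $ consul agent -server -bootstrap-expect=1 -data-dir=/opt/consul -ui
+  ```
+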
+ +- `-ui-dir` ((#\_ui_dir)) - This flag provides the directory containing + the Web UI resources for Consul. This will automatically enable the Web UI. The + directory must be readable to the agent. Starting with Consul version 0.7.0 and + later, the Web UI assets are included in the binary so this flag is no longer necessary; + specifying only the `-ui` flag is enough to enable the Web UI. Specifying both + the '-ui' and '-ui-dir' flags will result in an error. + + +- `-ui-content-path` ((#\_ui\_content\_path)) - This flag provides the option + to change the path the Consul UI loads from and will be displayed in the browser. + By default, the path is `/ui/`, for example `http://localhost:8500/ui/`. Only alphanumerics, + `-`, and `_` are allowed in a custom path.`/v1/` is not allowed as it would overwrite + the API endpoint. + + + +[go-sockaddr]: https://godoc.org/github.com/hashicorp/go-sockaddr/template diff --git a/website/content/commands/catalog/datacenters.mdx b/website/content/commands/catalog/datacenters.mdx index c9b70f5ecc3a..f34f556ffb31 100644 --- a/website/content/commands/catalog/datacenters.mdx +++ b/website/content/commands/catalog/datacenters.mdx @@ -38,6 +38,6 @@ Usage: `consul catalog datacenters [options]` #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' diff --git a/website/content/commands/catalog/nodes.mdx b/website/content/commands/catalog/nodes.mdx index 6efa3ad0716c..6fbc34d18030 100644 --- a/website/content/commands/catalog/nodes.mdx +++ b/website/content/commands/catalog/nodes.mdx @@ -85,10 +85,10 @@ Usage: `consul catalog nodes [options]` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' diff --git a/website/content/commands/catalog/services.mdx b/website/content/commands/catalog/services.mdx index a57adcb4a927..eabde39e1887 100644 --- a/website/content/commands/catalog/services.mdx +++ b/website/content/commands/catalog/services.mdx @@ -69,12 +69,12 @@ Usage: `consul catalog services [options]` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' diff --git a/website/content/commands/config/delete.mdx b/website/content/commands/config/delete.mdx index 085b0e34c411..c68f8baa987e 100644 --- a/website/content/commands/config/delete.mdx +++ b/website/content/commands/config/delete.mdx @@ -12,7 +12,7 @@ Command: `consul config delete` Corresponding HTTP API Endpoint: [\[DELETE\] /v1/config/:kind/:name](/consul/api-docs/config#delete-configuration) The `config delete` command deletes the configuration entry specified by the -kind and name. See the [configuration entries docs](/consul/docs/agent/config-entries) +kind and name. See the [configuration entries docs](/consul/docs/fundamentals/config-entry) for more details about configuration entries. 
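+
+For example, deleting a hypothetical `service-defaults` entry named `web` looks
+like this:
+
+```shell-session
+$ consul config delete -kind service-defaults -name web
+```
+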
The table below shows this command's [required ACLs](/consul/api-docs/api-structure#authentication). Configuration of @@ -59,13 +59,13 @@ config entry. This is used in combination with the -cas flag. #### Enterprise Options -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' ## Examples diff --git a/website/content/commands/config/index.mdx b/website/content/commands/config/index.mdx index f485cef75e33..59b79a001542 100644 --- a/website/content/commands/config/index.mdx +++ b/website/content/commands/config/index.mdx @@ -12,9 +12,9 @@ Command: `consul config` The `config` command is used to interact with Consul's central configuration system. It exposes commands for creating, updating, reading, and deleting different kinds of config entries. See the -[agent configuration](/consul/docs/agent/config/config-files#enable_central_service_config) +[agent configuration](/consul/docs/reference/agent/configuration-file/general#enable_central_service_config) for more information on how to enable this functionality for centrally -configuring services and [configuration entries docs](/consul/docs/agent/config-entries) for a description +configuring services and [configuration entries docs](/consul/docs/fundamentals/config-entry) for a description of the configuration entries content. ## Usage diff --git a/website/content/commands/config/list.mdx b/website/content/commands/config/list.mdx index 5f55a7bcff72..c3ac3d64d19a 100644 --- a/website/content/commands/config/list.mdx +++ b/website/content/commands/config/list.mdx @@ -12,7 +12,7 @@ Command: `consul config list` Corresponding HTTP API Endpoint: [\[GET\] /v1/config/:kind](/consul/api-docs/config#list-configurations) The `config list` command lists all given config entries of the given kind. -See the [configuration entries docs](/consul/docs/agent/config-entries) for more +See the [configuration entries docs](/consul/docs/fundamentals/config-entry) for more details about configuration entries. The table below shows this command's [required ACLs](/consul/api-docs/api-structure#authentication). Configuration of @@ -48,13 +48,13 @@ Usage: `consul config list [options]` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' ## Examples diff --git a/website/content/commands/config/read.mdx b/website/content/commands/config/read.mdx index c54a60dfec16..ff17a6f63351 100644 --- a/website/content/commands/config/read.mdx +++ b/website/content/commands/config/read.mdx @@ -13,7 +13,7 @@ Corresponding HTTP API Endpoint: [\[GET\] /v1/config/:kind/:name](/consul/api-do The `config read` command reads the config entry specified by the given kind and name and outputs its JSON representation. See the -[configuration entries docs](/consul/docs/agent/config-entries) for more +[configuration entries docs](/consul/docs/fundamentals/config-entry) for more details about configuration entries. The table below shows this command's [required ACLs](/consul/api-docs/api-structure#authentication). 
Configuration of @@ -52,13 +52,13 @@ Usage: `consul config read [options]` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' ## Examples diff --git a/website/content/commands/config/write.mdx b/website/content/commands/config/write.mdx index f466654c617f..3419dd98f3b2 100644 --- a/website/content/commands/config/write.mdx +++ b/website/content/commands/config/write.mdx @@ -12,7 +12,7 @@ Command: `consul config write` Corresponding HTTP API Endpoint: [\[PUT\] /v1/config](/consul/api-docs/config#apply-configuration) The `config write` command creates or updates a centralized config entry. -See the [configuration entries docs](/consul/docs/agent/config-entries) for more +See the [configuration entries docs](/consul/docs/fundamentals/config-entry) for more details about configuration entries. The table below shows this command's [required ACLs](/consul/api-docs/api-structure#authentication). Configuration of @@ -53,13 +53,13 @@ Usage: `consul config write [options] FILE` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' ## Examples @@ -74,7 +74,7 @@ From stdin: ### Config Entry examples All config entries must have a `Kind` when registered. See -[Service Mesh - Config Entries](/consul/docs/connect/config-entries) for the list of +[Service Mesh - Config Entries](/consul/docs/fundamentals/config-entry) for the list of supported config entries. #### Service defaults @@ -91,7 +91,7 @@ service use the `http` protocol. } ``` -For more information, refer to the [service defaults configuration reference](/consul/docs/connect/config-entries/service-defaults). +For more information, refer to the [service defaults configuration reference](/consul/docs/reference/config-entry/service-defaults). #### Proxy defaults @@ -110,4 +110,4 @@ Envoy proxies. } ``` -For more information, refer to the [proxy defaults configuration reference](/consul/docs/connect/config-entries/proxy-defaults). +For more information, refer to the [proxy defaults configuration reference](/consul/docs/reference/config-entry/proxy-defaults). diff --git a/website/content/commands/connect/ca.mdx b/website/content/commands/connect/ca.mdx index 350d18e6d318..a16224d04e31 100644 --- a/website/content/commands/connect/ca.mdx +++ b/website/content/commands/connect/ca.mdx @@ -13,7 +13,7 @@ Command: `consul connect ca` This command is used to interact with Consul service mesh's Certificate Authority managed by the connect subsystem. It can be used to view or modify the current CA configuration. Refer to the -[service mesh CA documentation](/consul/docs/connect/ca) for more information. +[service mesh CA documentation](/consul/docs/secure-mesh/certificate) for more information. 
```text Usage: consul connect ca [options] [args] @@ -57,9 +57,9 @@ Corresponding HTTP API Endpoint: [\[GET\] /v1/connect/ca/configuration](/consul/ #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' The output looks like this: @@ -123,6 +123,6 @@ The return code will indicate success or failure. #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' diff --git a/website/content/commands/connect/envoy.mdx b/website/content/commands/connect/envoy.mdx index 913adb981dff..8b883bfd510b 100644 --- a/website/content/commands/connect/envoy.mdx +++ b/website/content/commands/connect/envoy.mdx @@ -36,7 +36,7 @@ Usage: `consul connect envoy [options] [-- pass-through options]` #### Envoy Options for both Sidecars and Gateways -- `-proxy-id` - The [proxy service](/consul/docs/connect/proxies/proxy-config-reference) ID. +- `-proxy-id` - The [proxy service](/consul/docs/reference/proxy/connect-proxy) ID. This service ID must already be registered with the local agent unless a gateway is being registered with the `-register` flag. As of Consul 1.8.0, this can also be specified via the `CONNECT_PROXY_ID` environment variable. @@ -56,7 +56,7 @@ Usage: `consul connect envoy [options] [-- pass-through options]` ACL token from `-token` or the environment and so should be handled as a secret. This token grants the identity of any service it has `service:write` permission for and so can be used to access any upstream service that that service is - allowed to access by [service mesh intentions](/consul/docs/connect/intentions). + allowed to access by [service mesh intentions](/consul/docs/secure-mesh/intention). - `-envoy-version` - The version of envoy that is being started. Default is `1.23.1`. This is required so that the correct configuration can be generated. @@ -167,14 +167,14 @@ compatibility with Envoy and prevent potential issues. Default is `false`. If Envoy is configured as an ingress gateway, Consul instantiates a `/ready` HTTP endpoint at the specified IP and port. Consul uses `/ready` HTTP endpoints to check gateway health. Ingress gateways also use the specified IP when instantiating user-defined listeners configured in the - [ingress gateway configuration entry](/consul/docs/connect/config-entries/ingress-gateway). + [ingress gateway configuration entry](/consul/docs/reference/config-entry/ingress-gateway). ~> **Note**: Ensure that user-defined ingress gateway listeners use a different port than the port specified in `-address` so that they do not conflict with the health check endpoint. - `-admin-access-log-path` - - **Deprecated in Consul 1.15.0 in favor of [`proxy-defaults` access logs](/consul/docs/connect/config-entries/proxy-defaults#accesslogs).** + **Deprecated in Consul 1.15.0 in favor of [`proxy-defaults` access logs](/consul/docs/reference/config-entry/proxy-defaults#accesslogs).** The path to write the access log for the administration server. If no access log is desired specify `/dev/null`. By default it will use `/dev/null`. @@ -199,9 +199,9 @@ compatibility with Envoy and prevent potential issues. Default is `false`. 
#### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options @@ -215,19 +215,19 @@ proxy configuration needed. be used instead. The scheme can also be set to HTTPS by setting the environment variable CONSUL_HTTP_SSL=true. This may be a unix domain socket using `unix:///path/to/socket` if the [agent is configured to - listen](/consul/docs/agent/config/config-files#addresses) that way. + listen](/consul/docs/reference/agent/configuration-file/general#addresses) that way. -> **Note:** gRPC uses the same TLS settings as the HTTPS API. If HTTPS is enabled then gRPC will require HTTPS as well. - @include 'http_api_options_client.mdx' + @include 'legacy/http_api_options_client.mdx' ## Examples In the following examples, a local service instance is registered on the local agent with a sidecar proxy (using the [sidecar service -registration](/consul/docs/connect/proxies/deploy-sidecar-services) helper): +registration](/consul/docs/connect/proxy/sidecar) helper): ```hcl service { diff --git a/website/content/commands/connect/expose.mdx b/website/content/commands/connect/expose.mdx index 79be1e0e60ae..5f6f67c7f475 100644 --- a/website/content/commands/connect/expose.mdx +++ b/website/content/commands/connect/expose.mdx @@ -14,7 +14,7 @@ Command: `consul connect expose` The connect expose subcommand is used to expose a mesh-enabled service through an Ingress gateway by modifying the gateway's configuration and adding an intention to allow traffic from the gateway to the service. See the -[Ingress gateway documentation](/consul/docs/connect/gateways/ingress-gateway) for more information +[Ingress gateway documentation](/consul/docs/north-south/ingress-gateway) for more information about Ingress gateways. ```text @@ -46,9 +46,9 @@ Usage: consul connect expose [options] #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Examples diff --git a/website/content/commands/connect/proxy.mdx b/website/content/commands/connect/proxy.mdx index 0a4571df5f9e..d073a87a3235 100644 --- a/website/content/commands/connect/proxy.mdx +++ b/website/content/commands/connect/proxy.mdx @@ -24,14 +24,14 @@ Usage: `consul connect proxy [options]` - `-sidecar-for` - The _ID_ (not name if they differ) of the service instance this proxy will represent. The target service doesn't need to exist on the local agent yet but a [sidecar proxy - registration](/consul/docs/connect/proxies/deploy-sidecar-services) with + registration](/consul/docs/connect/proxy/sidecar) with `proxy.destination_service_id` equal to the passed value must be present. If multiple proxy registrations targeting the same local service instance are present the command will error and `-proxy-id` should be used instead. This can also be specified via the `CONNECT_SIDECAR_FOR` environment variable. - `-proxy-id` - The [proxy - service](/consul/docs/connect/proxies/proxy-config-reference) ID on the + service](/consul/docs/reference/proxy/connect-proxy) ID on the local agent. This must already be present on the local agent. This option can also be specified via the `CONNECT_PROXY_ID` environment variable. 
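+
+As a quick sketch, assuming a local service instance with ID `web` already has a
+sidecar proxy registration, the proxy can be started with:
+
+```shell-session
+$ consul connect proxy -sidecar-for web
+```
+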
@@ -44,7 +44,7 @@ Usage: `consul connect proxy [options]` doesn't need to actually exist in the Consul catalog, but proper ACL permissions (`service:write`) are required. This and the remaining options can be used to setup a proxy that is not registered already with local config - [useful for development](/consul/docs/connect/dev). + [useful for development](/consul/docs/troubleshoot/mesh). - `-upstream` - Upstream service to support connecting to. The format should be 'name:addr', such as 'db:8181'. This will make 'db' available on port 8181. @@ -72,9 +72,9 @@ Usage: `consul connect proxy [options]` #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Examples diff --git a/website/content/commands/connect/redirect-traffic.mdx b/website/content/commands/connect/redirect-traffic.mdx index d59fe7928cf6..8fd968ba0d2f 100644 --- a/website/content/commands/connect/redirect-traffic.mdx +++ b/website/content/commands/connect/redirect-traffic.mdx @@ -38,7 +38,7 @@ Usage: `consul connect redirect-traffic [options]` - `-consul-dns-port` - The port of the Consul DNS resolver. If provided, DNS queries will be redirected to the provided IP address for name resolution. -- `-proxy-id` - The [proxy service](/consul/docs/connect/proxies/proxy-config-reference) ID. +- `-proxy-id` - The [proxy service](/consul/docs/reference/proxy/connect-proxy) ID. This service ID must already be registered with the local agent. - `-proxy-inbound-port` - The inbound port that the proxy is listening on. @@ -60,13 +60,13 @@ Usage: `consul connect redirect-traffic [options]` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' ## Examples diff --git a/website/content/commands/debug.mdx b/website/content/commands/debug.mdx index 6feeb685f452..f78688e1328d 100644 --- a/website/content/commands/debug.mdx +++ b/website/content/commands/debug.mdx @@ -69,7 +69,7 @@ all targets for 5 minutes. #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' ## Capture Targets @@ -83,7 +83,7 @@ information when `debug` is running. By default, it captures all information. | `members` | A list of all the WAN and LAN members in the cluster. | | `metrics` | Metrics from the in-memory metrics endpoint in the target, captured at the interval. | | `logs` | `TRACE` level logs for the target agent, captured for the duration. | -| `pprof` | Golang heap, CPU, goroutine, and trace profiling. CPU and traces are captured for `duration` in a single file while heap and goroutine are separate snapshots for each `interval`. This information is not retrieved unless [`enable_debug`](/consul/docs/agent/config/config-files#enable_debug) is set to `true` on the target agent or ACLs are enabled and an ACL token with `operator:read` is provided. | +| `pprof` | Golang heap, CPU, goroutine, and trace profiling. CPU and traces are captured for `duration` in a single file while heap and goroutine are separate snapshots for each `interval`. 
This information is not retrieved unless [`enable_debug`](/consul/docs/reference/agent/configuration-file/general#enable_debug) is set to `true` on the target agent or ACLs are enabled and an ACL token with `operator:read` is provided. | ## Examples diff --git a/website/content/commands/event.mdx b/website/content/commands/event.mdx index d89b71d68c4a..086ff05aa462 100644 --- a/website/content/commands/event.mdx +++ b/website/content/commands/event.mdx @@ -19,14 +19,14 @@ The `event` command provides a mechanism to fire a custom user event to an entire datacenter. These events are opaque to Consul, but they can be used to build scripting infrastructure to do automated deploys, restart services, or perform any other orchestration action. Events can be handled by -[using a watch](/consul/docs/dynamic-app-config/watches). +[using a watch](/consul/docs/automate/watch). -Under the hood, events are propagated using the [gossip protocol](/consul/docs/architecture/gossip). +Under the hood, events are propagated using the [gossip protocol](/consul/docs/concept/gossip). While the details are not important for using events, an understanding of the semantics is useful. The gossip layer will make a best-effort to deliver the event, but there is **no guaranteed delivery**. Unlike most Consul data, which is -replicated using [consensus](/consul/docs/architecture/consensus), event data +replicated using [consensus](/consul/docs/concept/consensus), event data is purely peer-to-peer over gossip. This means it is not persisted and does not have a total ordering. In practice, this means you cannot rely on the order of message delivery. An advantage however is that events can still @@ -66,6 +66,6 @@ payload can be provided as the final argument. #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' diff --git a/website/content/commands/exec.mdx b/website/content/commands/exec.mdx index 07754bd2b938..16e2451e46ce 100644 --- a/website/content/commands/exec.mdx +++ b/website/content/commands/exec.mdx @@ -17,7 +17,7 @@ the `web` service. Remote execution works by specifying a job, which is stored in the KV store. Agents are informed about the new job using the [event system](/consul/commands/event), -which propagates messages via the [gossip protocol](/consul/docs/architecture/gossip). +which propagates messages via the [gossip protocol](/consul/docs/concept/gossip). As a result, delivery is best-effort, and there is **no guarantee** of execution. While events are purely gossip driven, remote execution relies on the KV store @@ -78,6 +78,6 @@ completion as a script to evaluate. 
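+
+As an illustration of the best-effort remote execution described above (the
+service name is a placeholder), you can run a command on every node providing a
+service:
+
+```shell-session
+$ consul exec -service=web uptime
+```
+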
#### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' diff --git a/website/content/commands/force-leave.mdx b/website/content/commands/force-leave.mdx index 0af9218f233d..c71d49b7d2fc 100644 --- a/website/content/commands/force-leave.mdx +++ b/website/content/commands/force-leave.mdx @@ -46,7 +46,7 @@ Usage: `consul force-leave [options] node` #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' ## Examples diff --git a/website/content/commands/index.mdx b/website/content/commands/index.mdx index a3e500811034..f7d1ff2f49f0 100644 --- a/website/content/commands/index.mdx +++ b/website/content/commands/index.mdx @@ -95,7 +95,7 @@ Command Options ## Authentication -When the [ACL system is enabled](/consul/docs/agent/config/config-files#acl) the Consul CLI will +When the [ACL system is enabled](/consul/docs/reference/agent/configuration-file/acl) the Consul CLI will require an [ACL token](/consul/docs/security/acl#tokens) to perform API requests. The ACL token can be provided directly on the command line using the `-token` command line flag, @@ -242,8 +242,8 @@ CONSUL_TLS_SERVER_NAME=consulserver.domain Like [`CONSUL_HTTP_ADDR`](#consul_http_addr) but configures the address the local agent is listening for gRPC requests. Currently gRPC is only used for -integrating [Envoy proxy](/consul/docs/connect/proxies/envoy) and must be [enabled -explicitly](/consul/docs/agent/config/config-files#grpc_port) in agent configuration. +integrating [Envoy proxy](/consul/docs/reference/proxy/envoy) and must be [enabled +explicitly](/consul/docs/reference/agent/configuration-file/general#grpc_port) in agent configuration. 
``` CONSUL_GRPC_ADDR=127.0.0.1:8502 diff --git a/website/content/commands/info.mdx b/website/content/commands/info.mdx index 0a74422f5806..08eec1a3e26e 100644 --- a/website/content/commands/info.mdx +++ b/website/content/commands/info.mdx @@ -19,9 +19,9 @@ There are currently the top-level keys for: - agent: Provides information about the agent - consul: Information about the consul library (client or server) -- raft: Provides info about the Raft [consensus library](/consul/docs/architecture/consensus) -- serf_lan: Provides info about the LAN [gossip pool](/consul/docs/architecture/gossip) -- serf_wan: Provides info about the WAN [gossip pool](/consul/docs/architecture/gossip) +- raft: Provides info about the Raft [consensus library](/consul/docs/concept/consensus) +- serf_lan: Provides info about the LAN [gossip pool](/consul/docs/concept/gossip) +- serf_wan: Provides info about the WAN [gossip pool](/consul/docs/concept/gossip) Here is an example output: @@ -75,4 +75,4 @@ Usage: `consul info` #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' diff --git a/website/content/commands/intention/check.mdx b/website/content/commands/intention/check.mdx index 23ae56ceae4e..13a85510d64e 100644 --- a/website/content/commands/intention/check.mdx +++ b/website/content/commands/intention/check.mdx @@ -41,13 +41,13 @@ Usage: `consul intention check [options] SRC DST` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' ## Examples diff --git a/website/content/commands/intention/create.mdx b/website/content/commands/intention/create.mdx index bbaccc2be2f4..c72241c9f918 100644 --- a/website/content/commands/intention/create.mdx +++ b/website/content/commands/intention/create.mdx @@ -10,7 +10,7 @@ description: >- -> **Deprecated** - This command is deprecated in Consul 1.9.0 in favor of using the [config entry CLI command](/consul/commands/config/write). To create an intention, create or modify a -[`service-intentions`](/consul/docs/connect/config-entries/service-intentions) config +[`service-intentions`](/consul/docs/reference/config-entry/service-intentions) config entry for the destination. Command: `consul intention create` @@ -52,13 +52,13 @@ are not supported from commands, but may be from the corresponding HTTP endpoint #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' ## Examples diff --git a/website/content/commands/intention/delete.mdx b/website/content/commands/intention/delete.mdx index 2cfb97a966db..55b46a371764 100644 --- a/website/content/commands/intention/delete.mdx +++ b/website/content/commands/intention/delete.mdx @@ -23,7 +23,7 @@ are not supported from commands, but may be from the corresponding HTTP endpoint -> **Deprecated** - The one argument form of this command is deprecated in Consul 1.9.0. 
Intentions no longer need IDs when represented as -[`service-intentions`](/consul/docs/connect/config-entries/service-intentions) config +[`service-intentions`](/consul/docs/reference/config-entry/service-intentions) config entries. ## Usage @@ -37,13 +37,13 @@ Usage: #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' ## Examples diff --git a/website/content/commands/intention/get.mdx b/website/content/commands/intention/get.mdx index b3db133fd44e..17c5c2112a35 100644 --- a/website/content/commands/intention/get.mdx +++ b/website/content/commands/intention/get.mdx @@ -15,7 +15,7 @@ The `intention get` command shows a single intention. -> **Deprecated** - The one argument form of this command is deprecated in Consul 1.9.0. Intentions no longer need IDs when represented as -[`service-intentions`](/consul/docs/connect/config-entries/service-intentions) config +[`service-intentions`](/consul/docs/reference/config-entry/service-intentions) config entries. The table below shows this command's [required ACLs](/consul/api-docs/api-structure#authentication). Configuration of @@ -37,13 +37,13 @@ Usage: #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' ## Examples diff --git a/website/content/commands/intention/index.mdx b/website/content/commands/intention/index.mdx index f7ea711a27fb..bfdb319c789a 100644 --- a/website/content/commands/intention/index.mdx +++ b/website/content/commands/intention/index.mdx @@ -10,18 +10,18 @@ description: >- Command: `consul intention` The `intention` command is used to interact with service mesh -[intentions](/consul/docs/connect/intentions). It exposes commands for +[intentions](/consul/docs/secure-mesh/intention). It exposes commands for creating, updating, reading, deleting, checking, and managing intentions. This command is available in Consul 1.2 and later. Use the -[`service-intentions`](/consul/docs/connect/config-entries/service-intentions) configuration entry or the [HTTP +[`service-intentions`](/consul/docs/reference/config-entry/service-intentions) configuration entry or the [HTTP API](/consul/api-docs/connect/intentions) to manage intentions. ~> **Deprecated** - This command is deprecated in Consul 1.9.0 in favor of using the [config entry CLI command](/consul/commands/config/write). To create an intention, create or modify a -[`service-intentions`](/consul/docs/connect/config-entries/service-intentions) config +[`service-intentions`](/consul/docs/reference/config-entry/service-intentions) config entry for the destination. 
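For example, a minimal `service-intentions` entry that allows a `web` service to connect to a `db` service can be written to a file and applied with [`consul config write`](/consul/commands/config/write). The service names and file name below are placeholders:

```shell-session
$ cat > db-intentions.hcl <<'EOF'
Kind = "service-intentions"
Name = "db"
Sources = [
  {
    Name   = "web"
    Action = "allow"
  }
]
EOF
$ consul config write db-intentions.hcl
```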
## Usage diff --git a/website/content/commands/intention/list.mdx b/website/content/commands/intention/list.mdx index 4cdfa5c8a2f7..9fc25c19fca3 100644 --- a/website/content/commands/intention/list.mdx +++ b/website/content/commands/intention/list.mdx @@ -29,11 +29,11 @@ Usage: #### Enterprise Options -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' ## Examples diff --git a/website/content/commands/intention/match.mdx b/website/content/commands/intention/match.mdx index 3d94939b38bf..8ec41ff21d6a 100644 --- a/website/content/commands/intention/match.mdx +++ b/website/content/commands/intention/match.mdx @@ -40,13 +40,13 @@ Usage: `consul intention match [options] SRC_OR_DST` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' ## Examples diff --git a/website/content/commands/join.mdx b/website/content/commands/join.mdx index 7bb438911d4a..4d33f74b1c1c 100644 --- a/website/content/commands/join.mdx +++ b/website/content/commands/join.mdx @@ -46,4 +46,4 @@ command will fail only if Consul was unable to join any of the specified address #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' diff --git a/website/content/commands/keygen.mdx b/website/content/commands/keygen.mdx index c512eeef53f4..df2e0e89ad4e 100644 --- a/website/content/commands/keygen.mdx +++ b/website/content/commands/keygen.mdx @@ -12,6 +12,6 @@ description: >- Command: `consul keygen` The `keygen` command generates an encryption key that can be used for -[Consul agent traffic encryption](/consul/docs/security/encryption). +[Consul agent traffic encryption](/consul/docs/secure/encryption). The keygen command uses a cryptographically strong pseudo-random number generator to generate the key. diff --git a/website/content/commands/keyring.mdx b/website/content/commands/keyring.mdx index b1c914fddc24..6c6e5aa43f31 100644 --- a/website/content/commands/keyring.mdx +++ b/website/content/commands/keyring.mdx @@ -12,7 +12,7 @@ Command: `consul keyring` Corresponding HTTP API Endpoints: [\[VARIES\] /v1/operator/keyring](/consul/api-docs/operator/keyring) The `keyring` command is used to examine and modify the encryption keys used in -Consul's [Gossip Pools](/consul/docs/architecture/gossip). It is capable of +Consul's [Gossip Pools](/consul/docs/concept/gossip). It is capable of distributing new encryption keys to the cluster, retiring old encryption keys, and changing the keys used by the cluster to encrypt messages. 
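As a sketch of how `keygen` and `keyring` fit together, a gossip key rotation typically generates a new key, installs it on every member, promotes it to the primary key, and then removes the retired key. The key values below are placeholders:

```shell-session
$ consul keygen
pUqJrVyVRj5jsiYEkM/tFQYfWyJIv4s3XkvDwy7Cu5s=

$ consul keyring -install pUqJrVyVRj5jsiYEkM/tFQYfWyJIv4s3XkvDwy7Cu5s=
$ consul keyring -use pUqJrVyVRj5jsiYEkM/tFQYfWyJIv4s3XkvDwy7Cu5s=
$ consul keyring -remove <old-key>
$ consul keyring -list
```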
@@ -76,7 +76,7 @@ Only one actionable argument may be specified per run, including `-list`, #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' ## Output diff --git a/website/content/commands/kv/delete.mdx b/website/content/commands/kv/delete.mdx index 17216b0ca78b..3f9696146f8f 100644 --- a/website/content/commands/kv/delete.mdx +++ b/website/content/commands/kv/delete.mdx @@ -40,15 +40,15 @@ Usage: `consul kv delete [options] KEY_OR_PREFIX` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Examples diff --git a/website/content/commands/kv/export.mdx b/website/content/commands/kv/export.mdx index c05b2f80f895..0f44dc7218a3 100644 --- a/website/content/commands/kv/export.mdx +++ b/website/content/commands/kv/export.mdx @@ -28,15 +28,15 @@ Usage: `consul kv export [options] [PREFIX]` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Examples diff --git a/website/content/commands/kv/get.mdx b/website/content/commands/kv/get.mdx index 6534fe35486b..1e0830043184 100644 --- a/website/content/commands/kv/get.mdx +++ b/website/content/commands/kv/get.mdx @@ -56,15 +56,15 @@ Usage: `consul kv get [options] [KEY_OR_PREFIX]` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Examples diff --git a/website/content/commands/kv/import.mdx b/website/content/commands/kv/import.mdx index a960b1b738ce..550a247925e4 100644 --- a/website/content/commands/kv/import.mdx +++ b/website/content/commands/kv/import.mdx @@ -31,15 +31,15 @@ Usage: `consul kv import [options] [DATA]` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Examples diff --git a/website/content/commands/kv/put.mdx b/website/content/commands/kv/put.mdx index dcde811fc01c..42026ede1ebd 100644 --- a/website/content/commands/kv/put.mdx +++ b/website/content/commands/kv/put.mdx @@ -59,15 +59,15 @@ Usage: `consul kv put [options] KEY [DATA]` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options 
-@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Examples diff --git a/website/content/commands/leave.mdx b/website/content/commands/leave.mdx index 8c3744cc287c..1d63288b4c8a 100644 --- a/website/content/commands/leave.mdx +++ b/website/content/commands/leave.mdx @@ -24,7 +24,7 @@ non-graceful leave can affect cluster availability. Depending on how many Consul servers are running, running `consul leave` on a server explicitly can reduce the quorum size (which is derived from the number of Consul servers, see -[deployment_table](/consul/docs/architecture/consensus#deployment_table)). +[deployment_table](/consul/docs/concept/reliability#deployment-size)). Even if the cluster used `bootstrap_expect` to set a number of servers and thus quorum size initially, issuing `consul leave` on a server will reconfigure the cluster to have fewer servers. This means you could end up with just one server that is still able to commit writes because the quorum size for @@ -44,4 +44,4 @@ Usage: `consul leave [options]` #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' diff --git a/website/content/commands/license.mdx b/website/content/commands/license.mdx index 762e66df43d8..27495050d5e4 100644 --- a/website/content/commands/license.mdx +++ b/website/content/commands/license.mdx @@ -15,7 +15,7 @@ Command: `consul license` The `license` command provides a list of all datacenters that use the Consul Enterprise license applied to the current datacenter. ~> **Warning**: Consul 1.10.0 removed the ability to set and reset the license using the CLI. -See the [licensing documentation](/consul/docs/enterprise/license/overview) for more information about +See the [licensing documentation](/consul/docs/enterprise/license) for more information about Consul Enterprise license management. If ACLs are enabled then a token with operator privileges may be required in @@ -159,9 +159,9 @@ Licensed Features: #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## get @@ -200,9 +200,9 @@ Licensed Features: #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## reset @@ -245,6 +245,6 @@ Licensed Features: #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' diff --git a/website/content/commands/lock.mdx b/website/content/commands/lock.mdx index 85a15680516f..4c970019706f 100644 --- a/website/content/commands/lock.mdx +++ b/website/content/commands/lock.mdx @@ -18,7 +18,7 @@ communication is disrupted, the child process is terminated. The number of lock holders is configurable with the `-n` flag. By default, a single holder is allowed, and a lock is used for mutual exclusion. This -uses the [leader election algorithm](/consul/docs/dynamic-app-config/sessions/application-leader-election). +uses the [leader election algorithm](/consul/docs/automate/application-leader-election). If the lock holder count is more than one, then a semaphore is used instead. 
A semaphore allows more than a single holder, but this is less efficient than @@ -80,9 +80,26 @@ Windows has no POSIX compatible notion for `SIGTERM`. #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' + +## Node health checks and TTL behavior + +When you run `consul lock`, Consul automatically creates an _ephemeral session_ that attaches one or more node checks, such as the `serfHealth` check, by default. These checks keep the node “healthy” from Consul’s perspective. This session automatically renews as long as the agent passes these health checks. For sessions with a TTL configured, that TTL never reaches zero as long as the node remains healthy. + +This design ensures the lock is not lost as long as the local Consul agent is up and healthy. However, ephemeral sessions run indefinitely when: + +- **The node remains healthy**, including during partial network failures where at least one Consul server still sees the node as online. +- **No explicit session destroy** or forced release occurs. + +To strictly enforce the TTL and prevent automatic renewal by node checks, you must create and manage your own session separately. In that scenario: + +1. **Manually create a session** with the HTTP API that either excludes node checks or uses custom service checks. +1. **Acquire the lock** on that custom session using the raw KV API (`?acquire=`). +1. **Manage renewals** and releases yourself as needed. + +When you remove node checks, the TTL-based session expires after the specified time if you do not renew it. Consul releases the lock automatically when the session expires. ## SHELL diff --git a/website/content/commands/login.mdx b/website/content/commands/login.mdx index b5ec3c11cb1f..98017e58a126 100644 --- a/website/content/commands/login.mdx +++ b/website/content/commands/login.mdx @@ -51,11 +51,11 @@ Usage: `consul login [options]` - `-oidc-callback-listen-addr=` - The address to bind a webserver on to handle the browser callback from the OIDC workflow. Added in Consul 1.8.0.
-@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' ### Examples diff --git a/website/content/commands/logout.mdx b/website/content/commands/logout.mdx index 315393640465..5366fcc9d429 100644 --- a/website/content/commands/logout.mdx +++ b/website/content/commands/logout.mdx @@ -33,7 +33,7 @@ Usage: `consul logout [options]` #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' ### Examples diff --git a/website/content/commands/maint.mdx b/website/content/commands/maint.mdx index 387c3668211b..d76848425e3a 100644 --- a/website/content/commands/maint.mdx +++ b/website/content/commands/maint.mdx @@ -53,7 +53,7 @@ Usage: `consul maint [options]` #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' ## List mode diff --git a/website/content/commands/members.mdx b/website/content/commands/members.mdx index 6299f21e4c82..a244601408ea 100644 --- a/website/content/commands/members.mdx +++ b/website/content/commands/members.mdx @@ -56,8 +56,8 @@ Usage: `consul members [options]` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' diff --git a/website/content/commands/monitor.mdx b/website/content/commands/monitor.mdx index b37635b57b4d..27b089a07758 100644 --- a/website/content/commands/monitor.mdx +++ b/website/content/commands/monitor.mdx @@ -34,4 +34,4 @@ Usage: `consul monitor [options]` #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' diff --git a/website/content/commands/namespace/create.mdx b/website/content/commands/namespace/create.mdx index 068666e5cafd..07a3541072d2 100644 --- a/website/content/commands/namespace/create.mdx +++ b/website/content/commands/namespace/create.mdx @@ -61,11 +61,11 @@ from the CLI arguments. 
#### API Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Examples diff --git a/website/content/commands/namespace/delete.mdx b/website/content/commands/namespace/delete.mdx index adc235be6062..263757539c27 100644 --- a/website/content/commands/namespace/delete.mdx +++ b/website/content/commands/namespace/delete.mdx @@ -30,11 +30,11 @@ Usage: `consul namespace delete ` #### API Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Examples diff --git a/website/content/commands/namespace/list.mdx b/website/content/commands/namespace/list.mdx index 1b043ed657d6..8ebb7f268eed 100644 --- a/website/content/commands/namespace/list.mdx +++ b/website/content/commands/namespace/list.mdx @@ -42,11 +42,11 @@ Usage: `consul namespace list` #### API Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Examples diff --git a/website/content/commands/namespace/read.mdx b/website/content/commands/namespace/read.mdx index 778f672ff5c8..89e7e8f6c78a 100644 --- a/website/content/commands/namespace/read.mdx +++ b/website/content/commands/namespace/read.mdx @@ -41,11 +41,11 @@ Usage: `consul namespace read ` #### API Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Examples diff --git a/website/content/commands/namespace/update.mdx b/website/content/commands/namespace/update.mdx index 4cce12235675..1cd65244c29c 100644 --- a/website/content/commands/namespace/update.mdx +++ b/website/content/commands/namespace/update.mdx @@ -68,11 +68,11 @@ with the existing namespace definition. #### API Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Examples diff --git a/website/content/commands/namespace/write.mdx b/website/content/commands/namespace/write.mdx index f1a16ec006bd..7fcee48b8654 100644 --- a/website/content/commands/namespace/write.mdx +++ b/website/content/commands/namespace/write.mdx @@ -29,7 +29,7 @@ Usage: `consul namespace write ` The `` must either be a file path or `-` to indicate that the definition should be read from stdin. The definition can be in either JSON -or HCL format. See [here](/consul/docs/enterprise/namespaces#namespace-definition) for a description of the namespace definition. +or HCL format. See [here](/consul/docs/multi-tenant/namespace#namespace-definition) for a description of the namespace definition. #### Command Options @@ -40,11 +40,11 @@ or HCL format. 
See [here](/consul/docs/enterprise/namespaces#namespace-definitio #### API Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Examples diff --git a/website/content/commands/operator/area.mdx b/website/content/commands/operator/area.mdx index 0b5525c714b4..e155ea04f3e9 100644 --- a/website/content/commands/operator/area.mdx +++ b/website/content/commands/operator/area.mdx @@ -83,9 +83,9 @@ The return code indicates success or failure. #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## delete @@ -121,9 +121,9 @@ The return code indicates success or failure. #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## join @@ -166,9 +166,9 @@ The return code indicates success or failure. #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## list @@ -204,9 +204,9 @@ The return code indicates success or failure. #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## members @@ -269,9 +269,9 @@ The return code indicates success or failure. #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## update @@ -310,6 +310,6 @@ The return code indicates success or failure. #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' \ No newline at end of file +@include 'legacy/http_api_options_server.mdx' \ No newline at end of file diff --git a/website/content/commands/operator/autopilot.mdx b/website/content/commands/operator/autopilot.mdx index 1b4f43409c83..8a1ac55759db 100644 --- a/website/content/commands/operator/autopilot.mdx +++ b/website/content/commands/operator/autopilot.mdx @@ -57,9 +57,9 @@ UpgradeMigrationTag = "" #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## set-config @@ -107,17 +107,17 @@ The return code indicates success or failure. - `-disable-upgrade-migration` - Controls whether Consul will avoid promoting new servers until it can perform a migration. Must be one of `[true|false]`. -- `-redundancy-zone-tag` - Controls the [`-node-meta`](/consul/docs/agent/config/cli-flags#_node_meta) +- `-redundancy-zone-tag` - Controls the [`-node-meta`](/consul/commands/agent#_node_meta) key name used for separating servers into different redundancy zones. -- `-upgrade-version-tag` - Controls the [`-node-meta`](/consul/docs/agent/config/cli-flags#_node_meta) +- `-upgrade-version-tag` - Controls the [`-node-meta`](/consul/commands/agent#_node_meta) tag to use for version info when performing upgrade migrations. If left blank, the Consul version will be used. 
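As a sketch of how the flags above combine, the following applies several Autopilot settings in one call; the values shown are illustrative only:

```shell-session
$ consul operator autopilot set-config \
    -cleanup-dead-servers=true \
    -last-contact-threshold=200ms \
    -max-trailing-logs=250 \
    -redundancy-zone-tag=az
Configuration updated!
```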
#### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## state @@ -137,9 +137,9 @@ Usage: `consul operator autopilot state [options]` #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' #### Command Options diff --git a/website/content/commands/operator/usage.mdx b/website/content/commands/operator/usage.mdx index 83ae028f6c95..3cd5446e6c9d 100644 --- a/website/content/commands/operator/usage.mdx +++ b/website/content/commands/operator/usage.mdx @@ -105,7 +105,7 @@ Total 3 #### Command Options -- `-all-datacenters` - Display service counts from all known datacenters. +- `-all-datacenters` - Display service counts from all known federated datacenters. Default is `false`. - `-billable` - Display only billable service information. Default is `false`. diff --git a/website/content/commands/partition.mdx b/website/content/commands/partition.mdx index 660ea71ccaaf..3dbb0cb47a9e 100644 --- a/website/content/commands/partition.mdx +++ b/website/content/commands/partition.mdx @@ -5,7 +5,7 @@ description: | The partition command enables you create and manage Consul Enterprise admin partitions. --- -@include 'http_api_and_cli_characteristics_links.mdx' +@include 'legacy/http_api_and_cli_characteristics_links.mdx' # Consul Admin Partition diff --git a/website/content/commands/peering/delete.mdx b/website/content/commands/peering/delete.mdx index 1bd474dc306f..f913b49f59a0 100644 --- a/website/content/commands/peering/delete.mdx +++ b/website/content/commands/peering/delete.mdx @@ -34,11 +34,11 @@ Usage: `consul peering delete [options] -name ` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' ## Examples diff --git a/website/content/commands/peering/establish.mdx b/website/content/commands/peering/establish.mdx index 782fb9cda681..894e39e04226 100644 --- a/website/content/commands/peering/establish.mdx +++ b/website/content/commands/peering/establish.mdx @@ -36,11 +36,11 @@ Usage: `consul peering establish [options] -name -peering-token ` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' ## Examples diff --git a/website/content/commands/peering/generate-token.mdx b/website/content/commands/peering/generate-token.mdx index 6ce3fb059cb5..0a818866731b 100644 --- a/website/content/commands/peering/generate-token.mdx +++ b/website/content/commands/peering/generate-token.mdx @@ -43,11 +43,11 @@ You can specify one or more load balancers or external IPs that route external t #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' ## Examples diff --git a/website/content/commands/peering/list.mdx b/website/content/commands/peering/list.mdx index 9838de3e7bc5..cbf44dc06131 100644 --- a/website/content/commands/peering/list.mdx +++ b/website/content/commands/peering/list.mdx @@ -30,11 +30,11 @@ Usage: `consul peering 
list [options]` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' ## Examples diff --git a/website/content/commands/peering/read.mdx b/website/content/commands/peering/read.mdx index 95a41b2701aa..14f5abb3cc5a 100644 --- a/website/content/commands/peering/read.mdx +++ b/website/content/commands/peering/read.mdx @@ -31,11 +31,11 @@ Usage: `consul peering read [options] -name ` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' ## Examples diff --git a/website/content/commands/reload.mdx b/website/content/commands/reload.mdx index a7eaa45df3d2..b649bf6f61c4 100644 --- a/website/content/commands/reload.mdx +++ b/website/content/commands/reload.mdx @@ -39,4 +39,4 @@ Usage: `consul reload` #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' diff --git a/website/content/commands/rtt.mdx b/website/content/commands/rtt.mdx index 0fc2bd37abbd..2e67aa6499cd 100644 --- a/website/content/commands/rtt.mdx +++ b/website/content/commands/rtt.mdx @@ -69,4 +69,4 @@ The following environment variables control accessing the HTTP server via SSL: #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' diff --git a/website/content/commands/services/deregister.mdx b/website/content/commands/services/deregister.mdx index 138de2e763eb..adaffcc5b4a5 100644 --- a/website/content/commands/services/deregister.mdx +++ b/website/content/commands/services/deregister.mdx @@ -16,7 +16,7 @@ Note that this command can only deregister services that were registered with the agent specified and is intended to be paired with `services register`. By default, the command deregisters services on the local agent. -We recommend deregistering services registered with a configuration file by deleting the file and [reloading](/consul/commands/reload) Consul. Refer to [Services Overview](/consul/docs/services/services) for additional information about services. +We recommend deregistering services registered with a configuration file by deleting the file and [reloading](/consul/commands/reload) Consul. Refer to [Services Overview](/consul/docs/fundamentals/service) for additional information about services. The following table shows the [ACLs](/consul/api-docs/api-structure#authentication) required to run the `consul services deregister` command: @@ -43,13 +43,13 @@ This flexibility makes it easy to pair the command with the #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' ## Examples diff --git a/website/content/commands/services/export.mdx b/website/content/commands/services/export.mdx index 370f95dd6437..91e8e76a78dd 100644 --- a/website/content/commands/services/export.mdx +++ b/website/content/commands/services/export.mdx @@ -40,13 +40,13 @@ Usage: consul services export [options] -name -consumer-peers ` - A comma-separated list of admin partitions within the same datacenter to export the service to. 
This flag is optional when `-consumer-peers` is specified. -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' ## Examples diff --git a/website/content/commands/services/exported-services.mdx b/website/content/commands/services/exported-services.mdx index 395ad2cb8b30..d1fbcb0dfe04 100644 --- a/website/content/commands/services/exported-services.mdx +++ b/website/content/commands/services/exported-services.mdx @@ -11,7 +11,7 @@ Command: `consul services exported-services` Corresponding HTTP API Endpoint: [\[GET\] /v1/exported-services](/consul/api-docs/exported-services) -The `exported-services` command displays the services that were exported using an [`exported-services` configuration entry](/consul/docs/connect/config-entries/exported-services). Sameness groups and wildcards in the configuration entry are expanded in the response. +The `exported-services` command displays the services that were exported using an [`exported-services` configuration entry](/consul/docs/reference/config-entry/exported-services). Sameness groups and wildcards in the configuration entry are expanded in the response. The table below shows this command's [required ACLs](/consul/api-docs/api-structure#authentication). @@ -32,11 +32,11 @@ Usage: `consul services exported-services [options]` #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' ## Examples diff --git a/website/content/commands/services/index.mdx b/website/content/commands/services/index.mdx index f511ffe2efce..e70a9c906771 100644 --- a/website/content/commands/services/index.mdx +++ b/website/content/commands/services/index.mdx @@ -10,7 +10,7 @@ description: | Command: `consul services` The `services` command has subcommands for interacting with Consul services -registered with the [local agent](/consul/docs/agent). These provide +registered with the [local agent](/consul/docs/fundamentals/agent). These provide useful commands such as `register` and `deregister` for easily registering services in scripts, dev mode, etc. To view all services in the catalog, instead of only agent-local services, diff --git a/website/content/commands/services/register.mdx b/website/content/commands/services/register.mdx index 01a09d19bfdb..e7977407b438 100644 --- a/website/content/commands/services/register.mdx +++ b/website/content/commands/services/register.mdx @@ -14,7 +14,7 @@ Corresponding HTTP API Endpoint: [\[PUT\] /v1/agent/service/register](/consul/ap The `services register` command registers a service with the local agent. This command returns after registration and must be paired with explicit service deregistration. This command simplifies service registration from -scripts. Refer to [Register Services and Health Checks](/consul/docs/services/usage/register-services-checks) for information about other service registration methods. +scripts. Refer to [Register Services and Health Checks](/consul/docs/register/service/vm) for information about other service registration methods. 
The following table shows the [ACLs](/consul/api-docs/api-structure#authentication) required to use the `consul services register` command: @@ -57,7 +57,7 @@ The flags below should only be set if _no arguments_ are given. If no arguments are given, the flags below can be used to register a single service. -The following fields specify identical parameters in a standard service definition file. Refer to [Services Configuration Reference](/consul/docs/services/configuration/services-configuration-reference) for details about each configuration option. +The following fields specify identical parameters in a standard service definition file. Refer to [Services Configuration Reference](/consul/docs/reference/service) for details about each configuration option. - `-id` - The ID of the service. This will default to `-name` if not set. @@ -78,13 +78,13 @@ The following fields specify identical parameters in a standard service definiti #### Enterprise Options -@include 'cli-http-api-partition-options.mdx' +@include 'legacy/cli-http-api-partition-options.mdx' -@include 'http_api_namespace_options.mdx' +@include 'legacy/http_api_namespace_options.mdx' #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' ## Examples diff --git a/website/content/commands/snapshot/agent.mdx b/website/content/commands/snapshot/agent.mdx index e86c62809e48..d374c343afb6 100644 --- a/website/content/commands/snapshot/agent.mdx +++ b/website/content/commands/snapshot/agent.mdx @@ -396,9 +396,16 @@ These options cannot be used when using `backup_destinations` in a config file. - `-azure-blob-container-name` - Container to use. Required for Azure blob storage, and setting this disables local storage. -* `-azure-blob-environment` - Environment to use. Defaults to AZUREPUBLICCLOUD. Other valid environments +- `-azure-blob-environment` - Environment to use. Defaults to AZUREPUBLICCLOUD. Other valid environments are AZURECHINACLOUD, AZUREGERMANCLOUD and AZUREUSGOVERNMENTCLOUD. Introduced in Consul 1.7.3. +~> The following options were introduced in v1.20.1+ent. + +- `-azure-blob-service-principal-id` - The ID of the service principal that owns the blob object. +- `-azure-blob-service-principal-secret` - The secret of the service principal that owns the blob object. + +- `-azure-blob-tenant-id` - The ID of the tenant that owns the Azure blob. + #### Google Cloud Storage options ~> This option is deprecated when used with a top-level `google_storage` object in a config file. Use `snapshot_agent -> backup_destinations -> google_storage[0]` in a config file instead. @@ -420,7 +427,7 @@ This integration needs the following information: #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' ## Examples @@ -459,5 +466,5 @@ then the order of precedence is as follows: 3. `license_path` configuration. The ability to load licenses from the configuration or environment was added in v1.10.0, -v1.9.7 and v1.8.13. See the [licensing documentation](/consul/docs/enterprise/license/overview) for +v1.9.7 and v1.8.13. See the [licensing documentation](/consul/docs/enterprise/license) for more information about Consul Enterprise license management. 
diff --git a/website/content/commands/snapshot/restore.mdx b/website/content/commands/snapshot/restore.mdx index e33827f48c1a..034f5669c422 100644 --- a/website/content/commands/snapshot/restore.mdx +++ b/website/content/commands/snapshot/restore.mdx @@ -36,9 +36,9 @@ Usage: `consul snapshot restore [options] FILE` #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' ## Examples diff --git a/website/content/commands/snapshot/save.mdx b/website/content/commands/snapshot/save.mdx index cf77cd48a695..1427bcebf421 100644 --- a/website/content/commands/snapshot/save.mdx +++ b/website/content/commands/snapshot/save.mdx @@ -45,9 +45,9 @@ Usage: `consul snapshot save [options] FILE` #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' - `-append-filename=` - Value can be - version,dc,node,status Adds consul version, datacenter name, node name, and status (leader/follower) diff --git a/website/content/commands/tls/ca.mdx b/website/content/commands/tls/ca.mdx index f0ec37018c58..3e6afe55295d 100644 --- a/website/content/commands/tls/ca.mdx +++ b/website/content/commands/tls/ca.mdx @@ -41,7 +41,7 @@ Usage: `consul tls ca create [options]` - `-days=` - Number of days the CA is valid for. Defaults to 1825 days (approximately 5 years). -- `-domain=` - The DNS domain of the Consul cluster that agents are [configured](/consul/docs/agent/config/cli-flags#_domain) with. +- `-domain=` - The DNS domain of the Consul cluster that agents are [configured](/consul/commands/agent#_domain) with. Defaults to `consul`. Only used when `-name-constraint` is set. Additional domains can be passed with `-additional-name-constraint`. diff --git a/website/content/commands/troubleshoot/index.mdx b/website/content/commands/troubleshoot/index.mdx index 74d9d9cec32a..a681e20bcccf 100644 --- a/website/content/commands/troubleshoot/index.mdx +++ b/website/content/commands/troubleshoot/index.mdx @@ -9,7 +9,7 @@ description: >- Command: `consul troubleshoot` -Use the `troubleshoot` command to diagnose Consul service mesh configuration or network issues. For additional information about using the `troubleshoot` command, including explanations, requirements, usage instructions, refer to the [service-to-service troubleshooting overview](/consul/docs/troubleshoot/troubleshoot-services). +Use the `troubleshoot` command to diagnose Consul service mesh configuration or network issues. For additional information about using the `troubleshoot` command, including explanations, requirements, usage instructions, refer to the [service-to-service troubleshooting overview](/consul/docs/troubleshoot/service-communication). ## Usage diff --git a/website/content/commands/troubleshoot/ports.mdx b/website/content/commands/troubleshoot/ports.mdx index 5a4d5faf5082..0a0986d747d7 100644 --- a/website/content/commands/troubleshoot/ports.mdx +++ b/website/content/commands/troubleshoot/ports.mdx @@ -23,7 +23,7 @@ Usage: `consul troubleshoot ports [options]` ## Examples The following example checks the default ports Consul server uses for TCP connectivity. Note that the `CONSUL_HTTP_ADDR` environment variable is set to `localhost`. As a result, the `-host` flag is not required. -Refer to [Required Ports](/consul/docs/install/ports) for additional information. 
+Refer to [Required Ports](/consul/docs/reference/architecture/ports) for additional information. ```shell-session $ export CONSUL_HTTP_ADDR=localhost diff --git a/website/content/commands/troubleshoot/proxy.mdx b/website/content/commands/troubleshoot/proxy.mdx index fcd9552247dd..11cd6e1955d6 100644 --- a/website/content/commands/troubleshoot/proxy.mdx +++ b/website/content/commands/troubleshoot/proxy.mdx @@ -9,7 +9,7 @@ description: >- Command: `consul troubleshoot proxy` -The `troubleshoot proxy` command diagnoses Consul service mesh configuration and network issues to an upstream. For additional information about using the `troubleshoot proxy` command, including explanations, requirements, usage instructions, refer to the [service-to-service troubleshooting overview](/consul/docs/troubleshoot/troubleshoot-services). +The `troubleshoot proxy` command diagnoses Consul service mesh configuration and network issues to an upstream. For additional information about using the `troubleshoot proxy` command, including explanations, requirements, and usage instructions, refer to the [service-to-service troubleshooting overview](/consul/docs/troubleshoot/service-communication). ## Usage diff --git a/website/content/commands/troubleshoot/upstreams.mdx b/website/content/commands/troubleshoot/upstreams.mdx index f04a8beedc08..b5ca7ccbf88d 100644 --- a/website/content/commands/troubleshoot/upstreams.mdx +++ b/website/content/commands/troubleshoot/upstreams.mdx @@ -9,7 +9,7 @@ description: >- Command: `consul troubleshoot upstreams` -The `troubleshoot upstreams` lists the available upstreams in the Consul service mesh from the current service. For additional information about using the `troubleshoot upstreams` command, including explanations, requirements, usage instructions, refer to the [service-to-service troubleshooting overview](/consul/docs/troubleshoot/troubleshoot-services). +The `troubleshoot upstreams` command lists the available upstreams in the Consul service mesh from the current service. For additional information about using the `troubleshoot upstreams` command, including explanations, requirements, and usage instructions, refer to the [service-to-service troubleshooting overview](/consul/docs/troubleshoot/service-communication). ## Usage diff --git a/website/content/commands/validate.mdx b/website/content/commands/validate.mdx index a435f29120a6..f92d7983e408 100644 --- a/website/content/commands/validate.mdx +++ b/website/content/commands/validate.mdx @@ -21,7 +21,7 @@ to be loaded by the agent. This command cannot operate on partial configuration fragments since those won't pass the full agent validation. For more information on the format of Consul's configuration files, read the -consul agent [Configuration Files](/consul/docs/agent/config/config-files) +consul agent [Configuration Files](/consul/docs/reference/agent/configuration-file) section. ## Usage diff --git a/website/content/commands/watch.mdx b/website/content/commands/watch.mdx index 806864dae953..89d6fa4661a1 100644 --- a/website/content/commands/watch.mdx +++ b/website/content/commands/watch.mdx @@ -19,7 +19,7 @@ a process with the latest values of the view. If no process is specified, the current values are dumped to STDOUT which can be a useful way to inspect data in Consul. -There is more [documentation on watches here](/consul/docs/dynamic-app-config/watches). +There is more [documentation on watches here](/consul/docs/automate/watch).
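As a quick illustration, the following command watches a KV prefix and invokes a handler script whenever the data under it changes; the prefix and handler path are placeholders:

```shell-session
$ consul watch -type=keyprefix -prefix=config/ /usr/local/bin/reload-app.sh
```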
## Usage @@ -28,7 +28,7 @@ Usage: `consul watch [options] [child...]` The only required option is `-type` which specifies the particular data view. Depending on the type, various options may be required or optionally provided. There is more documentation on watch -[specifications here](/consul/docs/dynamic-app-config/watches). +[specifications here](/consul/docs/automate/watch). #### Command Options @@ -60,6 +60,6 @@ or optionally provided. There is more documentation on watch #### API Options -@include 'http_api_options_client.mdx' +@include 'legacy/http_api_options_client.mdx' -@include 'http_api_options_server.mdx' +@include 'legacy/http_api_options_server.mdx' diff --git a/website/content/docs/agent/config-entries.mdx b/website/content/docs/agent/config-entries.mdx deleted file mode 100644 index 9eb3f82f7f5f..000000000000 --- a/website/content/docs/agent/config-entries.mdx +++ /dev/null @@ -1,168 +0,0 @@ ---- -layout: docs -page_title: How to Use Configuration Entries -description: >- - Configuration entries define the behavior of Consul service mesh components. Learn how to use the `consul config` command to create, manage, and delete configuration entries. ---- - -# How to Use Configuration Entries - -Configuration entries can be created to provide cluster-wide defaults for -various aspects of Consul. - -Outside of Kubernetes, configuration entries can be specified in HCL or JSON using either -`snake_case` or `CamelCase` for key names. On Kubernetes, configuration -entries can be managed by custom resources in YAML. - -Outside of Kubernetes, every configuration entry specified in HCL or JSON has at least two fields: -`Kind` and `Name`. Those two fields are used to uniquely identify a -configuration entry. Configuration entries specified as HCL or JSON objects -use either `snake_case` or `CamelCase` for key names. - - - -```hcl -Kind = "" -Name = "" -``` - - - -On Kubernetes, `Kind` is set as the custom resource `kind` and `Name` is set -as `metadata.name`: - - - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: -metadata: - name: -``` - - - -## Supported Config Entries - -See [Service Mesh - Config Entries](/consul/docs/connect/config-entries) for the list -of supported config entries. - -## Managing Configuration Entries In Kubernetes - -See [Kubernetes Custom Resource Definitions](/consul/docs/k8s/crds). - -## Managing Configuration Entries Outside Of Kubernetes - -Configuration entries outside of Kubernetes should be managed with the Consul -[CLI](/consul/commands/config) or [API](/consul/api-docs/config). Additionally, as a -convenience for initial cluster bootstrapping, configuration entries can be -specified in all of the Consul server -[configuration files](/consul/docs/agent/config/config-files#config_entries_bootstrap) - -### Managing Configuration Entries with the CLI - -#### Creating or Updating a Configuration Entry - -The [`consul config write`](/consul/commands/config/write) command is used to -create and update configuration entries. This command will load either a JSON or -HCL file holding the configuration entry definition and then will push this -configuration to Consul. 
- -Example HCL Configuration File: - - - -```hcl -Kind = "proxy-defaults" -Name = "global" -Config { - local_connect_timeout_ms = 1000 - handshake_timeout_ms = 10000 -} -``` - - - -Then to apply this configuration, run: - -```shell-session -$ consul config write proxy-defaults.hcl -``` - -If you need to make changes to a configuration entry, simple edit that file and -then rerun the command. This command will not output anything unless there is an -error in applying the configuration entry. The `write` command also supports a -`-cas` option to enable performing a compare-and-swap operation to prevent -overwriting other unknown modifications. - -#### Reading a Configuration Entry - -The [`consul config read`](/consul/commands/config/read) command is used to -read the current value of a configuration entry. The configuration entry will be -displayed in JSON form which is how its transmitted between the CLI client and -Consul's HTTP API. - -Example: - -```shell-session -$ consul config read -kind service-defaults -name web -{ - "Kind": "service-defaults", - "Name": "web", - "Protocol": "http" -} -``` - -#### Listing Configuration Entries - -The [`consul config list`](/consul/commands/config/list) command is used to -list out all the configuration entries for a given kind. - -Example: - -```shell-session -$ consul config list -kind service-defaults -web -api -db -``` - -#### Deleting Configuration Entries - -The [`consul config delete`](/consul/commands/config/delete) command is used -to delete an entry by specifying both its `kind` and `name`. - -Example: - -```shell-session -$ consul config delete -kind service-defaults -name web -``` - -This command will not output anything when the deletion is successful. - -#### Configuration Entry Management with Namespaces - -Configuration entry operations support passing a namespace in -order to isolate the entry to affect only operations within that namespace. This was -added in Consul 1.7.0. - -Example: - -```shell-session -$ consul config write service-defaults.hcl -namespace foo -``` - -```shell-session -$ consul config list -kind service-defaults -namespace foo -web -api -``` - -### Bootstrapping From A Configuration File - -Configuration entries can be bootstrapped by adding them [inline to each Consul -server's configuration file](/consul/docs/agent/config/config-files#config_entries). When a -server gains leadership, it will attempt to initialize the configuration entries. -If a configuration entry does not already exist outside of the servers -configuration, then it will create it. If a configuration entry does exist, that -matches both `kind` and `name`, then the server will do nothing. diff --git a/website/content/docs/agent/config/cli-flags.mdx b/website/content/docs/agent/config/cli-flags.mdx deleted file mode 100644 index a38ddff8cbb5..000000000000 --- a/website/content/docs/agent/config/cli-flags.mdx +++ /dev/null @@ -1,565 +0,0 @@ ---- -layout: docs -page_title: Agents - CLI Reference -description: >- - Add flags to the `consul agent` command to configure agent properties and actions from the CLI. Learn about configurable options and how to format them with examples. ---- - -# Agents Command-line Reference ((#commandline_options)) - --> **Note:** Some CLI arguments may be different from HCL keys. See [Configuration Key Reference](/consul/docs/agent/config/config-files#config_key_reference) for equivalent HCL Keys. - -This topic describes the available command-line options for the Consul agent. 
- -## Usage - -See [Agent Overview](/consul/docs/agent#starting-the-consul-agent) for examples of how to use flags with the `consul agent` CLI. - -## Environment Variables - -Environment variables **cannot** be used to configure the Consul client. They -_can_ be used when running other `consul` CLI commands that connect with a -running agent, e.g. `CONSUL_HTTP_ADDR=192.168.0.1:8500 consul members`. - -See [Consul Commands](/consul/commands#environment-variables) for more -information. - -## General - -- `-auto-reload-config` ((#\_auto_reload_config)) - This option directs Consul to automatically reload the [reloadable configuration options](/consul/docs/agent/config#reloadable-configuration) when configuration files change. - Consul also watches the certificate and key files specified with the `cert_file` and `key_file` parameters and reloads the configuration if the files are updated. - -- `-check_output_max_size` - Override the default - limit of 4k for maximum size of checks, this is a positive value. By limiting this - size, it allows to put less pressure on Consul servers when many checks are having - a very large output in their checks. In order to completely disable check output - capture, it is possible to use [`discard_check_output`](/consul/docs/agent/config/config-files#discard_check_output). - -- `-client` ((#\_client)) - The address to which Consul will bind client - interfaces, including the HTTP and DNS servers. By default, this is "127.0.0.1", - allowing only loopback connections. In Consul 1.0 and later this can be set to - a space-separated list of addresses to bind to, or a [go-sockaddr] - template that can potentially resolve to multiple addresses. - - - - ```shell - $ consul agent -dev -client '{{ GetPrivateInterfaces | exclude "type" "ipv6" | join "address" " " }}' - ``` - - - - - - ```shell - $ consul agent -dev -client '{{ GetPrivateInterfaces | join "address" " " }} {{ GetAllInterfaces | include "flags" "loopback" | join "address" " " }}' - ``` - - - - - - ```shell - $ consul agent -dev -client '{{ GetPrivateInterfaces | exclude "name" "br.*" | join "address" " " }}' - ``` - - - -- `-data-dir` ((#\_data_dir)) - This flag provides a data directory for - the agent to store state. This is required for all agents. The directory should - be durable across reboots. This is especially critical for agents that are running - in server mode as they must be able to persist cluster state. Additionally, the - directory must support the use of filesystem locking, meaning some types of mounted - folders (e.g. VirtualBox shared folders) may not be suitable. - - **Note:** both server and non-server agents may store ACL tokens in the state in this directory so read access may grant access to any tokens on servers and to any tokens used during service registration on non-servers. On Unix-based platforms the files are written with 0600 permissions so you should ensure only trusted processes can execute as the same user as Consul. On Windows, you should ensure the directory has suitable permissions configured as these will be inherited. - -- `-datacenter` ((#\_datacenter)) - This flag controls the datacenter in - which the agent is running. If not provided, it defaults to "dc1". Consul has first-class - support for multiple datacenters, but it relies on proper configuration. Nodes - in the same datacenter should be on a single LAN. - -- `-dev` ((#\_dev)) - Enable development server mode. 
This is useful for - quickly starting a Consul agent with all persistence options turned off, enabling - an in-memory server which can be used for rapid prototyping or developing against - the API. In this mode, [service mesh is enabled](/consul/docs/connect/configuration) and - will by default create a new root CA certificate on startup. This mode is **not** - intended for production use as it does not write any data to disk. The gRPC port - is also defaulted to `8502` in this mode. - -- `-disable-keyring-file` ((#\_disable_keyring_file)) - If set, the keyring - will not be persisted to a file. Any installed keys will be lost on shutdown, and - only the given `-encrypt` key will be available on startup. This defaults to false. - -- `-enable-script-checks` ((#\_enable_script_checks)) This controls whether - [health checks that execute scripts](/consul/docs/services/usage/checks) are enabled on this - agent, and defaults to `false` so operators must opt-in to allowing these. This - was added in Consul 0.9.0. - - ~> **Security Warning:** Enabling script checks in some configurations may - introduce a remote execution vulnerability which is known to be targeted by - malware. We strongly recommend `-enable-local-script-checks` instead. See [this - blog post](https://www.hashicorp.com/blog/protecting-consul-from-rce-risk-in-specific-configurations) - for more details. - -- `-enable-local-script-checks` ((#\_enable_local_script_checks)) - Like [`-enable-script-checks`](#_enable_script_checks), but only enable them when - they are defined in the local configuration files. Script checks defined in HTTP - API registrations will still not be allowed. - -- `-encrypt` ((#\_encrypt)) - Specifies the secret key to use for encryption - of Consul network traffic. This key must be 32-bytes that are Base64-encoded. The - easiest way to create an encryption key is to use [`consul keygen`](/consul/commands/keygen). - All nodes within a cluster must share the same encryption key to communicate. The - provided key is automatically persisted to the data directory and loaded automatically - whenever the agent is restarted. This means that to encrypt Consul's gossip protocol, - this option only needs to be provided once on each agent's initial startup sequence. - If it is provided after Consul has been initialized with an encryption key, then - the provided key is ignored and a warning will be displayed. - -- `-grpc-port` ((#\_grpc_port)) - the gRPC API port to listen on. Default - -1 (gRPC disabled). See [ports](/consul/docs/agent/config#ports-used) documentation for more detail. - -- `-hcl` ((#\_hcl)) - A HCL configuration fragment. This HCL configuration - fragment is appended to the configuration and allows to specify the full range - of options of a config file on the command line. This option can be specified multiple - times. This was added in Consul 1.0. - -- `-http-port` ((#\_http_port)) - the HTTP API port to listen on. This overrides - the default port 8500. This option is very useful when deploying Consul to an environment - which communicates the HTTP port through the environment e.g. PaaS like CloudFoundry, - allowing you to set the port directly via a Procfile. - -- `-https-port` ((#\_https_port)) - the HTTPS API port to listen on. Default - -1 (https disabled). See [ports](/consul/docs/agent/config#ports-used) documentation for more detail. - -- `-default-query-time` ((#\_default_query_time)) - This flag controls the - amount of time a blocking query will wait before Consul will force a response. 
- This value can be overridden by the `wait` query parameter. Note that Consul applies - some jitter on top of this time. Defaults to 300s. - -- `-max-query-time` ((#\_max_query_time)) - this flag controls the maximum - amount of time a blocking query can wait before Consul will force a response. Consul - applies jitter to the wait time. The jittered time will be capped to this time. - Defaults to 600s. - -- `-pid-file` ((#\_pid_file)) - This flag provides the file path for the - agent to store its PID. This is useful for sending signals (for example, `SIGINT` - to close the agent or `SIGHUP` to update check definitions) to the agent. - -- `-protocol` ((#\_protocol)) - The Consul protocol version to use. Consul - agents speak protocol 2 by default, however agents will automatically use protocol > 2 when speaking to compatible agents. This should be set only when [upgrading](/consul/docs/upgrading). You can view the protocol versions supported by Consul by running `consul version`. - -- `-raft-protocol` ((#\_raft_protocol)) - This controls the internal version - of the Raft consensus protocol used for server communications. This must be set - to 3 in order to gain access to Autopilot features, with the exception of [`cleanup_dead_servers`](/consul/docs/agent/config/config-files#cleanup_dead_servers). Defaults to 3 in Consul 1.0.0 and later (defaulted to 2 previously). See [Raft Protocol Version Compatibility](/consul/docs/upgrading/upgrade-specific#raft-protocol-version-compatibility) for more details. - -- `-segment` ((#\_segment)) - This flag is used to set - the name of the network segment the agent belongs to. An agent can only join and - communicate with other agents within its network segment. Ensure the [join - operation uses the correct port for this segment](/consul/docs/enterprise/network-segments/create-network-segment#configure-clients-to-join-segments). - Review the [Network Segments documentation](/consul/docs/enterprise/network-segments/create-network-segment) - for more details. By default, this is an empty string, which is the `` - network segment. - - ~> **Warning:** The `segment` flag cannot be used with the [`partition`](/consul/docs/agent/config/config-files#partition-1) option. - -## Advertise Address Options - -- `-advertise` ((#\_advertise)) - The advertise address is used to change - the address that we advertise to other nodes in the cluster. By default, the [`-bind`](#_bind) - address is advertised. However, in some cases, there may be a routable address - that cannot be bound. This flag enables gossiping a different address to support - this. If this address is not routable, the node will be in a constant flapping - state as other nodes will treat the non-routability as a failure. In Consul 1.1.0 and later this can be dynamically defined with a [go-sockaddr] - template that is resolved at runtime. - - - - ```shell-session - $ consul agent -advertise '{{ GetInterfaceIP "eth0" }}' - ``` - - - -- `-advertise-wan` ((#\_advertise-wan)) - The advertise WAN address is used - to change the address that we advertise to server nodes joining through the WAN. - This can also be set on client agents when used in combination with the [`translate_wan_addrs`](/consul/docs/agent/config/config-files#translate_wan_addrs) configuration option. By default, the [`-advertise`](#_advertise) address - is advertised. However, in some cases all members of all datacenters cannot be - on the same physical or virtual network, especially on hybrid setups mixing cloud - and private datacenters. 
This flag enables server nodes gossiping through the public - network for the WAN while using private VLANs for gossiping to each other and their - client agents, and it allows client agents to be reached at this address when being - accessed from a remote datacenter if the remote datacenter is configured with [`translate_wan_addrs`](/consul/docs/agent/config/config-files#translate_wan_addrs). In Consul 1.1.0 and later this can be dynamically defined with a [go-sockaddr] - template that is resolved at runtime. - -## Address Bind Options - -- `-bind` ((#\_bind)) - The address that should be bound to for internal - cluster communications. This is an IP address that should be reachable by all other - nodes in the cluster. By default, this is "0.0.0.0", meaning Consul will bind to - all addresses on the local machine and will [advertise](#_advertise) - the private IPv4 address to the rest of the cluster. If there are multiple private - IPv4 addresses available, Consul will exit with an error at startup. If you specify - `"[::]"`, Consul will [advertise](#_advertise) the public - IPv6 address. If there are multiple public IPv6 addresses available, Consul will - exit with an error at startup. Consul uses both TCP and UDP and the same port for - both. If you have any firewalls, be sure to allow both protocols. In Consul 1.1.0 and later this can be dynamically defined with a [go-sockaddr] - template that must resolve at runtime to a single address. Some example templates: - - - - ```shell-session - $ consul agent -bind '{{ GetPrivateInterfaces | include "network" "10.0.0.0/8" | attr "address" }}' - ``` - - - - - - ```shell-session - $ consul agent -bind '{{ GetInterfaceIP "eth0" }}' - ``` - - - - - - ```shell-session - $ consul agent -bind '{{ GetAllInterfaces | include "name" "^eth" | include "flags" "forwardable|up" | attr "address" }}' - ``` - - - -- `-serf-wan-bind` ((#\_serf_wan_bind)) - The address that should be bound - to for Serf WAN gossip communications. By default, the value follows the same rules - as [`-bind` command-line flag](#_bind), and if this is not specified, the `-bind` - option is used. This is available in Consul 0.7.1 and later. In Consul 1.1.0 and later this can be dynamically defined with a [go-sockaddr] - template that is resolved at runtime. - -- `-serf-lan-bind` ((#\_serf_lan_bind)) - The address that should be bound - to for Serf LAN gossip communications. This is an IP address that should be reachable - by all other LAN nodes in the cluster. By default, the value follows the same rules - as [`-bind` command-line flag](#_bind), and if this is not specified, the `-bind` - option is used. This is available in Consul 0.7.1 and later. In Consul 1.1.0 and later this can be dynamically defined with a [go-sockaddr] - template that is resolved at runtime. - -## Bootstrap Options - -- `-bootstrap` ((#\_bootstrap)) - This flag is used to control if a server - is in "bootstrap" mode. It is important that no more than one server **per** datacenter - be running in this mode. Technically, a server in bootstrap mode is allowed to - self-elect as the Raft leader. It is important that only a single node is in this - mode; otherwise, consistency cannot be guaranteed as multiple nodes are able to - self-elect. It is not recommended to use this flag after a cluster has been bootstrapped. - -- `-bootstrap-expect` ((#\_bootstrap_expect)) - This flag provides the number - of expected servers in the datacenter. 
Either this value should not be provided - or the value must agree with other servers in the cluster. When provided, Consul - waits until the specified number of servers are available and then bootstraps the - cluster. This allows an initial leader to be elected automatically. This cannot - be used in conjunction with the legacy [`-bootstrap`](#_bootstrap) flag. This flag - requires [`-server`](#_server) mode. - -## Configuration File Options - -- `-config-file` ((#\_config_file)) - A configuration file to load. For - more information on the format of this file, read the [Configuration Files](/consul/docs/agent/config/config-files) - section. This option can be specified multiple times to load multiple configuration - files. If it is specified multiple times, configuration files loaded later will - merge with configuration files loaded earlier. During a config merge, single-value - keys (string, int, bool) will simply have their values replaced while list types - will be appended together. - -- `-config-dir` ((#\_config_dir)) - A directory of configuration files to - load. Consul will load all files in this directory with the suffix ".json" or ".hcl". - The load order is alphabetical, and the same merge routine is used as with - the [`config-file`](#_config_file) option above. This option can be specified multiple - times to load multiple directories. Sub-directories of the config directory are - not loaded. For more information on the format of the configuration files, see - the [Configuration Files](/consul/docs/agent/config/config-files) section. - -- `-config-format` ((#\_config_format)) - The format of the configuration - files to load. Normally, Consul detects the format of the config files from the - ".json" or ".hcl" extension. Setting this option to either "json" or "hcl" forces - Consul to interpret any file with or without extension to be interpreted in that - format. - -## DNS and Domain Options - -- `-dns-port` ((#\_dns_port)) - the DNS port to listen on. This overrides - the default port 8600. This is available in Consul 0.7 and later. - -- `-domain` ((#\_domain)) - By default, Consul responds to DNS queries in - the "consul." domain. This flag can be used to change that domain. All queries - in this domain are assumed to be handled by Consul and will not be recursively - resolved. - -- `-alt-domain` ((#\_alt_domain)) - This flag allows Consul to respond to - DNS queries in an alternate domain, in addition to the primary domain. If unset, - no alternate domain is used. - - In Consul 1.10.4 and later, Consul DNS responses will use the same domain as in the query (`-domain` or `-alt-domain`) where applicable. - PTR query responses will always use `-domain`, since the desired domain cannot be included in the query. - -- `-recursor` ((#\_recursor)) - Specifies the address of an upstream DNS - server. This option may be provided multiple times, and is functionally equivalent - to the [`recursors` configuration option](/consul/docs/agent/config/config-files#recursors). - -- `-join` ((#\_join)) - **Deprecated in Consul 1.15. This flag will be removed in a future version of Consul. Use the `-retry-join` flag instead.** - This is an alias of [`-retry-join`](#_retry_join). - -- `-retry-join` ((#\_retry_join)) - Address of another agent to join upon starting up. Joining is - retried until success. Once the agent joins successfully as a member, it will not attempt to join - again. After joining, the agent solely maintains its membership via gossip. 
This option can be - specified multiple times to specify multiple agents to join. By default, the agent won't join any - nodes when it starts up. The value can contain IPv4, IPv6, or DNS addresses. Literal IPv6 - addresses must be enclosed in square brackets. If multiple values are given, they are tried and - retried in the order listed until the first succeeds. - - This supports [Cloud Auto-Joining](#cloud-auto-joining). - - This can be dynamically defined with a [go-sockaddr] template that is resolved at runtime. - - If Consul is running on a non-default Serf LAN port, you must specify the port number in the address when using the `-retry-join` flag. Alternatively, you can specify the custom port number as the default in the agent's [`ports.serf_lan`](/consul/docs/agent/config/config-files#serf_lan_port) configuration or with the [`-serf-lan-port`](#_serf_lan_port) command line flag when starting the agent. - - If your network contains network segments, refer to the [network segments documentation](/consul/docs/enterprise/network-segments/create-network-segment) for additional information. - - Here are some examples of using `-retry-join`: - - - - ```shell-session - $ consul agent -retry-join "consul.domain.internal" - ``` - - - - - - ```shell-session - $ consul agent -retry-join "10.0.4.67" - ``` - - - - - - ```shell-session - $ consul agent -retry-join "192.0.2.10:8304" - ``` - - - - - - ```shell-session - $ consul agent -retry-join "[::1]:8301" - ``` - - - - - - ```shell-session - $ consul agent -retry-join "consul.domain.internal" -retry-join "10.0.4.67" - ``` - - - - ### Cloud Auto-Joining - - The `-retry-join` option accepts a unified interface using the - [go-discover](https://github.com/hashicorp/go-discover) library for doing - automatic cluster joining using cloud metadata. For more information, see - the [Cloud Auto-join page](/consul/docs/install/cloud-auto-join). - - - - ```shell-session - $ consul agent -retry-join "provider=aws tag_key=..." - ``` - - - -- `-retry-interval` ((#\_retry_interval)) - Time to wait between join attempts. - Defaults to 30s. - -- `-retry-max` ((#\_retry_max)) - The maximum number of join attempts if using - [`-retry-join`](#_retry_join) before exiting with return code 1. By default, this is set - to 0 which is interpreted as infinite retries. - -- `-join-wan` ((#\_join_wan)) - **Deprecated in Consul 1.15. This flag will be removed in a future version of Consul. Use the `-retry-join-wan` flag instead.** - This is an alias of [`-retry-join-wan`](#_retry_join_wan) - -- `-retry-join-wan` ((#\_retry_join_wan)) - Address of another WAN agent to join upon starting up. - WAN joining is retried until success. This can be specified multiple times to specify multiple WAN - agents to join. If multiple values are given, they are tried and retried in the order listed - until the first succeeds. By default, the agent won't WAN join any nodes when it starts up. - - This supports [Cloud Auto-Joining](#cloud-auto-joining). - - This can be dynamically defined with a [go-sockaddr] template that is resolved at runtime. - -- `-primary-gateway` ((#\_primary_gateway)) - Similar to [`-retry-join-wan`](#_retry_join_wan) - but allows retrying discovery of fallback addresses for the mesh gateways in the - primary datacenter if the first attempt fails. This is useful for cases where we - know the address will become available eventually. [Cloud Auto-Joining](#cloud-auto-joining) - is supported as well as [go-sockaddr] - templates. This was added in Consul 1.8.0. 
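  As an illustration, a server in a secondary datacenter could fall back to the primary datacenter's mesh gateway with an invocation along the following lines. This is a sketch only; the datacenter name and gateway address are placeholders, not values taken from this documentation:

  ```shell-session
  $ consul agent -server -datacenter "dc2" -primary-gateway "gateway.dc1.example.com:8443"
  ```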
- -- `-retry-interval-wan` ((#\_retry_interval_wan)) - Time to wait between - [`-retry-join-wan`](#_retry_join_wan) attempts. Defaults to 30s. - -- `-retry-max-wan` ((#\_retry_max_wan)) - The maximum number of [`-retry-join-wan`](#_join_wan) - attempts to be made before exiting with return code 1. By default, this is set - to 0 which is interpreted as infinite retries. - -- `-rejoin` ((#\_rejoin)) - When provided, Consul will ignore a previous - leave and attempt to rejoin the cluster when starting. By default, Consul treats - leave as a permanent intent and does not attempt to join the cluster again when - starting. This flag allows the previous state to be used to rejoin the cluster. - -## Log Options - -- `-log-file` ((#\_log_file)) - writes all the Consul agent log messages - to a file at the path indicated by this flag. The filename defaults to `consul.log`. - When the log file rotates, this value is used as a prefix for the path to the log and the current timestamp is - appended to the file name. If the value ends in a path separator, `consul-` - will be appended to the value. If the file name is missing an extension, `.log` - is appended. For example, setting `log-file` to `/var/log/` would result in a log - file path of `/var/log/consul.log`. `log-file` can be combined with - [`-log-rotate-bytes`](#_log_rotate_bytes) and [`-log-rotate-duration`](#_log_rotate_duration) - for a fine-grained log rotation experience. After rotation, the path and filename take the following form: - `/var/log/consul-{timestamp}.log` - -- `-log-rotate-bytes` ((#\_log_rotate_bytes)) - to specify the number of - bytes that should be written to a log before it needs to be rotated. Unless specified, - there is no limit to the number of bytes that can be written to a log file. - -- `-log-rotate-duration` ((#\_log_rotate_duration)) - to specify the maximum - duration a log should be written to before it needs to be rotated. Must be a duration - value such as 30s. Defaults to 24h. - -- `-log-rotate-max-files` ((#\_log_rotate_max_files)) - to specify the maximum - number of older log file archives to keep. Defaults to 0 (no files are ever deleted). - Set to -1 to discard old log files when a new one is created. - -- `-log-level` ((#\_log_level)) - The level of logging to show after the - Consul agent has started. This defaults to "info". The available log levels are - "trace", "debug", "info", "warn", and "error". You can always connect to an agent - via [`consul monitor`](/consul/commands/monitor) and use any log level. Also, - the log level can be changed during a config reload. - -- `-log-json` ((#\_log_json)) - This flag enables the agent to output logs - in a JSON format. By default this is false. - -- `-syslog` ((#\_syslog)) - This flag enables logging to syslog. This is - only supported on Linux and macOS. It will result in an error if provided on Windows. - -## Node Options - -- `-node` ((#\_node)) - The name of this node in the cluster. This must - be unique within the cluster. By default this is the hostname of the machine. - The node name cannot contain whitespace or quotation marks. To query the node from DNS, the name must only contain alphanumeric characters and hyphens (`-`). - -- `-node-id` ((#\_node_id)) - Available in Consul 0.7.3 and later, this - is a unique identifier for this node across all time, even if the name of the node - or address changes. This must be in the form of a hex string, 36 characters long, - such as `adf4238a-882b-9ddc-4a9d-5b6758e4159e`. 
If this isn't supplied, which is - the most common case, then the agent will generate an identifier at startup and - persist it in the [data directory](#_data_dir) so that it will remain the same - across agent restarts. Information from the host will be used to generate a deterministic - node ID if possible, unless [`-disable-host-node-id`](#_disable_host_node_id) is - set to true. - -- `-node-meta` ((#\_node_meta)) - Available in Consul 0.7.3 and later, this - specifies an arbitrary metadata key/value pair to associate with the node, of the - form `key:value`. This can be specified multiple times. Node metadata pairs have - the following restrictions: - - - A maximum of 64 key/value pairs can be registered per node. - - Metadata keys must be between 1 and 128 characters (inclusive) in length - - Metadata keys must contain only alphanumeric, `-`, and `_` characters. - - Metadata keys must not begin with the `consul-` prefix; that is reserved for internal use by Consul. - - Metadata values must be between 0 and 512 (inclusive) characters in length. - - Metadata values for keys beginning with `rfc1035-` are encoded verbatim in DNS TXT requests, otherwise - the metadata kv-pair is encoded according [RFC1464](https://www.ietf.org/rfc/rfc1464.txt). - -- `-disable-host-node-id` ((#\_disable_host_node_id)) - Setting this to - true will prevent Consul from using information from the host to generate a deterministic - node ID, and will instead generate a random node ID which will be persisted in - the data directory. This is useful when running multiple Consul agents on the same - host for testing. This defaults to false in Consul prior to version 0.8.5 and in - 0.8.5 and later defaults to true, so you must opt-in for host-based IDs. Host-based - IDs are generated using [gopsutil](https://github.com/shirou/gopsutil/), which - is shared with HashiCorp's [Nomad](https://www.nomadproject.io/), so if you opt-in - to host-based IDs then Consul and Nomad will use information on the host to automatically - assign the same ID in both systems. - -## Serf Options - -- `-serf-lan-allowed-cidrs` ((#\_serf_lan_allowed_cidrs)) - The Serf LAN allowed CIDRs allow to accept incoming - connections for Serf only from several networks (multiple values are supported). - Those networks are specified with CIDR notation (eg: 192.168.1.0/24). - This is available in Consul 1.8 and later. - -- `-serf-lan-port` ((#\_serf_lan_port)) - the Serf LAN port to listen on. - This overrides the default Serf LAN port 8301. This is available in Consul 1.2.2 - and later. - -- `-serf-wan-allowed-cidrs` ((#\_serf_wan_allowed_cidrs)) - The Serf WAN allowed CIDRs allow to accept incoming - connections for Serf only from several networks (multiple values are supported). - Those networks are specified with CIDR notation (eg: 192.168.1.0/24). - This is available in Consul 1.8 and later. - -- `-serf-wan-port` ((#\_serf_wan_port)) - the Serf WAN port to listen on. - This overrides the default Serf WAN port 8302. This is available in Consul 1.2.2 - and later. - -## Server Options - -- `-server` ((#\_server)) - This flag is used to control if an agent is - in server or client mode. When provided, an agent will act as a Consul server. - Each Consul cluster must have at least one server and ideally no more than 5 per - datacenter. All servers participate in the Raft consensus algorithm to ensure that - transactions occur in a consistent, linearizable manner. 
Transactions modify cluster - state, which is maintained on all server nodes to ensure availability in the case - of node failure. Server nodes also participate in a WAN gossip pool with server - nodes in other datacenters. Servers act as gateways to other datacenters and forward - RPC traffic as appropriate. - -- `-server-port` ((#\_server_port)) - the server RPC port to listen on. - This overrides the default server RPC port 8300. This is available in Consul 1.2.2 - and later. - -- `-non-voting-server` ((#\_non_voting_server)) - **This field - is deprecated in Consul 1.9.1. See the [`-read-replica`](#_read_replica) flag instead.** - -- `-read-replica` ((#\_read_replica)) - This - flag is used to make the server not participate in the Raft quorum, and have it - only receive the data replication stream. This can be used to add read scalability - to a cluster in cases where a high volume of reads to servers are needed. - -## UI Options - -- `-ui` ((#\_ui)) - Enables the built-in web UI server and the required - HTTP routes. This eliminates the need to maintain the Consul web UI files separately - from the binary. - -- `-ui-dir` ((#\_ui_dir)) - This flag provides the directory containing - the Web UI resources for Consul. This will automatically enable the Web UI. The - directory must be readable to the agent. Starting with Consul version 0.7.0 and - later, the Web UI assets are included in the binary so this flag is no longer necessary; - specifying only the `-ui` flag is enough to enable the Web UI. Specifying both - the '-ui' and '-ui-dir' flags will result in an error. - -- `-ui-content-path` ((#\_ui\_content\_path)) - This flag provides the option - to change the path the Consul UI loads from and will be displayed in the browser. - By default, the path is `/ui/`, for example `http://localhost:8500/ui/`. Only alphanumerics, - `-`, and `_` are allowed in a custom path. `/v1/` is not allowed as it would overwrite - the API endpoint. - -{/* list of reference-style links */} - -[go-sockaddr]: https://godoc.org/github.com/hashicorp/go-sockaddr/template diff --git a/website/content/docs/agent/config/config-files.mdx b/website/content/docs/agent/config/config-files.mdx deleted file mode 100644 index 63e8137c4f06..000000000000 --- a/website/content/docs/agent/config/config-files.mdx +++ /dev/null @@ -1,2369 +0,0 @@ ---- -layout: docs -page_title: Agents - Configuration File Reference -description: >- - Use agent configuration files to assign attributes to agents and configure multiple agents at once. Learn about agent configuration file parameters and formatting with this reference page and sample code. ---- - -# Agents Configuration File Reference ((#configuration_files)) - -This topic describes the parameters for configuring Consul agents. For information about how to start Consul agents, refer to [Starting the Consul Agent](/consul/docs/agent#starting-the-consul-agent). - -## Overview - -You can create one or more files to configure the Consul agent on startup. We recommend -grouping similar configurations into separate files, such as ACL parameters, to make it -easier to manage configuration changes. Using external files may be easier than -configuring agents on the command-line when Consul is -being configured using a configuration management system. - -The configuration files are formatted as HCL or JSON. JSON formatted configs are -easily readable and editable by both humans and computers. 
JSON formatted -configuration consists of a single JSON object with multiple configuration keys -specified within it. - - - -```hcl -datacenter = "east-aws" -data_dir = "/opt/consul" -log_level = "INFO" -node_name = "foobar" -server = true -watches = [ - { - type = "checks" - handler = "/usr/bin/health-check-handler.sh" - } -] - -telemetry { - statsite_address = "127.0.0.1:2180" -} -``` - -```json -{ - "datacenter": "east-aws", - "data_dir": "/opt/consul", - "log_level": "INFO", - "node_name": "foobar", - "server": true, - "watches": [ - { - "type": "checks", - "handler": "/usr/bin/health-check-handler.sh" - } - ], - "telemetry": { - "statsite_address": "127.0.0.1:2180" - } -} -``` - - - -### Time-to-live values - -Consul uses the Go `time` package to parse all time-to-live (TTL) values used in Consul agent configuration files. Specify integer and float values as a string and include one or more of the following units of time: - -- `ns` -- `us` -- `µs` -- `ms` -- `s` -- `m` -- `h` - -Examples: - -- `'300ms'` -- `'1.5h'` -- `'2h45m'` - -Refer to the [formatting specification](https://golang.org/pkg/time/#ParseDuration) for additional information. - -## General parameters - -- `addresses` - This is a nested object that allows setting - bind addresses. In Consul 1.0 and later these can be set to a space-separated list - of addresses to bind to, or a [go-sockaddr] template that can potentially resolve to multiple addresses. - - `http`, `https` and `grpc` all support binding to a Unix domain socket. A - socket can be specified in the form `unix:///path/to/socket`. A new domain - socket will be created at the given path. If the specified file path already - exists, Consul will attempt to clear the file and create the domain socket - in its place. The permissions of the socket file are tunable via the - [`unix_sockets` config construct](#unix_sockets). - - When running Consul agent commands against Unix socket interfaces, use the - `-http-addr` argument to specify the path to the socket. You can also place - the desired values in the `CONSUL_HTTP_ADDR` environment variable. - - For TCP addresses, the environment variable value should be an IP address - _with the port_. For example: `10.0.0.1:8500` and not `10.0.0.1`. However, - ports are set separately in the [`ports`](#ports) structure when - defining them in a configuration file. - - The following keys are valid: - - - `dns` - The DNS server. Defaults to `client_addr` - - `http` - The HTTP API. Defaults to `client_addr` - - `https` - The HTTPS API. Defaults to `client_addr` - - `grpc` - The gRPC API. Defaults to `client_addr` - - `grpc_tls` - The gRPC API with TLS. Defaults to `client_addr` - -- `alt_domain` Equivalent to the [`-alt-domain` command-line flag](/consul/docs/agent/config/cli-flags#_alt_domain) - -- `audit` - Added in Consul 1.8, the audit object allow users to enable auditing - and configure a sink and filters for their audit logs. For more information, review the [audit log tutorial](/consul/tutorials/datacenter-operations/audit-logging). 
- - - - ```hcl - audit { - enabled = true - sink "My sink" { - type = "file" - format = "json" - path = "data/audit/audit.json" - delivery_guarantee = "best-effort" - rotate_duration = "24h" - rotate_max_files = 15 - rotate_bytes = 25165824 - } - } - ``` - - ```json - { - "audit": { - "enabled": true, - "sink": { - "My sink": { - "type": "file", - "format": "json", - "path": "data/audit/audit.json", - "delivery_guarantee": "best-effort", - "rotate_duration": "24h", - "rotate_max_files": 15, - "rotate_bytes": 25165824 - } - } - } - } - ``` - - - - The following sub-keys are available: - - - `enabled` - Controls whether Consul logs out each time a user - performs an operation. ACLs must be enabled to use this feature. Defaults to `false`. - - - `sink` - This object provides configuration for the destination to which - Consul will log auditing events. Sink is an object containing keys to sink objects, where the key is the name of the sink. - - - `type` - Type specifies what kind of sink this is. - The following keys are valid: - - `file` - Currently only file sinks are available, they take the following keys. - - `format` - Format specifies what format the events will - be emitted with. - The following keys are valid: - - `json` - Currently only json events are offered. - - `path` - The directory and filename to write audit events to. - - `delivery_guarantee` - Specifies - the rules governing how audit events are written. - The following keys are valid: - - `best-effort` - Consul only supports `best-effort` event delivery. - - `mode` - The permissions to set on the audit log files. - - `rotate_duration` - Specifies the - interval by which the system rotates to a new log file. At least one of `rotate_duration` or `rotate_bytes` - must be configured to enable audit logging. - - `rotate_max_files` - Defines the - limit that Consul should follow before it deletes old log files. - - `rotate_bytes` - Specifies how large an - individual log file can grow before Consul rotates to a new file. At least one of `rotate_bytes` or - `rotate_duration` must be configured to enable audit logging. - -- `autopilot` Added in Consul 0.8, this object allows a - number of sub-keys to be set which can configure operator-friendly settings for - Consul servers. When these keys are provided as configuration, they will only be - respected on bootstrapping. If they are not provided, the defaults will be used. - In order to change the value of these options after bootstrapping, you will need - to use the [Consul Operator Autopilot](/consul/commands/operator/autopilot) - command. For more information about Autopilot, review the [Autopilot tutorial](/consul/tutorials/datacenter-operations/autopilot-datacenter-operations). - - The following sub-keys are available: - - - `cleanup_dead_servers` - This controls the - automatic removal of dead server nodes periodically and whenever a new server - is added to the cluster. Defaults to `true`. - - - `last_contact_threshold` - Controls the - maximum amount of time a server can go without contact from the leader before - being considered unhealthy. Must be a duration value such as `10s`. Defaults - to `200ms`. - - - `max_trailing_logs` - Controls the maximum number - of log entries that a server can trail the leader by before being considered - unhealthy. Defaults to 250. - - - `min_quorum` - Sets the minimum number of servers necessary - in a cluster. Autopilot will stop pruning dead servers when this minimum is reached. There is no default. 
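  For example, a five-server cluster that should never prune itself below three voting members might combine the sub-keys above in a sketch like the following (the values shown are illustrative, not defaults):

  ```hcl
  autopilot {
    cleanup_dead_servers   = true
    last_contact_threshold = "200ms"
    min_quorum             = 3
  }
  ```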
- - - `server_stabilization_time` - Controls - the minimum amount of time a server must be stable in the 'healthy' state before - being added to the cluster. Only takes effect if all servers are running Raft - protocol version 3 or higher. Must be a duration value such as `30s`. Defaults - to `10s`. - - - `redundancy_zone_tag` - - This controls the [`node_meta`](#node_meta) key to use when Autopilot is separating - servers into zones for redundancy. Only one server in each zone can be a voting - member at one time. If left blank (the default), this feature will be disabled. - - - `disable_upgrade_migration` - - If set to `true`, this setting will disable Autopilot's upgrade migration strategy - in Consul Enterprise of waiting until enough newer-versioned servers have been - added to the cluster before promoting any of them to voters. Defaults to `false`. - - - `upgrade_version_tag` - - The node_meta tag to use for version info when performing upgrade migrations. - If this is not set, the Consul version will be used. - -- `auto_config` This object allows setting options for the `auto_config` feature. - - The following sub-keys are available: - - - `enabled` (Defaults to `false`) This option enables `auto_config` on a client - agent. When starting up but before joining the cluster, the client agent will - make an RPC to the configured server addresses to request configuration settings, - such as its `agent` ACL token, TLS certificates, Gossip encryption key as well - as other configuration settings. These configurations get merged in as defaults - with any user-supplied configuration on the client agent able to override them. - The initial RPC uses a JWT specified with either `intro_token`, - `intro_token_file` or the `CONSUL_INTRO_TOKEN` environment variable to authorize - the request. How the JWT token is verified is controlled by the `auto_config.authorizer` - object available for use on Consul servers. Enabling this option also enables - service mesh because it is vital for `auto_config`, more specifically the service mesh CA - and certificates infrastructure. - - ~> **Warning:** Enabling `auto_config` conflicts with the [`auto_encrypt.tls`](#tls) feature. - Only one option may be specified. - - - `intro_token` (Defaults to `""`) This specifies the JWT to use for the initial - `auto_config` RPC to the Consul servers. This can be overridden with the - `CONSUL_INTRO_TOKEN` environment variable - - - `intro_token_file` (Defaults to `""`) This specifies a file containing the JWT - to use for the initial `auto_config` RPC to the Consul servers. This token - from this file is only loaded if the `intro_token` configuration is unset as - well as the `CONSUL_INTRO_TOKEN` environment variable - - - `server_addresses` (Defaults to `[]`) This specifies the addresses of servers in - the local datacenter to use for the initial RPC. These addresses support - [Cloud Auto-Joining](/consul/docs/agent/config/cli-flags#cloud-auto-joining) and can optionally include a port to - use when making the outbound connection. If no port is provided, the `server_port` - will be used. - - - `dns_sans` (Defaults to `[]`) This is a list of extra DNS SANs to request in the - client agent's TLS certificate. The `localhost` DNS SAN is always requested. - - - `ip_sans` (Defaults to `[]`) This is a list of extra IP SANs to request in the - client agent's TLS certificate. The `::1` and `127.0.0.1` IP SANs are always requested. 
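  Taken together, a minimal client-side `auto_config` block using the keys above might look like the following sketch. The token file path, server addresses, and SANs are placeholders rather than values from this documentation:

  ```hcl
  auto_config {
    enabled          = true
    intro_token_file = "/etc/consul/intro-token"
    server_addresses = ["10.0.0.10:8300", "10.0.0.11:8300"]
    dns_sans         = ["client-1.node.consul"]
    ip_sans          = ["10.0.0.52"]
  }
  ```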
- - - `authorization` This object controls how a Consul server will authorize `auto_config` - requests and in particular how to verify the JWT intro token. - - - `enabled` (Defaults to `false`) This option enables `auto_config` authorization - capabilities on the server. - - - `static` This object controls configuring the static authorizer setup in the Consul - configuration file. Almost all sub-keys are identical to those provided by the [JWT - Auth Method](/consul/docs/security/acl/auth-methods/jwt). - - - `jwt_validation_pub_keys` (Defaults to `[]`) A list of PEM-encoded public keys - to use to authenticate signatures locally. - - Exactly one of `jwks_url` `jwt_validation_pub_keys`, or `oidc_discovery_url` is required. - - - `oidc_discovery_url` (Defaults to `""`) The OIDC Discovery URL, without any - .well-known component (base path). - - Exactly one of `jwks_url` `jwt_validation_pub_keys`, or `oidc_discovery_url` is required. - - - `oidc_discovery_ca_cert` (Defaults to `""`) PEM encoded CA cert for use by the TLS - client used to talk with the OIDC Discovery URL. NOTE: Every line must end - with a newline (`\n`). If not set, system certificates are used. - - - `jwks_url` (Defaults to `""`) The JWKS URL to use to authenticate signatures. - - Exactly one of `jwks_url` `jwt_validation_pub_keys`, or `oidc_discovery_url` is required. - - - `jwks_ca_cert` (Defaults to `""`) PEM encoded CA cert for use by the TLS client - used to talk with the JWKS URL. NOTE: Every line must end with a newline - (`\n`). If not set, system certificates are used. - - - `claim_mappings` (Defaults to `(map[string]string)` Mappings of claims (key) that - will be copied to a metadata field (value). Use this if the claim you are capturing - is singular (such as an attribute). - - When mapped, the values can be any of a number, string, or boolean and will - all be stringified when returned. - - - `list_claim_mappings` (Defaults to `(map[string]string)`) Mappings of claims (key) - will be copied to a metadata field (value). Use this if the claim you are capturing - is list-like (such as groups). - - When mapped, the values in each list can be any of a number, string, or - boolean and will all be stringified when returned. - - - `jwt_supported_algs` (Defaults to `["RS256"]`) JWTSupportedAlgs is a list of - supported signing algorithms. - - - `bound_audiences` (Defaults to `[]`) List of `aud` claims that are valid for - login; any match is sufficient. - - - `bound_issuer` (Defaults to `""`) The value against which to match the `iss` - claim in a JWT. - - - `expiration_leeway` (Defaults to `"0s"`) Duration of leeway when - validating expiration of a token to account for clock skew. Defaults to 150s - (2.5 minutes) if set to 0s and can be disabled if set to -1ns. - - - `not_before_leeway` (Defaults to `"0s"`) Duration of leeway when - validating not before values of a token to account for clock skew. Defaults - to 150s (2.5 minutes) if set to 0s and can be disabled if set to -1. - - - `clock_skew_leeway` (Defaults to `"0s"`) Duration of leeway when - validating all claims to account for clock skew. Defaults to 60s (1 minute) - if set to 0s and can be disabled if set to -1ns. - - - `claim_assertions` (Defaults to `[]`) List of assertions about the mapped - claims required to authorize the incoming RPC request. The syntax uses - [github.com/hashicorp/go-bexpr](https://github.com/hashicorp/go-bexpr) which is shared with the - [API filtering feature](/consul/api-docs/features/filtering). 
For example, the following - configurations when combined will ensure that the JWT `sub` matches the node - name requested by the client. - - - - ```hcl - claim_mappings { - sub = "node_name" - } - claim_assertions = [ - "value.node_name == \"${node}\"" - ] - ``` - - ```json - { - "claim_mappings": { - "sub": "node_name" - }, - "claim_assertions": ["value.node_name == \"${node}\""] - } - ``` - - - - The assertions are lightly templated using [HIL syntax](https://github.com/hashicorp/hil) - to interpolate some values from the RPC request. The list of variables that can be interpolated - are: - - - `node` - The node name the client agent is requesting. - - - `segment` - The network segment name the client is requesting. - - - `partition` - The admin partition name the client is requesting. - -- `auto_reload_config` Equivalent to the [`-auto-reload-config` command-line flag](/consul/docs/agent/config/cli-flags#_auto_reload_config). - -- `bind_addr` Equivalent to the [`-bind` command-line flag](/consul/docs/agent/config/cli-flags#_bind). - - This parameter can be set to a go-sockaddr template that resolves to a single - address. Special characters such as backslashes `\` or double quotes `"` - within a double quoted string value must be escaped with a backslash `\`. - Some example templates: - - - - ```hcl - bind_addr = "{{ GetPrivateInterfaces | include \"network\" \"10.0.0.0/8\" | attr \"address\" }}" - ``` - - ```json - { - "bind_addr": "{{ GetPrivateInterfaces | include \"network\" \"10.0.0.0/8\" | attr \"address\" }}" - } - ``` - - - -- `cache` configuration for client agents. When an `?index` query parameter is specified but '?cached' is not appended in a [streaming backend call](/consul/api-docs/features/blocking#streaming-backend), Consul bypasses these configuration values. The configurable values are the following: - - - `entry_fetch_max_burst` The size of the token bucket used to recharge the rate-limit per - cache entry. The default value is 2 and means that when cache has not been updated - for a long time, 2 successive queries can be made as long as the rate-limit is not - reached. - - - `entry_fetch_rate` configures the rate-limit at which the cache may refresh a single - entry. On a cluster with many changes/s, watching changes in the cache might put high - pressure on the servers. This ensures the number of requests for a single cache entry - will never go beyond this limit, even when a given service changes every 1/100s. - Since this is a per cache entry limit, having a highly unstable service will only rate - limit the watched on this service, but not the other services/entries. - The value is strictly positive, expressed in queries per second as a float, - 1 means 1 query per second, 0.1 mean 1 request every 10s maximum. - The default value is "No limit" and should be tuned on large - clusters to avoid performing too many RPCs on entries changing a lot. - -- `check_update_interval` ((#check_update_interval)) - This interval controls how often check output from checks in a steady state is - synchronized with the server. By default, this is set to 5 minutes ("5m"). Many - checks which are in a steady state produce slightly different output per run (timestamps, - etc) which cause constant writes. This configuration allows deferring the sync - of check output for a given interval to reduce write pressure. If a check ever - changes state, the new state and associated output is synchronized immediately. - To disable this behavior, set the value to "0s". 
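  For instance, deferring steady-state check output syncing to every ten minutes is a single setting (a minimal sketch; as noted above, `"0s"` disables the deferral entirely):

  ```hcl
  check_update_interval = "10m"
  ```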
- -- `client_addr` Equivalent to the [`-client` command-line flag](/consul/docs/agent/config/cli-flags#_client). - -- `config_entries` This object allows setting options for centralized config entries. - - The following sub-keys are available: - - - `bootstrap` ((#config_entries_bootstrap)) - This is a list of inlined config entries to insert into the state store when - the Consul server gains leadership. This option is only applicable to server - nodes. Each bootstrap entry will be created only if it does not exist. When reloading, - any new entries that have been added to the configuration will be processed. - See the [configuration entry docs](/consul/docs/agent/config-entries) for more - details about the contents of each entry. - -- `datacenter` Equivalent to the [`-datacenter` command-line flag](/consul/docs/agent/config/cli-flags#_datacenter). - -- `data_dir` Equivalent to the [`-data-dir` command-line flag](/consul/docs/agent/config/cli-flags#_data_dir). - -- `default_intention_policy` Controls how service-to-service traffic is authorized - in the absence of specific intentions. - Can be set to `allow`, `deny`, or left empty to default to [`acl.default_policy`](#acl_default_policy). - -- `disable_anonymous_signature` Disables providing an anonymous - signature for de-duplication with the update check. See [`disable_update_check`](#disable_update_check). - -- `disable_http_unprintable_char_filter` Defaults to false. Consul 1.0.3 fixed a potential security vulnerability where malicious users could craft KV keys with unprintable chars that would confuse operators using the CLI or UI into taking wrong actions. Users who had data written in older versions of Consul that did not have this restriction will be unable to delete those values by default in 1.0.3 or later. This setting enables those users to **temporarily** disable the filter such that delete operations can work on those keys again to get back to a healthy state. It is strongly recommended that this filter is not disabled permanently as it exposes the original security vulnerability. - -- `disable_remote_exec` Disables support for remote execution. When set to true, the agent will ignore - any incoming remote exec requests. In versions of Consul prior to 0.8, this defaulted - to false. In Consul 0.8 the default was changed to true, to make remote exec opt-in - instead of opt-out. - -- `disable_update_check` Disables automatic checking for security bulletins and new version releases. This is disabled in Consul Enterprise. - -- `discard_check_output` Discards the output of health checks before storing them. This reduces the number of writes to the Consul raft log in environments where health checks have volatile output like timestamps, process ids, ... - -- `discovery_max_stale` - Enables stale requests for all service discovery HTTP endpoints. This is - equivalent to the [`max_stale`](#max_stale) configuration for DNS requests. If this value is zero (default), all service discovery HTTP endpoints are forwarded to the leader. If this value is greater than zero, any Consul server can handle the service discovery request. If a Consul server is behind the leader by more than `discovery_max_stale`, the query will be re-evaluated on the leader to get more up-to-date results. Consul agents also add a new `X-Consul-Effective-Consistency` response header which indicates if the agent did a stale read. 
`discover-max-stale` was introduced in Consul 1.0.7 as a way for Consul operators to force stale requests from clients at the agent level, and defaults to zero which matches default consistency behavior in earlier Consul versions. - -- `enable_agent_tls_for_checks` When set, uses a subset of the agent's TLS configuration (`key_file`, - `cert_file`, `ca_file`, `ca_path`, and `server_name`) to set up the client for HTTP or gRPC health checks. This allows services requiring 2-way TLS to be checked using the agent's credentials. This was added in Consul 1.0.1 and defaults to false. - -- `enable_central_service_config` When set, the Consul agent will look for any - [centralized service configuration](/consul/docs/agent/config-entries) - that match a registering service instance. If it finds any, the agent will merge the centralized defaults with the service instance configuration. This allows for things like service protocol or proxy configuration to be defined centrally and inherited by any affected service registrations. - This defaults to `false` in versions of Consul prior to 1.9.0, and defaults to `true` in Consul 1.9.0 and later. - -- `enable_debug` (boolean, default is `false`): When set to `true`, enables Consul to report additional debugging information, including runtime profiling (`pprof`) data. This setting is only required for clusters without ACL [enabled](#acl_enabled). If you change this setting, you must restart the agent for the change to take effect. - -- `enable_script_checks` Equivalent to the [`-enable-script-checks` command-line flag](/consul/docs/agent/config/cli-flags#_enable_script_checks). - - ACLs must be enabled for agents and the `enable_script_checks` option must be set to `true` to enable script checks in Consul 0.9.0 and later. See [Registering and Querying Node Information](/consul/docs/security/acl/acl-rules#registering-and-querying-node-information) for related information. - - ~> **Security Warning:** Enabling script checks in some configurations may introduce a known remote execution vulnerability targeted by malware. We strongly recommend `enable_local_script_checks` instead. Refer to the following article for additional guidance: [_Protecting Consul from RCE Risk in Specific Configurations_](https://www.hashicorp.com/blog/protecting-consul-from-rce-risk-in-specific-configurations) - for more details. - -- `enable_local_script_checks` Equivalent to the [`-enable-local-script-checks` command-line flag](/consul/docs/agent/config/cli-flags#_enable_local_script_checks). - -- `disable_keyring_file` - Equivalent to the - [`-disable-keyring-file` command-line flag](/consul/docs/agent/config/cli-flags#_disable_keyring_file). - -- `disable_coordinates` - Disables sending of [network coordinates](/consul/docs/architecture/coordinates). - When network coordinates are disabled the `near` query param will not work to sort the nodes, - and the [`consul rtt`](/consul/commands/rtt) command will not be able to provide round trip time between nodes. - -- `http_config` This object allows setting options for the HTTP API and UI. - - The following sub-keys are available: - - - `block_endpoints` - This object is a list of HTTP API endpoint prefixes to block on the agent, and - defaults to an empty list, meaning all endpoints are enabled. Any endpoint that - has a common prefix with one of the entries on this list will be blocked and - will return a 403 response code when accessed. 
For example, to block all of the - V1 ACL endpoints, set this to `["/v1/acl"]`, which will block `/v1/acl/create`, - `/v1/acl/update`, and the other ACL endpoints that begin with `/v1/acl`. This - only works with API endpoints, not `/ui` or `/debug`, those must be disabled - with their respective configuration options. Any CLI commands that use disabled - endpoints will no longer function as well. For more general access control, Consul's - [ACL system](/consul/tutorials/security/access-control-setup-production) - should be used, but this option is useful for removing access to HTTP API endpoints - completely, or on specific agents. This is available in Consul 0.9.0 and later. - - - `response_headers` This object allows adding headers to the HTTP API and UI responses. For example, the following config can be used to enable [CORS](https://en.wikipedia.org/wiki/Cross-origin_resource_sharing) on the HTTP API endpoints: - - - - ```hcl - http_config { - response_headers { - Access-Control-Allow-Origin = "*" - } - } - ``` - - ```json - { - "http_config": { - "response_headers": { - "Access-Control-Allow-Origin": "*" - } - } - } - ``` - - - - - `allow_write_http_from` This object is a list of networks in CIDR notation (eg "127.0.0.0/8") that are allowed to call the agent write endpoints. It defaults to an empty list, which means all networks are allowed. This is used to make the agent read-only, except for select ip ranges. - To block write calls from anywhere, use `[ "255.255.255.255/32" ]`. - To only allow write calls from localhost, use `[ "127.0.0.0/8" ]` - To only allow specific IPs, use `[ "10.0.0.1/32", "10.0.0.2/32" ]` - - - `use_cache` ((#http_config_use_cache)) Defaults to true. If disabled, the agent won't be using [agent caching](/consul/api-docs/features/caching) to answer the request. Even when the url parameter is provided. - - - `max_header_bytes` This setting controls the maximum number of bytes the consul http server will read parsing the request header's keys and values, including the request line. It does not limit the size of the request body. If zero, or negative, http.DefaultMaxHeaderBytes is used, which equates to 1 Megabyte. - -- `leave_on_terminate` If enabled, when the agent receives a TERM signal, it will send a `Leave` message to the rest of the cluster and gracefully leave. The default behavior for this feature varies based on whether or not the agent is running as a client or a server (prior to Consul 0.7 the default value was unconditionally set to `false`). On agents in client-mode, this defaults to `true` and for agents in server-mode, this defaults to `false`. - -- `license_path` This specifies the path to a file that contains the Consul Enterprise license. Alternatively the license may also be specified in either the `CONSUL_LICENSE` or `CONSUL_LICENSE_PATH` environment variables. See the [licensing documentation](/consul/docs/enterprise/license/overview) for more information about Consul Enterprise license management. Added in versions 1.10.0, 1.9.7 and 1.8.13. Prior to version 1.10.0 the value may be set for all agents to facilitate forwards compatibility with 1.10 but will only actually be used by client agents. - -- `limits`: This block specifies various types of limits that the Consul server agent enforces. - - - `http_max_conns_per_client` - Configures a limit of how many concurrent TCP connections a single client IP address is allowed to open to the agent's HTTP(S) server. This affects the HTTP(S) servers in both client and server agents. 
Default value is `200`. - - `https_handshake_timeout` - Configures the limit for how long the HTTPS server in both client and server agents will wait for a client to complete a TLS handshake. This should be kept conservative as it limits how many connections an unauthenticated attacker can open if `verify_incoming` is being used to authenticate clients (strongly recommended in production). Default value is `5s`. - - `request_limits` - This object specifies configurations that limit the rate of RPC and gRPC requests on the Consul server. Limiting the rate of gRPC and RPC requests also limits HTTP requests to the Consul server. - - `mode` - String value that specifies an action to take if the rate of requests exceeds the limit. You can specify the following values: - - `permissive`: The server continues to allow requests and records an error in the logs. - - `enforcing`: The server stops accepting requests and records an error in the logs. - - `disabled`: Limits are not enforced or tracked. This is the default value for `mode`. - - `read_rate` - Integer value that specifies the number of read requests per second. Default is `-1` which represents infinity. - - `write_rate` - Integer value that specifies the number of write requests per second. Default is `-1` which represents infinity. - - `rpc_handshake_timeout` - Configures the limit for how long servers will wait after a client TCP connection is established before they complete the connection handshake. When TLS is used, the same timeout applies to the TLS handshake separately from the initial protocol negotiation. All Consul clients should perform this immediately on establishing a new connection. This should be kept conservative as it limits how many connections an unauthenticated attacker can open if `verify_incoming` is being used to authenticate clients (strongly recommended in production). When `verify_incoming` is true on servers, this limits how long the connection socket and associated goroutines will be held open before the client successfully authenticates. Default value is `5s`. - - `rpc_client_timeout` - Configures the limit for how long a client is allowed to read from an RPC connection. This is used to set an upper bound for calls to eventually terminate so that RPC connections are not held indefinitely. Blocking queries can override this timeout. Default is `60s`. - - `rpc_max_conns_per_client` - Configures a limit of how many concurrent TCP connections a single source IP address is allowed to open to a single server. It affects both client connections and other server connections. In general Consul clients multiplex many RPC calls over a single TCP connection so this can typically be kept low. It needs to be more than one though since servers open at least one additional connection for raft RPC, possibly more for WAN federation when using network areas, and snapshot requests from clients run over a separate TCP connection. A reasonably low limit significantly reduces the ability of an unauthenticated attacker to consume unbounded resources by holding open many connections. You may need to increase this if WAN federated servers connect via proxies or NAT gateways or similar, causing many legitimate connections from a single source IP. Default value is `100`, which is designed to be extremely conservative to limit issues with certain deployment patterns. Most deployments can probably reduce this safely.
100 connections on modern server hardware should not cause a significant impact on resource usage from an unauthenticated attacker though. - - `rpc_rate` - Configures the RPC rate limiter on Consul _clients_ by setting the maximum request rate that this agent is allowed to make for RPC requests to Consul servers, in requests per second. Defaults to infinite, which disables rate limiting. - - `rpc_max_burst` - The size of the token bucket used to recharge the RPC rate limiter on Consul _clients_. Defaults to 1000 tokens, and each token is good for a single RPC call to a Consul server. See https://en.wikipedia.org/wiki/Token_bucket for more details about how token bucket rate limiters operate. - - `kv_max_value_size` - **(Advanced)** Configures the maximum number of bytes for a kv request body to the [`/v1/kv`](/consul/api-docs/kv) endpoint. This limit defaults to [raft's](https://github.com/hashicorp/raft) suggested max size (512KB). **Note that tuning these improperly can cause Consul to fail in unexpected ways**, as it may potentially affect leadership stability and prevent timely heartbeat signals by increasing RPC IO duration. This option affects the txn endpoint too, but Consul 1.7.2 introduced `txn_max_req_len` which is the preferred way to set the limit for the txn endpoint. If both limits are set, the higher one takes precedence. - - `txn_max_req_len` - **(Advanced)** Configures the maximum number of bytes for a transaction request body to the [`/v1/txn`](/consul/api-docs/txn) endpoint. This limit defaults to [raft's](https://github.com/hashicorp/raft) suggested max size (512KB). **Note that tuning these improperly can cause Consul to fail in unexpected ways**, as it may potentially affect leadership stability and prevent timely heartbeat signals by increasing RPC IO duration. - -- `default_query_time` Equivalent to the [`-default-query-time` command-line flag](/consul/docs/agent/config/cli-flags#_default_query_time). - -- `max_query_time` Equivalent to the [`-max-query-time` command-line flag](/consul/docs/agent/config/cli-flags#_max_query_time). - -- `peering` This object allows setting options for cluster peering. - - The following sub-keys are available: - - - `enabled` ((#peering_enabled)) (Defaults to `true`) Controls whether cluster peering is enabled. - When disabled, the UI won't show peering, all peering APIs will return - an error, any peerings already stored in Consul will be ignored (but they will not be deleted), - and all peering connections from other clusters will be rejected. This was added in Consul 1.13.0. - -- `partition` - This flag is used to set - the name of the admin partition the agent belongs to. An agent can only join - and communicate with other agents within its admin partition. Review the - [Admin Partitions documentation](/consul/docs/enterprise/admin-partitions) for more - details. By default, this is an empty string, which is the `default` admin - partition. This cannot be set on a server agent. - - ~> **Warning:** The `partition` option cannot be used with either the - [`segment`](#segment-2) option or [`-segment`](/consul/docs/agent/config/cli-flags#_segment) flag. - -- `performance` Available in Consul 0.7 and later, this is a nested object that allows tuning the performance of different subsystems in Consul. See the [Server Performance](/consul/docs/install/performance) documentation for more details.
The following parameters are available: - - - `leave_drain_time` - A duration that a server will dwell during a graceful leave in order to allow requests to be retried against other Consul servers. Under normal circumstances, this can prevent clients from experiencing "no leader" errors when performing a rolling update of the Consul servers. This was added in Consul 1.0. Must be a duration value such as 10s. Defaults to 5s. - - - `raft_multiplier` - An integer multiplier used by Consul servers to scale key Raft timing parameters. Omitting this value or setting it to 0 uses default timing described below. Lower values are used to tighten timing and increase sensitivity while higher values relax timings and reduce sensitivity. Tuning this affects the time it takes Consul to detect leader failures and to perform leader elections, at the expense of requiring more network and CPU resources for better performance. - - By default, Consul will use a lower-performance timing that's suitable - for [minimal Consul servers](/consul/docs/install/performance#minimum), currently equivalent - to setting this to a value of 5 (this default may be changed in future versions of Consul, - depending if the target minimum server profile changes). Setting this to a value of 1 will - configure Raft to its highest-performance mode, equivalent to the default timing of Consul - prior to 0.7, and is recommended for [production Consul servers](/consul/docs/install/performance#production). - - See the note on [last contact](/consul/docs/install/performance#production-server-requirements) timing for more - details on tuning this parameter. The maximum allowed value is 10. - - - `rpc_hold_timeout` - A duration that a client - or server will retry internal RPC requests during leader elections. Under normal - circumstances, this can prevent clients from experiencing "no leader" errors. - This was added in Consul 1.0. Must be a duration value such as 10s. Defaults - to 7s. - - - `grpc_keepalive_interval` - A duration that determines the frequency that Consul servers send keep-alive messages to inactive gRPC clients. Configure this setting to modify how quickly Consul detects and removes improperly closed xDS or peering connections. Default is `30s`. - - - `grpc_keepalive_timeout` - A duration that determines how long a Consul server waits for a reply to a keep-alive message. If the server does not receive a reply before the end of the duration, Consul flags the gRPC connection as unhealthy and forcibly removes it. Defaults to `20s`. - -- `pid_file` Equivalent to the [`-pid-file` command line flag](/consul/docs/agent/config/cli-flags#_pid_file). - -- `ports` This is a nested object that allows setting the bind ports for the following keys: - - - `dns` ((#dns_port)) - The DNS server, -1 to disable. Default 8600. - TCP and UDP. - - `http` ((#http_port)) - The HTTP API, -1 to disable. Default 8500. - TCP only. - - `https` ((#https_port)) - The HTTPS API, -1 to disable. Default -1 - (disabled). **We recommend using `8501`** for `https` by convention as some tooling - will work automatically with this. - - `grpc` ((#grpc_port)) - The gRPC API, -1 to disable. Default -1 (disabled). - **We recommend using `8502` for `grpc`** as your conventional gRPC port number, as it allows some - tools to work automatically. This parameter is set to `8502` by default when the agent runs - in `-dev` mode. The `grpc` port only supports plaintext traffic starting in Consul 1.14. 
- Refer to `grpc_tls` for more information on configuring a TLS-enabled port. - - `grpc_tls` ((#grpc_tls_port)) - The gRPC API with TLS connections, -1 to disable. gRPC_TLS is enabled by default on port 8503 for Consul servers. - **We recommend using `8503` for `grpc_tls`** as your conventional gRPC port number, as it allows some - tools to work automatically. `grpc_tls` is always guaranteed to be encrypted. Both `grpc` and `grpc_tls` - can be configured at the same time, but they may not utilize the same port number. This field was added in Consul 1.14. - - `serf_lan` ((#serf_lan_port)) - The Serf LAN port. Default 8301. TCP - and UDP. Equivalent to the [`-serf-lan-port` command line flag](/consul/docs/agent/config/cli-flags#_serf_lan_port). - - `serf_wan` ((#serf_wan_port)) - The Serf WAN port. Default 8302. - Equivalent to the [`-serf-wan-port` command line flag](/consul/docs/agent/config/cli-flags#_serf_wan_port). Set - to -1 to disable. **Note**: this will disable WAN federation which is not recommended. - Various catalog and WAN related endpoints will return errors or empty results. - TCP and UDP. - - `server` ((#server_rpc_port)) - Server RPC address. Default 8300. TCP - only. - - `sidecar_min_port` ((#sidecar_min_port)) - Inclusive minimum port number - to use for automatically assigned [sidecar service registrations](/consul/docs/connect/proxies/deploy-sidecar-services). - Default 21000. Set to `0` to disable automatic port assignment. - - `sidecar_max_port` ((#sidecar_max_port)) - Inclusive maximum port number - to use for automatically assigned [sidecar service registrations](/consul/docs/connect/proxies/deploy-sidecar-services). - Default 21255. Set to `0` to disable automatic port assignment. - - `expose_min_port` ((#expose_min_port)) - Inclusive minimum port number - to use for automatically assigned [exposed check listeners](/consul/docs/connect/proxies/proxy-config-reference#expose-paths-configuration-reference). - Default 21500. Set to `0` to disable automatic port assignment. - - `expose_max_port` ((#expose_max_port)) - Inclusive maximum port number - to use for automatically assigned [exposed check listeners](/consul/docs/connect/proxies/proxy-config-reference#expose-paths-configuration-reference). - Default 21755. Set to `0` to disable automatic port assignment. - -- `primary_datacenter` - This designates the datacenter - which is authoritative for ACL information, intentions and is the root Certificate - Authority for service mesh. It must be provided to enable ACLs. All servers and datacenters - must agree on the primary datacenter. Setting it on the servers is all you need - for cluster-level enforcement, but for the APIs to forward properly from the clients, - it must be set on them too. In Consul 0.8 and later, this also enables agent-level - enforcement of ACLs. - -- `primary_gateways` Equivalent to the [`-primary-gateway` - command-line flag](/consul/docs/agent/config/cli-flags#_primary_gateway). Takes a list of addresses to use as the - mesh gateways for the primary datacenter when authoritative replicated catalog - data is not present. Discovery happens every [`primary_gateways_interval`](#primary_gateways_interval) - until at least one primary mesh gateway is discovered. This was added in Consul - 1.8.0. - -- `primary_gateways_interval` Time to wait - between [`primary_gateways`](#primary_gateways) discovery attempts. Defaults to - 30s. This was added in Consul 1.8.0. 
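For illustration only, the primary datacenter and primary gateway discovery settings described above could be combined along these lines; the datacenter name and gateway addresses below are placeholders, not recommended values:

```hcl
# Sketch: point a secondary datacenter's servers at the primary's mesh gateways.
primary_datacenter        = "dc1"
primary_gateways          = ["10.0.0.1:8443", "10.0.0.2:8443"]
primary_gateways_interval = "30s"
```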
- -- `protocol` ((#protocol)) Equivalent to the [`-protocol` command-line - flag](/consul/docs/agent/config/cli-flags#_protocol). - -- `reap` This controls Consul's automatic reaping of child processes, - which is useful if Consul is running as PID 1 in a Docker container. If this isn't - specified, then Consul will automatically reap child processes if it detects it - is running as PID 1. If this is set to true or false, then it controls reaping - regardless of Consul's PID (forces reaping on or off, respectively). This option - was removed in Consul 0.7.1. For later versions of Consul, you will need to reap - processes using a wrapper, please see the [Consul Docker image entry point script](https://github.com/hashicorp/docker-consul/blob/master/0.X/docker-entrypoint.sh) - for an example. If you are using Docker 1.13.0 or later, you can use the new `--init` - option of the `docker run` command and docker will enable an init process with - PID 1 that reaps child processes for the container. More info on [Docker docs](https://docs.docker.com/engine/reference/commandline/run/#options). - -- `reconnect_timeout` This controls how long it - takes for a failed node to be completely removed from the cluster. This defaults - to 72 hours and it is recommended that this is set to at least double the maximum - expected recoverable outage time for a node or network partition. WARNING: Setting - this time too low could cause Consul servers to be removed from quorum during an - extended node failure or partition, which could complicate recovery of the cluster. - The value is a time with a unit suffix, which can be "s", "m", "h" for seconds, - minutes, or hours. The value must be >= 8 hours. - -- `reconnect_timeout_wan` This is the WAN equivalent - of the [`reconnect_timeout`](#reconnect_timeout) parameter, which controls - how long it takes for a failed server to be completely removed from the WAN pool. - This also defaults to 72 hours, and must be >= 8 hours. - -- `recursors` This flag provides addresses of upstream DNS - servers that are used to recursively resolve queries if they are not inside the - service domain for Consul. For example, a node can use Consul directly as a DNS - server, and if the record is outside of the "consul." domain, the query will be - resolved upstream. As of Consul 1.0.1 recursors can be provided as IP addresses - or as go-sockaddr templates. IP addresses are resolved in order, and duplicates - are ignored. - -- `rpc` configuration for Consul servers. - - - `enable_streaming` ((#rpc_enable_streaming)) defaults to true. If set to false it will disable - the gRPC subscribe endpoint on a Consul Server. All - servers in all federated datacenters must have this enabled before any client can use - [`use_streaming_backend`](#use_streaming_backend). - -- `reporting` - This option allows options for HashiCorp reporting. - - `license` - The license object allows users to control automatic reporting of license utilization metrics to HashiCorp. - - `enabled`: (Defaults to `true`) Enables automatic license utilization reporting. - -- `segment` - Equivalent to the [`-segment` command-line flag](/consul/docs/agent/config/cli-flags#_segment). - - ~> **Warning:** The `segment` option cannot be used with the [`partition`](#partition-1) option. - -- `segments` - (Server agents only) This is a list of nested objects - that specifies user-defined network segments, not including the `` segment, which is - created automatically. 
Refer to the [network segments documentation](/consul/docs/enterprise/network-segments/create-network-segment) for additional information. - - - `name` ((#segment_name)) - The name of the segment. Must be a string - between 1 and 64 characters in length. - - `bind` ((#segment_bind)) - The bind address to use for the segment's - gossip layer. Defaults to the [`-bind`](#_bind) value if not provided. - - `port` ((#segment_port)) - The port to use for the segment's gossip - layer (required). - - `advertise` ((#segment_advertise)) - The advertise address to use for - the segment's gossip layer. Defaults to the [`-advertise`](/consul/docs/agent/config/cli-flags#_advertise) value - if not provided. - - `rpc_listener` ((#segment_rpc_listener)) - If true, a separate RPC - listener will be started on this segment's [`-bind`](/consul/docs/agent/config/cli-flags#_bind) address on the rpc - port. Only valid if the segment's bind address differs from the [`-bind`](/consul/docs/agent/config/cli-flags#_bind) - address. Defaults to false. - -- `server` Equivalent to the [`-server` command-line flag](/consul/docs/agent/config/cli-flags#_server). - -- `server_rejoin_age_max` - controls the allowed maximum age of a stale server attempting to rejoin a cluster. - If the server has not run during this period, it will refuse to start up again until an operator intervenes by manually deleting the `server_metadata.json` - file located in the data dir. - This is to protect clusters from instability caused by decommissioned servers accidentally being started again. - Note: the default value is 168h (equal to 7d) and the minimum value is 6h. - -- `non_voting_server` - **This field is deprecated in Consul 1.9.1. See the [`read_replica`](#read_replica) field instead.** - -- `read_replica` - Equivalent to the [`-read-replica` command-line flag](/consul/docs/agent/config/cli-flags#_read_replica). - -- `session_ttl_min` The minimum allowed session TTL. This ensures sessions are not created with TTLs - shorter than the specified limit. It is recommended to keep this limit at or above - the default to encourage clients to send infrequent heartbeats. Defaults to 10s. - -- `skip_leave_on_interrupt` This is similar - to [`leave_on_terminate`](#leave_on_terminate) but only affects interrupt handling. - When Consul receives an interrupt signal (such as hitting Control-C in a terminal), - Consul will gracefully leave the cluster. Setting this to `true` disables that - behavior. The default behavior for this feature varies based on whether or not - the agent is running as a client or a server (prior to Consul 0.7 the default value - was unconditionally set to `false`). On agents in client-mode, this defaults to - `false` and for agents in server-mode, this defaults to `true` (i.e. Ctrl-C on - a server will keep the server in the cluster and therefore quorum, and Ctrl-C on - a client will gracefully leave). - -- `translate_wan_addrs` If set to true, Consul - will prefer a node's configured [WAN address](/consul/docs/agent/config/cli-flags#_advertise-wan) - when servicing DNS and HTTP requests for a node in a remote datacenter. This allows - the node to be reached within its own datacenter using its local address, and reached - from other datacenters using its WAN address, which is useful in hybrid setups - with mixed networks. This is disabled by default.
- - Starting in Consul 0.7, node addresses in responses to HTTP requests will also prefer a - node's configured [WAN address](/consul/docs/agent/config/cli-flags#_advertise-wan) when querying for a node in a remote - datacenter. An [`X-Consul-Translate-Addresses`](/consul/api-docs/api-structure#translated-addresses) header - will be present on all responses when translation is enabled to help clients know that the addresses - may be translated. The `TaggedAddresses` field in responses also has a `lan` address for clients that - need knowledge of that address, regardless of translation. - - The following endpoints translate addresses: - - - [`/v1/catalog/nodes`](/consul/api-docs/catalog#list-nodes) - - [`/v1/catalog/node/`](/consul/api-docs/catalog#retrieve-map-of-services-for-a-node) - - [`/v1/catalog/service/`](/consul/api-docs/catalog#list-nodes-for-service) - - [`/v1/health/service/`](/consul/api-docs/health#list-nodes-for-service) - - [`/v1/query//execute`](/consul/api-docs/query#execute-prepared-query) - -- `unix_sockets` - This allows tuning the ownership and - permissions of the Unix domain socket files created by Consul. Domain sockets are - only used if the HTTP address is configured with the `unix://` prefix. - - It is important to note that this option may have different effects on - different operating systems. Linux generally observes socket file permissions - while many BSD variants ignore permissions on the socket file itself. It is - important to test this feature on your specific distribution. This feature is - currently not functional on Windows hosts. - - The following options are valid within this construct and apply globally to all - sockets created by Consul: - - - `user` - The name or ID of the user who will own the socket file. - - `group` - The group ID ownership of the socket file. This option - currently only supports numeric IDs. - - `mode` - The permission bits to set on the file. - -- `use_streaming_backend` defaults to true. When enabled, Consul client agents will use - streaming RPC, instead of the traditional blocking queries, for endpoints which support - streaming. All servers must have [`rpc.enable_streaming`](#rpc_enable_streaming) - enabled before any client can enable `use_streaming_backend`. - -- `watches` - Watches is a list of watch specifications which - allow an external process to be automatically invoked when a particular data view - is updated. See the [watch documentation](/consul/docs/dynamic-app-config/watches) for more detail. - Watches can be modified when the configuration is reloaded. - -## ACL Parameters - -- `acl` ((#acl)) - This object allows a number of sub-keys to be set which - control the ACL system. Configuring the ACL system within the ACL stanza was added - in Consul 1.4.0. - - The following sub-keys are available: - - - `enabled` ((#acl_enabled)) - Enables ACLs. - - - `policy_ttl` ((#acl_policy_ttl)) - Used to control Time-To-Live caching - of ACL policies. By default, this is 30 seconds. This setting has a major performance - impact: reducing it will cause more frequent refreshes while increasing it reduces - the number of refreshes. However, because the caches are not actively invalidated, - ACL policy may be stale up to the TTL value. - - - `role_ttl` ((#acl_role_ttl)) - Used to control Time-To-Live caching - of ACL roles. By default, this is 30 seconds. This setting has a major performance - impact: reducing it will cause more frequent refreshes while increasing it reduces - the number of refreshes.
However, because the caches are not actively invalidated, - ACL role may be stale up to the TTL value. - - - `token_ttl` ((#acl_token_ttl)) - Used to control Time-To-Live caching - of ACL tokens. By default, this is 30 seconds. This setting has a major performance - impact: reducing it will cause more frequent refreshes while increasing it reduces - the number of refreshes. However, because the caches are not actively invalidated, - ACL token may be stale up to the TTL value. - - - `down_policy` ((#acl_down_policy)) - Either "allow", "deny", "extend-cache" - or "async-cache"; "extend-cache" is the default. In the case that a policy or - token cannot be read from the [`primary_datacenter`](#primary_datacenter) or - leader node, the down policy is applied. In "allow" mode, all actions are permitted, - "deny" restricts all operations, and "extend-cache" allows any cached objects - to be used, ignoring the expiry time of the cached entry. If the request uses an - ACL that is not in the cache, "extend-cache" falls back to the behavior of - `default_policy`. - The value "async-cache" acts the same way as "extend-cache" - but performs updates asynchronously when an ACL is present but its TTL is expired; - thus, if latency is bad between the primary and secondary datacenters, latency - of operations is not impacted. - - - `default_policy` ((#acl_default_policy)) - Either "allow" or "deny"; - defaults to "allow" but this will be changed in a future major release. The default - policy controls the behavior of a token when there is no matching rule. In "allow" - mode, ACLs are a denylist: any operation not specifically prohibited is allowed. - In "deny" mode, ACLs are an allowlist: any operation not specifically - allowed is blocked. **Note**: this will not take effect until you've enabled ACLs. - - - `enable_key_list_policy` ((#acl_enable_key_list_policy)) - Boolean value, defaults to false. - When true, the `list` permission will be required on the prefix being recursively read from the KV store. - Regardless of being enabled, the full set of KV entries under the prefix will be filtered - to remove any entries for which the request's ACL token does not grant at least read - permissions. This option is only available in Consul 1.0 and newer. - - - `enable_token_replication` ((#acl_enable_token_replication)) - By default - secondary Consul datacenters will perform replication of only ACL policies and - roles. Setting this configuration will enable ACL token replication and - allow for the creation of both [local tokens](/consul/api-docs/acl/tokens#local) and - [auth methods](/consul/docs/security/acl/auth-methods) in connected secondary datacenters. - - ~> **Warning:** When enabling ACL token replication on the secondary datacenter, - global tokens already present in the secondary datacenter will be lost. For - production environments, consider configuring ACL replication in your initial - datacenter bootstrapping process. - - - `enable_token_persistence` ((#acl_enable_token_persistence)) - Either - `true` or `false`. When `true`, tokens set using the API will be persisted to - disk and reloaded when an agent restarts. - - - `tokens` ((#acl_tokens)) - This object holds all of the configured - ACL tokens for the agent's usage. - - - `initial_management` ((#acl_tokens_initial_management)) - This is available in - Consul 1.11 and later. In prior versions, use [`acl.tokens.master`](#acl_tokens_master). - - Only used for servers in the [`primary_datacenter`](#primary_datacenter).
- This token will be created with management-level permissions if it does not exist. - It allows operators to bootstrap the ACL system with a token Secret ID that is - well-known. - - The `initial_management` token is only installed when a server acquires cluster - leadership. If you would like to install or change it, set the new value for - `initial_management` in the configuration for all servers. Once this is done, - restart the current leader to force a leader election. If the `initial_management` - token is not supplied, then the servers do not create an initial management token. - When you provide a value, it should be a UUID. To maintain backwards compatibility - and an upgrade path this restriction is not currently enforced but will be in a - future major Consul release. - - - `master` ((#acl_tokens_master)) **Renamed in Consul 1.11 to - [`acl.tokens.initial_management`](#acl_tokens_initial_management).** - - - `default` ((#acl_tokens_default)) - When provided, this agent will - use this token by default when making requests to the Consul servers - instead of the [anonymous token](/consul/docs/security/acl/tokens#anonymous-token). - Consul HTTP API requests can provide an alternate token in their authorization header - to override the `default` or anonymous token on a per-request basis, - as described in [HTTP API Authentication](/consul/api-docs/api-structure#authentication). - - - `agent` ((#acl_tokens_agent)) - Used for clients and servers to perform - internal operations. If this isn't specified, then the - [`default`](#acl_tokens_default) will be used. - - This token must at least have write access to the node name it will - register as in order to set any of the node-level information in the - catalog such as metadata, or the node's tagged addresses. - - - `agent_recovery` ((#acl_tokens_agent_recovery)) - This is available in Consul 1.11 - and later. In prior versions, use [`acl.tokens.agent_master`](#acl_tokens_agent_master). - - Used to access [agent endpoints](/consul/api-docs/agent) that require agent read or write privileges, - or node read privileges, even if Consul servers aren't present to validate any tokens. - This should only be used by operators during outages, regular ACL tokens should normally - be used by applications. - - - `agent_master` ((#acl_tokens_agent_master)) **Renamed in Consul 1.11 to - [`acl.tokens.agent_recovery`](#acl_tokens_agent_recovery).** - - - `config_file_service_registration` ((#acl_tokens_config_file_service_registration)) - Specifies the ACL - token the agent uses to register services and checks from [service](/consul/docs/services/usage/define-services) and [check](/consul/docs/services/usage/checks) definitions - specified in configuration files or fragments passed to the agent using the `-hcl` - flag. - - If the `token` field is defined in the service or check definition, then that token is used to - register the service or check instead. If the `config_file_service_registration` token is not - defined and if the `token` field is not defined in the service or check definition, then the - agent uses the [`default`](#acl_tokens_default) token to register the service or check. - - This token needs write permission to register all services and checks defined in this agent's - configuration. For example, if there are two service definitions in the agent's configuration - files for services "A" and "B", then the token needs `service:write` permissions for both - services "A" and "B" in order to successfully register both services. 
If the token is missing - `service:write` permissions for service "B", the agent will successfully register service "A" - and fail to register service "B". Failed registration requests are eventually retried as part - of [anti-entropy enforcement](/consul/docs/architecture/anti-entropy). If a registration request is - failing due to missing permissions, the token for this agent can be updated with - additional policy rules or the `config_file_service_registration` token can be replaced using - the [Set Agent Token](/consul/commands/acl/set-agent-token) CLI command. - - - `dns` ((#acl_tokens_dns)) - Specifies the token that agents use to request information needed to respond to DNS queries. - If the `dns` token is not set, the `default` token is used instead. - Because the `default` token allows unauthenticated HTTP API access to list nodes and services, we - strongly recommend using the `dns` token. Create DNS tokens using the [templated policy](/consul/docs/security/acl/tokens/create/create-a-dns-token#create_a_dns_token) - option to ensure that the token has the permissions needed to respond to all DNS queries. - - - `replication` ((#acl_tokens_replication)) - Specifies the token that the agent uses to - authorize secondary datacenters with the primary datacenter for replication - operations. This token is required for servers outside the [`primary_datacenter`](#primary_datacenter) when ACLs are enabled. This token may be provided later using the [agent token API](/consul/api-docs/agent#update-acl-tokens) on each server. This token must have at least "read" permissions on ACL data but if ACL token replication is enabled then it must have "write" permissions. This also enables service mesh data replication, for which the token will require both operator "write" and intention "read" permissions for replicating CA and Intention data. - - ~> **Warning:** When enabling ACL token replication on the secondary datacenter, - policies and roles already present in the secondary datacenter will be lost. For - production environments, consider configuring ACL replication in your initial - datacenter bootstrapping process. - - - `managed_service_provider` ((#acl_tokens_managed_service_provider)) - An - array of ACL tokens used by Consul managed service providers for cluster operations. - - - - ```hcl - managed_service_provider { - accessor_id = "ed22003b-0832-4e48-ac65-31de64e5c2ff" - secret_id = "cb6be010-bba8-4f30-a9ed-d347128dde17" - } - ``` - - ```json - "managed_service_provider": [ - { - "accessor_id": "ed22003b-0832-4e48-ac65-31de64e5c2ff", - "secret_id": "cb6be010-bba8-4f30-a9ed-d347128dde17" - } - ] - ``` - - - -- `acl_datacenter` - **This field is deprecated in Consul 1.4.0. See the [`primary_datacenter`](#primary_datacenter) field instead.** - - This designates the datacenter which is authoritative for ACL information. It must be provided to enable ACLs. All servers and datacenters must agree on the ACL datacenter. Setting it on the servers is all you need for cluster-level enforcement, but for the APIs to forward properly from the clients, - it must be set on them too. In Consul 0.8 and later, this also enables agent-level enforcement - of ACLs. Please review the [ACL tutorial](/consul/tutorials/security/access-control-setup-production) for more details. - -- `acl_default_policy` ((#acl_default_policy_legacy)) - **Deprecated in Consul 1.4.0. See the [`acl.default_policy`](#acl_default_policy) field instead.** - Either "allow" or "deny"; defaults to "allow". 
The default policy controls the - behavior of a token when there is no matching rule. In "allow" mode, ACLs are a - denylist: any operation not specifically prohibited is allowed. In "deny" mode, - ACLs are an allowlist: any operation not specifically allowed is blocked. **Note**: - this will not take effect until you've set `primary_datacenter` to enable ACL support. - -- `acl_down_policy` ((#acl_down_policy_legacy)) - **Deprecated in Consul - 1.4.0. See the [`acl.down_policy`](#acl_down_policy) field instead.** Either "allow", - "deny", "extend-cache" or "async-cache"; "extend-cache" is the default. In the - case that the policy for a token cannot be read from the [`primary_datacenter`](#primary_datacenter) - or leader node, the down policy is applied. In "allow" mode, all actions are permitted, - "deny" restricts all operations, and "extend-cache" allows any cached ACLs to be - used, ignoring their TTL values. If a non-cached ACL is used, "extend-cache" acts - like "deny". The value "async-cache" acts the same way as "extend-cache" but performs - updates asynchronously when an ACL is present but its TTL is expired; thus, if latency - is bad between the ACL authoritative and other datacenters, latency of operations is - not impacted. - -- `acl_agent_master_token` ((#acl_agent_master_token_legacy)) - **Deprecated - in Consul 1.4.0. See the [`acl.tokens.agent_master`](#acl_tokens_agent_master) - field instead.** Used to access [agent endpoints](/consul/api-docs/agent) that - require agent read or write privileges, or node read privileges, even if Consul - servers aren't present to validate any tokens. This should only be used by operators - during outages; regular ACL tokens should normally be used by applications. This - was added in Consul 0.7.2 and is only used when [`acl_enforce_version_8`](#acl_enforce_version_8) is set to true. - -- `acl_agent_token` ((#acl_agent_token_legacy)) - **Deprecated in Consul - 1.4.0. See the [`acl.tokens.agent`](#acl_tokens_agent) field instead.** Used for - clients and servers to perform internal operations. If this isn't specified, then - the [`acl_token`](#acl_token) will be used. This was added in Consul 0.7.2. - - This token must at least have write access to the node name it will register as in order to set any - of the node-level information in the catalog such as metadata, or the node's tagged addresses. - -- `acl_enforce_version_8` - **Deprecated in - Consul 1.4.0 and removed in 1.8.0.** Used for clients and servers to determine if enforcement should - occur for new ACL policies being previewed before Consul 0.8. Added in Consul 0.7.2, - this defaults to false in versions of Consul prior to 0.8, and defaults to true - in Consul 0.8 and later. This helps ease the transition to the new ACL features - by allowing policies to be in place before enforcement begins. - -- `acl_master_token` ((#acl_master_token_legacy)) - **Deprecated in Consul - 1.4.0. See the [`acl.tokens.master`](#acl_tokens_master) field instead.** - -- `acl_replication_token` ((#acl_replication_token_legacy)) - **Deprecated - in Consul 1.4.0. See the [`acl.tokens.replication`](#acl_tokens_replication) field - instead.** Only used for servers outside the [`primary_datacenter`](#primary_datacenter) - running Consul 0.7 or later. When provided, this will enable [ACL replication](/consul/tutorials/security-operations/access-control-replication-multiple-datacenters) - using this token to retrieve and replicate the ACLs - to the non-authoritative local datacenter.
In Consul 0.9.1 and later you can enable - ACL replication using [`acl.enable_token_replication`](#acl_enable_token_replication) and then - set the token later using the [agent token API](/consul/api-docs/agent#update-acl-tokens) - on each server. If the `acl_replication_token` is set in the config, it will automatically - set [`acl.enable_token_replication`](#acl_enable_token_replication) to true for backward compatibility. - - If there's a partition or other outage affecting the authoritative datacenter, and the - [`acl_down_policy`](/consul/docs/agent/config/config-files#acl_down_policy) is set to "extend-cache", tokens not - in the cache can be resolved during the outage using the replicated set of ACLs. - -- `acl_token` ((#acl_token_legacy)) - **Deprecated in Consul 1.4.0. See - the [`acl.tokens.default`](#acl_tokens_default) field instead.** - -- `acl_ttl` ((#acl_ttl_legacy)) - **Deprecated in Consul 1.4.0. See the - [`acl.token_ttl`](#acl_token_ttl) field instead.** Used to control Time-To-Live - caching of ACLs. By default, this is 30 seconds. This setting has a major performance - impact: reducing it will cause more frequent refreshes while increasing it reduces - the number of refreshes. However, because the caches are not actively invalidated, - ACL policy may be stale up to the TTL value. - -- `enable_acl_replication` **Deprecated in Consul 1.11. Use the [`acl.enable_token_replication`](#acl_enable_token_replication) field instead.** - When set on a Consul server, enables ACL replication without having to set - the replication token via [`acl_replication_token`](#acl_replication_token). Instead, enable ACL replication - and then introduce the token using the [agent token API](/consul/api-docs/agent#update-acl-tokens) on each server. - See [`acl_replication_token`](#acl_replication_token) for more details. - - ~> **Warning:** When enabling ACL token replication on the secondary datacenter, - policies and roles already present in the secondary datacenter will be lost. For - production environments, consider configuring ACL replication in your initial - datacenter bootstrapping process. - -## Advertise Address Parameters - -- `advertise_addr` Equivalent to the [`-advertise` command-line flag](/consul/docs/agent/config/cli-flags#_advertise). - -- `advertise_addr_ipv4` This was added together with [`advertise_addr_ipv6`](#advertise_addr_ipv6) to support dual stack IPv4/IPv6 environments. Using this, both IPv4 and IPv6 addresses can be specified and requested during, e.g., service discovery. - -- `advertise_addr_ipv6` This was added together with [`advertise_addr_ipv4`](#advertise_addr_ipv4) to support dual stack IPv4/IPv6 environments. Using this, both IPv4 and IPv6 addresses can be specified and requested during, e.g., service discovery. - -- `advertise_addr_wan` Equivalent to the [`-advertise-wan` command-line flag](/consul/docs/agent/config/cli-flags#_advertise-wan). - -- `advertise_addr_wan_ipv4` This was added together with [`advertise_addr_wan_ipv6`](#advertise_addr_wan_ipv6) to support dual stack IPv4/IPv6 environments. Using this, both IPv4 and IPv6 addresses can be specified and requested during, e.g., service discovery. - -- `advertise_addr_wan_ipv6` This was added together with [`advertise_addr_wan_ipv4`](#advertise_addr_wan_ipv4) to support dual stack IPv4/IPv6 environments. Using this, both IPv4 and IPv6 addresses can be specified and requested during, e.g., service discovery.
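For illustration only, a dual-stack agent could advertise both address families along these lines; the addresses below are placeholders:

```hcl
# Sketch: advertise separate IPv4 and IPv6 addresses for LAN and WAN.
advertise_addr_ipv4     = "10.0.0.10"
advertise_addr_ipv6     = "2001:db8::10"
advertise_addr_wan_ipv4 = "198.51.100.10"
advertise_addr_wan_ipv6 = "2001:db8:1::10"
```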
- -- `advertise_reconnect_timeout` This is a per-agent setting of the [`reconnect_timeout`](#reconnect_timeout) parameter. - This agent will advertise to all other nodes in the cluster that after this timeout, the node may be completely - removed from the cluster. This may only be set on client agents and if unset then other nodes will use the main - `reconnect_timeout` setting when determining when this node may be removed from the cluster. - -## Bootstrap Parameters - -- `bootstrap` Equivalent to the [`-bootstrap` command-line flag](/consul/docs/agent/config/cli-flags#_bootstrap). - -- `bootstrap_expect` Equivalent to the [`-bootstrap-expect` command-line flag](/consul/docs/agent/config/cli-flags#_bootstrap_expect). - -## Self-managed HCP Parameters - -- `cloud` This object specifies settings for connecting self-managed clusters to HCP. This was added in Consul 1.14 - - - `client_id` The OAuth2 client ID for authentication with HCP. This can be overridden using the `HCP_CLIENT_ID` environment variable. - - - `client_secret` The OAuth2 client secret for authentication with HCP. This can be overridden using the `HCP_CLIENT_SECRET` environment variable. - - - `resource_id` The HCP resource identifier. This can be overridden using the `HCP_RESOURCE_ID` environment variable. - -## Service Mesh Parameters ((#connect-parameters)) - -The noun _connect_ is used throughout this documentation to refer to the connect -subsystem that provides Consul's service mesh capabilities. - -- `connect` This object allows setting options for the Connect feature. - - The following sub-keys are available: - - - `enabled` ((#connect_enabled)) (Defaults to `true`) Controls whether Connect features are - enabled on this agent. Should be enabled on all servers in the cluster - in order for service mesh to function properly. - Will be set to `true` automatically if `auto_config.enabled` or `auto_encrypt.allow_tls` is `true`. - - - `enable_mesh_gateway_wan_federation` ((#connect_enable_mesh_gateway_wan_federation)) (Defaults to `false`) Controls whether cross-datacenter federation traffic between servers is funneled - through mesh gateways. This was added in Consul 1.8.0. - - - `ca_provider` ((#connect_ca_provider)) Controls which CA provider to - use for the service mesh's CA. Currently only the `aws-pca`, `consul`, and `vault` providers are supported. - This is only used when initially bootstrapping the cluster. For an existing cluster, - use the [Update CA Configuration Endpoint](/consul/api-docs/connect/ca#update-ca-configuration). - - - `ca_config` ((#connect_ca_config)) An object which allows setting different - config options based on the CA provider chosen. This is only used when initially - bootstrapping the cluster. For an existing cluster, use the [Update CA Configuration - Endpoint](/consul/api-docs/connect/ca#update-ca-configuration). - - The following providers are supported: - - #### AWS ACM Private CA Provider (`ca_provider = "aws-pca"`) - - - `existing_arn` ((#aws_ca_existing_arn)) The Amazon Resource Name (ARN) of - an existing private CA in your ACM account. If specified, Consul will - attempt to use the existing CA to issue certificates. - - #### Consul CA Provider (`ca_provider = "consul"`) - - - `private_key` ((#consul_ca_private_key)) The PEM contents of the - private key to use for the CA. - - - `root_cert` ((#consul_ca_root_cert)) The PEM contents of the root - certificate to use for the CA. 
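For illustration only, bootstrapping a new cluster with the built-in Consul CA provider could look roughly like the following; the PEM contents are placeholders, and for an existing cluster the [Update CA Configuration Endpoint](/consul/api-docs/connect/ca#update-ca-configuration) should be used instead of this configuration:

```hcl
# Sketch: service mesh CA bootstrapped with the built-in Consul provider.
connect {
  enabled     = true
  ca_provider = "consul"
  ca_config {
    private_key = "-----BEGIN EC PRIVATE KEY-----\n...\n-----END EC PRIVATE KEY-----"
    root_cert   = "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----"
  }
}
```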
- - #### Vault CA Provider (`ca_provider = "vault"`) - - - `address` ((#vault_ca_address)) The address of the Vault server to - connect to. - - - `token` ((#vault_ca_token)) The Vault token to use. In Consul 1.8.5 and later, if - the token has the [renewable](/vault/api-docs/auth/token#renewable) - flag set, Consul will attempt to renew its lease periodically after half the - duration has expired. - - - `root_pki_path` ((#vault_ca_root_pki)) The path to use for the root - CA pki backend in Vault. This can be an existing backend with a CA already - configured, or a blank/unmounted backend in which case Consul will automatically - mount/generate the CA. The Vault token given above must have `sudo` access - to this backend, as well as permission to mount the backend at this path if - it is not already mounted. - - - `intermediate_pki_path` ((#vault_ca_intermediate_pki)) - The path to use for the temporary intermediate CA pki backend in Vault. **Consul - will overwrite any data at this path in order to generate a temporary intermediate - CA**. The Vault token given above must have `write` access to this backend, - as well as permission to mount the backend at this path if it is not already - mounted. - - - `auth_method` ((#vault_ca_auth_method)) - Vault auth method to use for logging in to Vault. - Please see [Vault Auth Methods](/vault/docs/auth) for more information - on how to configure individual auth methods. If auth method is provided, Consul will obtain a - new token from Vault when the token can no longer be renewed. - - - `type` The type of Vault auth method. - - - `mount_path` The mount path of the auth method. - If not provided the auth method type will be used as the mount path. - - - `params` The parameters to configure the auth method. - Please see [Vault Auth Methods](/vault/docs/auth) for information on how - to configure the auth method you wish to use. If using the Kubernetes auth method, Consul will - read the service account token from the default mount path `/var/run/secrets/kubernetes.io/serviceaccount/token` - if the `jwt` parameter is not provided. - - #### Common CA Config Options - - There are also a number of common configuration options supported by all providers: - - - `csr_max_concurrent` ((#ca_csr_max_concurrent)) Sets a limit on the number - of Certificate Signing Requests that can be processed concurrently. Defaults - to 0 (disabled). This is useful when you want to limit the number of CPU cores - available to the server for certificate signing operations. For example, on an - 8 core server, setting this to 1 will ensure that no more than one CPU core - will be consumed when generating or rotating certificates. Setting this is - recommended **instead** of `csr_max_per_second` when you want to limit the - number of cores consumed since it is simpler to reason about limiting CSR - resources this way without artificially slowing down rotations. Added in 1.4.1. - - - `csr_max_per_second` ((#ca_csr_max_per_second)) Sets a rate limit - on the maximum number of Certificate Signing Requests (CSRs) the servers will - accept. This is used to prevent CA rotation from causing unbounded CPU usage - on servers. It defaults to 50 which is conservative – a 2017 Macbook can process - about 100 per second using only ~40% of one CPU core – but sufficient for deployments - up to ~1500 service instances before the time it takes to rotate is impacted. 
- For larger deployments we recommend increasing this based on the expected number - of server instances and server resources, or use `csr_max_concurrent` instead - if servers have more than one CPU core. Setting this to zero disables rate limiting. - Added in 1.4.1. - - - `leaf_cert_ttl` ((#ca_leaf_cert_ttl)) Specifies the upper bound on the expiry - of a leaf certificate issued for a service. In most cases a new leaf - certificate will be requested by a proxy before this limit is reached. This - is also the effective limit on how long a server outage can last (with no leader) - before network connections will start being rejected. Defaults to `72h`. - - You can specify a range from one hour (minimum) up to one year (maximum) using - the following units: `h`, `m`, `s`, `ms`, `us` (or `µs`), `ns`, or a combination - of those units, e.g. `1h5m`. - - This value is also used when rotating out old root certificates from - the cluster. When a root certificate has been inactive (rotated out) - for more than twice the _current_ `leaf_cert_ttl`, it will be removed - from the trusted list. - - - `intermediate_cert_ttl` ((#ca_intermediate_cert_ttl)) Specifies the expiry for the - intermediate certificates. Defaults to `8760h` (1 year). Must be at least 3 times `leaf_cert_ttl`. - - - `root_cert_ttl` ((#ca_root_cert_ttl)) Specifies the expiry for a root certificate. - Defaults to 10 years as `87600h`. This value, if provided, needs to be higher than the - intermediate certificate TTL. - - This setting applies to all Consul CA providers. - - For the Vault provider, this value is only used if the backend is not initialized at first. - - This value is also applied on the `ca set-config` command. - - - `private_key_type` ((#ca_private_key_type)) The type of key to generate - for this CA. This is only used when the provider is generating a new key. If - `private_key` is set for the Consul provider, or existing root or intermediate - PKI paths given for Vault then this will be ignored. Currently supported options - are `ec` or `rsa`. Default is `ec`. - - It is required that all servers in a datacenter have - the same config for the CA. It is recommended that servers in - different datacenters use the same key type and size, - although the built-in CA and Vault provider will both allow mixed CA - key types. - - Some CA providers (currently Vault) will not allow cross-signing a - new CA certificate with a different key type. This means that if you - migrate from an RSA-keyed Vault CA to an EC-keyed CA from any - provider, you may have to proceed without cross-signing which risks - temporary connection issues for workloads during the new certificate - rollout. We highly recommend testing this outside of production to - understand the impact and suggest sticking to same key type where - possible. - - Note that this only affects _CA_ keys generated by the provider. - Leaf certificate keys are always EC 256 regardless of the CA - configuration. - - - `private_key_bits` ((#ca_private_key_bits)) The length of key to - generate for this CA. This is only used when the provider is generating a new - key. If `private_key` is set for the Consul provider, or existing root or intermediate - PKI paths given for Vault then this will be ignored. - - Currently supported values are: - - - `private_key_type = ec` (default): `224, 256, 384, 521` - corresponding to the NIST P-\* curves of the same name. 
- - `private_key_type = rsa`: `2048, 4096` - -- `locality` : Specifies a map of configurations that set the region and zone of the Consul agent. When specified on server agents, `locality` applies to all partitions on the server. When specified on clients, `locality` applies to all services registered to the client. Configure this field to enable Consul to route traffic to the nearest physical service instance. This field is intended for use primarily with VM and Nomad workloads. Refer to [Route traffic to local upstreams](/consul/docs/connect/manage-traffic/route-to-local-upstreams) for additional information. - - `region`: String value that specifies the region where the Consul agent is running. Consul assigns this value to services registered to that agent. When service proxy regions match, Consul is able to prioritize routes between service instances in the same region over instances in other regions. You must specify values that are consistent with how regions are defined in your network, for example `us-west-1` for networks in AWS. - - `zone`: String value that specifies the availability zone where the Consul agent is running. Consul assigns this value to services registered to that agent. When service proxy regions match, Consul is able to prioritize routes between service instances in the same region and zone over instances in other regions and zones. When healthy service instances are available in multiple zones within the most-local region, Consul prioritizes instances that also match the downstream proxy's `zone`. You must specify values that are consistent with how zones are defined in your network, for example `us-west-1a` for networks in AWS. - -## DNS and Domain Parameters - -- `dns_config` This object allows a number of sub-keys - to be set which can tune how DNS queries are serviced. Refer to [DNS caching](/consul/docs/services/discovery/dns-cache) for more information. - - The following sub-keys are available: - - - `allow_stale` - Enables a stale query for DNS information. - This allows any Consul server, rather than only the leader, to service the request. - The advantage of this is you get linear read scalability with Consul servers. - In versions of Consul prior to 0.7, this defaulted to false, meaning all requests - are serviced by the leader, providing stronger consistency but less throughput - and higher latency. In Consul 0.7 and later, this defaults to true for better - utilization of available servers. - - - `max_stale` - When [`allow_stale`](#allow_stale) is - specified, this is used to limit how stale results are allowed to be. If a Consul - server is behind the leader by more than `max_stale`, the query will be re-evaluated - on the leader to get more up-to-date results. Prior to Consul 0.7.1 this defaulted - to 5 seconds; in Consul 0.7.1 and later this defaults to 10 years ("87600h") - which effectively allows DNS queries to be answered by any server, no matter - how stale. In practice, servers are usually only milliseconds behind the leader, - so this lets Consul continue serving requests in long outage scenarios where - no leader can be elected. - - - `node_ttl` - By default, this is "0s", so all node lookups - are served with a 0 TTL value. DNS caching for node lookups can be enabled by - setting this value. This should be specified with the "s" suffix for second or - "m" for minute. - - - `service_ttl` - This is a sub-object which allows - for setting a TTL on service lookups with a per-service policy. 
The "\*" wildcard - service can be used when there is no specific policy available for a service. - By default, all services are served with a 0 TTL value. DNS caching for service - lookups can be enabled by setting this value. - - - `enable_truncate` - If set to true, a UDP DNS - query that would return more than 3 records, or more than would fit into a valid - UDP response, will set the truncated flag, indicating to clients that they should - re-query using TCP to get the full set of records. - - - `only_passing` - If set to true, any nodes whose - health checks are warning or critical will be excluded from DNS results. If false, - the default, only nodes whose health checks are failing as critical will be excluded. - For service lookups, the health checks of the node itself, as well as the service-specific - checks are considered. For example, if a node has a health check that is critical - then all services on that node will be excluded because they are also considered - critical. - - - `recursor_strategy` - If set to `sequential`, Consul will query recursors in the - order listed in the [`recursors`](#recursors) option. If set to `random`, - Consul will query an upstream DNS resolvers in a random order. Defaults to - `sequential`. - - - `recursor_timeout` - Timeout used by Consul when - recursively querying an upstream DNS server. See [`recursors`](#recursors) for more details. Default is 2s. This is available in Consul 0.7 and later. - - - `disable_compression` - If set to true, DNS - responses will not be compressed. Compression was added and enabled by default - in Consul 0.7. - - - `udp_answer_limit` - Limit the number of resource - records contained in the answer section of a UDP-based DNS response. This parameter - applies only to UDP DNS queries that are less than 512 bytes. This setting is - deprecated and replaced in Consul 1.0.7 by [`a_record_limit`](#a_record_limit). - - - `a_record_limit` - Limit the number of resource - records contained in the answer section of a A, AAAA or ANY DNS response (both - TCP and UDP). When answering a question, Consul will use the complete list of - matching hosts, shuffle the list randomly, and then limit the number of answers - to `a_record_limit` (default: no limit). This limit does not apply to SRV records. - - In environments where [RFC 3484 Section 6](https://tools.ietf.org/html/rfc3484#section-6) Rule 9 - is implemented and enforced (i.e. DNS answers are always sorted and - therefore never random), clients may need to set this value to `1` to - preserve the expected randomized distribution behavior (note: - [RFC 3484](https://tools.ietf.org/html/rfc3484) has been obsoleted by - [RFC 6724](https://tools.ietf.org/html/rfc6724) and as a result it should - be increasingly uncommon to need to change this value with modern - resolvers). - - - `enable_additional_node_meta_txt` - When set to true, Consul - will add TXT records for Node metadata into the Additional section of the DNS responses for several query types such as SRV queries. When set to false those records are not emitted. This does not impact the behavior of those same TXT records when they would be added to the Answer section of the response like when querying with type TXT or ANY. This defaults to true. - - - `soa` Allow to tune the setting set up in SOA. Non specified - values fallback to their default values, all values are integers and expressed - as seconds. 
- - The following settings are available: - - - `expire` ((#soa_expire)) - Configure SOA Expire duration in seconds, - default value is 86400, ie: 24 hours. - - - `min_ttl` ((#soa_min_ttl)) - Configure SOA DNS minimum TTL. As explained - in [RFC-2308](https://tools.ietf.org/html/rfc2308) this also controls negative - cache TTL in most implementations. Default value is 0, ie: no minimum delay - or negative TTL. - - - `refresh` ((#soa_refresh)) - Configure SOA Refresh duration in seconds, - default value is `3600`, ie: 1 hour. - - - `retry` ((#soa_retry)) - Configures the Retry duration expressed - in seconds, default value is 600, ie: 10 minutes. - - - `use_cache` ((#dns_use_cache)) - When set to true, DNS resolution will - use the agent cache described in [agent caching](/consul/api-docs/features/caching). - This setting affects all service and prepared queries DNS requests. Implies [`allow_stale`](#allow_stale) - - - `cache_max_age` ((#dns_cache_max_age)) - When [use_cache](#dns_use_cache) - is enabled, the agent will attempt to re-fetch the result from the servers if - the cached value is older than this duration. See: [agent caching](/consul/api-docs/features/caching). - - **Note** that unlike the `max-age` HTTP header, a value of 0 for this field is - equivalent to "no max age". To get a fresh value from the cache use a very small value - of `1ns` instead of 0. - - - `prefer_namespace` ((#dns_prefer_namespace)) **Deprecated in Consul 1.11. - Use the [canonical DNS format for enterprise service lookups](/consul/docs/services/discovery/dns-static-lookups#service-lookups-for-consul-enterprise) instead.** - - When set to `true`, in a DNS query for a service, a single label between the domain - and the `service` label is treated as a namespace name instead of a datacenter. - When set to `false`, the default, the behavior is the same as non-Enterprise - versions and treats the single label as the datacenter. - -- `domain` Equivalent to the [`-domain` command-line flag](/consul/docs/agent/config/cli-flags#_domain). - -## Encryption Parameters - -- `auto_encrypt` This object allows setting options for the `auto_encrypt` feature. - - The following sub-keys are available: - - - `allow_tls` (Defaults to `false`) This option enables - `auto_encrypt` on the servers and allows them to automatically distribute certificates - from the service mesh CA to the clients. If enabled, the server can accept incoming - connections from both the built-in CA and the service mesh CA, as well as their certificates. - Note, the server will only present the built-in CA and certificate, which the - client can verify using the CA it received from `auto_encrypt` endpoint. If disabled, - a client configured with `auto_encrypt.tls` will be unable to start. - - - `tls` (Defaults to `false`) Allows the client to request the - service mesh CA and certificates from the servers, for encrypting RPC communication. - The client will make the request to any servers listed in the `-retry-join` - option. This requires that every server to have `auto_encrypt.allow_tls` enabled. - When both `auto_encrypt` options are used, it allows clients to receive certificates - that are generated on the servers. If the `-server-port` is not the default one, - it has to be provided to the client as well. Usually this is discovered through - LAN gossip, but `auto_encrypt` provision happens before the information can be - distributed through gossip. 
The most secure `auto_encrypt` setup is when the - client is provided with the built-in CA, `verify_server_hostname` is turned on, - and when an ACL token with `node.write` permissions is setup. It is also possible - to use `auto_encrypt` with a CA and ACL, but without `verify_server_hostname`, - or only with a ACL enabled, or only with CA and `verify_server_hostname`, or - only with a CA, or finally without a CA and without ACL enabled. In any case, - the communication to the `auto_encrypt` endpoint is always TLS encrypted. - - ~> **Warning:** Enabling `auto_encrypt.tls` conflicts with the [`auto_config`](#auto_config) feature. - Only one option may be specified. - - - `dns_san` (Defaults to `[]`) When this option is being - used, the certificates requested by `auto_encrypt` from the server have these - `dns_san` set as DNS SAN. - - - `ip_san` (Defaults to `[]`) When this option is being used, - the certificates requested by `auto_encrypt` from the server have these `ip_san` - set as IP SAN. - -- `encrypt` Equivalent to the [`-encrypt` command-line flag](/consul/docs/agent/config/cli-flags#_encrypt). - -- `encrypt_verify_incoming` - This is an optional - parameter that can be used to disable enforcing encryption for incoming gossip - in order to upshift from unencrypted to encrypted gossip on a running cluster. - See [this section](/consul/docs/security/encryption#configuring-gossip-encryption-on-an-existing-cluster) - for more information. Defaults to true. - -- `encrypt_verify_outgoing` - This is an optional - parameter that can be used to disable enforcing encryption for outgoing gossip - in order to upshift from unencrypted to encrypted gossip on a running cluster. - See [this section](/consul/docs/security/encryption#configuring-gossip-encryption-on-an-existing-cluster) - for more information. Defaults to true. - -## Gossip Parameters - -- `gossip_lan` - **(Advanced)** This object contains a - number of sub-keys which can be set to tune the LAN gossip communications. These - are only provided for users running especially large clusters that need fine tuning - and are prepared to spend significant effort correctly tuning them for their environment - and workload. **Tuning these improperly can cause Consul to fail in unexpected - ways**. The default values are appropriate in almost all deployments. - - - `gossip_nodes` - The number of random nodes to send - gossip messages to per gossip_interval. Increasing this number causes the gossip - messages to propagate across the cluster more quickly at the expense of increased - bandwidth. The default is 3. - - - `gossip_interval` - The interval between sending - messages that need to be gossiped that haven't been able to piggyback on probing - messages. If this is set to zero, non-piggyback gossip is disabled. By lowering - this value (more frequent) gossip messages are propagated across the cluster - more quickly at the expense of increased bandwidth. The default is 200ms. - - - `probe_interval` - The interval between random - node probes. Setting this lower (more frequent) will cause the cluster to detect - failed nodes more quickly at the expense of increased bandwidth usage. The default - is 1s. - - - `probe_timeout` - The timeout to wait for an ack - from a probed node before assuming it is unhealthy. This should be at least the - 99-percentile of RTT (round-trip time) on your network. The default is 500ms - and is a conservative value suitable for almost all realistic deployments. 
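  As a sketch only, the following stanza shows how the LAN gossip sub-keys described in this section nest inside `gossip_lan`; the values shown are simply the documented defaults, not tuning recommendations.

  ```hcl
  gossip_lan {
    # Values below restate the documented defaults.
    gossip_nodes    = 3
    gossip_interval = "200ms"
    probe_interval  = "1s"
    probe_timeout   = "500ms"
  }
  ```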
- - - `retransmit_mult` - The multiplier for the number - of retransmissions that are attempted for messages broadcasted over gossip. The - number of retransmits is scaled using this multiplier and the cluster size. The - higher the multiplier, the more likely a failed broadcast is to converge at the - expense of increased bandwidth. The default is 4. - - - `suspicion_mult` - The multiplier for determining - the time an inaccessible node is considered suspect before declaring it dead. - The timeout is scaled with the cluster size and the probe_interval. This allows - the timeout to scale properly with expected propagation delay with a larger cluster - size. The higher the multiplier, the longer an inaccessible node is considered - part of the cluster before declaring it dead, giving that suspect node more time - to refute if it is indeed still alive. The default is 4. - -- `gossip_wan` - **(Advanced)** This object contains a - number of sub-keys which can be set to tune the WAN gossip communications. These - are only provided for users running especially large clusters that need fine tuning - and are prepared to spend significant effort correctly tuning them for their environment - and workload. **Tuning these improperly can cause Consul to fail in unexpected - ways**. The default values are appropriate in almost all deployments. - - - `gossip_nodes` - The number of random nodes to send - gossip messages to per gossip_interval. Increasing this number causes the gossip - messages to propagate across the cluster more quickly at the expense of increased - bandwidth. The default is 4. - - - `gossip_interval` - The interval between sending - messages that need to be gossiped that haven't been able to piggyback on probing - messages. If this is set to zero, non-piggyback gossip is disabled. By lowering - this value (more frequent) gossip messages are propagated across the cluster - more quickly at the expense of increased bandwidth. The default is 500ms. - - - `probe_interval` - The interval between random - node probes. Setting this lower (more frequent) will cause the cluster to detect - failed nodes more quickly at the expense of increased bandwidth usage. The default - is 5s. - - - `probe_timeout` - The timeout to wait for an ack - from a probed node before assuming it is unhealthy. This should be at least the - 99-percentile of RTT (round-trip time) on your network. The default is 3s - and is a conservative value suitable for almost all realistic deployments. - - - `retransmit_mult` - The multiplier for the number - of retransmissions that are attempted for messages broadcasted over gossip. The - number of retransmits is scaled using this multiplier and the cluster size. The - higher the multiplier, the more likely a failed broadcast is to converge at the - expense of increased bandwidth. The default is 4. - - - `suspicion_mult` - The multiplier for determining - the time an inaccessible node is considered suspect before declaring it dead. - The timeout is scaled with the cluster size and the probe_interval. This allows - the timeout to scale properly with expected propagation delay with a larger cluster - size. The higher the multiplier, the longer an inaccessible node is considered - part of the cluster before declaring it dead, giving that suspect node more time - to refute if it is indeed still alive. The default is 6. - -## Join Parameters - -- `rejoin_after_leave` Equivalent to the [`-rejoin` command-line flag](/consul/docs/agent/config/cli-flags#_rejoin). 
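  For illustration, a minimal join configuration might look like the following sketch; the server addresses are placeholders for your own environment, and `retry_join` is described in the next item.

  ```hcl
  rejoin_after_leave = true
  # Placeholder addresses of existing cluster members to join on startup.
  retry_join = ["10.0.0.10", "10.0.0.11"]
  ```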
- -- `retry_join` - Equivalent to the [`-retry-join`](/consul/docs/agent/config/cli-flags#retry-join) command-line flag. - -- `retry_interval` Equivalent to the [`-retry-interval` command-line flag](/consul/docs/agent/config/cli-flags#_retry_interval). - -- `retry_max` - Equivalent to the [`-retry-max`](/consul/docs/agent/config/cli-flags#_retry_max) command-line flag. - -- `retry_join_wan` Equivalent to the [`-retry-join-wan` command-line flag](/consul/docs/agent/config/cli-flags#_retry_join_wan). Takes a list of addresses to attempt joining to WAN every [`retry_interval_wan`](#_retry_interval_wan) until at least one join works. - -- `retry_interval_wan` Equivalent to the [`-retry-interval-wan` command-line flag](/consul/docs/agent/config/cli-flags#_retry_interval_wan). - -- `start_join` **Deprecated in Consul 1.15. Use the [`retry_join`](/consul/docs/agent/config/config-files#retry_join) field instead. This field will be removed in a future version of Consul.** - This field is an alias of `retry_join`. - -- `start_join_wan` **Deprecated in Consul 1.15. Use the [`retry_join_wan`](/consul/docs/agent/config/config-files#retry_join_wan) field instead. This field will be removed in a future version of Consul.** - This field is an alias of `retry_join_wan`. - -## Log Parameters - -- `log_file` Equivalent to the [`-log-file` command-line flag](/consul/docs/agent/config/cli-flags#_log_file). - -- `log_rotate_duration` Equivalent to the [`-log-rotate-duration` command-line flag](/consul/docs/agent/config/cli-flags#_log_rotate_duration). - -- `log_rotate_bytes` Equivalent to the [`-log-rotate-bytes` command-line flag](/consul/docs/agent/config/cli-flags#_log_rotate_bytes). - -- `log_rotate_max_files` Equivalent to the [`-log-rotate-max-files` command-line flag](/consul/docs/agent/config/cli-flags#_log_rotate_max_files). - -- `log_level` Equivalent to the [`-log-level` command-line flag](/consul/docs/agent/config/cli-flags#_log_level). - -- `log_json` Equivalent to the [`-log-json` command-line flag](/consul/docs/agent/config/cli-flags#_log_json). - -- `enable_syslog` Equivalent to the [`-syslog` command-line flag](/consul/docs/agent/config/cli-flags#_syslog). - -- `syslog_facility` When [`enable_syslog`](#enable_syslog) - is provided, this controls to which facility messages are sent. By default, `LOCAL0` - will be used. - -## Node Parameters - -- `node_id` Equivalent to the [`-node-id` command-line flag](/consul/docs/agent/config/cli-flags#_node_id). - -- `node_name` Equivalent to the [`-node` command-line flag](/consul/docs/agent/config/cli-flags#_node). - -- `node_meta` Available in Consul 0.7.3 and later, This object allows associating arbitrary metadata key/value pairs with the local node, which can then be used for filtering results from certain catalog endpoints. See the [`-node-meta` command-line flag](/consul/docs/agent/config/cli-flags#_node_meta) for more information. - - - - ```hcl - node_meta { - instance_type = "t2.medium" - } - ``` - - ```json - { - "node_meta": { - "instance_type": "t2.medium" - } - } - ``` - - - -- `disable_host_node_id` Equivalent to the [`-disable-host-node-id` command-line flag](/consul/docs/agent/config/cli-flags#_disable_host_node_id). - -## Raft Parameters - -- `raft_boltdb` ((#raft_boltdb)) **These fields are deprecated in Consul v1.15.0. - Use [`raft_logstore`](#raft_logstore) instead.** This is a nested - object that allows configuring options for Raft's BoltDB-based log store. - - - `NoFreelistSync` **This field is deprecated in Consul v1.15.0. 
Use the - [`raft_logstore.boltdb.no_freelist_sync`](#raft_logstore_boltdb_no_freelist_sync) field - instead.** Setting this to `true` disables syncing the BoltDB freelist - to disk within the raft.db file. Not syncing the freelist to disk - reduces disk IO required for write operations at the expense of potentially - increasing start up time due to needing to scan the db to discover where the - free space resides within the file. - -- `raft_logstore` ((#raft_logstore)) This is a nested object that allows - configuring options for Raft's LogStore component which is used to persist - logs and crucial Raft state on disk during writes. This was added in Consul - v1.15.0. - - - `backend` ((#raft_logstore_backend)) Specifies which storage - engine to use to persist logs. Valid options are `boltdb` or `wal`. Default - is `boltdb`. The `wal` option specifies an experimental backend that - should be used with caution. Refer to - [Experimental WAL LogStore backend](/consul/docs/agent/wal-logstore) - for more information. - - - `disable_log_cache` ((#raft_logstore_disable_log_cache)) Disables the in-memory cache for recent logs. We recommend using it for performance testing purposes, as no significant improvement has been measured when the cache is disabled. While the in-memory log cache theoretically prevents disk reads for recent logs, recent logs are also stored in the OS page cache, which does not slow either the `boltdb` or `wal` backend's ability to read them. - - - `verification` ((#raft_logstore_verification)) This is a nested object that - allows configuring the online verification of the LogStore. Verification - provides additional assurances that LogStore backends are correctly storing - data. It imposes low overhead on servers and is safe to run in - production. It is most useful when evaluating a new backend - implementation. - - Verification must be enabled on the leader to have any effect and can be - used with any backend. When enabled, the leader periodically writes a - special "checkpoint" log message that includes the checksums of all log entries - written to Raft since the last checkpoint. Followers that have verification - enabled run a background task for each checkpoint that reads all logs - directly from the LogStore and then recomputes the checksum. A report is output - as an INFO level log for each checkpoint. - - Checksum failure should never happen and indicate unrecoverable corruption - on that server. The only correct response is to stop the server, remove its - data directory, and restart so it can be caught back up with a correct - server again. Please report verification failures including details about - your hardware and workload via GitHub issues. Refer to - [Experimental WAL LogStore backend](/consul/docs/agent/wal-logstore) - for more information. - - - `enabled` ((#raft_logstore_verification_enabled)) - Set to `true` to - allow this Consul server to write and verify log verification checkpoints - when elected leader. - - - `interval` ((#raft_logstore_verification_interval)) - Specifies the time - interval between checkpoints. There is no default value. You must - configure the `interval` and set [`enabled`](#raft_logstore_verification_enabled) - to `true` to correctly enable intervals. We recommend using an interval - between `30s` and `5m`. The performance overhead is insignificant when the - interval is set to `5m` or less. - - - `boltdb` ((#raft_logstore_boltdb)) - Object that configures options for - Raft's `boltdb` backend. 
It has no effect if the `backend` is not `boltdb`. - - - `no_freelist_sync` ((#raft_logstore_boltdb_no_freelist_sync)) - Set to - `true` to disable storing BoltDB's freelist to disk within the - `raft.db` file. Disabling freelist syncs reduces the disk IO required - for write operations, but could potentially increase start up time - because Consul must scan the database to find free space - within the file. - - - `wal` ((#raft_logstore_wal)) - Object that configures the `wal` backend. - Refer to [Experimental WAL LogStore backend](/consul/docs/agent/wal-logstore) - for more information. - - - `segment_size_mb` ((#raft_logstore_wal_segment_size_mb)) - Integer value - that represents the target size in MB for each segment file before - rolling to a new segment. The default value is `64` and is suitable for - most deployments. While a smaller value may use less disk space because you - can reclaim space by deleting old segments sooner, the smaller segment that results - may affect performance because safely rotating to a new file more - frequently can impact tail latencies. Larger values are unlikely - to improve performance significantly. We recommend using this - configuration for performance testing purposes. - -- `raft_protocol` ((#raft_protocol)) Equivalent to the [`-raft-protocol` - command-line flag](/consul/docs/agent/config/cli-flags#_raft_protocol). - -- `raft_snapshot_threshold` ((#\_raft_snapshot_threshold)) This controls the - minimum number of raft commit entries between snapshots that are saved to - disk. This is a low-level parameter that should rarely need to be changed. - Very busy clusters experiencing excessive disk IO may increase this value to - reduce disk IO, and minimize the chances of all servers taking snapshots at - the same time. Increasing this trades off disk IO for disk space since the log - will grow much larger and the space in the raft.db file can't be reclaimed - till the next snapshot. Servers may take longer to recover from crashes or - failover if this is increased significantly as more logs will need to be - replayed. In Consul 1.1.0 and later this defaults to 16384, and in prior - versions it was set to 8192. - - Since Consul 1.10.0 this can be reloaded using `consul reload` or sending the - server a `SIGHUP` to allow tuning snapshot activity without a rolling restart - in emergencies. - -- `raft_snapshot_interval` ((#\_raft_snapshot_interval)) This controls how often - servers check if they need to save a snapshot to disk. This is a low-level - parameter that should rarely need to be changed. Very busy clusters - experiencing excessive disk IO may increase this value to reduce disk IO, and - minimize the chances of all servers taking snapshots at the same time. - Increasing this trades off disk IO for disk space since the log will grow much - larger and the space in the raft.db file can't be reclaimed till the next - snapshot. Servers may take longer to recover from crashes or failover if this - is increased significantly as more logs will need to be replayed. In Consul - 1.1.0 and later this defaults to `30s`, and in prior versions it was set to - `5s`. - - Since Consul 1.10.0 this can be reloaded using `consul reload` or sending the - server a `SIGHUP` to allow tuning snapshot activity without a rolling restart - in emergencies. - -- `raft_trailing_logs` - This controls how many log entries are left in the log - store on disk after a snapshot is made. 
This should only be adjusted when - followers cannot catch up to the leader due to a very large snapshot size - and high write throughput causing log truncation before an snapshot can be - fully installed on a follower. If you need to use this to recover a cluster, - consider reducing write throughput or the amount of data stored on Consul as - it is likely under a load it is not designed to handle. The default value is - 10000 which is suitable for all normal workloads. Added in Consul 1.5.3. - - Since Consul 1.10.0 this can be reloaded using `consul reload` or sending the - server a `SIGHUP` to allow recovery without downtime when followers can't keep - up. - -## Serf Parameters - -- `serf_lan` ((#serf_lan_bind)) Equivalent to the [`-serf-lan-bind` command-line flag](/consul/docs/agent/config/cli-flags#_serf_lan_bind). - This is an IP address, not to be confused with [`ports.serf_lan`](#serf_lan_port). - -- `serf_lan_allowed_cidrs` ((#serf_lan_allowed_cidrs)) Equivalent to the [`-serf-lan-allowed-cidrs` command-line flag](/consul/docs/agent/config/cli-flags#_serf_lan_allowed_cidrs). - -- `serf_wan` ((#serf_wan_bind)) Equivalent to the [`-serf-wan-bind` command-line flag](/consul/docs/agent/config/cli-flags#_serf_wan_bind). - -- `serf_wan_allowed_cidrs` ((#serf_wan_allowed_cidrs)) Equivalent to the [`-serf-wan-allowed-cidrs` command-line flag](/consul/docs/agent/config/cli-flags#_serf_wan_allowed_cidrs). - -## Telemetry Parameters - -- `telemetry` This is a nested object that configures where - Consul sends its runtime telemetry, and contains the following keys: - - - `circonus_api_token` ((#telemetry-circonus_api_token)) A valid API - Token used to create/manage check. If provided, metric management is - enabled. - - - `circonus_api_app` ((#telemetry-circonus_api_app)) A valid app name - associated with the API token. By default, this is set to "consul". - - - `circonus_api_url` ((#telemetry-circonus_api_url)) - The base URL to use for contacting the Circonus API. By default, this is set - to "https://api.circonus.com/v2". - - - `circonus_submission_interval` ((#telemetry-circonus_submission_interval)) The interval at which metrics are submitted to Circonus. By default, this is set to "10s" (ten seconds). - - - `circonus_submission_url` ((#telemetry-circonus_submission_url)) - The `check.config.submission_url` field, of a Check API object, from a previously - created HTTPTrap check. - - - `circonus_check_id` ((#telemetry-circonus_check_id)) - The Check ID (not **check bundle**) from a previously created HTTPTrap check. - The numeric portion of the `check._cid` field in the Check API object. - - - `circonus_check_force_metric_activation` ((#telemetry-circonus_check_force_metric_activation)) Force activation of metrics which already exist and are not currently active. - If check management is enabled, the default behavior is to add new metrics as - they are encountered. If the metric already exists in the check, it will **not** - be activated. This setting overrides that behavior. By default, this is set to - false. - - - `circonus_check_instance_id` ((#telemetry-circonus_check_instance_id)) Uniquely identifies the metrics coming from this **instance**. It can be used to - maintain metric continuity with transient or ephemeral instances as they move - around within an infrastructure. By default, this is set to hostname:application - name (e.g. "host123:consul"). 
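  As an illustrative sketch only, enabling Circonus metric management generally starts with an API token; the token value below is a placeholder, and the interval restates the documented default.

  ```hcl
  telemetry {
    circonus_api_token           = "00000000-0000-0000-0000-000000000000" # placeholder token
    circonus_submission_interval = "10s"
  }
  ```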
- - - `circonus_check_search_tag` ((#telemetry-circonus_check_search_tag)) A special tag which, when coupled with the instance id, helps to narrow down - the search results when neither a Submission URL or Check ID is provided. By - default, this is set to service:application name (e.g. "service:consul"). - - - `circonus_check_display_name` ((#telemetry-circonus_check_display_name)) Specifies a name to give a check when it is created. This name is displayed in - the Circonus UI Checks list. Available in Consul 0.7.2 and later. - - - `circonus_check_tags` ((#telemetry-circonus_check_tags)) - Comma separated list of additional tags to add to a check when it is created. - Available in Consul 0.7.2 and later. - - - `circonus_broker_id` ((#telemetry-circonus_broker_id)) - The ID of a specific Circonus Broker to use when creating a new check. The numeric - portion of `broker._cid` field in a Broker API object. If metric management is - enabled and neither a Submission URL nor Check ID is provided, an attempt will - be made to search for an existing check using Instance ID and Search Tag. If - one is not found, a new HTTPTrap check will be created. By default, this is not - used and a random Enterprise Broker is selected, or the default Circonus Public - Broker. - - - `circonus_broker_select_tag` ((#telemetry-circonus_broker_select_tag)) A special tag which will be used to select a Circonus Broker when a Broker ID - is not provided. The best use of this is to as a hint for which broker should - be used based on **where** this particular instance is running (e.g. a specific - geo location or datacenter, dc:sfo). By default, this is left blank and not used. - - - `disable_hostname` ((#telemetry-disable_hostname)) - Set to `true` to stop prepending the machine's hostname to gauge-type metrics. Default is `false`. - - - `disable_per_tenancy_usage_metrics` ((#telemetry-disable_per_tenancy_usage_metrics)) - Set to `true` to exclude tenancy labels from usage metrics. This significantly decreases CPU utilization in clusters with many admin partitions or namespaces. - - - `dogstatsd_addr` ((#telemetry-dogstatsd_addr)) This provides the address - of a DogStatsD instance in the format `host:port`. DogStatsD is a protocol-compatible - flavor of statsd, with the added ability to decorate metrics with tags and event - information. If provided, Consul will send various telemetry information to that - instance for aggregation. This can be used to capture runtime information. - - - `dogstatsd_tags` ((#telemetry-dogstatsd_tags)) This provides a list - of global tags that will be added to all telemetry packets sent to DogStatsD. - It is a list of strings, where each string looks like "my_tag_name:my_tag_value". - - - `enable_host_metrics` ((#telemetry-enable_host_metrics)) - This enables reporting of host metrics about system resources, defaults to false. - - - `filter_default` ((#telemetry-filter_default)) - This controls whether to allow metrics that have not been specified by the filter. - Defaults to `true`, which will allow all metrics when no filters are provided. - When set to `false` with no filters, no metrics will be sent. - - - `metrics_prefix` ((#telemetry-metrics_prefix)) - The prefix used while writing all telemetry data. By default, this is set to - "consul". This was added in Consul 1.0. For previous versions of Consul, use - the config option `statsite_prefix` in this same structure. This was renamed - in Consul 1.0 since this prefix applied to all telemetry providers, not just - statsite. 
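  The following sketch combines several of the keys above; the DogStatsD address and tag are placeholders, and the prefix restates the documented default.

  ```hcl
  telemetry {
    dogstatsd_addr = "127.0.0.1:8125"               # placeholder DogStatsD instance
    dogstatsd_tags = ["consul_datacenter:dc1"]      # placeholder global tag
    metrics_prefix = "consul"                       # documented default
  }
  ```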
- - - `prefix_filter` ((#telemetry-prefix_filter)) - This is a list of filter rules to apply for allowing/blocking metrics by - prefix in the following format: - - - - ```hcl - telemetry { - prefix_filter = ["+consul.raft.apply", "-consul.http", "+consul.http.GET"] - } - ``` - - ```json - { - "telemetry": { - "prefix_filter": [ - "+consul.raft.apply", - "-consul.http", - "+consul.http.GET" - ] - } - } - ``` - - - - A leading "**+**" will enable any metrics with the given prefix, and a leading "**-**" will block them. If there is overlap between two rules, the more specific rule will take precedence. Blocking will take priority if the same prefix is listed multiple times. - - - `prometheus_retention_time` ((#telemetry-prometheus_retention_time)) If the value is greater than `0s` (the default), this enables [Prometheus](https://prometheus.io/) - export of metrics. The duration can be expressed using the duration semantics - and will aggregates all counters for the duration specified (it might have an - impact on Consul's memory usage). A good value for this parameter is at least - 2 times the interval of scrape of Prometheus, but you might also put a very high - retention time such as a few days (for instance 744h to enable retention to 31 - days). Fetching the metrics using prometheus can then be performed using the - [`/v1/agent/metrics?format=prometheus`](/consul/api-docs/agent#view-metrics) endpoint. - The format is compatible natively with prometheus. When running in this mode, - it is recommended to also enable the option [`disable_hostname`](#telemetry-disable_hostname) - to avoid having prefixed metrics with hostname. Consul does not use the default - Prometheus path, so Prometheus must be configured as follows. Note that using - `?format=prometheus` in the path won't work as `?` will be escaped, so it must be - specified as a parameter. - - - - ```yaml - metrics_path: '/v1/agent/metrics' - params: - format: ['prometheus'] - ``` - - - - - `statsd_address` ((#telemetry-statsd_address)) This provides the address - of a statsd instance in the format `host:port`. If provided, Consul will send - various telemetry information to that instance for aggregation. This can be used - to capture runtime information. This sends UDP packets only and can be used with - statsd or statsite. - - - `statsite_address` ((#telemetry-statsite_address)) This provides the - address of a statsite instance in the format `host:port`. If provided, Consul - will stream various telemetry information to that instance for aggregation. This - can be used to capture runtime information. This streams via TCP and can only - be used with statsite. - -## UI Parameters - -- `ui` - **This field is deprecated in Consul 1.9.0. See the [`ui_config.enabled`](#ui_config_enabled) field instead.** - Equivalent to the [`-ui`](/consul/docs/agent/config/cli-flags#_ui) command-line flag. - -- `ui_config` - This object allows a number of sub-keys to be set which controls - the display or features available in the UI. Configuring the UI with this - stanza was added in Consul 1.9.0. - - The following sub-keys are available: - - - `enabled` ((#ui_config_enabled)) - This enables the service of the web UI - from this agent. Boolean value, defaults to false. In `-dev` mode this - defaults to true. Replaces `ui` from before 1.9.0. Equivalent to the - [`-ui`](/consul/docs/agent/config/cli-flags#_ui) command-line flag. 
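  As a sketch of how these sub-keys fit together, assuming the built-in Prometheus provider and a metrics backend reachable at the placeholder address shown (`metrics_provider` and `metrics_proxy` are described later in this list):

  ```hcl
  ui_config {
    enabled          = true
    metrics_provider = "prometheus"
    metrics_proxy {
      base_url = "http://prometheus-server" # placeholder metrics backend address
    }
  }
  ```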
- - - `dir` ((#ui_config_dir)) - This specifies that the web UI should be served - from an external dir rather than the build in one. This allows for - customization or development. Replaces `ui_dir` from before 1.9.0. - Equivalent to the [`-ui-dir`](/consul/docs/agent/config/cli-flags#_ui_dir) command-line flag. - - - `content_path` ((#ui_config_content_path)) - This specifies the HTTP path - that the web UI should be served from. Defaults to `/ui/`. Equivalent to the - [`-ui-content-path`](/consul/docs/agent/config/cli-flags#_ui_content_path) flag. - - - `metrics_provider` ((#ui_config_metrics_provider)) - Specifies a named - metrics provider implementation the UI should use to fetch service metrics. - By default metrics are disabled. Consul 1.9.0 includes a built-in provider - named `prometheus` that can be enabled explicitly here. It also requires the - `metrics_proxy` to be configured below and direct queries to a Prometheus - instance that has Envoy metrics for all services in the datacenter. - - - `metrics_provider_files` ((#ui_config_metrics_provider_files)) - An optional array - of absolute paths to javascript files on the Agent's disk which will be - served as part of the UI. These files should contain metrics provider - implementations and registration enabling UI metric queries to be customized - or implemented for an alternative time-series backend. - - ~> **Security Note:** These javascript files are included in the UI with no - further validation or sand-boxing. By configuring them here the operator is - fully trusting anyone able to write to them as well as the original authors - not to include malicious code in the UI being served. - - - `metrics_provider_options_json` ((#ui_config_metrics_provider_options_json)) - - This is an optional raw JSON object as a string which is passed to the - provider implementation's `init` method at startup to allow arbitrary - configuration to be passed through. - - - `metrics_proxy` ((#ui_config_metrics_proxy)) - This object configures an - internal agent API endpoint that will proxy GET requests to a metrics - backend to allow querying metrics data in the UI. This simplifies deployment - where the metrics backend is not exposed externally to UI users' browsers. - It may also be used to augment requests with API credentials to allow - serving graphs to UI users without them needing individual access tokens for - the metrics backend. - - ~> **Security Note:** Exposing your metrics backend via Consul in this way - should be carefully considered in production. As Consul doesn't understand - the requests, it can't limit access to only specific resources. For example - **this might make it possible for a malicious user on the network to query - for arbitrary metrics about any server or workload in your infrastructure, - or overload the metrics infrastructure with queries**. See [Metrics Proxy - Security](/consul/docs/connect/observability/ui-visualization#metrics-proxy-security) - for more details. - - The following sub-keys are available: - - - `base_url` ((#ui_config_metrics_provider_base_url)) - This is required to - enable the proxy. It should be set to the base URL that the Consul agent - should proxy requests for metrics too. For example a value of - `http://prometheus-server` would target a Prometheus instance with local - DNS name "prometheus-server" on port 80. 
This may include a path prefix - which will then not be necessary in provider requests to the backend and - the proxy will prevent any access to paths without that prefix on the - backend. - - - `path_allowlist` ((#ui_config_metrics_provider_path_allowlist)) - This - specifies the paths that may be proxied to when appended to the - `base_url`. It defaults to `["/api/v1/query_range", "/api/v1/query"]` - which are the endpoints required for the built-in Prometheus provider. If - a [custom - provider](/consul/docs/connect/observability/ui-visualization#custom-metrics-providers) - is used that requires the metrics proxy, the correct allowlist must be - specified to enable proxying to the necessary endpoints. See [Path - Allowlist](/consul/docs/connect/observability/ui-visualization#path-allowlist) - for more information. - - - `add_headers` ((#ui_config_metrics_proxy_add_headers)) - This is an - optional list of headers to add to requests that are proxied to the - metrics backend. It may be used to inject Authorization tokens within the - agent without exposing those to UI users. - - Each item in the list is an object with the following keys: - - - `name` ((#ui_config_metrics_proxy_add_headers_name)) - Specifies the - HTTP header name to inject into proxied requests. - - - `value` ((#ui_config_metrics_proxy_add_headers_value)) - Specifies the - value to inject into proxied requests. - - - `dashboard_url_templates` ((#ui_config_dashboard_url_templates)) - This map - specifies URL templates that may be used to render links to external - dashboards in various contexts in the UI. It is a map with the name of the - template as a key. The value is a string URL with optional placeholders. - - Each template may contain placeholders which will be substituted for the - correct values in content when rendered in the UI. The placeholders - available are listed for each template. - - For more information and examples see [UI - Visualization](/consul/docs/connect/observability/ui-visualization#configuring-dashboard-urls). - - The following named templates are defined: - - - `service` ((#ui_config_dashboard_url_templates_service)) - This is the URL - to use when linking to the dashboard for a specific service. It is shown - as part of the [Topology - Visualization](/consul/docs/connect/observability/ui-visualization). - - The placeholders available are: - - - `{{Service.Name}}` - Replaced with the current service's name. - - `{{Service.Namespace}}` - Replaced with the current service's namespace or empty if namespaces are not enabled. - - `{{Service.Partition}}` - Replaced with the current service's admin - partition or empty if admin partitions are not enabled. - - `{{Datacenter}}` - Replaced with the current service's datacenter. - -- `ui_dir` - **This field is deprecated in Consul 1.9.0. See the [`ui_config.dir`](#ui_config_dir) field instead.** - Equivalent to the [`-ui-dir`](/consul/docs/agent/config/cli-flags#_ui_dir) command-line - flag. This configuration key is not required as of Consul version 0.7.0 and later. - Specifying this configuration key will enable the web UI. There is no need to specify - both `ui-dir` and `ui`. Specifying both will result in an error. - -## TLS Configuration Reference - -This section documents all of the configuration settings that apply to Agent TLS. Agent -TLS is used by the HTTP API, internal RPC, and gRPC/xDS interfaces. Some of these settings -may also be applied automatically by [auto_config](#auto_config) or [auto_encrypt](#auto_encrypt).
- -~> **Security Note:** The Certificate Authority (CA) configured on the internal RPC interface -(either explicitly by `tls.internal_rpc` or implicitly by `tls.defaults`) should be a private -CA, not a public one. We recommend using a dedicated CA which should not be used with any other -systems. Any certificate signed by the CA will be allowed to communicate with the cluster and a -specially crafted certificate signed by the CA can be used to gain full access to Consul. - -- `tls` Added in Consul 1.12, for previous versions see - [Deprecated Options](#tls_deprecated_options). - - - `defaults` ((#tls_defaults)) Provides default settings that will be applied - to every interface unless explicitly overridden by `tls.grpc`, `tls.https`, - or `tls.internal_rpc`. - - - `ca_file` ((#tls_defaults_ca_file)) This provides a file path to a - PEM-encoded certificate authority. The certificate authority is used to - check the authenticity of client and server connections with the - appropriate [`verify_incoming`](#tls_defaults_verify_incoming) or - [`verify_outgoing`](#tls_defaults_verify_outgoing) flags. - - - `ca_path` ((#tls_defaults_ca_path)) This provides a path to a directory - of PEM-encoded certificate authority files. These certificate authorities - are used to check the authenticity of client and server connections with - the appropriate [`verify_incoming`](#tls_defaults_verify_incoming) or - [`verify_outgoing`](#tls_defaults_verify_outgoing) flags. - - - `cert_file` ((#tls_defaults_cert_file)) This provides a file path to a - PEM-encoded certificate. The certificate is provided to clients or servers - to verify the agent's authenticity. It must be provided along with - [`key_file`](#tls_defaults_key_file). - - - `key_file` ((#tls_defaults_key_file)) This provides a the file path to a - PEM-encoded private key. The key is used with the certificate to verify - the agent's authenticity. This must be provided along with - [`cert_file`](#tls_defaults_cert_file). - - - `tls_min_version` ((#tls_defaults_tls_min_version)) This specifies the - minimum supported version of TLS. The following values are accepted: - * `TLSv1_0` - * `TLSv1_1` - * `TLSv1_2` (default) - * `TLSv1_3` - - - `verify_server_hostname` ((#tls_internal_rpc_verify_server_hostname)) When - set to true, Consul verifies the TLS certificate presented by the servers - match the hostname `server..`. By default this is false, - and Consul does not verify the hostname of the certificate, only that it - is signed by a trusted CA. - - **WARNING: TLS 1.1 and lower are generally considered less secure and - should not be used if possible.** - - The following values are also valid, but only when using the - [deprecated top-level `tls_min_version` config](#tls_deprecated_options), - and will be removed in a future release: - - * `tls10` - * `tls11` - * `tls12` - * `tls13` - - A warning message will appear if a deprecated value is specified. - - - `tls_cipher_suites` ((#tls_defaults_tls_cipher_suites)) This specifies - the list of supported ciphersuites as a comma-separated-list. Applicable - to TLS 1.2 and below only. The list of all ciphersuites supported by Consul is - available in [the TLS configuration source code](https://github.com/hashicorp/consul/search?q=%22var+goTLSCipherSuites%22). - - ~> **Note:** The ordering of cipher suites will not be guaranteed from - Consul 1.11 onwards. See this [post](https://go.dev/blog/tls-cipher-suites) - for details. 
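  For example, a `tls.defaults` stanza that raises the minimum TLS version and restricts cipher suites might look like the following sketch; confirm the exact cipher suite names against the supported list linked above before using them.

  ```hcl
  tls {
    defaults {
      tls_min_version   = "TLSv1_2"
      # Example IANA-style cipher suite names; verify against the supported list.
      tls_cipher_suites = "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
    }
  }
  ```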
- - - `verify_incoming` - ((#tls_defaults_verify_incoming)) If set to true, - Consul requires that all incoming connections make use of TLS and that - the client provides a certificate signed by a Certificate Authority from - the [`ca_file`](#tls_defaults_ca_file) or [`ca_path`](#tls_defaults_ca_path). - By default, this is false, and Consul will not enforce the use of TLS or - verify a client's authenticity. - - - `verify_outgoing` - ((#tls_defaults_verify_outgoing)) If set to true, - Consul requires that all outgoing connections from this agent make use - of TLS and that the server provides a certificate that is signed by a - Certificate Authority from the [`ca_file`](#tls_defaults_ca_file) or - [`ca_path`](#tls_defaults_ca_path). By default, this is false, and Consul - will not make use of TLS for outgoing connections. This applies to clients - and servers as both will make outgoing connections. This setting does not - apply to the gRPC interface as Consul makes no outgoing connections on this - interface. When set to true for the HTTPS interface, this parameter applies to [watches](/consul/docs/dynamic-app-config/watches), which operate by making HTTPS requests to the local agent. - - - `grpc` ((#tls_grpc)) Provides settings for the gRPC/xDS interface. To enable - the gRPC interface you must define a port via [`ports.grpc_tls`](#grpc_tls_port). - - - `ca_file` ((#tls_grpc_ca_file)) Overrides [`tls.defaults.ca_file`](#tls_defaults_ca_file). - - - `ca_path` ((#tls_grpc_ca_path)) Overrides [`tls.defaults.ca_path`](#tls_defaults_ca_path). - - - `cert_file` ((#tls_grpc_cert_file)) Overrides [`tls.defaults.cert_file`](#tls_defaults_cert_file). - - - `key_file` ((#tls_grpc_key_file)) Overrides [`tls.defaults.key_file`](#tls_defaults_key_file). - - - `tls_min_version` ((#tls_grpc_tls_min_version)) Overrides [`tls.defaults.tls_min_version`](#tls_defaults_tls_min_version). - - - `tls_cipher_suites` ((#tls_grpc_tls_cipher_suites)) Overrides [`tls.defaults.tls_cipher_suites`](#tls_defaults_tls_cipher_suites). - - - `verify_incoming` - ((#tls_grpc_verify_incoming)) Overrides [`tls.defaults.verify_incoming`](#tls_defaults_verify_incoming). - - - `use_auto_cert` - (Defaults to `false`) Enables or disables TLS on gRPC servers. Set to `true` to allow `auto_encrypt` TLS settings to apply to gRPC listeners. We recommend disabling TLS on gRPC servers if you are using `auto_encrypt` for other TLS purposes, such as enabling HTTPS. - - - `https` ((#tls_https)) Provides settings for the HTTPS interface. To enable - the HTTPS interface you must define a port via [`ports.https`](#https_port). - - - `ca_file` ((#tls_https_ca_file)) Overrides [`tls.defaults.ca_file`](#tls_defaults_ca_file). - - - `ca_path` ((#tls_https_ca_path)) Overrides [`tls.defaults.ca_path`](#tls_defaults_ca_path). - - - `cert_file` ((#tls_https_cert_file)) Overrides [`tls.defaults.cert_file`](#tls_defaults_cert_file). - - - `key_file` ((#tls_https_key_file)) Overrides [`tls.defaults.key_file`](#tls_defaults_key_file). - - - `tls_min_version` ((#tls_https_tls_min_version)) Overrides [`tls.defaults.tls_min_version`](#tls_defaults_tls_min_version). - - - `tls_cipher_suites` ((#tls_https_tls_cipher_suites)) Overrides [`tls.defaults.tls_cipher_suites`](#tls_defaults_tls_cipher_suites). - - - `verify_incoming` - ((#tls_https_verify_incoming)) Overrides [`tls.defaults.verify_incoming`](#tls_defaults_verify_incoming). - - - `verify_outgoing` - ((#tls_https_verify_outgoing)) Overrides [`tls.defaults.verify_outgoing`](#tls_defaults_verify_outgoing). 
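  To illustrate the override behavior, the following sketch applies strict defaults but relaxes `verify_incoming` on the HTTPS interface only; the file paths are placeholders.

  ```hcl
  tls {
    defaults {
      ca_file         = "/etc/consul.d/tls/consul-agent-ca.pem" # placeholder path
      cert_file       = "/etc/consul.d/tls/server.pem"          # placeholder path
      key_file        = "/etc/consul.d/tls/server-key.pem"      # placeholder path
      verify_incoming = true
      verify_outgoing = true
    }
    https {
      # Allow UI or API clients that do not present client certificates.
      verify_incoming = false
    }
  }
  ```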
- - - `internal_rpc` ((#tls_internal_rpc)) Provides settings for the internal - "server" RPC interface configured by [`ports.server`](#server_rpc_port). - - - `ca_file` ((#tls_internal_rpc_ca_file)) Overrides [`tls.defaults.ca_file`](#tls_defaults_ca_file). - - - `ca_path` ((#tls_internal_rpc_ca_path)) Overrides [`tls.defaults.ca_path`](#tls_defaults_ca_path). - - - `cert_file` ((#tls_internal_rpc_cert_file)) Overrides [`tls.defaults.cert_file`](#tls_defaults_cert_file). - - - `key_file` ((#tls_internal_rpc_key_file)) Overrides [`tls.defaults.key_file`](#tls_defaults_key_file). - - - `tls_min_version` ((#tls_internal_rpc_tls_min_version)) Overrides [`tls.defaults.tls_min_version`](#tls_defaults_tls_min_version). - - - `tls_cipher_suites` ((#tls_internal_rpc_tls_cipher_suites)) Overrides [`tls.defaults.tls_cipher_suites`](#tls_defaults_tls_cipher_suites). - - - `verify_incoming` - ((#tls_internal_rpc_verify_incoming)) Overrides [`tls.defaults.verify_incoming`](#tls_defaults_verify_incoming). - - ~> **Security Note:** `verify_incoming` *must* be set to true to prevent - anyone with access to the internal RPC port from gaining full access to - the Consul cluster. - - - `verify_outgoing` ((#tls_internal_rpc_verify_outgoing)) Overrides [`tls.defaults.verify_outgoing`](#tls_defaults_verify_outgoing). - - ~> **Security Note:** Servers that specify `verify_outgoing = true` will - always talk to other servers over TLS, but they still _accept_ non-TLS - connections to allow for a transition of all clients to TLS. Currently the - only way to enforce that no client can communicate with a server unencrypted - is to also enable `verify_incoming` which requires client certificates too. - - - `verify_server_hostname` Overrides [tls.defaults.verify_server_hostname](#tls_defaults_verify_server_hostname). When - set to true, Consul verifies the TLS certificate presented by the servers - match the hostname `server..`. By default this is false, - and Consul does not verify the hostname of the certificate, only that it - is signed by a trusted CA. - - ~> **Security Note:** `verify_server_hostname` *must* be set to true to prevent a - compromised client from gaining full read and write access to all cluster - data *including all ACL tokens and service mesh CA root keys*. - -- `server_name` When provided, this overrides the [`node_name`](#_node) - for the TLS certificate. It can be used to ensure that the certificate name matches - the hostname we declare. - -### Deprecated Options ((#tls_deprecated_options)) - -The following options were deprecated in Consul 1.12, please use the -[`tls`](#tls-1) stanza instead. - -- `ca_file` See: [`tls.defaults.ca_file`](#tls_defaults_ca_file). - -- `ca_path` See: [`tls.defaults.ca_path`](#tls_defaults_ca_path). - -- `cert_file` See: [`tls.defaults.cert_file`](#tls_defaults_cert_file). - -- `key_file` See: [`tls.defaults.key_file`](#tls_defaults_key_file). - -- `tls_min_version` Added in Consul 0.7.4. - See: [`tls.defaults.tls_min_version`](#tls_defaults_tls_min_version). - -- `tls_cipher_suites` Added in Consul 0.8.2. - See: [`tls.defaults.tls_cipher_suites`](#tls_defaults_tls_cipher_suites). - -- `tls_prefer_server_cipher_suites` Added in Consul 0.8.2. This setting will - be ignored (see [this post](https://go.dev/blog/tls-cipher-suites) for details). - -- `verify_incoming` See: [`tls.defaults.verify_incoming`](#tls_defaults_verify_incoming). - -- `verify_incoming_rpc` See: [`tls.internal_rpc.verify_incoming`](#tls_internal_rpc_verify_incoming). 
- -- `verify_incoming_https` See: [`tls.https.verify_incoming`](#tls_https_verify_incoming). - -- `verify_outgoing` See: [`tls.defaults.verify_outgoing`](#tls_defaults_verify_outgoing). - -- `verify_server_hostname` See: [`tls.internal_rpc.verify_server_hostname`](#tls_internal_rpc_verify_server_hostname). - -### Example Configuration File, with TLS - -~> **Security Note:** all three verify options should be set as `true` to enable -secure mTLS communication, enabling both encryption and authentication. Failing -to set [`verify_incoming`](#tls_defaults_verify_incoming) or -[`verify_outgoing`](#tls_defaults_verify_outgoing) either in the -interface-specific stanza (e.g. `tls.internal_rpc`, `tls.https`) or in -`tls.defaults` will result in TLS not being enabled at all, even when specifying -a [`ca_file`](#tls_defaults_ca_file), [`cert_file`](#tls_defaults_cert_file), -and [`key_file`](#tls_defaults_key_file). - -See, especially, the use of the `ports` setting highlighted below. - - - - - -```hcl -datacenter = "east-aws" -data_dir = "/opt/consul" -log_level = "INFO" -node_name = "foobar" -server = true - -addresses = { - https = "0.0.0.0" -} -ports { - https = 8501 -} - -tls { - defaults { - key_file = "/etc/pki/tls/private/my.key" - cert_file = "/etc/pki/tls/certs/my.crt" - ca_file = "/etc/pki/tls/certs/ca-bundle.crt" - verify_incoming = true - verify_outgoing = true - verify_server_hostname = true - } -} -``` - - - - - -```json -{ - "datacenter": "east-aws", - "data_dir": "/opt/consul", - "log_level": "INFO", - "node_name": "foobar", - "server": true, - "addresses": { - "https": "0.0.0.0" - }, - "ports": { - "https": 8501 - }, - "tls": { - "defaults": { - "key_file": "/etc/pki/tls/private/my.key", - "cert_file": "/etc/pki/tls/certs/my.crt", - "ca_file": "/etc/pki/tls/certs/ca-bundle.crt", - "verify_incoming": true, - "verify_outgoing": true, - "verify_server_hostname": true - } - } -} -``` - - - - - -Consul will not enable TLS for the HTTP or gRPC API unless the `https` port has -been assigned a port number `> 0`. We recommend using `8501` for `https` as this -default will automatically work with some tooling. - -## xDS Server Parameters - -- `xds`: This object allows you to configure the behavior of Consul's -[xDS protocol](https://www.envoyproxy.io/docs/envoy/latest/api-docs/xds_protocol) -server. - - - `update_max_per_second`: Specifies the number of proxy configuration updates across all connected xDS streams that are allowed per second. This configuration prevents updates to global resources, such as wildcard intentions, from consuming system resources at the expense of other processes, such as Raft and Gossip, which could cause general cluster instability. - - The default value is `250`. It is based on a load test of 5,000 streams connected to a single server with two CPU cores. - - If necessary, you can lower or increase the limit without a rolling restart by using the `consul reload` command or by sending the server a `SIGHUP`. diff --git a/website/content/docs/agent/config/index.mdx b/website/content/docs/agent/config/index.mdx deleted file mode 100644 index c620ef72e05a..000000000000 --- a/website/content/docs/agent/config/index.mdx +++ /dev/null @@ -1,95 +0,0 @@ ---- -layout: docs -page_title: Agents - Configuration Explained -description: >- - Agent configuration is the process of defining server and client agent properties with CLI flags and configuration files. Learn what properties can be configured on reload and how Consul sets precedence for configuration settings. 
---- - -# Agent Configuration - -The agent has various configuration options that can be specified via -the command-line or via configuration files. All of the configuration -options are completely optional. Defaults are specified with their -descriptions. - -Configuration precedence is evaluated in the following order: - -1. [Command line arguments](/consul/docs/agent/config/cli-flags) -2. [Configuration files](/consul/docs/agent/config/config-files) - -When loading configuration, the Consul agent loads the configuration from files and -directories in lexical order. For example, configuration file -`basic_config.json` will be processed before `extra_config.json`. Configuration -can be in either [HCL](https://github.com/hashicorp/hcl#syntax) or JSON format. -Available in Consul 1.0 and later, the HCL support now requires an `.hcl` or -`.json` extension on all configuration files in order to specify their format. - -Configuration specified later will be merged into configuration specified -earlier. In most cases, "merge" means that the later version will override the -earlier. In some cases, such as event handlers, merging appends the handlers to -the existing configuration. The exact merging behavior is specified for each -option below. - -The Consul agent also supports reloading configuration when it receives the -SIGHUP signal. Not all changes are respected, but those that are -documented below in the -[Reloadable Configuration](#reloadable-configuration) section. The -[reload command](/consul/commands/reload) can also be used to trigger a -configuration reload. - -You can test the following configuration options by following the -[Get Started](/consul/tutorials/get-started-vms?utm_source=docs) -tutorials to install an agent in a VM. - -## Ports Used - -Consul requires up to 6 different ports to work properly, some on -TCP, UDP, or both protocols. - -Review the [required ports](/consul/docs/install/ports) table for a list of -required ports and their default settings. - -## Reloadable Configuration - -Some agent configuration options are reloadable at runtime. -You can run the [`consul reload` command](/consul/commands/reload) to manually reload supported options from configuration files in the configuration directory. -To configure the agent to automatically reload configuration files updated on disk, -set the [`auto_reload_config` configuration option](/consul/docs/agent/config/config-files#auto_reload_config) parameter to `true`. - -The following agent configuration options are reloadable at runtime: -- ACL Tokens -- [Configuration Entry Bootstrap](/consul/docs/agent/config/config-files#config_entries_bootstrap) -- Checks -- [Discard Check Output](/consul/docs/agent/config/config-files#discard_check_output) -- HTTP Client Address -- Log level -- [Metric Prefix Filter](/consul/docs/agent/config/config-files#telemetry-prefix_filter) -- [Node Metadata](/consul/docs/agent/config/config-files#node_meta) -- Some Raft options (since Consul 1.10.0) - - [`raft_snapshot_threshold`](/consul/docs/agent/config/config-files#_raft_snapshot_threshold) - - [`raft_snapshot_interval`](/consul/docs/agent/config/config-files#_raft_snapshot_interval) - - [`raft_trailing_logs`](/consul/docs/agent/config/config-files#_raft_trailing_logs) - - These can be important in certain outage situations so being able to control - them without a restart provides a recovery path that doesn't involve - downtime. They generally shouldn't be changed otherwise. 
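As a sketch, these reloadable Raft options appear as top-level keys in a server's configuration file; the values shown are simply the current documented defaults.

```hcl
# Documented defaults; reloadable with `consul reload` or SIGHUP since Consul 1.10.0.
raft_snapshot_threshold = 16384
raft_snapshot_interval  = "30s"
raft_trailing_logs      = 10000
```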
-- [RPC rate limits](/consul/docs/agent/config/config-files#limits) -- [Reporting](/consul/docs/agent/config/config-files#reporting) -- [HTTP Maximum Connections per Client](/consul/docs/agent/config/config-files#http_max_conns_per_client) -- Services -- TLS Configuration - - Please be aware that this is currently limited to reload a configuration that is already TLS enabled. You cannot enable or disable TLS only with reloading. - - To avoid a potential security issue, the following TLS configuration parameters do not automatically reload when [-auto-reload-config](/consul/docs/agent/config/cli-flags#_auto_reload_config) is enabled: - - [encrypt_verify_incoming](/consul/docs/agent/config/config-files#encrypt_verify_incoming) - - [verify_incoming](/consul/docs/agent/config/config-files#verify_incoming) - - [verify_incoming_rpc](/consul/docs/agent/config/config-files#verify_incoming_rpc) - - [verify_incoming_https](/consul/docs/agent/config/config-files#verify_incoming_https) - - [verify_outgoing](/consul/docs/agent/config/config-files#verify_outgoing) - - [verify_server_hostname](/consul/docs/agent/config/config-files#verify_server_hostname) - - [ca_file](/consul/docs/agent/config/config-files#ca_file) - - [ca_path](/consul/docs/agent/config/config-files#ca_path) - - If any of those configurations are changed while [-auto-reload-config](/consul/docs/agent/config/cli-flags#_auto_reload_config) is enabled, - Consul will issue the following warning, `Static Runtime config has changed and need a manual config reload to be applied`. - You must manually issue the `consul reload` command or send a `SIGHUP` to the Consul process to reload the new values. -- Watches -- [License](/consul/docs/enterprise/license/overview) diff --git a/website/content/docs/agent/index.mdx b/website/content/docs/agent/index.mdx deleted file mode 100644 index 468e9087c2ae..000000000000 --- a/website/content/docs/agent/index.mdx +++ /dev/null @@ -1,464 +0,0 @@ ---- -layout: docs -page_title: Agents Overview -description: >- - Agents maintain register services, respond to queries, maintain datacenter membership information, and make most of Consul’s functions possible. Learn how to start, stop, and configure agents, as well as their requirements and lifecycle. ---- - -# Agents Overview - -This topic provides an overview of the Consul agent, which is the core process of Consul. -The agent maintains membership information, registers services, runs checks, responds to queries, and more. -The agent must run on every node that is part of a Consul cluster. - -Agents run in either client or server mode. Client nodes are lightweight processes that make up the majority of the cluster. -They interface with the server nodes for most operations and maintain very little state of their own. -Clients run on every node where services are running. - -In addition to the core agent operations, server nodes participate in the [consensus quorum](/consul/docs/architecture/consensus). -The quorum is based on the Raft protocol, which provides strong consistency and availability in the case of failure. -Server nodes should run on dedicated instances because they are more resource intensive than client nodes. - -## Lifecycle - -Every agent in the Consul cluster goes through a lifecycle. -Understanding the lifecycle is useful for building a mental model of an agent's interactions with a cluster and how the cluster treats a node. -The following process describes the agent lifecycle within the context of an existing cluster: - -1. 
**An agent is started** either manually or through an automated or programmatic process. - Newly-started agents are unaware of other nodes in the cluster. -1. **An agent joins a cluster**, which enables the agent to discover agent peers. - Agents join clusters on startup when the [`join`](/consul/commands/join) command is issued or according to the [auto-join configuration](/consul/docs/install/cloud-auto-join). -1. **Information about the agent is gossiped to the entire cluster**. - As a result, all nodes will eventually become aware of each other. -1. **Existing servers will begin replicating to the new node** if the agent is a server. - -### Failures and crashes - -In the event of a network failure, some nodes may be unable to reach other nodes. -Unreachable nodes will be marked as _failed_. - -Distinguishing between a network failure and an agent crash is impossible. -As a result, agent crashes are handled in the same manner as network failures. - -Once a node is marked as failed, this information is updated in the service -catalog. - --> **Note:** Updating the catalog is only possible if the servers can still [form a quorum](/consul/docs/architecture/consensus). -Once the network recovers or a crashed agent restarts, the cluster will repair itself and unmark a node as failed. -The health check in the catalog will also be updated to reflect the current state. - -### Exiting nodes - -When a node leaves a cluster, it communicates its intent and the cluster marks the node as having _left_. -In contrast to changes related to failures, all of the services provided by a node are immediately deregistered. -If a server agent leaves, replication to the exiting server will stop. - -To prevent an accumulation of dead nodes (nodes in either _failed_ or _left_ -states), Consul will automatically remove dead nodes from the catalog. This -process is called _reaping_. This is currently done on a configurable -interval of 72 hours (changing the reap interval is _not_ recommended due to -its consequences during outage situations). Reaping is similar to leaving, -causing all associated services to be deregistered. - -## Limit traffic rates -You can define a set of rate limiting configurations that help operators protect Consul servers from excessive or peak usage. The configurations enable you to gracefully degrade Consul servers to avoid a global interruption of service. Consul supports global server rate limiting, which lets you configure Consul servers to deny requests that exceed the read or write limits. Refer to [Traffic Rate Limits Overview](/consul/docs/agent/limits) for more information. - -## Requirements - -You should run one Consul agent per server or host. -Instances of Consul can run in separate VMs or as separate containers. -At least one server agent per Consul deployment is required, but three to five server agents are recommended. -Refer to the following sections for information about host, port, memory, and other requirements: - -- [Server Performance](/consul/docs/install/performance) -- [Required Ports](/consul/docs/install/ports) - -The [Datacenter Deploy tutorial](/consul/tutorials/production-deploy/reference-architecture#deployment-system-requirements) contains additional information, including licensing configuration, environment variables, and other details. - -### Maximum latency network requirements - -Consul uses the gossip protocol to share information across agents. To function properly, you cannot exceed the protocol's maximum latency threshold.
The latency threshold is calculated according to the total round trip time (RTT) for communication between all agents. Other network usages outside of Gossip are not bound by these latency requirements (i.e. client to server RPCs, HTTP API requests, xDS proxy configuration, DNS). - -For data sent between all Consul agents the following latency requirements must be met: - -- Average RTT for all traffic cannot exceed 50ms. -- RTT for 99 percent of traffic cannot exceed 100ms. - -## Starting the Consul agent - -Start a Consul agent with the `consul` command and `agent` subcommand using the following syntax: - -```shell-session -$ consul agent -``` - -Consul ships with a `-dev` flag that configures the agent to run in server mode and several additional settings that enable you to quickly get started with Consul. -The `-dev` flag is provided for learning purposes only. -We strongly advise against using it for production environments. - --> **Getting Started Tutorials**: You can test a local agent in a VM by following the -[Get Started tutorials](/consul/tutorials/get-started-vms?utm_source=docs). - -When starting Consul with the `-dev` flag, the only additional information Consul needs to run is the location of a directory for storing agent state data. -You can specify the location with the `-data-dir` flag or define the location in an external file and point the file with the `-config-file` flag. - -You can also point to a directory containing several configuration files with the `-config-dir` flag. -This enables you to logically group configuration settings into separate files. See [Configuring Consul Agents](/consul/docs/agent#configuring-consul-agents) for additional information. - -The following example starts an agent in dev mode and stores agent state data in the `tmp/consul` directory: - -```shell-session -$ consul agent -data-dir=tmp/consul -dev -``` - -Agents are highly configurable, which enables you to deploy Consul to any infrastructure. Many of the default options for the `agent` command are suitable for becoming familiar with a local instance of Consul. In practice, however, several additional configuration options must be specified for Consul to function as expected. Refer to [Agent Configuration](/consul/docs/agent/config) topic for a complete list of configuration options. - -### Understanding the agent startup output - -Consul prints several important messages on startup. -The following example shows output from the [`consul agent`](/consul/commands/agent) command: - -```shell-session -$ consul agent -data-dir=/tmp/consul -==> Starting Consul agent... -==> Consul agent running! - Node name: 'Armons-MacBook-Air' - Datacenter: 'dc1' - Server: false (bootstrap: false) - Client Addr: 127.0.0.1 (HTTP: 8500, DNS: 8600) - Cluster Addr: 192.168.1.43 (LAN: 8301, WAN: 8302) - -==> Log data will now stream in as it occurs: - - [INFO] serf: EventMemberJoin: Armons-MacBook-Air.local 192.168.1.43 -... -``` - -- **Node name**: This is a unique name for the agent. By default, this - is the hostname of the machine, but you may customize it using the - [`-node`](/consul/docs/agent/config/cli-flags#_node) flag. - -- **Datacenter**: This is the datacenter in which the agent is configured to - run. For single-DC configurations, the agent will default to `dc1`, but you can configure which datacenter the agent reports to with the [`-datacenter`](/consul/docs/agent/config/cli-flags#_datacenter) flag. 
- Consul has first-class support for multiple datacenters, but configuring each node to report its datacenter improves agent efficiency. - -- **Server**: This indicates whether the agent is running in server or client - mode. - Running an agent in server mode requires additional overhead. This is because they participate in the consensus quorum, store cluster state, and handle queries. A server may also be - in ["bootstrap"](/consul/docs/agent/config/cli-flags#_bootstrap_expect) mode, which enables the server to elect itself as the Raft leader. Multiple servers cannot be in bootstrap mode because it would put the cluster in an inconsistent state. - -- **Client Addr**: This is the address used for client interfaces to the agent. - This includes the ports for the HTTP and DNS interfaces. By default, this - binds only to localhost. If you change this address or port, you'll have to - specify a `-http-addr` whenever you run commands such as - [`consul members`](/consul/commands/members) to indicate how to reach the - agent. Other applications can also use the HTTP address and port - [to control Consul](/consul/api-docs). - -- **Cluster Addr**: This is the address and set of ports used for communication - between Consul agents in a cluster. Not all Consul agents in a cluster have to - use the same port, but this address **MUST** be reachable by all other nodes. - -When running under `systemd` on Linux, Consul notifies systemd by sending -`READY=1` to the `$NOTIFY_SOCKET` when a LAN join has completed. For -this either the `join` or `retry_join` option has to be set and the -service definition file has to have `Type=notify` set. - -## Configuring Consul agents - -You can specify many options to configure how Consul operates when issuing the `consul agent` command. -You can also create one or more configuration files and provide them to Consul at startup using either the `-config-file` or `-config-dir` option. -Configuration files must be written in either JSON or HCL format. - --> **Consul Terminology**: Configuration files are sometimes called "service definition" files when they are used to configure client agents. -This is because clients are most commonly used to register services in the Consul catalog. - -The following example starts a Consul agent that takes configuration settings from a file called `server.json` located in the current working directory: - -```shell-session hideClipboard -$ consul agent -config-file=server.json -``` - -The configuration options necessary to successfully use Consul depend on several factors, including the type of agent you are configuring (client or server), the type of environment you are deploying to (e.g., on-premise, multi-cloud, etc.), and the security options you want to implement (ACLs, gRPC encryption). -The following examples are intended to help you understand some of the combinations you can implement to configure Consul. 
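
For instance, a minimal sketch of pointing the agent at a directory of configuration files might look like the following. The `/etc/consul.d` path is only an assumed example; substitute the directory that holds your `.hcl` or `.json` files.

```shell-session
$ consul agent -config-dir=/etc/consul.d
```

Files in the directory are loaded in lexical order, so settings in later files are merged into, and generally override, settings from earlier ones.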
- -### Common configuration settings - -The following settings are commonly used in the configuration file (also called a service definition file when registering services with Consul) to configure Consul agents: - -| Parameter | Description | Default | -| ------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------- | -| `node_name` | String value that specifies a name for the agent node.
See [`-node`](/consul/docs/agent/config/cli-flags#_node) for details. | Hostname of the machine | -| `server` | Boolean value that determines if the agent runs in server mode.
See [`-server`](/consul/docs/agent/config/cli-flags#_server) for details. | `false` | -| `datacenter` | String value that specifies which datacenter the agent runs in.
See [`-datacenter`](/consul/docs/agent/config/cli-flags#_datacenter) for details. | `dc1` | -| `data_dir` | String value that specifies a directory for storing agent state data.
See [`-data-dir`](/consul/docs/agent/config/cli-flags#_data_dir) for details. | none | -| `log_level` | String value that specifies the level of logging the agent reports.
See [`-log-level`](/consul/docs/agent/config/cli-flags#_log_level) for details. | `info` | -| `retry_join` | Array of string values that specify one or more agent addresses to join after startup. The agent will continue trying to join the specified agents until it has successfully joined another member.
See [`-retry-join`](/consul/docs/agent/config/cli-flags#_retry_join) for details. | none | -| `addresses` | Block of nested objects that define addresses bound to the agent for internal cluster communication. | `"http": "0.0.0.0"` See the Agent Configuration page for [default address values](/consul/docs/agent/config/config-files#addresses) | -| `ports` | Block of nested objects that define ports bound to agent addresses.
See (link to addresses option) for details. | See the Agent Configuration page for [default port values](/consul/docs/agent/config/config-files#ports) | - -### Server node in a service mesh - -The following example configuration is for a server agent named "`consul-server`". The server is [bootstrapped](/consul/docs/agent/config/cli-flags#_bootstrap) and the Consul GUI is enabled. -The reason this server agent is configured for a service mesh is that the `connect` configuration is enabled. The connect subsystem provides Consul's service mesh capabilities, including service-to-service connection authorization and encryption using mutual Transport Layer Security (TLS). Applications can use sidecar proxies in a service mesh configuration to establish TLS connections for inbound and outbound connections without being aware of Consul service mesh at all. Refer to [Consul Service Mesh](/consul/docs/connect) for details. - - - -```hcl -node_name = "consul-server" -server = true -bootstrap = true -ui_config { - enabled = true -} -datacenter = "dc1" -data_dir = "consul/data" -log_level = "INFO" -addresses { - http = "0.0.0.0" -} -connect { - enabled = true -} -``` - - -```json -{ - "node_name": "consul-server", - "server": true, - "bootstrap": true, - "ui_config": { - "enabled": true - }, - "datacenter": "dc1", - "data_dir": "consul/data", - "log_level": "INFO", - "addresses": { - "http": "0.0.0.0" - }, - "connect": { - "enabled": true - } -} -``` - - - -### Server node with encryption enabled - -The following example shows a server node configured with encryption enabled. -Refer to the [Security](/consul/docs/security) chapter for additional information about how to configure security options for Consul. - - - -```hcl -node_name = "consul-server" -server = true -ui_config { - enabled = true -} -data_dir = "consul/data" -addresses { - http = "0.0.0.0" -} -retry_join = [ - "consul-server2", - "consul-server3" -] -encrypt = "aPuGh+5UDskRAbkLaXRzFoSOcSM+5vAK+NEYOWHJH7w=" - -tls { - defaults { - verify_incoming = true - verify_outgoing = true - ca_file = "/consul/config/certs/consul-agent-ca.pem" - cert_file = "/consul/config/certs/dc1-server-consul-0.pem" - key_file = "/consul/config/certs/dc1-server-consul-0-key.pem" - verify_server_hostname = true - } -} - -``` - - -```json -{ - "node_name": "consul-server", - "server": true, - "ui_config": { - "enabled": true - }, - "data_dir": "consul/data", - "addresses": { - "http": "0.0.0.0" - }, - "retry_join": ["consul-server1", "consul-server2"], - "encrypt": "aPuGh+5UDskRAbkLaXRzFoSOcSM+5vAK+NEYOWHJH7w=", - "tls": { - "defaults": { - "verify_incoming": true, - "verify_outgoing": true, - "ca_file": "/consul/config/certs/consul-agent-ca.pem", - "cert_file": "/consul/config/certs/dc1-server-consul-0.pem", - "key_file": "/consul/config/certs/dc1-server-consul-0-key.pem" - }, - "internal_rpc": { - "verify_server_hostname": true - } - } -} -``` - - - -### Client node registering a service - -Using Consul as a central service registry is a common use case. -The following example configuration includes common settings to register a service with a Consul agent and enable health checks. Refer to [Define Health Checks](/consul/docs/services/usage/checks) to learn more about health checks. 
- - - -```hcl -node_name = "consul-client" -server = false -datacenter = "dc1" -data_dir = "consul/data" -log_level = "INFO" -retry_join = ["consul-server"] -service { - id = "dns" - name = "dns" - tags = ["primary"] - address = "localhost" - port = 8600 - check { - id = "dns" - name = "Consul DNS TCP on port 8600" - tcp = "localhost:8600" - interval = "10s" - timeout = "1s" - } -} - -``` - -```json -{ - "node_name": "consul-client", - "server": false, - "datacenter": "dc1", - "data_dir": "consul/data", - "log_level": "INFO", - "retry_join": ["consul-server"], - "service": { - "id": "dns", - "name": "dns", - "tags": ["primary"], - "address": "localhost", - "port": 8600, - "check": { - "id": "dns", - "name": "Consul DNS TCP on port 8600", - "tcp": "localhost:8600", - "interval": "10s", - "timeout": "1s" - } - } -} -``` - - - -## Client node with multiple interfaces or IP addresses - -The following example shows how to configure Consul to listen on multiple interfaces or IP addresses using a [go-sockaddr template]. - -The `bind_addr` is used for internal RPC and Serf communication ([read the Agent Configuration for more information](/consul/docs/agent/config/config-files#bind_addr)). - -The `client_addr` configuration specifies IP addresses used for HTTP, HTTPS, DNS and gRPC servers. ([read the Agent Configuration for more information](/consul/docs/agent/config/config-files#client_addr)). - - - -```hcl -node_name = "consul-client" -server = false -bootstrap = true -ui_config { - enabled = true -} -datacenter = "dc1" -data_dir = "consul/data" -log_level = "INFO" - -# used for internal RPC and Serf -bind_addr = "0.0.0.0" - -# Used for HTTP, HTTPS, DNS, and gRPC addresses. -# loopback is not included in GetPrivateInterfaces because it is not routable. -client_addr = "{{ GetPrivateInterfaces | exclude \"type\" \"ipv6\" | join \"address\" \" \" }} {{ GetAllInterfaces | include \"flags\" \"loopback\" | join \"address\" \" \" }}" - -# advertises gossip and RPC interface to other nodes -advertise_addr = "{{ GetInterfaceIP \"en0\" }}" -``` - -```json -{ - "node_name": "consul-client", - "server": false, - "bootstrap": true, - "ui_config": { - "enabled": true - }, - "datacenter": "dc1", - "data_dir": "consul/data", - "log_level": "INFO", - "bind_addr": "{{ GetPrivateIP }}", - "client_addr": "{{ GetPrivateInterfaces | exclude \"type\" \"ipv6\" | join \"address\" \" \" }} {{ GetAllInterfaces | include \"flags\" \"loopback\" | join \"address\" \" \" }}", - "advertise_addr": "{{ GetInterfaceIP \"en0\"}}" -} -``` - - - -## Stopping an agent - -An agent can be stopped in two ways: gracefully or forcefully. Servers and -Clients both behave differently depending on the leave that is performed. There -are two potential states a process can be in after a system signal is sent: -_left_ and _failed_. - -To gracefully halt an agent, send the process an _interrupt signal_ (usually -`Ctrl-C` from a terminal, or running `kill -INT consul_pid` ). For more -information on different signals sent by the `kill` command, see -[here](https://www.linux.org/threads/kill-signals-and-commands-revised.11625/) - -When a Client is gracefully exited, the agent first notifies the cluster it -intends to leave the cluster. This way, other cluster members notify the -cluster that the node has _left_. - -When a Server is gracefully exited, the server will not be marked as _left_. -This is to minimally impact the consensus quorum. Instead, the Server will be -marked as _failed_. 
To remove a server from the cluster, the -[`force-leave`](/consul/commands/force-leave) command is used. Using -`force-leave` will put the server instance in a _left_ state so long as the -Server agent is not alive. - -Alternatively, you can forcibly stop an agent by sending it a -`kill -KILL consul_pid` signal. This will stop any agent immediately. The rest -of the cluster will eventually (usually within seconds) detect that the node has -died and notify the cluster that the node has _failed_. - -For client agents, the difference between a node _failing_ and a node _leaving_ -may not be important for your use case. For example, for a web server and load -balancer setup, both result in the same outcome: the web node is removed -from the load balancer pool. - -The [`skip_leave_on_interrupt`](/consul/docs/agent/config/config-files#skip_leave_on_interrupt) and -[`leave_on_terminate`](/consul/docs/agent/config/config-files#leave_on_terminate) configuration -options allow you to adjust this behavior. - - - -[go-sockaddr template]: https://godoc.org/github.com/hashicorp/go-sockaddr/template diff --git a/website/content/docs/agent/limits/index.mdx b/website/content/docs/agent/limits/index.mdx deleted file mode 100644 index ecf6deac49bd..000000000000 --- a/website/content/docs/agent/limits/index.mdx +++ /dev/null @@ -1,61 +0,0 @@ ---- -layout: docs -page_title: Limit Traffic Rates Overview -description: Rate limiting is a set of Consul server agent configurations that you can use to mitigate the risks to Consul servers when clients send excessive requests to Consul resources. ---- - -# Traffic rate limiting overview - - -This topic provides overview information about the traffic rates limits you can configure for Consul datacenters. - -## Introduction - -Configuring rate limits on RPC and gRPC traffic mitigates the risks to Consul servers when client agents or services send excessive read or write requests to Consul resources. A _read_ request is defined as any request that does not modify Consul internal state. A _write_ request is defined as any request that modifies Consul internal state. Configure read and write request limits independently. - -## Workflow - -You can set global limits on the rate of read and write requests that affect individual servers in the datacenter. You can set limits for all source IP addresses, which enables you to specify a budget for read and write requests to prevent any single source IP from overwhelming the Consul server and negatively affecting the network. The following steps describe the general process for setting global read and write rate limits: - -1. Set arbitrary limits to begin understanding the upper boundary of RPC and gRPC loads in your network. Refer to [Initialize rate limit settings](/consul/docs/agent/limits/usage/init-rate-limits) for additional information. - -1. Monitor the metrics and logs and readjust the initial configurations as necessary. Refer to [Monitor rate limit data](/consul/docs/agent/limits/usage/monitor-rate-limits) - -1. Define your final operational limits based on your observations. If you are defining global rate limits, refer to [Set global traffic rate limits](/consul/docs/agent/limits/usage/set-global-traffic-rate-limits) for additional information. For information about setting limits per source IP address, refer to [Limit traffic rates for a source IP](/consul/docs/agent/limits/usage/limit-request-rates-from-ips). - - -Setting limits per source IP requires Consul Enterprise. 
- - -### Order of operations - -You can define request rate limits in the agent configuration and in the control plane request limit configuration entry. The configuration entry also supports rate limit configurations for Consul resources. Consul performs the following order of operations when determining request rate limits: - -![Flowchart that describes the order of operations for determining request rate limits.](/img/agent-rate-limiting-ops-order.jpg#light-theme-only) -![Flowchart that describes the order of operations for determining request rate limits.](/img/agent-rate-limiting-ops-order-dark.jpg#dark-theme-only) - - - -## Kubernetes - -To define global rate limits, configure the `request_limits` settings in the Consul Helm chart. Refer to the [Helm chart reference](/consul/docs/k8s/helm) for additional information. Refer to the [control plane request limit configuration entry reference](/consul/docs/connect/config-entries/control-plane-request-limit) for information about applying a CRD for limiting traffic rates from source IPs. diff --git a/website/content/docs/agent/limits/usage/init-rate-limits.mdx b/website/content/docs/agent/limits/usage/init-rate-limits.mdx deleted file mode 100644 index 1c84ca4f6e58..000000000000 --- a/website/content/docs/agent/limits/usage/init-rate-limits.mdx +++ /dev/null @@ -1,31 +0,0 @@ ---- -layout: docs -page_title: Initialize rate limit settings -description: Learn how to determine regular and peak loads in your network so that you can set the initial global rate limit configurations. ---- - -# Initialize rate limit settings - -Because each network has different needs and application, you need to find out what the regular and peak loads in your network are before you set traffic limits. We recommend completing the following steps to benchmark request rates in your environment so that you can implement limits appropriate for your applications. - -1. In the agent configuration file, specify a global rate limit with arbitrary values based on the following conditions: - - - Environment where Consul servers are running - - Number of servers and the projected load - - Existing metrics expressing requests per second - -1. Set the [`limits.request_limits.mode`](/consul/docs/agent/config/config-files#mode-1) parameter in the agent configuration to `permissive`. In the following example, the configuration allows up to 1000 reads and 500 writes per second for each Consul agent: - - ```hcl - request_limits { - mode = "permissive" - read_rate = 1000.0 - write_rate = 500.0 - } - ``` -1. Observe the logs and metrics for your application's typical cycle, such as a 24 hour period. Refer to [Monitor traffic rate limit data](/consul/docs/agent/limits/usage/monitor-rate-limits) for additional information. Call the [`/agent/metrics`](/consul/api-docs/agent#view-metrics) HTTP API endpoint and check the data for the following metrics: - - - `rpc.rate_limit.exceeded` with value `global/read` for label `limit_type` - - `rpc.rate_limit.exceeded` with value `global/write` for label `limit_type` - -1. If the limits are not reached, set the `mode` configuration to `enforcing`. Otherwise, continue to adjust and iterate until you find your network's unique limits. 
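
As an illustration only, once the observed traffic stays within your thresholds, the final configuration might mirror the example above with the mode switched to `enforcing`. The rates shown are placeholders; use the values you validated for your own workload.

```hcl
# Enforce the limits validated during the permissive phase.
request_limits {
  mode       = "enforcing"
  read_rate  = 1000.0
  write_rate = 500.0
}
```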
\ No newline at end of file diff --git a/website/content/docs/agent/limits/usage/limit-request-rates-from-ips.mdx b/website/content/docs/agent/limits/usage/limit-request-rates-from-ips.mdx deleted file mode 100644 index 530ad7b26a7b..000000000000 --- a/website/content/docs/agent/limits/usage/limit-request-rates-from-ips.mdx +++ /dev/null @@ -1,72 +0,0 @@ ---- -layout: docs -page_title: Limit traffic rates for a source IP address -description: Learn how to set read and write request rate limits on RPC and gRPC traffic from all source IP addresses to a Consul resource. ---- - -# Limit traffic rates from source IP addresses - -This topic describes how to configure RPC and gRPC traffic rate limits for source IP addresses. This enables you to specify a budget for read and write requests to prevent any single source IP from overwhelming the Consul server and negatively affecting the network. For information about setting global traffic rate limits, refer to [Set a global limit on traffic rates](/consul/docs/agent/limits/usage/set-global-traffic-rate-limits). For an overview of Consul's server rate limiting capabilities, refer to [Limit traffic rates overview](/consul/docs/agent/limits). - - - -This feature requires Consul Enterprise. Refer to the [feature compatibility matrix](/consul/docs/enterprise#consul-enterprise-feature-availability) for additional information. - - - -## Overview - -You can set limits on the rate of read and write requests from source IP addresses to specific resources, which mitigates the risks to Consul servers when Consul clients send excessive requests to a specific resource type. Before configuring traffic rate limits, you should complete the initialization process to understand normal traffic loads in your network. Refer to [Initialize rate limit settings](/consul/docs/agent/limits/init-rate-limits) for additional information. - -Complete the following steps to configure traffic rate limits from a source IP address: - -1. Define rate limits in a control plane request limit configuration entry. You can set limits for different types of resource calls. - -1. Apply the configuration entry to enact the limits. - -You should also monitor read and write rate activity and make any necessary adjustments. Refer to [Monitor rate limit data](/consul/docs/agent/limits/usage/monitor-rate-limits) for additional information. - -## Define rate limits - -Create a control plane request limit configuration entry in the `default` partition. The configuration entry applies to all client requests targeting any partition. Refer to the [control plane request limit configuration entry](/consul/docs/connect/config-entries/control-plane-request-limit) reference documentation for details about the available configuration parameters. - -Specify the following parameters: - -- `kind`: This must be set to `control-plane-request-limit`. -- `name`: Specify the name of the service that you want to limit read and write operations to. -- `read_rate`: Specify the overall number of read operations per second allowed from the service. -- `write_rate`: Specify the overall number of write operations per second allowed from the service. - -You can also configure limits on calls to the key-value store, ACL system, and Consul catalog. - -## Apply the configuration entry - -If your network is deployed to virtual machines, use the `consul config write` command and specify the control plane request limit configuration entry to apply the configuration. For Kubernetes-orchestrated networks, use the `kubectl apply` command. - - - - -```shell-session -$ consul config write control-plane-request-limit.hcl -``` - - - - -```shell-session -$ consul config write control-plane-request-limit.json -``` - - - - -```shell-session -$ kubectl apply -f control-plane-request-limit.yaml -``` - - - - -## Disable request rate limits - -Set the [limits.request_limits.mode](/consul/docs/agent/config/config-files#mode-1) in the agent configuration to `disabled` to allow services to exceed the specified read and write request limits. The `disabled` mode applies to all request rate limits, even limits specified in the [control plane request limits configuration entry](/consul/docs/connect/config-entries/control-plane-request-limit). Note that any other mode specified in the agent configuration only applies to global traffic rate limits. diff --git a/website/content/docs/agent/limits/usage/monitor-rate-limits.mdx b/website/content/docs/agent/limits/usage/monitor-rate-limits.mdx deleted file mode 100644 index 23502d1cb149..000000000000 --- a/website/content/docs/agent/limits/usage/monitor-rate-limits.mdx +++ /dev/null @@ -1,77 +0,0 @@ ---- -layout: docs -page_title: Monitor traffic rate limit data -description: Learn about the metrics and logs you can use to monitor server rate limiting activity, including rate limits for read operations and write operations ---- - -# Monitor traffic rate limit data - -This topic describes Consul functionality that enables you to monitor read and write request operations taking place in your network. Use the functionality to help you understand normal workloads and set safe limits on the number of requests Consul client agents and services can make to Consul servers. - -## Access rate limit logs - -Consul prints a log line for each rate-limited request. The log provides the information necessary for identifying the source of the request and the configured limit. Consul prints the log at the `DEBUG` log level and can drop the log to avoid affecting the server health. Dropping a log line increments the `rpc.rate_limit.log_dropped` metric. - -The following example log shows that an RPC request from `127.0.0.1:53562` to `KVS.Apply` exceeded the limit: - -```text -2023-02-17T10:01:15.565-0500 [DEBUG] agent.server.rpc-rate-limit: RPC -exceeded allowed rate limit: rpc=KVS.Apply source_addr=127.0.0.1:53562 -limit_type=global/write limit_enforced=false -``` - -Refer to [`log_file`](/consul/docs/agent/config/config-files#log_file) for information about where to retrieve log files. - -## Review rate limit metrics - -Consul captures the following metrics associated with rate limits: - -- Type of limit -- Operation -- Rate limit mode - -Call the `/agent/metrics` API endpoint to view the metrics associated with rate limits. Refer to [View Metrics](/consul/api-docs/agent#view-metrics) for API usage information. In the following example, Consul dropped a call to the consul service because it exceeded the limit by one call: - -```shell-session -$ curl http://127.0.0.1:8500/v1/agent/metrics -{ - . . . - "Counters": [ - { - "Name": "consul.rpc.rate_limit.exceeded", - "Count": 1, - "Sum": 1, - "Min": 1, - "Max": 1, - "Mean": 1, - "Stddev": 0, - "Labels": { - "service": "consul" - } - }, - { - "Name": "consul.rpc.rate_limit.log_dropped", - "Count": 1, - "Sum": 1, - "Min": 1, - "Max": 1, - "Mean": 1, - "Stddev": 0, - "Labels": {} - } - ], - . . .
-} -``` - -Refer to [Telemetry](/consul/docs/agent/telemetry) for additional information. - -## Request denials - -When an HTTP request is denied for rate limiting reason, Consul returns one of the following errors: - -- **429 Resource Exhausted**: Indicates that a server is not able to perform the request but that another server could potentially fulfill it. This error is most common on stale reads because any server may fulfill stale read requests. To resolve this type of error, we recommend immediately retrying the request to another server. If the request came from a Consul client agent, the agent automatically retries the request up to the limit set in the [`rpc_hold_timeout`](/consul/docs/agent/config/config-files#rpc_hold_timeout) configuration . - -- **503 Service Unavailable**: Indicates that server is unable to perform the request and that no other server can fulfill the request, either. This usually occurs on consistent reads or for writes. In this case we recommend retrying according to an exponential backoff schedule. If the request came from a Consul client agent, the agent automatically retries the request according to the [`rpc_hold_timeout`](/consul/docs/agent/config/config-files#rpc_hold_timeout) configuration. - -Refer to [Rate limit reached on the server](/consul/docs/troubleshoot/common-errors#rate-limit-reached-on-the-server) for additional information. \ No newline at end of file diff --git a/website/content/docs/agent/limits/usage/set-global-traffic-rate-limits.mdx b/website/content/docs/agent/limits/usage/set-global-traffic-rate-limits.mdx deleted file mode 100644 index c0afeec9010e..000000000000 --- a/website/content/docs/agent/limits/usage/set-global-traffic-rate-limits.mdx +++ /dev/null @@ -1,62 +0,0 @@ ---- -layout: docs -page_title: Set a global limit on traffic rates -description: Use global rate limits to prevent excessive rates of requests to Consul servers. ---- - -# Set a global limit on traffic rates - -This topic describes how to configure rate limits for RPC and gRPC traffic to the Consul server. - -## Introduction - -Rate limits apply to each Consul server separately and limit the number of read requests or write requests to the server on the RPC and internal gRPC endpoints. - -Because all requests coming to a Consul server eventually perform an RPC or an internal gRPC request, global rate limits apply to Consul's user interfaces, such as the HTTP API interface, the CLI, and the external gRPC endpoint for services in the service mesh. - -Refer to [Initialize Rate Limit Settings](/consul/docs/agent/limits/init-rate-limits) for additional information about right-sizing your gRPC request configurations. - -## Set a global rate limit for a Consul server - -Configure the following settings in your Consul server configuration to limit the RPC and gRPC traffic rates. 
- -- Set the rate limiter [`mode`](/consul/docs/agent/config/config-files#mode-1) -- Set the [`read_rate`](/consul/docs/agent/config/config-files#read_rate) -- Set the [`write_rate`](/consul/docs/agent/config/config-files#write_rate) - -In the following example, the Consul server is configured to prevent more than `500` read and `200` write RPC calls: - - - -```hcl -limits = { - rate_limit = { - mode = "enforcing" - read_rate = 500 - write_rate = 200 - } -} -``` - -```json -{ - "limits" : { - "rate_limit" : { - "mode" : "enforcing", - "read_rate" : 500, - "write_rate" : 200 - } - } -} - -``` - - - -## Monitor request rate traffic - -You should continue to monitor request traffic to ensure that request rates remain within the threshold you defined. Refer to [Monitor traffic rate limit data](/consul/docs/agent/limits/usage/monitor-rate-limits) for instructions about checking metrics and log entries, as well as troubleshooting information. - -## Disable request rate limits - -Set the [`limits.request_limits.mode`](/consul/docs/agent/config/config-files#mode-1) to `disabled` to allow services to exceed the specified read and write requests limits, even limits specified in the [control plane request limits configuration entry](/consul/docs/connect/config-entries/control-plane-request-limit). Note that any other mode specified in the agent configuration only applies to global traffic rate limits. diff --git a/website/content/docs/agent/monitor/components.mdx b/website/content/docs/agent/monitor/components.mdx deleted file mode 100644 index 1c3d49270e42..000000000000 --- a/website/content/docs/agent/monitor/components.mdx +++ /dev/null @@ -1,121 +0,0 @@ ---- -layout: docs -page_title: Monitoring Consul components -description: >- - Apply best practices monitoring your Consul control and data plane. ---- - -# Monitoring Consul components - -This document will guide you recommendations for monitoring your Consul control and data plane. By keeping track of these components and setting up alerts, you can better maintain the overall health and resilience of your service mesh. - -## Background - -A Consul datacenter is the smallest unit of Consul infrastructure that can perform basic Consul operations like service discovery or service mesh. A datacenter contains at least one Consul server agent, but a real-world deployment contains three or five server agents and several Consul client agents. - -The Consul control plane consists of server agents that store all state information, including service and node IP addresses, health checks, and configuration. In addition, the control plane is responsible for securing the mesh, facilitating service discovery, health checking, policy enforcement, and other similar operational concerns. In addition, the control plane contains client agents that report node and service health status to the Consul cluster. In a typical deployment, you must run client agents on every compute node in your datacenter. - -The Consul data plane consists of proxies deployed locally alongside each service instance. These proxies, called sidecar proxies, receive mesh configuration data from the control plane, and control network communication between their local service instance and other services in the network. The sidecar proxy handles inbound and outbound service connections, and ensures TLS connections between services are both verified and encrypted. 
- -If you have Kubernetes workloads, you can also run Consul with an alternate service mesh configuration that deploys Envoy proxies but not client agents. Refer to [Simplified service mesh with Consul dataplanes](/consul/docs/connect/dataplane) for more information. - -## Consul control plane monitoring - -The Consul control plane consists of the following components: - -- RPC Communication between Consul servers and clients. -- Data plane routing instructions for the Envoy Layer 7 proxy. -- Serf Traffic: LAN and WAN -- Consul cluster peering and server federation - -It is important to monitor and establish baseline and alert thresholds for Consul control plane components for abnormal behavior detection. Note that these alerts can also be triggered by some planned events like Consul cluster upgrades, configuration changes, or leadership change. - -To help monitor your Consul control plane, we recommend to establish a baseline and standard deviation for the following: - -- [Server health](/consul/docs/agent/telemetry#server-health) -- [Leadership changes](/consul/docs/agent/telemetry#leadership-changes) -- [Key metrics](/consul/docs/agent/telemetry#key-metrics) -- [Autopilot](/consul/docs/agent/telemetry#autopilot) -- [Network activity](/consul/docs/agent/telemetry#network-activity-rpc-count) -- [Certificate authority expiration](/consul/docs/agent/telemetry#certificate-authority-expiration) - -It is important to have a highly performant network with low network latency. Ensure network latency for gossip in all datacenters are within the 8ms latency budget for all Consul agents. View the [Production server requirements](/consul/docs/install/performance#production-server-requirements) for more information. - -### Raft recommendations - -Consul uses [Raft for consensus protocol](/consul/docs/architecture/consensus). High saturation of the Raft goroutines can lead to elevated latency in the rest of the system and may cause the Consul cluster to be unstable. As a result, it is important to monitor Raft to track your control plane health. We recommend the following actions to keep control plane healthy: -- Create an alert that notifies you when [Raft thread saturation](/consul/docs/agent/telemetry#raft-thread-saturation) exceeds 50%. -- Monitor [Raft replication capacity](/consul/docs/agent/telemetry#raft-replication-capacity-issues) when Consul is handling large amounts of data and high write throughput. -- Lower [`raft_multiplier`](/consul/docs/install/performance#production) to keep your Consul cluster stable. The value of `raft_multiplier` defines the scaling factor for Consul. Default value for raft_multiplier is 5. - - A short multiplier minimizes failure detection and election time but may trigger frequently in high latency situations. This can cause constant leadership churn and associated unavailability. A high multiplier reduces the chances that spurious failures will cause leadership churn but it does this at the expense of taking longer to detect real failures and thus takes longer to restore Consul cluster availability. - - Wide networks with higher latency will perform better with larger `raft_multiplier` values. - -Raft uses BoltDB for storing data and maintaining its own state. Refer to the [Bolt DB performance metrics](/consul/docs/agent/telemetry#bolt-db-performance) when you are troubleshooting Raft performance issues. 
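
As a minimal sketch, `raft_multiplier` is set in the `performance` stanza of the server agent configuration. The value below is only an example of a lower multiplier for a low-latency network, not a recommendation for every deployment.

```hcl
performance {
  # Lower values detect failures and elect leaders faster, but are more
  # sensitive to transient latency. The default is 5.
  raft_multiplier = 2
}
```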
- -## Consul data plane monitoring - -The data plane of Consul consists of Consul clients or [Connect proxies](/consul/docs/connect/proxies) interacting with each other through service-to-service communication. Service-to-service traffic always stays within the data plane, while the control plane only enforces traffic rules. Monitoring service-to-service communication is important but may become extremely complex in an enterprise setup with multiple services communicating to each other across federated Consul clusters through mesh, ingress and terminating gateways. - -### Service monitoring - -You can extract the following service-related information: - -- Use the [`catalog`](/consul/commands/catalog) command or the Consul UI to query all registered services in a Consul datacenter. -- Use the [`/agent/service/:service_id`](/consul/api-docs/agent/service#get-service-configuration) API endpoint to query individual services. Connect proxies use this endpoint to discover embedded configuration. - -### Proxy monitoring - -Envoy is the supported Connect proxy for Consul service mesh. For virtual machines (VMs), Envoy starts as a sidecar service process. For Kubernetes, Envoy starts as a sidecar container in a Kubernetes service pod. -Refer to the [Supported Envoy versions](/consul/docs/connect/proxies/envoy#supported-versions) documentation to find the compatible Envoy versions for your version of Consul. - -For troubleshooting service mesh issues, set Consul logs to `trace` or `debug`. The following example annotation sets Envoy logging to `debug`. - -```yaml -annotations: - consul.hashicorp.com/envoy-extra-args: '--log-level debug --disable-hot-restart' -``` - -Refer to the [Enable logging on Envoy sidecar pods](/consul/docs/k8s/annotations-and-labels#consul-hashicorp-com-envoy-extra-args) documentation for more information. - -#### Envoy Admin Interface - -To troubleshoot service-to-service communication issues, monitor Envoy host statistics. Envoy exposes a local administration interface that can be used to query and modify different aspects of the server on port `19000` by default. Envoy also exposes a public listener port to receive mTLS connections from other proxies in the mesh on port `20000` by default. - -All endpoints exposed by Envoy are available at the node running Envoy on port `19000`. The node can either be a pod in Kubernetes or VM running Consul Service Mesh. For example, if you forward the Envoy port to your local machine, you can access the Envoy admin interface at `http://localhost:19000/`. - -The following Envoy admin interface endpoints are particularly useful: - -- The `listeners` endpoint lists all listeners running on `localhost`. This allows you to confirm whether the upstream services are binding correctly to Envoy. - -```shell-session -$ curl http://localhost:19000/listeners -public_listener:192.168.19.168:20000::192.168.19.168:20000 -Outbound_listener:127.0.0.1:15001::127.0.0.1:15001 -``` - -- The `/clusters` endpoint displays information about the xDS clusters, such as service requests and mTLS related data. The following example shows a truncated output. 
- -```shell-session -$ curl http://localhost:19000/clusters -local_app::observability_name::local_app -local_app::default_priority::max_connections::1024 -local_app::default_priority::max_pending_requests::1024 -local_app::default_priority::max_requests::1024 -local_app::default_priority::max_retries::3 -local_app::high_priority::max_connections::1024 -local_app::high_priority::max_pending_requests::1024 -local_app::high_priority::max_requests::1024 -local_app::high_priority::max_retries::3 -local_app::added_via_api::true -## ... -``` - -Visit the main admin interface (`http://localhost:19000`) to find the full list of available Envoy admin endpoints. Refer to the [Envoy docs](https://www.envoyproxy.io/docs/envoy/latest/operations/admin) for more information. - -## Next steps - -In this guide, you learned recommendations for monitoring your Consul control and data plane. - -To learn about monitoring the Consul host and instance resources, visit our [Monitoring best practices](/well-architected-framework/reliability/reliability-monitoring-service-to-service-communication-with-envoy) documentation. diff --git a/website/content/docs/agent/monitor/telemetry.mdx b/website/content/docs/agent/monitor/telemetry.mdx deleted file mode 100644 index 322ec40997d1..000000000000 --- a/website/content/docs/agent/monitor/telemetry.mdx +++ /dev/null @@ -1,809 +0,0 @@ ---- -layout: docs -page_title: Agents - Enable Telemetry Metrics -description: >- - Configure agent telemetry to collect operations metrics you can use to debug and observe Consul behavior and performance. Learn about configuration options, the metrics you can collect, and why they're important. ---- - -# Agent Telemetry - -The Consul agent collects various runtime metrics about the performance of -different libraries and subsystems. These metrics are aggregated on a ten -second (10s) interval and are retained for one minute. An _interval_ is the period of time between instances of data being collected and aggregated. - -When telemetry is being streamed to an external metrics store, the interval is defined to be that store's flush interval. - -|External Store|Interval (seconds)| -|:--------|:--------| -|[dogstatsd](https://docs.datadoghq.com/developers/dogstatsd/?tab=hostagent#how-it-works)|10s| -|[Prometheus](https://vector.dev/docs/reference/configuration/sinks/prometheus_exporter/#flush_period_secs)|60s| -|[statsd](https://github.com/statsd/statsd/blob/master/docs/metric_types.md#timing)|10s| - -To view this data, you must send a signal to the Consul process: on Unix, -this is `USR1` while on Windows it is `BREAK`. Once Consul receives the signal, -it will dump the current telemetry information to the agent's `stderr`. - -This telemetry information can be used for debugging or otherwise -getting a better view of what Consul is doing. Review the [Monitoring and -Metrics tutorial](/consul/tutorials/day-2-operations/monitor-datacenter-health?utm_source=docs) to learn how to collect and interpret Consul data. - -By default, all metric names of gauge type are prefixed with the hostname of the Consul agent, e.g., -`consul.hostname.server.isLeader`. To disable prefixing the hostname, set -`telemetry.disable_hostname=true` in the [agent configuration](/consul/docs/agent/config/config-files#telemetry).
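
For example, a minimal `telemetry` stanza that disables the hostname prefix and streams metrics to a statsd sink could look like the following sketch; the statsd address is an assumed placeholder for your own collector.

```hcl
telemetry {
  # Drop the hostname prefix from gauge metric names.
  disable_hostname = true

  # Example sink only; replace with the address of your metrics collector.
  statsd_address = "127.0.0.1:8125"
}
```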
- -Additionally, if the [`telemetry` configuration options](/consul/docs/agent/config/config-files#telemetry) -are provided, the telemetry information will be streamed to a -[statsite](http://github.com/armon/statsite) or [statsd](http://github.com/etsy/statsd) server where -it can be aggregated and flushed to Graphite or any other metrics store. -For a configuration example for Telegraf, review the [Monitoring with Telegraf tutorial](/consul/tutorials/day-2-operations/monitor-health-telegraf?utm_source=docs). - -This -information can also be viewed with the [metrics endpoint](/consul/api-docs/agent#view-metrics) in JSON -format or using [Prometheus](https://prometheus.io/) format. - - - -```log -[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.num_goroutines': 19.000 -[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.alloc_bytes': 755960.000 -[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.malloc_count': 7550.000 -[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.free_count': 4387.000 -[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.heap_objects': 3163.000 -[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.total_gc_pause_ns': 1151002.000 -[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.total_gc_runs': 4.000 -[2014-01-29 10:56:50 -0800 PST][C] 'consul-agent.agent.ipc.accept': Count: 5 Sum: 5.000 -[2014-01-29 10:56:50 -0800 PST][C] 'consul-agent.agent.ipc.command': Count: 10 Sum: 10.000 -[2014-01-29 10:56:50 -0800 PST][C] 'consul-agent.serf.events': Count: 5 Sum: 5.000 -[2014-01-29 10:56:50 -0800 PST][C] 'consul-agent.serf.events.foo': Count: 4 Sum: 4.000 -[2014-01-29 10:56:50 -0800 PST][C] 'consul-agent.serf.events.baz': Count: 1 Sum: 1.000 -[2014-01-29 10:56:50 -0800 PST][S] 'consul-agent.memberlist.gossip': Count: 50 Min: 0.007 Mean: 0.020 Max: 0.041 Stddev: 0.007 Sum: 0.989 -[2014-01-29 10:56:50 -0800 PST][S] 'consul-agent.serf.queue.Intent': Count: 10 Sum: 0.000 -[2014-01-29 10:56:50 -0800 PST][S] 'consul-agent.serf.queue.Event': Count: 10 Min: 0.000 Mean: 2.500 Max: 5.000 Stddev: 2.121 Sum: 25.000 -``` - - - -# Key Metrics - -These are some metrics emitted that can help you understand the health of your cluster at a glance. A [Grafana dashboard](https://grafana.com/grafana/dashboards/13396) is also available, which is maintained by the Consul team and displays these metrics for easy visualization. For a full list of metrics emitted by Consul, see [Metrics Reference](#metrics-reference) - -### Transaction timing - -| Metric Name | Description | Unit | Type | -| :----------------------- | :----------------------------------------------------------------------------------- | :--------------------------- | :------ | -| `consul.kvs.apply` | Measures the time it takes to complete an update to the KV store. | ms | timer | -| `consul.txn.apply` | Measures the time spent applying a transaction operation. | ms | timer | -| `consul.raft.apply` | Counts the number of Raft transactions applied during the interval. This metric is only reported on the leader. | raft transactions / interval | counter | -| `consul.raft.commitTime` | Measures the time it takes to commit a new entry to the Raft log on the leader. | ms | timer | - -**Why they're important:** Taken together, these metrics indicate how long it takes to complete write operations in various parts of the Consul cluster. Generally these should all be fairly consistent and no more than a few milliseconds. 
Sudden changes in any of the timing values could be due to unexpected load on the Consul servers, or due to problems on the servers themselves. - -**What to look for:** Deviations (in any of these metrics) of more than 50% from baseline over the previous hour. - -### Leadership changes - -| Metric Name | Description | Unit | Type | -| :------------------------------- | :------------------------------------------------------------------------------------------------------------- | :-------- | :------ | -| `consul.raft.leader.lastContact` | Measures the time since the leader was last able to contact the follower nodes when checking its leader lease. | ms | timer | -| `consul.raft.state.candidate` | Increments whenever a Consul server starts an election. | elections | counter | -| `consul.raft.state.leader` | Increments whenever a Consul server becomes a leader. | leaders | counter | -| `consul.server.isLeader` | Track if a server is a leader(1) or not(0). | 1 or 0 | gauge | - -**Why they're important:** Normally, your Consul cluster should have a stable leader. If there are frequent elections or leadership changes, it would likely indicate network issues between the Consul servers, or that the Consul servers themselves are unable to keep up with the load. - -**What to look for:** For a healthy cluster, you're looking for a `lastContact` lower than 200ms, `leader` > 0 and `candidate` == 0. Deviations from this might indicate flapping leadership. - -### Certificate Authority Expiration - -| Metric Name | Description | Unit | Type | -| :------------------------- | :---------------------------------------------------------------------------------- | :------ | :---- | -| `consul.mesh.active-root-ca.expiry` | The number of seconds until the root CA expires, updated every hour. | seconds | gauge | -| `consul.mesh.active-signing-ca.expiry` | The number of seconds until the signing CA expires, updated every hour. | seconds | gauge | -| `consul.agent.tls.cert.expiry` | The number of seconds until the server agent's TLS certificate expires, updated every hour. | seconds | gauge | - -** Why they're important:** Consul Mesh requires a CA to sign all certificates -used to connect the mesh and the mesh network ceases to work if they expire and -become invalid. The Root is particularly important to monitor as Consul does -not automatically rotate it. The TLS certificate metric monitors the certificate -that the server's agent uses to connect with the other agents in the cluster. - -** What to look for:** The Root CA should be monitored for an approaching -expiration, to indicate it is time for you to rotate the "root" CA either -manually or with external automation. Consul should rotate the signing (intermediate) certificate -automatically, but we recommend monitoring the rotation. When the certificate does not rotate, check the server agent logs for -messages related to the CA system. The agent TLS certificate's rotation handling -varies based on the configuration. - -### Autopilot - -| Metric Name | Description | Unit | Type | -| :------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :----------- | :---- | -| `consul.autopilot.healthy` | Tracks the overall health of the local server cluster. If all servers are considered healthy by Autopilot, this will be set to 1. If any are unhealthy, this will be 0. 
| health state | gauge | - -**Why it's important:** Autopilot can expose the overall health of your cluster with a simple boolean. - -**What to look for:** Alert if `healthy` is 0. Some other indicators of an unhealthy cluster would be: -- `consul.raft.commitTime` - This can help reflect the speed of state store -changes being performed by the agent. If this number is rising, the server may -be experiencing an issue due to degraded resources on the host. -- [Leadership change metrics](#leadership-changes) - Check for deviation from -the recommended values. This can indicate failed leadership elections or -flapping nodes. - -### Memory usage - -| Metric Name | Description | Unit | Type | -| :--------------------------- | :----------------------------------------------------------------- | :---- | :---- | -| `consul.runtime.alloc_bytes` | Measures the number of bytes allocated by the Consul process. | bytes | gauge | -| `consul.runtime.sys_bytes` | Measures the total number of bytes of memory obtained from the OS. | bytes | gauge | - -**Why they're important:** Consul keeps all of its data in memory. If Consul consumes all available memory, it will crash. - -**What to look for:** If `consul.runtime.sys_bytes` exceeds 90% of total available system memory. - -**NOTE:** This metric is calculated using Go's runtime package -[MemStats](https://golang.org/pkg/runtime/#MemStats). This will have a -different output than using information gathered from `top`. For more -information, see [GH-4734](https://github.com/hashicorp/consul/issues/4734). - -### Garbage collection - -| Metric Name | Description | Unit | Type | -| :--------------------------------- | :---------------------------------------------------------------------------------------------------- | :--- | :---- | -| `consul.runtime.total_gc_pause_ns` | Number of nanoseconds consumed by stop-the-world garbage collection (GC) pauses since Consul started. | ns | gauge | - -**Why it's important:** GC pause is a "stop-the-world" event, meaning that all runtime threads are blocked until GC completes. Normally these pauses last only a few nanoseconds. But if memory usage is high, the Go runtime may GC so frequently that it starts to slow down Consul. - -**What to look for:** Warning if `total_gc_pause_ns` exceeds 2 seconds/minute, critical if it exceeds 5 seconds/minute. - -**NOTE:** `total_gc_pause_ns` is a cumulative counter, so in order to calculate rates (such as GC/minute), -you will need to apply a function such as InfluxDB's [`non_negative_difference()`](https://docs.influxdata.com/influxdb/v1.5/query_language/functions/#non-negative-difference). - -### Network activity - RPC Count - -| Metric Name | Description | Unit | Type | -| :--------------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------- | :------ | -| `consul.client.rpc` | Increments whenever a Consul agent makes an RPC request to a Consul server | requests | counter | -| `consul.client.rpc.exceeded` | Increments whenever a Consul agent makes an RPC request to a Consul server gets rate limited by that agent's [`limits`](/consul/docs/agent/config/config-files#limits) configuration. | requests | counter | -| `consul.client.rpc.failed` | Increments whenever a Consul agent makes an RPC request to a Consul server and fails. 
| requests | counter | - -**Why they're important:** These measurements indicate the current load created by a Consul agent, including when the load becomes high enough to be rate limited. A high RPC count, especially from `consul.client.rpc.exceeded`, which means requests are being rate-limited, could imply a misconfigured Consul agent. - -**What to look for:** -Sudden large changes to the `consul.client.rpc` metrics (greater than 50% deviation from baseline). -`consul.client.rpc.exceeded` or `consul.client.rpc.failed` count > 0, as it implies that an agent is being rate-limited or failing to make RPC requests to a Consul server. - -### Raft Thread Saturation - -| Metric Name | Description | Unit | Type | -| :----------------------------------- | :----------------------------------------------------------------------------------------------------------------------- | :--------- | :----- | -| `consul.raft.thread.main.saturation` | An approximate measurement of the proportion of time the main Raft goroutine is busy and unavailable to accept new work. | percentage | sample | -| `consul.raft.thread.fsm.saturation` | An approximate measurement of the proportion of time the Raft FSM goroutine is busy and unavailable to accept new work. | percentage | sample | - -**Why they're important:** These measurements are a useful proxy for how much -capacity a Consul server has to accept additional write load. High saturation -of the Raft goroutines can lead to elevated latency in the rest of the system -and cause cluster instability. - -**What to look for:** Generally, a server's steady-state saturation should be -less than 50%. - -**NOTE:** These metrics are approximate and under extremely heavy load won't -give a perfect fine-grained view of how much headroom a server has available. -Instead, treat them as an early warning sign. - -**Requirements:** -* Consul 1.13.0+ - -### Raft Replication Capacity Issues - -| Metric Name | Description | Unit | Type | -| :--------------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------- | :------ | -| `consul.raft.fsm.lastRestoreDuration` | Measures the time taken to restore the FSM from a snapshot on an agent restart or from the leader calling installSnapshot. This is a gauge that holds its value since most servers only restore during restarts, which are typically infrequent. | ms | gauge | -| `consul.raft.leader.oldestLogAge` | The number of milliseconds since the _oldest_ log in the leader's log store was written. This can be important for replication health where write rate is high and the snapshot is large, as followers may be unable to recover from a restart if restoring takes longer than the minimum value for the current leader. Compare this with `consul.raft.fsm.lastRestoreDuration` and `consul.raft.rpc.installSnapshot` to monitor. In normal usage this gauge value will grow linearly over time until a snapshot completes on the leader and the log is truncated. | ms | gauge | -| `consul.raft.rpc.installSnapshot` | Measures the time taken to process the installSnapshot RPC call. This metric should only be seen on agents which are currently in the follower state. | ms | timer | - -**Why they're important:** These metrics allow operators to monitor the health -and capacity of raft replication on servers.
**When Consul is handling large -amounts of data and high write throughput** it is possible for the cluster to -get into the following state: - * Write throughput is high (say 500 commits per second or more) and constant - * The leader is writing out a large snapshot every minute or so - * The snapshot is large enough that it takes considerable time to restore from - disk on a restart or from the leader if a follower gets behind - * Disk IO available allows the leader to write a snapshot faster than it can be - restored from disk on a follower - -Under these conditions, a follower after a restart may be unable to catch up on -replication and become a voter again, since it takes longer to restore from disk -or the leader than the leader takes to write a new snapshot and truncate its -logs. Servers retain -[`raft_trailing_logs`](/consul/docs/agent/config/config-files#raft_trailing_logs) (default -`10240`) log entries even if their snapshot was more recent. On a leader -processing 500 commits/second, that is only about 20 seconds worth of logs. -Assuming the leader is able to write out a snapshot and truncate the logs in -less than 20 seconds, there will only be 20 seconds worth of "recent" logs -available on the leader right after the leader has taken a snapshot and never -more than about 80 seconds worth assuming it is taking a snapshot and truncating -logs every 60 seconds. - -In this state, followers must be able to restore a snapshot into memory and -resume replication in under 80 seconds, otherwise they will never be able to -rejoin the cluster until write rates reduce. If they take more than 20 seconds, -there is a chance that they are unlucky with timing when they restart -and have to download a snapshot again from the servers one or more times. If -they take 50 seconds or more, they will likely fail to catch up more often -than they succeed and will remain non-voters for some time until they happen to -complete the restore just before the leader truncates its logs. - -In the worst case, the follower will be left continually downloading snapshots -from the leader which are always too old to use by the time they are restored. -This can put additional strain on the leader transferring large snapshots -repeatedly, as well as reduce the fault tolerance and serving capacity of the -cluster. - -Since Consul 1.5.3 -[`raft_trailing_logs`](/consul/docs/agent/config/config-files#raft_trailing_logs) has been -configurable. Increasing it allows the leader to retain more logs and gives -followers more time to restore and catch up. The tradeoff is potentially -slower appends, which eventually might affect write throughput and latency -negatively, so setting it arbitrarily high is not recommended. Before Consul -1.10.0, changing this configuration on the leader required a rolling restart, -and since no followers could restart without losing health, this could -mean losing cluster availability and needing to recover the cluster from a loss -of quorum. - -Since Consul 1.10.0 -[`raft_trailing_logs`](/consul/docs/agent/config/config-files#raft_trailing_logs) is -reloadable with `consul reload` or `SIGHUP`, allowing operators to increase it -without the leader restarting or losing leadership, so the cluster can be -recovered gracefully. - -Monitoring these metrics can help avoid or diagnose this state.
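If the metrics above show followers running out of catch-up window, the reloadable setting discussed here is the usual remedy. The fragment below is a minimal sketch of a server configuration change; the value shown is illustrative only and should be sized from your own `consul.raft.leader.oldestLogAge` and restore-duration measurements rather than copied.

```hcl
# Server agent configuration fragment (illustrative value, not a recommendation):
# retain more trailing Raft logs so that a follower whose snapshot restore is slow
# can still find the logs it needs on the leader once it finishes restoring.
raft_trailing_logs = 51200
```

On Consul 1.10.0 and later this can be applied with `consul reload` (or `SIGHUP`) without restarting or destabilizing the leader; on older versions it requires the rolling restart described above.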
- -**What to look for:** - -`consul.raft.leader.oldestLogAge` should look like a saw-tooth wave increasing -linearly with time until the leader takes a snapshot and then jumping down as -the oldest logs are truncated. The lowest point on that line should remain -comfortably higher (i.e. 2x or more) than the time it takes to restore a -snapshot. - -There are two ways a snapshot can be restored on a follower: from disk on -startup or from the leader during an `installSnapshot` RPC. The leader only -sends an `installSnapshot` RPC if the follower is new and has no state, or if -its state is too old for it to catch up with the leader's logs. - -`consul.raft.fsm.lastRestoreDuration` shows the time it took to restore from -either source the last time it happened. Most of the time this is when the -server was started. It's a gauge that will always show the last restore duration -(in Consul 1.10.0 and later), however long ago that was. - -`consul.raft.rpc.installSnapshot` is the timing information from the leader's -perspective when it installs a new snapshot on a follower. It includes the time -spent transferring the data as well as the follower restoring it. Since these -events are typically infrequent, you may need to graph the last value observed, -for example using `max_over_time` with a large range in Prometheus. While the -restore part will also be reflected in `lastRestoreDuration`, it can be useful -to observe this too, since the logs need to be able to cover this entire -operation, including the snapshot delivery, to ensure followers can always catch -up safely. - -Graphing `consul.raft.leader.oldestLogAge` on the same axes as the other two -metrics here can help you see at a glance whether restore times are creeping dangerously -close to the limit of what the leader is retaining at the current write rate. - -Note that if servers don't restart often, the snapshot could have grown -significantly since the last restore happened, so last restore times might not -reflect what would happen if an agent restarts now. - -### License Expiration - -| Metric Name | Description | Unit | Type | -| :-------------------------------- | :--------------------------------------------------------------- | :---- | :---- | -| `consul.system.licenseExpiration` | Number of hours until the Consul Enterprise license will expire. | hours | gauge | - -**Why they're important:** - -This measurement indicates how many hours are left before the Consul Enterprise license expires. When the license expires, some -Consul Enterprise features will cease to work. For example, after expiration it is no longer possible to create -or modify resources in non-default namespaces or to manage namespace definitions themselves, even though reads of namespaced -resources will still work. - -**What to look for:** - -This metric should be monitored to ensure that the license doesn't expire, to prevent degradation of functionality. - - -### Bolt DB Performance - -| Metric Name | Description | Unit | Type | -| :-------------------------------- | :--------------------------------------------------------------- | :---- | :---- | -| `consul.raft.boltdb.freelistBytes` | Represents the number of bytes necessary to encode the freelist metadata. When [`raft_logstore.boltdb.no_freelist_sync`](/consul/docs/agent/config/config-files#raft_logstore_boltdb_no_freelist_sync) is set to `false`, these metadata bytes must also be written to disk for each committed log.
| bytes | gauge | -| `consul.raft.boltdb.logsPerBatch` | Measures the number of logs being written per batch to the db. | logs | sample | -| `consul.raft.boltdb.storeLogs` | Measures the amount of time spent writing logs to the db. | ms | timer | -| `consul.raft.boltdb.writeCapacity` | Theoretical write capacity in terms of the number of logs that can be written per second. Each sample outputs what the capacity would be if future batched log write operations were similar to this one. This similarity encompasses 4 things: batch size, byte size, disk performance and boltdb performance. While none of these will be static and it's highly likely individual samples of this metric will vary, aggregating this metric over a larger time window should provide a decent picture of how this BoltDB store can perform. | logs/second | sample | - - -**Requirements:** -* Consul 1.11.0+ - -**Why they're important:** - -The `consul.raft.boltdb.storeLogs` metric is a direct indicator of disk write performance of a Consul server. If there are issues with the disk or -performance degradations related to Bolt DB, these metrics will show the issue and potentially the cause as well. - -**What to look for:** - -The primary thing to look for is increases in the `consul.raft.boltdb.storeLogs` times. Its value directly governs an -upper limit on the throughput of write operations within Consul. - -In Consul, each write operation turns into a single Raft log to be committed. Raft processes these -logs and stores them within Bolt DB in batches. Each call to store logs within Bolt DB is measured to record how long -it took as well as how many logs were contained in the batch. Writing logs in this fashion is serialized, so that -a subsequent log storage operation can only be started after the previous one has completed. The maximum number -of log storage operations that can be performed each second is represented with the `consul.raft.boltdb.writeCapacity` -metric. When log storage operations become slower, you may not see an immediate decrease in write capacity -due to increased batch sizes of each operation. However, the maximum batch size allowed is 64 logs. Therefore, if -the `logsPerBatch` metric is near 64 and the `storeLogs` metric is seeing increased time to write each batch to disk, -then it is likely that increased write latencies and other errors may occur. - -There are a number of potential causes for this. Often it is the performance of the underlying -disks that is the issue. Other times it may be caused by Bolt DB behavior. Bolt DB keeps track of free space within -the `raft.db` file. When it needs to allocate data, it uses existing free space first before further expanding the -file. By default, Bolt DB writes a data structure containing metadata about free pages within the DB to disk for -every log storage operation. Therefore, if the free space within the database grows excessively large, such as after -a large spike in writes beyond the normal steady state and a subsequent slowdown in the write rate, then Bolt DB -could end up writing a large amount of extra data to disk for each log storage operation. This has the potential -to drastically increase the required disk write throughput, potentially beyond what the underlying disks can keep up with. To -detect this situation you can look at the `consul.raft.boltdb.freelistBytes` metric. This metric is a count of -the extra bytes that are being written for each log storage operation beyond the log data itself.
While not a clear -indicator of an actual issue, this metric can be used to diagnose why the `consul.raft.boltdb.storeLogs` metric -is high. - -If Bolt DB log storage performance becomes an issue and is caused by free list management, then setting -[`raft_logstore.boltdb.no_freelist_sync`](/consul/docs/agent/config/config-files#raft_logstore_boltdb_no_freelist_sync) to `true` in the server's configuration -may help to reduce disk IO and log storage operation times. Disabling free list syncing will, however, increase -the startup time for a server, as it must scan the `raft.db` file for free space instead of loading the already -populated free list structure. - -Consul includes an experimental backend configuration that you can use instead of BoltDB. Refer to [Experimental WAL LogStore backend](/consul/docs/agent/wal-logstore) for more information. - -## Metrics Reference - -This is a full list of metrics emitted by Consul. - -| Metric | Description | Unit | Type | -|--------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------|---------| -| `consul.acl.blocked.{check,service}.deregistration` | Increments whenever a deregistration of an entity (check or service) fails because it is blocked by an ACL. | requests | counter | -| `consul.acl.blocked.{check,node,service}.registration` | Increments whenever a registration of an entity (check, node or service) fails because it is blocked by an ACL. | requests | counter | -| `consul.api.http` | This samples how long it takes to service the given HTTP request for the given verb and path. Includes labels for `path` and `method`. `path` does not include details like service or key names; for these, an underscore will be present as a placeholder (e.g. path=`v1.kv._`) | ms | timer | -| `consul.client.rpc` | Increments whenever a Consul agent makes an RPC request to a Consul server. This gives a measure of how much a given agent is loading the Consul servers. Currently, this is only generated by agents in client mode, not Consul servers. | requests | counter | -| `consul.client.rpc.exceeded` | Increments whenever an RPC request made by a Consul agent to a Consul server gets rate limited by that agent's [`limits`](/consul/docs/agent/config/config-files#limits) configuration. This gives an indication that there's an abusive application making too many requests on the agent, or that the rate limit needs to be increased. Currently, this only applies to agents in client mode, not Consul servers. | rejected requests | counter | -| `consul.client.rpc.failed` | Increments whenever a Consul agent makes an RPC request to a Consul server and fails. | requests | counter | -| `consul.client.api.catalog_register` | Increments whenever a Consul agent receives a catalog register request. | requests | counter | -| `consul.client.api.success.catalog_register` | Increments whenever a Consul agent successfully responds to a catalog register request. | requests | counter | -| `consul.client.rpc.error.catalog_register` | Increments whenever a Consul agent receives an RPC error for a catalog register request.
| errors | counter | -| `consul.client.api.catalog_deregister` | Increments whenever a Consul agent receives a catalog deregister request. | requests | counter | -| `consul.client.api.success.catalog_deregister` | Increments whenever a Consul agent successfully responds to a catalog deregister request. | requests | counter | -| `consul.client.rpc.error.catalog_deregister` | Increments whenever a Consul agent receives an RPC error for a catalog deregister request. | errors | counter | -| `consul.client.api.catalog_datacenters` | Increments whenever a Consul agent receives a request to list datacenters in the catalog. | requests | counter | -| `consul.client.api.success.catalog_datacenters` | Increments whenever a Consul agent successfully responds to a request to list datacenters. | requests | counter | -| `consul.client.rpc.error.catalog_datacenters` | Increments whenever a Consul agent receives an RPC error for a request to list datacenters. | errors | counter | -| `consul.client.api.catalog_nodes` | Increments whenever a Consul agent receives a request to list nodes from the catalog. | requests | counter | -| `consul.client.api.success.catalog_nodes` | Increments whenever a Consul agent successfully responds to a request to list nodes. | requests | counter | -| `consul.client.rpc.error.catalog_nodes` | Increments whenever a Consul agent receives an RPC error for a request to list nodes. | errors | counter | -| `consul.client.api.catalog_services` | Increments whenever a Consul agent receives a request to list services from the catalog. | requests | counter | -| `consul.client.api.success.catalog_services` | Increments whenever a Consul agent successfully responds to a request to list services. | requests | counter | -| `consul.client.rpc.error.catalog_services` | Increments whenever a Consul agent receives an RPC error for a request to list services. | errors | counter | -| `consul.client.api.catalog_service_nodes` | Increments whenever a Consul agent receives a request to list nodes offering a service. | requests | counter | -| `consul.client.api.success.catalog_service_nodes` | Increments whenever a Consul agent successfully responds to a request to list nodes offering a service. | requests | counter | -| `consul.client.api.error.catalog_service_nodes` | Increments whenever a Consul agent receives an RPC error for request to list nodes offering a service. | requests | counter | -| `consul.client.rpc.error.catalog_service_nodes` | Increments whenever a Consul agent receives an RPC error for a request to list nodes offering a service.   | errors | counter | -| `consul.client.api.catalog_node_services` | Increments whenever a Consul agent receives a request to list services registered in a node.   | requests | counter | -| `consul.client.api.success.catalog_node_services` | Increments whenever a Consul agent successfully responds to a request to list services in a node.   | requests | counter | -| `consul.client.rpc.error.catalog_node_services` | Increments whenever a Consul agent receives an RPC error for a request to list services in a node.   | errors | counter | -| `consul.client.api.catalog_node_service_list` | Increments whenever a Consul agent receives a request to list a node's registered services. | requests | counter | -| `consul.client.rpc.error.catalog_node_service_list` | Increments whenever a Consul agent receives an RPC error for request to list a node's registered services. 
| errors | counter | -| `consul.client.api.success.catalog_node_service_list` | Increments whenever a Consul agent successfully responds to a request to list a node's registered services. | requests | counter | -| `consul.client.api.catalog_gateway_services` | Increments whenever a Consul agent receives a request to list services associated with a gateway. | requests | counter | -| `consul.client.api.success.catalog_gateway_services` | Increments whenever a Consul agent successfully responds to a request to list services associated with a gateway. | requests | counter | -| `consul.client.rpc.error.catalog_gateway_services` | Increments whenever a Consul agent receives an RPC error for a request to list services associated with a gateway. | errors | counter | -| `consul.runtime.num_goroutines` | Tracks the number of running goroutines and is a general load pressure indicator. This may burst from time to time but should return to a steady state value. | number of goroutines | gauge | -| `consul.runtime.alloc_bytes` | Measures the number of bytes allocated by the Consul process. This may burst from time to time but should return to a steady state value. | bytes | gauge | -| `consul.runtime.heap_objects` | Measures the number of objects allocated on the heap and is a general memory pressure indicator. This may burst from time to time but should return to a steady state value. | number of objects | gauge | -| `consul.state.nodes` | Measures the current number of nodes registered with Consul. It is only emitted by Consul servers. Added in v1.9.0. | number of objects | gauge | -| `consul.state.peerings` | Measures the current number of peerings registered with Consul. It is only emitted by Consul servers. Added in v1.13.0. | number of objects | gauge | -| `consul.state.services` | Measures the current number of unique services registered with Consul, based on service name. It is only emitted by Consul servers. Added in v1.9.0. | number of objects | gauge | -| `consul.state.service_instances` | Measures the current number of unique service instances registered with Consul. It is only emitted by Consul servers. Added in v1.9.0. | number of objects | gauge | -| `consul.state.kv_entries` | Measures the current number of entries in the Consul KV store. It is only emitted by Consul servers. Added in v1.10.3. | number of objects | gauge | -| `consul.state.connect_instances` | Measures the current number of unique mesh service instances registered with Consul labeled by Kind (e.g. connect-proxy, connect-native, etc). Added in v1.10.4 | number of objects | gauge | -| `consul.state.config_entries` | Measures the current number of configuration entries registered with Consul labeled by Kind (e.g. service-defaults, proxy-defaults, etc). See [Configuration Entries](/consul/docs/connect/config-entries) for more information. Added in v1.10.4 | number of objects | gauge | -| `consul.members.clients` | Measures the current number of client agents registered with Consul. It is only emitted by Consul servers. Added in v1.9.6. | number of clients | gauge | -| `consul.members.servers` | Measures the current number of server agents registered with Consul. It is only emitted by Consul servers. Added in v1.9.6. | number of servers | gauge | -| `consul.dns.stale_queries` | Increments when an agent serves a query within the allowed stale threshold. | queries | counter | -| `consul.dns.ptr_query` | Measures the time spent handling a reverse DNS query for the given node. 
| ms | timer | -| `consul.dns.domain_query` | Measures the time spent handling a domain query for the given node. | ms | timer | -| `consul.system.licenseExpiration` | This measures the number of hours remaining on the agents license. | hours | gauge | -| `consul.version` | Represents the Consul version. | agents | gauge | - -## Server Health - -These metrics are used to monitor the health of the Consul servers. - -| Metric | Description | Unit | Type | -|-----------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------|---------| -| `consul.acl.ResolveToken` | Measures the time it takes to resolve an ACL token. | ms | timer | -| `consul.acl.ResolveTokenToIdentity` | Measures the time it takes to resolve an ACL token to an Identity. This metric was removed in Consul 1.12. The time will now be reflected in `consul.acl.ResolveToken`. | ms | timer | -| `consul.acl.token.cache_hit` | Increments if Consul is able to resolve a token's identity from the cache. | cache read op | counter | -| `consul.acl.token.cache_miss` | Increments if Consul cannot resolve a token's identity from the cache. | cache read op | counter | -| `consul.cache.bypass` | Counts how many times a request bypassed the cache because no cache-key was provided. | counter | counter | -| `consul.cache.fetch_success` | Counts the number of successful fetches by the cache. | counter | counter | -| `consul.cache.fetch_error` | Counts the number of failed fetches by the cache. | counter | counter | -| `consul.cache.evict_expired` | Counts the number of expired entries that are evicted. | counter | counter | -| `consul.raft.applied_index` | Represents the raft applied index. | index | gauge | -| `consul.raft.apply` | Counts the number of Raft transactions occurring over the interval, which is a general indicator of the write load on the Consul servers. | raft transactions / interval | counter | -| `consul.raft.barrier` | Counts the number of times the agent has started the barrier i.e the number of times it has issued a blocking call, to ensure that the agent has all the pending operations that were queued, to be applied to the agent's FSM. | blocks / interval | counter | -| `consul.raft.boltdb.freelistBytes` | Represents the number of bytes necessary to encode the freelist metadata. When [`raft_logstore.boltdb.no_freelist_sync`](/consul/docs/agent/config/config-files#raft_logstore_boltdb_no_freelist_sync) is set to `false` these metadata bytes must also be written to disk for each committed log. | bytes | gauge | -| `consul.raft.boltdb.freePageBytes` | Represents the number of bytes of free space within the raft.db file. | bytes | gauge | -| `consul.raft.boltdb.getLog` | Measures the amount of time spent reading logs from the db. 
| ms | timer | -| `consul.raft.boltdb.logBatchSize` | Measures the total size in bytes of logs being written to the db in a single batch. | bytes | sample | -| `consul.raft.boltdb.logsPerBatch` | Measures the number of logs being written per batch to the db. | logs | sample | -| `consul.raft.boltdb.logSize` | Measures the size of logs being written to the db. | bytes | sample | -| `consul.raft.boltdb.numFreePages` | Represents the number of free pages within the raft.db file. | pages | gauge | -| `consul.raft.boltdb.numPendingPages` | Represents the number of pending pages within the raft.db that will soon become free. | pages | gauge | -| `consul.raft.boltdb.openReadTxn` | Represents the number of open read transactions against the db | transactions | gauge | -| `consul.raft.boltdb.totalReadTxn` | Represents the total number of started read transactions against the db | transactions | gauge | -| `consul.raft.boltdb.storeLogs` | Measures the amount of time spent writing logs to the db. | ms | timer | -| `consul.raft.boltdb.txstats.cursorCount` | Counts the number of cursors created since Consul was started. | cursors | counter | -| `consul.raft.boltdb.txstats.nodeCount` | Counts the number of node allocations within the db since Consul was started. | allocations | counter | -| `consul.raft.boltdb.txstats.nodeDeref` | Counts the number of node dereferences in the db since Consul was started. | dereferences | counter | -| `consul.raft.boltdb.txstats.pageAlloc` | Represents the number of bytes allocated within the db since Consul was started. Note that this does not take into account space having been freed and reused. In that case, the value of this metric will still increase. | bytes | gauge | -| `consul.raft.boltdb.txstats.pageCount` | Represents the number of pages allocated since Consul was started. Note that this does not take into account space having been freed and reused. In that case, the value of this metric will still increase. | pages | gauge | -| `consul.raft.boltdb.txstats.rebalance` | Counts the number of node rebalances performed in the db since Consul was started. | rebalances | counter | -| `consul.raft.boltdb.txstats.rebalanceTime` | Measures the time spent rebalancing nodes in the db. | ms | timer | -| `consul.raft.boltdb.txstats.spill` | Counts the number of nodes spilled in the db since Consul was started. | spills | counter | -| `consul.raft.boltdb.txstats.spillTime` | Measures the time spent spilling nodes in the db. | ms | timer | -| `consul.raft.boltdb.txstats.split` | Counts the number of nodes split in the db since Consul was started. | splits | counter | -| `consul.raft.boltdb.txstats.write` | Counts the number of writes to the db since Consul was started. | writes | counter | -| `consul.raft.boltdb.txstats.writeTime` | Measures the amount of time spent performing writes to the db. | ms | timer | -| `consul.raft.boltdb.writeCapacity` | Theoretical write capacity in terms of the number of logs that can be written per second. Each sample outputs what the capacity would be if future batched log write operations were similar to this one. This similarity encompasses 4 things: batch size, byte size, disk performance and boltdb performance. 
While none of these will be static and its highly likely individual samples of this metric will vary, aggregating this metric over a larger time window should provide a decent picture into how this BoltDB store can perform | logs/second | sample | -| `consul.raft.commitNumLogs` | Measures the count of logs processed for application to the FSM in a single batch. | logs | gauge | -| `consul.raft.commitTime` | Measures the time it takes to commit a new entry to the Raft log on the leader. | ms | timer | -| `consul.raft.fsm.lastRestoreDuration` | Measures the time taken to restore the FSM from a snapshot on an agent restart or from the leader calling installSnapshot. This is a gauge that holds it's value since most servers only restore during restarts which are typically infrequent. | ms | gauge | -| `consul.raft.fsm.snapshot` | Measures the time taken by the FSM to record the current state for the snapshot. | ms | timer | -| `consul.raft.fsm.apply` | Measures the time to apply a log to the FSM. | ms | timer | -| `consul.raft.fsm.enqueue` | Measures the amount of time to enqueue a batch of logs for the FSM to apply. | ms | timer | -| `consul.raft.fsm.restore` | Measures the time taken by the FSM to restore its state from a snapshot. | ms | timer | -| `consul.raft.last_index` | Represents the raft applied index. | index | gauge | -| `consul.raft.leader.dispatchLog` | Measures the time it takes for the leader to write log entries to disk. | ms | timer | -| `consul.raft.leader.dispatchNumLogs` | Measures the number of logs committed to disk in a batch. | logs | gauge | -| `consul.raft.logstore.verifier.checkpoints_written` | Counts the number of checkpoint entries written to the LogStore. | checkpoints | counter | -| `consul.raft.logstore.verifier.dropped_reports` | Counts how many times the verifier routine was still busy when the next checksum came in and so verification for a range was skipped. If you see this happen, consider increasing the interval between checkpoints with [`raft_logstore.verification.interval`](/consul/docs/agent/config/config-files#raft_logstore_verification) | reports dropped | counter | -| `consul.raft.logstore.verifier.ranges_verified` | Counts the number of log ranges for which a verification report has been completed. Refer to [Monitor Raft metrics and logs for WAL](/consul/docs/agent/wal-logstore/monitoring) for more information. | log ranges verifications | counter | -| `consul.raft.logstore.verifier.read_checksum_failures` | Counts the number of times a range of logs between two check points contained at least one disk corruption. Refer to [Monitor Raft metrics and logs for WAL](/consul/docs/agent/wal-logstore/monitoring) for more information. | disk corruptions | counter | -| `consul.raft.logstore.verifier.write_checksum_failures` | Counts the number of times a follower has a different checksum to the leader at the point where it writes to the log. This could be caused by either a disk-corruption on the leader (unlikely) or some other corruption of the log entries in-flight. | in-flight corruptions | counter | -| `consul.raft.leader.lastContact` | Measures the time since the leader was last able to contact the follower nodes when checking its leader lease. 
It can be used as a measure for how stable the Raft timing is and how close the leader is to timing out its lease.The lease timeout is 500 ms times the [`raft_multiplier` configuration](/consul/docs/agent/config/config-files#raft_multiplier), so this telemetry value should not be getting close to that configured value, otherwise the Raft timing is marginal and might need to be tuned, or more powerful servers might be needed. See the [Server Performance](/consul/docs/install/performance) guide for more details. | ms | timer | -| `consul.raft.leader.oldestLogAge` | The number of milliseconds since the _oldest_ log in the leader's log store was written. This can be important for replication health where write rate is high and the snapshot is large as followers may be unable to recover from a restart if restoring takes longer than the minimum value for the current leader. Compare this with `consul.raft.fsm.lastRestoreDuration` and `consul.raft.rpc.installSnapshot` to monitor. In normal usage this gauge value will grow linearly over time until a snapshot completes on the leader and the log is truncated. Note: this metric won't be emitted until the leader writes a snapshot. After an upgrade to Consul 1.10.0 it won't be emitted until the oldest log was written after the upgrade. | ms | gauge | -| `consul.raft.replication.heartbeat` | Measures the time taken to invoke appendEntries on a peer, so that it doesn't timeout on a periodic basis. | ms | timer | -| `consul.raft.replication.appendEntries` | Measures the time it takes to replicate log entries to followers. This is a general indicator of the load pressure on the Consul servers, as well as the performance of the communication between the servers. | ms | timer | -| `consul.raft.replication.appendEntries.rpc` | Measures the time taken by the append entries RPC to replicate the log entries of a leader agent onto its follower agent(s). | ms | timer | -| `consul.raft.replication.appendEntries.logs` | Counts the number of logs replicated to an agent to bring it up to speed with the leader's logs. | logs appended/ interval | counter | -| `consul.raft.restore` | Counts the number of times the restore operation has been performed by the agent. Here, restore refers to the action of raft consuming an external snapshot to restore its state. | operation invoked / interval | counter | -| `consul.raft.restoreUserSnapshot` | Measures the time taken by the agent to restore the FSM state from a user's snapshot | ms | timer | -| `consul.raft.rpc.appendEntries` | Measures the time taken to process an append entries RPC call from an agent. | ms | timer | -| `consul.raft.rpc.appendEntries.storeLogs` | Measures the time taken to add any outstanding logs for an agent, since the last appendEntries was invoked | ms | timer | -| `consul.raft.rpc.appendEntries.processLogs` | Measures the time taken to process the outstanding log entries of an agent. | ms | timer | -| `consul.raft.rpc.installSnapshot` | Measures the time taken to process the installSnapshot RPC call. This metric should only be seen on agents which are currently in the follower state. | ms | timer | -| `consul.raft.rpc.processHeartBeat` | Measures the time taken to process a heartbeat request. | ms | timer | -| `consul.raft.rpc.requestVote` | Measures the time taken to process the request vote RPC call. | ms | timer | -| `consul.raft.snapshot.create` | Measures the time taken to initialize the snapshot process. 
| ms | timer | -| `consul.raft.snapshot.persist` | Measures the time taken to dump the current snapshot taken by the Consul agent to the disk. | ms | timer | -| `consul.raft.snapshot.takeSnapshot` | Measures the total time involved in taking the current snapshot (creating one and persisting it) by the Consul agent. | ms | timer | -| `consul.serf.snapshot.appendLine` | Measures the time taken by the Consul agent to append an entry into the existing log. | ms | timer | -| `consul.serf.snapshot.compact` | Measures the time taken by the Consul agent to compact a log. This operation occurs only when the snapshot becomes large enough to justify the compaction . | ms | timer | -| `consul.raft.state.candidate` | Increments whenever a Consul server starts an election. If this increments without a leadership change occurring it could indicate that a single server is overloaded or is experiencing network connectivity issues. | election attempts / interval | counter | -| `consul.raft.state.leader` | Increments whenever a Consul server becomes a leader. If there are frequent leadership changes this may be indication that the servers are overloaded and aren't meeting the soft real-time requirements for Raft, or that there are networking problems between the servers. | leadership transitions / interval | counter | -| `consul.raft.state.follower` | Counts the number of times an agent has entered the follower mode. This happens when a new agent joins the cluster or after the end of a leader election. | follower state entered / interval | counter | -| `consul.raft.transition.heartbeat_timeout` | The number of times an agent has transitioned to the Candidate state, after receive no heartbeat messages from the last known leader. | timeouts / interval | counter | -| `consul.raft.verify_leader` | This metric doesn't have a direct correlation to the leader change. It just counts the number of times an agent checks if it is still the leader or not. For example, during every consistent read, the check is done. Depending on the load in the system, this metric count can be high as it is incremented each time a consistent read is completed. | checks / interval | Counter | -| `consul.raft.wal.head_truncations` | Counts how many log entries have been truncated from the head - i.e. the oldest entries. by graphing the rate of change over time you can see individual truncate calls as spikes. | logs entries truncated | counter | -| `consul.raft.wal.last_segment_age_seconds` | A gauge that is set each time we rotate a segment and describes the number of seconds between when that segment file was first created and when it was sealed. this gives a rough estimate how quickly writes are filling the disk. | seconds | gauge | -| `consul.raft.wal.log_appends` | Counts the number of calls to StoreLog(s) i.e. number of batches of entries appended. | calls | counter | -| `consul.raft.wal.log_entries_read` | Counts the number of log entries read. | log entries read | counter | -| `consul.raft.wal.log_entries_written` | Counts the number of log entries written. | log entries written | counter | -| `consul.raft.wal.log_entry_bytes_read` | Counts the bytes of log entry read from segments before decoding. actual bytes read from disk might be higher as it includes headers and index entries and possible secondary reads for large entries that don't fit in buffers. | bytes | counter | -| `consul.raft.wal.log_entry_bytes_written` | Counts the bytes of log entry after encoding with Codec. 
Actual bytes written to disk might be slightly higher as it includes headers and index entries. | bytes | counter | -| `consul.raft.wal.segment_rotations` | Counts how many times we move to a new segment file. | rotations | counter | -| `consul.raft.wal.stable_gets` | Counts how many calls to StableStore.Get or GetUint64. | calls | counter | -| `consul.raft.wal.stable_sets` | Counts how many calls to StableStore.Set or SetUint64. | calls | counter | -| `consul.raft.wal.tail_truncations` | Counts how many log entries have been truncated from the head - i.e. the newest entries. by graphing the rate of change over time you can see individual truncate calls as spikes. | logs entries truncated | counter | -| `consul.rpc.accept_conn` | Increments when a server accepts an RPC connection. | connections | counter | -| `consul.rpc.rate_limit.exceeded` | Increments whenever an RPC is over a configured rate limit. In permissive mode, the RPC is still allowed to proceed. | RPCs | counter | -| `consul.rpc.rate_limit.log_dropped` | Increments whenever a log that is emitted because an RPC exceeded a rate limit gets dropped because the output buffer is full. | log messages dropped | counter | -| `consul.catalog.register` | Measures the time it takes to complete a catalog register operation. | ms | timer | -| `consul.catalog.deregister` | Measures the time it takes to complete a catalog deregister operation. | ms | timer | -| `consul.server.isLeader` | Track if a server is a leader(1) or not(0) | 1 or 0 | gauge | -| `consul.fsm.register` | Measures the time it takes to apply a catalog register operation to the FSM. | ms | timer | -| `consul.fsm.deregister` | Measures the time it takes to apply a catalog deregister operation to the FSM. | ms | timer | -| `consul.fsm.session` | Measures the time it takes to apply the given session operation to the FSM. | ms | timer | -| `consul.fsm.kvs` | Measures the time it takes to apply the given KV operation to the FSM. | ms | timer | -| `consul.fsm.tombstone` | Measures the time it takes to apply the given tombstone operation to the FSM. | ms | timer | -| `consul.fsm.coordinate.batch-update` | Measures the time it takes to apply the given batch coordinate update to the FSM. | ms | timer | -| `consul.fsm.prepared-query` | Measures the time it takes to apply the given prepared query update operation to the FSM. | ms | timer | -| `consul.fsm.txn` | Measures the time it takes to apply the given transaction update to the FSM. | ms | timer | -| `consul.fsm.autopilot` | Measures the time it takes to apply the given autopilot update to the FSM. | ms | timer | -| `consul.fsm.persist` | Measures the time it takes to persist the FSM to a raft snapshot. | ms | timer | -| `consul.fsm.intention` | Measures the time it takes to apply an intention operation to the state store. | ms | timer | -| `consul.fsm.ca` | Measures the time it takes to apply CA configuration operations to the FSM. | ms | timer | -| `consul.fsm.ca.leaf` | Measures the time it takes to apply an operation while signing a leaf certificate. | ms | timer | -| `consul.fsm.acl.token` | Measures the time it takes to apply an ACL token operation to the FSM. | ms | timer | -| `consul.fsm.acl.policy` | Measures the time it takes to apply an ACL policy operation to the FSM. | ms | timer | -| `consul.fsm.acl.bindingrule` | Measures the time it takes to apply an ACL binding rule operation to the FSM. | ms | timer | -| `consul.fsm.acl.authmethod` | Measures the time it takes to apply an ACL authmethod operation to the FSM. 
| ms | timer | -| `consul.fsm.system_metadata` | Measures the time it takes to apply a system metadata operation to the FSM. | ms | timer | -| `consul.kvs.apply` | Measures the time it takes to complete an update to the KV store. | ms | timer | -| `consul.leader.barrier` | Measures the time spent waiting for the raft barrier upon gaining leadership. | ms | timer | -| `consul.leader.reconcile` | Measures the time spent updating the raft store from the serf member information. | ms | timer | -| `consul.leader.reconcileMember` | Measures the time spent updating the raft store for a single serf member's information. | ms | timer | -| `consul.leader.reapTombstones` | Measures the time spent clearing tombstones. | ms | timer | -| `consul.leader.replication.acl-policies.status` | This will only be emitted by the leader in a secondary datacenter. The value will be a 1 if the last round of ACL policy replication was successful or 0 if there was an error. | healthy | gauge | -| `consul.leader.replication.acl-policies.index` | This will only be emitted by the leader in a secondary datacenter. Increments to the index of ACL policies in the primary datacenter that have been successfully replicated. | index | gauge | -| `consul.leader.replication.acl-roles.status` | This will only be emitted by the leader in a secondary datacenter. The value will be a 1 if the last round of ACL role replication was successful or 0 if there was an error. | healthy | gauge | -| `consul.leader.replication.acl-roles.index` | This will only be emitted by the leader in a secondary datacenter. Increments to the index of ACL roles in the primary datacenter that have been successfully replicated. | index | gauge | -| `consul.leader.replication.acl-tokens.status` | This will only be emitted by the leader in a secondary datacenter. The value will be a 1 if the last round of ACL token replication was successful or 0 if there was an error. | healthy | gauge | -| `consul.leader.replication.acl-tokens.index` | This will only be emitted by the leader in a secondary datacenter. Increments to the index of ACL tokens in the primary datacenter that have been successfully replicated. | index | gauge | -| `consul.leader.replication.config-entries.status` | This will only be emitted by the leader in a secondary datacenter. The value will be a 1 if the last round of config entry replication was successful or 0 if there was an error. | healthy | gauge | -| `consul.leader.replication.config-entries.index` | This will only be emitted by the leader in a secondary datacenter. Increments to the index of config entries in the primary datacenter that have been successfully replicated. | index | gauge | -| `consul.leader.replication.federation-state.status` | This will only be emitted by the leader in a secondary datacenter. The value will be a 1 if the last round of federation state replication was successful or 0 if there was an error. | healthy | gauge | -| `consul.leader.replication.federation-state.index` | This will only be emitted by the leader in a secondary datacenter. Increments to the index of federation states in the primary datacenter that have been successfully replicated. | index | gauge | -| `consul.leader.replication.namespaces.status` | This will only be emitted by the leader in a secondary datacenter. The value will be a 1 if the last round of namespace replication was successful or 0 if there was an error. 
| healthy | gauge | -| `consul.leader.replication.namespaces.index` | This will only be emitted by the leader in a secondary datacenter. Increments to the index of namespaces in the primary datacenter that have been successfully replicated. | index | gauge | -| `consul.prepared-query.apply` | Measures the time it takes to apply a prepared query update. | ms | timer | -| `consul.prepared-query.execute_remote` | Measures the time it takes to process a prepared query execute request that was forwarded to another datacenter. | ms | timer | -| `consul.prepared-query.execute` | Measures the time it takes to process a prepared query execute request. | ms | timer | -| `consul.prepared-query.explain` | Measures the time it takes to process a prepared query explain request. | ms | timer | -| `consul.rpc.raft_handoff` | Increments when a server accepts a Raft-related RPC connection. | connections | counter | -| `consul.rpc.request` | Increments when a server receives a Consul-related RPC request. | requests | counter | -| `consul.rpc.request_error` | Increments when a server returns an error from an RPC request. | errors | counter | -| `consul.rpc.query` | Increments when a server receives a read RPC request, indicating the rate of new read queries. See consul.rpc.queries_blocking for the current number of in-flight blocking RPC calls. This metric changed in 1.7.0 to only increment on the start of a query. The rate of queries will appear lower, but is more accurate. | queries | counter | -| `consul.rpc.queries_blocking` | The current number of in-flight blocking queries the server is handling. | queries | gauge | -| `consul.rpc.cross-dc` | Increments when a server sends a (potentially blocking) cross datacenter RPC query. | queries | counter | -| `consul.rpc.consistentRead` | Measures the time spent confirming that a consistent read can be performed. | ms | timer | -| `consul.session.apply` | Measures the time spent applying a session update. | ms | timer | -| `consul.session.renew` | Measures the time spent renewing a session. | ms | timer | -| `consul.session_ttl.invalidate` | Measures the time spent invalidating an expired session. | ms | timer | -| `consul.txn.apply` | Measures the time spent applying a transaction operation. | ms | timer | -| `consul.txn.read` | Measures the time spent returning a read transaction. | ms | timer | -| `consul.grpc.client.request.count` | Counts the number of gRPC requests made by the client agent to a Consul server. Includes a `server_type` label indicating either the `internal` or `external` gRPC server. | requests | counter | -| `consul.grpc.client.connection.count` | Counts the number of new gRPC connections opened by the client agent to a Consul server. Includes a `server_type` label indicating either the `internal` or `external` gRPC server. | connections | counter | -| `consul.grpc.client.connections` | Measures the number of active gRPC connections open from the client agent to any Consul servers. Includes a `server_type` label indicating either the `internal` or `external` gRPC server. | connections | gauge | -| `consul.grpc.server.request.count` | Counts the number of gRPC requests received by the server. Includes a `server_type` label indicating either the `internal` or `external` gRPC server. | requests | counter | -| `consul.grpc.server.connection.count` | Counts the number of new gRPC connections received by the server. Includes a `server_type` label indicating either the `internal` or `external` gRPC server. 
| connections | counter | -| `consul.grpc.server.connections` | Measures the number of active gRPC connections open on the server. Includes a `server_type` label indicating either the `internal` or `external` gRPC server. | connections | gauge | -| `consul.grpc.server.stream.count` | Counts the number of new gRPC streams received by the server. Includes a `server_type` label indicating either the `internal` or `external` gRPC server. | streams | counter | -| `consul.grpc.server.streams` | Measures the number of active gRPC streams handled by the server. Includes a `server_type` label indicating either the `internal` or `external` gRPC server. | streams | gauge | -| `consul.xds.server.streams` | Measures the number of active xDS streams handled by the server split by protocol version. | streams | gauge | -| `consul.xds.server.streamsUnauthenticated` | Measures the number of active xDS streams handled by the server that are unauthenticated because ACLs are not enabled or ACL tokens were missing. | streams | gauge | -| `consul.xds.server.idealStreamsMax` | The maximum number of xDS streams per server, chosen to achieve a roughly even spread of load across servers. | streams | gauge | -| `consul.xds.server.streamDrained` | Counts the number of xDS streams that are drained when rebalancing the load between servers. | streams | counter | -| `consul.xds.server.streamStart` | Measures the time taken to first generate xDS resources after an xDS stream is opened. | ms | timer | - - -## Server Workload - -** Requirements: ** -* Consul 1.12.0+ - -The following label-based RPC metrics provide insight about the workload on a Consul server and the source of the workload. - -The [`prefix_filter`](/consul/docs/agent/config/config-files#telemetry-prefix_filter) telemetry configuration setting blocks or enables all RPC metric method calls. Specify the RPC metrics you want to allow in the `prefix_filter`: - - - -```hcl -telemetry { - prefix_filter = ["+consul.rpc.server.call"] -} -``` - -```json -{ - "telemetry": { - "prefix_filter": [ - "+consul.rpc.server.call" - ] - } -} -``` - - - -| Metric | Description | Unit | Type | -| ------------------------------------- | --------------------------------------------------------- | ------ | --------- | -| `consul.rpc.server.call` | Measures the elapsed time taken to complete an RPC call. | ms | summary | - -### Labels - -The server workload metrics above come with the following labels: - -| Label Name | Description | Possible values | -| ------------------------------------- | -------------------------------------------------------------------- | --------------------------------------- | -| `method` | The name of the RPC method. | The value of any RPC request in Consul. | -| `errored` | Indicates whether the RPC call errored. | `true` or `false`. | -| `request_type` | Whether it is a `read` or `write` request. | `read`, `write` or `unreported`. | -| `rpc_type` | The RPC implementation. | `net/rpc` or `internal`. | -| `leader` | Whether the server was a `leader` or not at the time of the request. | `true`, `false` or `unreported`. | - -#### Label Explanations - -The `internal` value for the `rpc_type` in the table above refers to leader and cluster management RPC operations that Consul performs. -Historically, `internal` RPC operation metrics were accounted under the same metric names. - -The `unreported` value for the `request_type` in the table above refers to RPC requests within Consul where it is difficult to ascertain whether a request is `read` or `write` type. 
- -The `unreported` value for the `leader` label in the table above refers to RPC requests where Consul cannot determine the leadership status for a server. - -#### Read Request Labels - -In addition to the labels above, for read requests, the following may be populated: - -| Label Name | Description | Possible values | -| ------------------------------------- | ------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------ | -| `blocking` | Whether the read request passed in a `MinQueryIndex`. | `true` if a MinQueryIndex was passed, `false` otherwise. | -| `target_datacenter` | The target datacenter for the read request. | The string value of the target datacenter for the request. | -| `locality` | Gives an indication of whether the RPC request is local or has been forwarded. | `local` if the current server datacenter is the same as `target_datacenter`, otherwise `forwarded`. | - -Here is a Prometheus-style example of an RPC metric and its labels: - - - -``` - ... - consul_rpc_server_call{errored="false",method="Catalog.ListNodes",request_type="read",rpc_type="net/rpc",quantile="0.5"} 255 - ... -``` - - - - -## Cluster Health - -These metrics give insight into the health of the cluster as a whole. -Queries for the `consul.memberlist.*` and `consul.serf.*` metrics can be appended -with certain labels to further distinguish data between different gossip pools. -The supported label for CE is `network`, while `segment`, `partition`, and `area` -are allowed for Consul Enterprise. - -| Metric | Description | Unit | Type | -|----------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|---------| -| `consul.memberlist.degraded.probe` | Counts the number of times the agent has performed failure detection on another agent at a slower probe rate. The agent uses its own health metric as an indicator to perform this action. (If its health score is low, it means that the node is healthy, and vice versa.) | probes / interval | counter | -| `consul.memberlist.degraded.timeout` | Counts the number of times an agent was marked as a dead node, whilst not getting enough confirmations from a randomly selected list of agent nodes in an agent's membership. | occurrence / interval | counter | -| `consul.memberlist.msg.dead` | Counts the number of times an agent has marked another agent to be a dead node. | messages / interval | counter | -| `consul.memberlist.health.score` | Describes a node's perception of its own health based on how well it is meeting the soft real-time requirements of the protocol. This metric ranges from 0 to 8, where 0 indicates "totally healthy". This health score is used to scale the time between outgoing probes, and higher scores translate into longer probing intervals. For more details see section IV of the Lifeguard paper: https://arxiv.org/pdf/1707.00788.pdf | score | gauge | -| `consul.memberlist.msg.suspect` | Increments when an agent suspects another as failed when executing random probes as part of the gossip protocol.
These can be an indicator of overloaded agents, network problems, or configuration errors where agents can not connect to each other on the [required ports](/consul/docs/agent/config/config-files#ports). | suspect messages received / interval | counter | -| `consul.memberlist.tcp.accept` | Counts the number of times an agent has accepted an incoming TCP stream connection. | connections accepted / interval | counter | -| `consul.memberlist.udp.sent/received` | Measures the total number of bytes sent/received by an agent through the UDP protocol. | bytes sent or bytes received / interval | counter | -| `consul.memberlist.tcp.connect` | Counts the number of times an agent has initiated a push/pull sync with an other agent. | push/pull initiated / interval | counter | -| `consul.memberlist.tcp.sent` | Measures the total number of bytes sent by an agent through the TCP protocol | bytes sent / interval | counter | -| `consul.memberlist.gossip` | Measures the time taken for gossip messages to be broadcasted to a set of randomly selected nodes. | ms | timer | -| `consul.memberlist.msg_alive` | Counts the number of alive messages, that the agent has processed so far, based on the message information given by the network layer. | messages / Interval | counter | -| `consul.memberlist.msg_dead` | The number of dead messages that the agent has processed so far, based on the message information given by the network layer. | messages / Interval | counter | -| `consul.memberlist.msg_suspect` | The number of suspect messages that the agent has processed so far, based on the message information given by the network layer. | messages / Interval | counter | -| `consul.memberlist.node.instances` | Tracks the number of instances in each of the node states: alive, dead, suspect, and left. | nodes | gauge | -| `consul.memberlist.probeNode` | Measures the time taken to perform a single round of failure detection on a select agent. | nodes / Interval | counter | -| `consul.memberlist.pushPullNode` | Measures the number of agents that have exchanged state with this agent. | nodes / Interval | counter | -| `consul.memberlist.queue.broadcasts` | Measures the number of messages waiting to be broadcast to other gossip participants. | messages | sample | -| `consul.memberlist.size.local` | Measures the size in bytes of the memberlist before it is sent to another gossip recipient. | bytes | gauge | -| `consul.memberlist.size.remote` | Measures the size in bytes of incoming memberlists from other gossip participants. | bytes | gauge | -| `consul.serf.member.failed` | Increments when an agent is marked dead. This can be an indicator of overloaded agents, network problems, or configuration errors where agents cannot connect to each other on the [required ports](/consul/docs/agent/config/config-files#ports). | failures / interval | counter | -| `consul.serf.member.flap` | Available in Consul 0.7 and later, this increments when an agent is marked dead and then recovers within a short time period. This can be an indicator of overloaded agents, network problems, or configuration errors where agents cannot connect to each other on the [required ports](/consul/docs/agent/config/config-files#ports). | flaps / interval | counter | -| `consul.serf.member.join` | Increments when an agent joins the cluster. If an agent flapped or failed this counter also increments when it re-joins. | joins / interval | counter | -| `consul.serf.member.left` | Increments when an agent leaves the cluster. 
| leaves / interval | counter | -| `consul.serf.events` | Increments when an agent processes an [event](/consul/commands/event). Consul uses events internally so there may be additional events showing in telemetry. There are also a per-event counters emitted as `consul.serf.events.`. | events / interval | counter | -| `consul.serf.events.` | Breakdown of `consul.serf.events` by type of event. | events / interval | counter | -| `consul.serf.msgs.sent` | This metric is sample of the number of bytes of messages broadcast to the cluster. In a given time interval, the sum of this metric is the total number of bytes sent and the count is the number of messages sent. | message bytes / interval | counter | -| `consul.autopilot.failure_tolerance` | Tracks the number of voting servers that the cluster can lose while continuing to function. | servers | gauge | -| `consul.autopilot.healthy` | Tracks the overall health of the local server cluster. If all servers are considered healthy by Autopilot, this will be set to 1. If any are unhealthy, this will be 0. | boolean | gauge | -| `consul.session_ttl.active` | Tracks the active number of sessions being tracked. | sessions | gauge | -| `consul.catalog.service.query` | Increments for each catalog query for the given service. | queries | counter | -| `consul.catalog.service.query-tag` | Increments for each catalog query for the given service with the given tag. | queries | counter | -| `consul.catalog.service.query-tags` | Increments for each catalog query for the given service with the given tags. | queries | counter | -| `consul.catalog.service.not-found` | Increments for each catalog query where the given service could not be found. | queries | counter | -| `consul.catalog.connect.query` | Increments for each mesh-based catalog query for the given service. | queries | counter | -| `consul.catalog.connect.query-tag` | Increments for each mesh-based catalog query for the given service with the given tag. | queries | counter | -| `consul.catalog.connect.query-tags` | Increments for each mesh-based catalog query for the given service with the given tags. | queries | counter | -| `consul.catalog.connect.not-found` | Increments for each mesh-based catalog query where the given service could not be found. | queries | counter | - -## Service Mesh Built-in Proxy Metrics - -Consul service mesh's built-in proxy is by default configured to log metrics to the -same sink as the agent that starts it. - -When running in this mode it emits some basic metrics. These will be expanded -upon in the future. - -All metrics are prefixed with `consul.proxy.` to distinguish -between multiple proxies on a given host. The table below use `web` as an -example service name for brevity. - -### Labels - -Most labels have a `dst` label and some have a `src` label. When using metrics -sinks and timeseries stores that support labels or tags, these allow aggregating -the connections by service name. - -Assuming all services are using a managed built-in proxy, you can get a complete -overview of both number of open connections and bytes sent and received between -all services by aggregating over these metrics. - -For example aggregating over all `upstream` (i.e. outbound) connections which -have both `src` and `dst` labels, you can get a sum of all the bandwidth in and -out of a given service or the total number of connections between two services. - -### Metrics Reference - -The standard go runtime metrics are exported by `go-metrics` as with Consul -agent. 
The table below describes the additional metrics exported by the proxy.
-
-| Metric                               | Description | Unit        | Type    |
-| ------------------------------------ | ----------- | ----------- | ------- |
-| `consul.proxy.web.runtime.*` | The same go runtime metrics as documented for the agent above. | mixed | mixed |
-| `consul.proxy.web.inbound.conns` | Shows the current number of connections open from inbound requests to the proxy. Where supported a `dst` label is added indicating the service name the proxy represents. | connections | gauge |
-| `consul.proxy.web.inbound.rx_bytes` | Increments by the number of bytes received from an inbound client connection. Where supported a `dst` label is added indicating the service name the proxy represents. | bytes | counter |
-| `consul.proxy.web.inbound.tx_bytes` | Increments by the number of bytes transferred to an inbound client connection. Where supported a `dst` label is added indicating the service name the proxy represents. | bytes | counter |
-| `consul.proxy.web.upstream.conns` | Shows the current number of connections open from a proxy instance to an upstream. Where supported a `src` label is added indicating the service name the proxy represents, and a `dst` label is added indicating the service name the upstream is connecting to. | connections | gauge |
-| `consul.proxy.web.upstream.rx_bytes` | Increments by the number of bytes received from an upstream connection. Where supported a `src` label is added indicating the service name the proxy represents, and a `dst` label is added indicating the service name the upstream is connecting to. | bytes | counter |
-| `consul.proxy.web.upstream.tx_bytes` | Increments by the number of bytes transferred to an upstream connection. Where supported a `src` label is added indicating the service name the proxy represents, and a `dst` label is added indicating the service name the upstream is connecting to. | bytes | counter |
-
-## Peering metrics
-
-**Requirements:**
-- Consul 1.13.0+
-
-[Cluster peering](/consul/docs/connect/cluster-peering) refers to Consul clusters that communicate through a peer connection, as opposed to a federated connection. Consul collects metrics that describe the number of services exported to a peered cluster. Peering metrics are only emitted by the leader server. These metrics are emitted every 9 seconds.
-
-| Metric | Description | Unit | Type |
-| ------------------------------------- | ----------- | ------ | ------- |
-| `consul.peering.exported_services` | Counts the number of services exported with [exported service configuration entries](/consul/docs/connect/config-entries/exported-services) to a peer cluster. | count | gauge |
-| `consul.peering.healthy` | Tracks the health of a peering connection as reported by the server. If Consul detects errors while sending or receiving from a peer which do not recover within a reasonable time, this metric returns 0. Healthy connections return 1. | health | gauge |
-
-### Labels
-
-Consul attaches the following labels to metric values.
-
-| Label Name | Description | Possible values |
-| ------------------------------------- | -------------------------------------------------------------------------------- | ----------------------------------------- |
-| `peer_name` | The name of the peering on the reporting cluster or leader. | Any defined peer name in the cluster |
-| `peer_id` | The ID of a peer connected to the reporting cluster or leader. | Any UUID |
-| `partition` | Name of the partition that the peering is created in. | Any defined partition name in the cluster |
-
-## Server Host Metrics
-
-Consul servers can report the following metrics about the host's system resources.
-This feature must be enabled in the [agent telemetry configuration](/consul/docs/agent/config/config-files#telemetry-enable_host_metrics).
-Note that if the Consul server is operating inside a container, these metrics
-still report host resource usage and do not report any resource limits placed
-on the container.
-
-**Requirements:**
-- Consul 1.15.3+
-
-| Metric | Description | Unit | Type |
-| ----------------------------------- | ------------------------------------------------------------------ | ------- | ----- |
-| `consul.host.memory.total` | The total physical memory in bytes | bytes | gauge |
-| `consul.host.memory.available` | The available physical memory in bytes | bytes | gauge |
-| `consul.host.memory.free` | The free physical memory in bytes | bytes | gauge |
-| `consul.host.memory.used` | The used physical memory in bytes | bytes | gauge |
-| `consul.host.memory.used_percent` | The used physical memory as a percentage of total physical memory | percent | gauge |
-| `consul.host.cpu.total` | The host's total cpu utilization | percent | gauge |
-| `consul.host.cpu.user` | The cpu utilization in user space | percent | gauge |
-| `consul.host.cpu.idle` | The cpu utilization in idle state | percent | gauge |
-| `consul.host.cpu.iowait` | The cpu utilization in iowait state | percent | gauge |
-| `consul.host.cpu.system` | The cpu utilization in system space | percent | gauge |
-| `consul.host.disk.size` | The size in bytes of the data_dir disk | bytes | gauge |
-| `consul.host.disk.used` | The number of bytes used on the data_dir disk | bytes | gauge |
-| `consul.host.disk.available` | The number of bytes available on the data_dir disk | bytes | gauge |
-| `consul.host.disk.used_percent` | The percentage of disk space used on the data_dir disk | percent | gauge |
-| `consul.host.disk.inodes_percent` | The percentage of inode usage on the data_dir disk | percent | gauge |
-| `consul.host.uptime` | The uptime of the host in seconds | seconds | gauge |
diff --git a/website/content/docs/agent/wal-logstore/enable.mdx b/website/content/docs/agent/wal-logstore/enable.mdx
deleted file mode 100644
index e80315928f2a..000000000000
--- a/website/content/docs/agent/wal-logstore/enable.mdx
+++ /dev/null
@@ -1,151 +0,0 @@
----
-layout: docs
-page_title: Enable the experimental WAL LogStore backend
-description: >-
-  Learn how to safely configure and test the experimental WAL backend in your Consul deployment.
----
-
-# Enable the experimental WAL LogStore backend
-
-This topic describes how to safely configure and test the WAL backend in your Consul deployment.
-
-The overall process for enabling the WAL LogStore backend for one server consists of the following steps. In production environments, we recommend starting by enabling the backend on a single server. If you eventually choose to expand the test to further servers, you must repeat these steps for each one.
- -1. Enable log verification. -1. Select target server to enable WAL. -1. Stop target server gracefully. -1. Remove data directory from target server. -1. Update target server's configuration. -1. Start the target server. -1. Monitor target server raft metrics and logs. - -!> **Experimental feature:** The WAL LogStore backend is experimental and may contain bugs that could cause data loss. Follow this guide to manage risk during testing. - -## Requirements - -- Consul v1.15 or later is required for all servers in the datacenter. Refer to the [standard upgrade procedure](/consul/docs/upgrading/instructions/general-process) and the [1.15 upgrade notes](/consul/docs/upgrading/upgrade-specific#consul-1-15-x) for additional information. -- A Consul cluster with at least three nodes are required to safely test the WAL backend without downtime. - -We recommend taking the following additional measures: - -- Take a snapshot prior to testing. -- Monitor Consul server metrics and logs, and set an alert on specific log events that occur when WAL is enabled. Refer to [Monitor Raft metrics and logs for WAL](/consul/docs/agent/wal-logstore/monitoring) for more information. -- Enable WAL in a pre-production environment and run it for a several days before enabling it in production. - -## Known issues - -The following issues exist in Consul 1.15.0 and 1.15.1. - - * A follower that is disconnected may be unable to catch up if it is using the WAL backend. - * Restoring user snapshots can break replication to WAL-enabled followers. - * Restoring user snapshots can cause a WAL-enabled leader to panic. - -## Risks - -While their likelihood remains low to very low, be aware of the following risks before implementing the WAL backend: - - - If WAL corrupts data on a Consul server agent, the server data cannot be recovered. Restart the server with an empty data directory and reload its state from the leader to resolve the issue. - - WAL may corrupt data or contain a defect that causes the server to panic and crash. WAL may not restart if the defect recurs when WAL reads from the logs on startup. Restart the server with an empty data directory and reload its state from the leader to resolve the issue. - - If WAL corrupts data, clients may read corrupted data from the Consul server, such as invalid IP addresses or unmatched tokens. This outcome is unlikely even if a recurring defect causes WAL to corrupt data because replication uses objects cached in memory instead of reads from disk. Restart the server with an empty data directory and reload its state from the leader to resolve the issue. - - If you enable a Consul CE server to use WAL or enable WAL on a voting server with Consul Enterprise, WAL may corrupt the server's state, become the leader, and replicate the corrupted state to all other servers. In this case, restoring from backup is required to recover a completely uncorrupted state. Test WAL on a non-voting server in Enterprise to prevent this outcome. You can add a new non-voting server to the cluster to test with if there are no existing ones. - -## Enable log verification - -You must enable log verification on all voting servers in Enterprise and all servers in CE because the leader writes verification checkpoints. - -1. On each voting server, add the following to the server's configuration file: - - ```hcl - raft_logstore { - verification { - enabled = true - interval = "60s" - } - } - ``` - -1. Restart the server to apply the changes. 
The `consul reload` command is not sufficient to apply `raft_logstore` configuration changes. -1. Run the `consul operator raft list-peers` command to wait for each server to become a healthy voter before moving on to the next. This may take a few minutes for large snapshots. - -When complete, the server's logs should contain verifier reports that appear like the following example: - -```log hideClipboard -2023-01-31T14:44:31.174Z [INFO] agent.server.raft.logstore.verifier: verification checksum OK: elapsed=488.463268ms leaderChecksum=f15db83976f2328c rangeEnd=357802 rangeStart=298132 readChecksum=f15db83976f2328c -``` - -## Select target server to enable WAL - -If you are using Consul CE, or Consul Enterprise without non-voting servers, select a follower server to enable WAL. As noted in [Risks](#risks), Consul Enterprise users with non-voting servers should first select a non-voting server, or consider adding another server as a non-voter to test on. - -Retrieve the current state of the servers by running the following command: - -```shell-session -$ consul operator raft list-peers -``` - -## Stop target server - -Stop the target server gracefully. For example, if you are using `systemd`, -run the following command: - -```shell-session -$ systemctl stop consul -``` - -If your environment uses configuration management automation that might interfere with this process, such as Chef or Puppet, you must disable them until you have completely enabled WAL as a storage backend. - -## Remove data directory from target server - -Temporarily moving the data directory to a different location is less destructive than deleting it. We recommend moving it in cases where you unsuccessfully enable WAL. Do not use the old data directory (`/data-dir/raft.bak`) for recovery after restarting the server. We recommend eventually deleting the old directory. - -The following example assumes the `data_dir` in the server's configuration is `/data-dir` and renames it to `/data-dir.bak`. - -```shell-session -$ mv /data-dir/raft /data-dir/raft.bak -``` - -When switching backends, you must always remove _the entire raft directory_, not just the `raft.db` file or `wal` directory. The log must always be consistent with the snapshots to avoid undefined behavior or data loss. - -## Update target server configuration - -Add the following to the target server's configuration file: - -```hcl -raft_logstore { - backend = "wal" - verification { - enabled = true - interval = "60s" - } -} -``` - -## Start target server - -Start the target server. For example, if you are using `systemd`, run the following command: - -```shell-session -$ systemctl start consul -``` - -Watch for the server to become a healthy voter again. - -```shell-session -$ consul operator raft list-peers -``` - -## Monitor target server Raft metrics and logs - -Refer to [Monitor Raft metrics and logs for WAL](/consul/docs/agent/wal-logstore/monitoring) for details. - -We recommend leaving the cluster in the test configuration for several days or weeks, as long as you observe no errors. An extended test provides more confidence that WAL operates correctly under varied workloads and during routine server restarts. If you observe any errors, end the test immediately and report them. - -If you disabled configuration management automation, consider reenabling it during the testing phase to pick up other updates for the host. You must ensure that it does not revert the Consul configuration file and remove the altered backend configuration. 
One way to do this is to add the `raft_logstore` block to a separate file that is not managed by your automation. This file can either be placed in the directory specified by [`-config-dir`](/consul/docs/agent/config/cli-flags#_config_dir) or passed as an additional [`-config-file`](/consul/docs/agent/config/cli-flags#_config_file) argument.
-
-## Next steps
-
-- If you observe any verification errors, performance anomalies, or other suspicious behavior from the target server during the test, you should immediately follow [the procedure to revert back to BoltDB](/consul/docs/agent/wal-logstore/revert-to-boltdb). Report failures through GitHub.
-
-- If you do not see errors and would like to expand the test further, you can repeat the above procedure on another target server. We suggest waiting after each test expansion and slowly rolling WAL out to other parts of your environment. Once the majority of your servers use WAL, any bugs not yet discovered may result in cluster unavailability.
-
-- If you wish to permanently enable WAL on all servers, repeat the steps described in this topic for each server. Even if `backend = "wal"` is set in a server's configuration, the server continues to use BoltDB if it finds an existing raft.db file in the data directory.
diff --git a/website/content/docs/agent/wal-logstore/index.mdx b/website/content/docs/agent/wal-logstore/index.mdx
deleted file mode 100644
index b215db158c8b..000000000000
--- a/website/content/docs/agent/wal-logstore/index.mdx
+++ /dev/null
@@ -1,53 +0,0 @@
----
-layout: docs
-page_title: WAL LogStore Backend Overview
-description: >-
-  The experimental WAL (write-ahead log) LogStore backend shipped in Consul 1.15 is intended to replace the BoltDB backend, improving performance and log storage issues.
----
-
-# Experimental WAL LogStore backend overview
-
-This topic provides an overview of the WAL (write-ahead log) LogStore backend.
-The WAL backend is an experimental feature. Refer to
-[Requirements](/consul/docs/agent/wal-logstore/enable#requirements) for
-supported environments and known issues.
-
-We do not recommend enabling the WAL backend in production without following
-[our guide for safe
-testing](/consul/docs/agent/wal-logstore/enable).
-
-## WAL versus BoltDB
-
-WAL implements a traditional log with rotating, append-only log files. WAL resolves many issues with the existing `LogStore` provided by the BoltDB backend. The BoltDB `LogStore` is a copy-on-write BTree, which is not optimized for append-only, write-heavy workloads.
-
-### BoltDB storage scalability issues
-
-The existing BoltDB log store inefficiently stores append-only logs to disk because it was designed as a full key-value database. It is a single file that only ever grows. Deleting the oldest logs, which Consul does regularly when it makes new snapshots of the state, leaves free space in the file. The free space must be tracked in a `freelist` so that BoltDB can reuse it on future writes. By contrast, a simple segmented log can delete the oldest log files from disk.
-
-A burst of writes at double or triple the normal volume can suddenly cause the log file to grow to several times its steady-state size. After Consul takes the next snapshot and truncates the oldest logs, the resulting file is mostly empty space.
-
-To track the free space, Consul must write extra metadata to disk with every write. The metadata is proportional to the number of free pages, so after a large burst, write latencies tend to increase. In some cases, the latencies cause serious performance degradation to the cluster.
- -To mitigate risks associated with sudden bursts of log data, Consul tries to limit lots of logs from accumulating in the LogStore. Significantly larger BoltDB files are slower to append to because the tree is deeper and freelist larger. For this reason, Consul's default options associated with snapshots, truncating logs, and keeping the log history have been aggressively set toward keeping BoltDB small rather than using disk IO optimally. - -But the larger the file, the more likely it is to have a large freelist or suddenly form one after a burst of writes. For this reason, the many of Consul's default options associated with snapshots, truncating logs, and keeping the log history aggressively keep BoltDT small rather than using disk IO more efficiently. - -Other reliability issues, such as [raft replication capacity issues](/consul/docs/agent/monitor/telemetry#raft-replication-capacity-issues), are much simpler to solve without the performance concerns caused by storing more logs in BoltDB. - -### WAL approaches storage issues differently - -When directly measured, WAL is more performant than BoltDB because it solves a simpler storage problem. Despite this, some users may not notice a significant performance improvement from the upgrade with the same configuration and workload. In this case, the benefit of WAL is that retaining more logs does not affect write performance. As a result, strategies for reducing disk IO with slower snapshots or for keeping logs to permit slower followers to catch up with cluster state are all possible, increasing the reliability of the deployment. - -## WAL quality assurance - -The WAL backend has been tested thoroughly during development: - -- Every component in the WAL, such as [metadata management](https://github.com/hashicorp/raft-wal/blob/main/types/meta.go), [log file encoding](https://github.com/hashicorp/raft-wal/blob/main/types/segment.go) to actual [file-system interaction](https://github.com/hashicorp/raft-wal/blob/main/types/vfs.go) are abstracted so unit tests can simulate difficult-to-reproduce disk failures. - -- We used the [application-level intelligent crash explorer (ALICE)](https://github.com/hashicorp/raft-wal/blob/main/alice/README.md) to exhaustively simulate thousands of possible crash failure scenarios. WAL correctly recovered from all scenarios. - -- We ran hundreds of tests in a performance testing cluster with checksum verification enabled and did not detect data loss or corruption. We will continue testing before making WAL the default backend. - -We are aware of how complex and critical disk-persistence is for your data. - -We hope that many users at different scales will try WAL in their environments after upgrading to 1.15 or later and report success or failure so that we can confidently replace BoltDB as the default for new clusters in a future release. diff --git a/website/content/docs/agent/wal-logstore/monitoring.mdx b/website/content/docs/agent/wal-logstore/monitoring.mdx deleted file mode 100644 index f4f81a986d27..000000000000 --- a/website/content/docs/agent/wal-logstore/monitoring.mdx +++ /dev/null @@ -1,85 +0,0 @@ ---- -layout: docs -page_title: Monitor Raft metrics and logs for WAL -description: >- - Learn how to monitor Raft metrics emitted the experimental WAL (write-ahead log) LogStore backend shipped in Consul 1.15. ---- - -# Monitor Raft metrics and logs for WAL - -This topic describes how to monitor Raft metrics and logs if you are testing the WAL backend. 
We strongly recommend monitoring the Consul cluster, especially the target server, for evidence that the WAL backend is not functioning correctly. Refer to [Enable the experimental WAL LogStore backend](/consul/docs/agent/wal-logstore/enable) for additional information about the WAL backend. - -!> **Upgrade warning:** The WAL LogStore backend is experimental. - -## Monitor for checksum failures - -Log store verification failures on any server, regardless of whether you are running the BoltDB or WAL backed, are unrecoverable errors. Consul may report the following errors in logs. - -### Read failures: Disk Corruption - -```log hideClipboard -2022-11-15T22:41:23.546Z [ERROR] agent.raft.logstore: verification checksum FAILED: storage corruption rangeStart=1234 rangeEnd=3456 leaderChecksum=0xc1... readChecksum=0x45... -``` - -This indicates that the server read back data that is different from what it wrote to disk. This indicates corruption in the storage backend or filesystem. - -For convenience, Consul also increments a metric `consul.raft.logstore.verifier.read_checksum_failures` when this occurs. - -### Write failures: In-flight Corruption - -The following error indicates that the checksum on the follower did not match the leader when the follower received the logs. The error implies that the corruption happened in the network or software and not the log store: - -```log hideClipboard -2022-11-15T22:41:23.546Z [ERROR] agent.raft.logstore: verification checksum FAILED: in-flight corruption rangeStart=1234 rangeEnd=3456 leaderChecksum=0xc1... followerWriteChecksum=0x45... -``` - -It is unlikely that this error indicates an issue with the storage backend, but you should take the same steps to resolve and report it. - -The `consul.raft.logstore.verifier.write_checksum_failures` metric increments when this error occurs. - -## Resolve checksum failures - -If either type of corruption is detected, complete the instructions for [reverting to BoltDB](/consul/docs/agent/wal-logstore/revert-to-boltdb). If the server already uses BoltDB, the errors likely indicate a latent bug in BoltDB or a bug in the verification code. In both cases, you should follow the revert instructions. - -Report all verification failures as a [GitHub -issue](https://github.com/hashicorp/consul/issues/new?assignees=&labels=&template=bug_report.md&title=WAL:%20Checksum%20Failure). - -In your report, include the following: - - Details of your server cluster configuration and hardware - - Logs around the failure message - - Context for how long they have been running the configuration - - Any metrics or description of the workload you have. For example, how many raft - commits per second. Also include the performance metrics described on this page. - -We recommend setting up an alert on Consul server logs containing `verification checksum FAILED` or on the `consul.raft.logstore.verifier.{read|write}_checksum_failures` metrics. The sooner you respond to a corrupt server, the lower the chance of any of the [potential risks](/consul/docs/agent/wal-logstore/enable#risks) causing problems in your cluster. - -## Performance metrics - -The key performance metrics to watch are: - -- `consul.raft.commitTime` measures the time to commit new writes on a quorum of - servers. It should be the same or lower after deploying WAL. 
Even if WAL is - faster for your workload and hardware, it may not be reflected in `commitTime` - until enough followers are using WAL that the leader does not have to wait for - two slower followers in a cluster of five to catch up. - -- `consul.raft.rpc.appendEntries.storeLogs` measures the time spent persisting - logs to disk on each _follower_. It should be the same or lower for - WAL-enabled followers. - -- `consul.raft.replication.appendEntries.rpc` measures the time taken for each - `AppendEntries` RPC from the leader's perspective. If this is significantly - higher than `consul.raft.rpc.appendEntries` on the follower, it indicates a - known queuing issue in the Raft library and is unrelated to the backend. - Followers with WAL enabled should not be slower than the others. You can - determine which follower is associated with which metric by running the - `consul operator raft list-peers` command and matching the - `peer_id` label value to the server IDs listed. - -- `consul.raft.compactLogs` measures the time take to truncate the logs after a - snapshot. WAL-enabled servers should not be slower than BoltDB servers. - -- `consul.raft.leader.dispatchLog` measures the time spent persisting logs to - disk on the _leader_. It is only relevant if a WAL-enabled server becomes a - leader. It should be the same or lower than before when the leader was using - BoltDB. \ No newline at end of file diff --git a/website/content/docs/agent/wal-logstore/revert-to-boltdb.mdx b/website/content/docs/agent/wal-logstore/revert-to-boltdb.mdx deleted file mode 100644 index 9ba6923b42db..000000000000 --- a/website/content/docs/agent/wal-logstore/revert-to-boltdb.mdx +++ /dev/null @@ -1,76 +0,0 @@ ---- -layout: docs -page_title: Revert to BoltDB -description: >- - Learn how to revert Consul to the BoltDB backend after enabled the WAL (write-ahead log) LogStore backend shipped in Consul 1.15. ---- - -# Revert storage backend to BoltDB from WAL - -This topic describes how to revert your Consul storage backend from the experimental WAL LogStore backend to the default BoltDB. - -The overall process for reverting to BoltDB consists of the following steps. Repeat the steps for all Consul servers that you need to revert. - -1. Stop target server gracefully. -1. Remove data directory from target server. -1. Update target server's configuration. -1. Start target server. - -## Stop target server gracefully - -Stop the target server gracefully. For example, if you are using `systemd`, -run the following command: - -```shell-session -$ systemctl stop consul -``` - -If your environment uses configuration management automation that might interfere with this process, such as Chef or Puppet, you must disable them until you have completely reverted the storage backend. - -## Remove data directory from target server - -Temporarily moving the data directory to a different location is less destructive than deleting it. We recommend moving the data directory instead of deleted it in cases where you unsuccessfully enable WAL. Do not use the old data directory (`/data-dir/raft.bak`) for recovery after restarting the server. We recommend eventually deleting the old directory. - -The following example assumes the `data_dir` in the server's configuration is `/data-dir` and renames it to `/data-dir.wal.bak`. - -```shell-session -$ mv /data-dir/raft /data-dir/raft.wal.bak -``` - -When switching backend, you must always remove _the entire raft directory_ not just the `raft.db` file or `wal` directory. 
This is because the log must always be consistent with the snapshots to avoid undefined behavior or data loss.
-
-## Update target server's configuration
-
-Modify the `backend` in the target server's configuration file:
-
-```hcl
-raft_logstore {
-  backend = "boltdb"
-  verification {
-    enabled = true
-    interval = "60s"
-  }
-}
-```
-
-## Start target server
-
-Start the target server. For example, if you are using `systemd`, run the following command:
-
-```shell-session
-$ systemctl start consul
-```
-
-Watch for the server to become a healthy voter again.
-
-```shell-session
-$ consul operator raft list-peers
-```
-
-### Clean up old data directories
-
-If necessary, clean up any `raft.wal.bak` directories. Replace `/data-dir` with the value you specified in your configuration file.
-
-```shell-session
-$ rm -r /data-dir/raft.wal.bak
-```
diff --git a/website/content/docs/architecture/anti-entropy.mdx b/website/content/docs/architecture/anti-entropy.mdx
deleted file mode 100644
index 292f5c0070d5..000000000000
--- a/website/content/docs/architecture/anti-entropy.mdx
+++ /dev/null
@@ -1,123 +0,0 @@
----
-layout: docs
-page_title: Anti-Entropy Enforcement
-description: >-
-  Anti-entropy keeps distributed systems consistent. Learn how Consul uses an anti-entropy mechanism to periodically sync agent states with the service catalog to prevent the catalog from becoming stale.
----
-
-# Anti-Entropy Enforcement
-
-Consul uses an advanced method of maintaining service and health information.
-This page details how services and checks are registered, how the catalog is
-populated, and how health status information is updated as it changes.
-
-### Components
-
-It is important to first understand the moving pieces involved in services and
-health checks: the [agent](#agent) and the [catalog](#catalog). These are
-described conceptually below to make anti-entropy easier to understand.
-
-#### Agent
-
-Each Consul agent maintains its own set of service and check registrations as
-well as health information. The agents are responsible for executing their own
-health checks and updating their local state.
-
-Services and checks within the context of an agent have a rich set of
-configuration options available. This is because the agent is responsible for
-generating information about its services and their health through the use of
-[health checks](/consul/docs/services/usage/checks).
-
-#### Catalog
-
-Consul's service discovery is backed by a service catalog. This catalog is
-formed by aggregating information submitted by the agents. The catalog maintains
-the high-level view of the cluster, including which services are available,
-which nodes run those services, health information, and more. The catalog is
-used to expose this information via the various interfaces Consul provides,
-including DNS and HTTP.
-
-Services and checks within the context of the catalog have a much more limited
-set of fields when compared with the agent. This is because the catalog is only
-responsible for recording and returning information _about_ services, nodes, and
-health.
-
-The catalog is maintained only by server nodes. This is because the catalog is
-replicated via the [Raft log](/consul/docs/architecture/consensus) to provide a
-consolidated and consistent view of the cluster.
-
-### Anti-Entropy
-
-Entropy is the tendency of systems to become increasingly disordered.
Consul's -anti-entropy mechanisms are designed to counter this tendency, to keep the -state of the cluster ordered even through failures of its components. - -Consul has a clear separation between the global service catalog and the agent's -local state as discussed above. The anti-entropy mechanism reconciles these two -views of the world: anti-entropy is a synchronization of the local agent state and -the catalog. For example, when a user registers a new service or check with the -agent, the agent in turn notifies the catalog that this new check exists. -Similarly, when a check is deleted from the agent, it is consequently removed from -the catalog as well. - -Anti-entropy is also used to update availability information. As agents run -their health checks, their status may change in which case their new status -is synced to the catalog. Using this information, the catalog can respond -intelligently to queries about its nodes and services based on their -availability. - -During this synchronization, the catalog is also checked for correctness. If -any services or checks exist in the catalog that the agent is not aware of, they -will be automatically removed to make the catalog reflect the proper set of -services and health information for that agent. Consul treats the state of the -agent as authoritative; if there are any differences between the agent -and catalog view, the agent-local view will always be used. - -### Periodic Synchronization - -In addition to running when changes to the agent occur, anti-entropy is also a -long-running process which periodically wakes up to sync service and check -status to the catalog. This ensures that the catalog closely matches the agent's -true state. This also allows Consul to re-populate the service catalog even in -the case of complete data loss. - -To avoid saturation, the amount of time between periodic anti-entropy runs will -vary based on cluster size. The table below defines the relationship between -cluster size and sync interval: - -| Cluster Size | Periodic Sync Interval | -| ------------ | ---------------------- | -| 1 - 128 | 1 minute | -| 129 - 256 | 2 minutes | -| 257 - 512 | 3 minutes | -| 513 - 1024 | 4 minutes | -| ... | ... | - -The intervals above are approximate. Each Consul agent will choose a randomly -staggered start time within the interval window to avoid a thundering herd. - -### Best-effort sync - -Anti-entropy can fail in a number of cases, including misconfiguration of the -agent or its operating environment, I/O problems (full disk, filesystem -permission, etc.), networking problems (agent cannot communicate with server), -among others. Because of this, the agent attempts to sync in best-effort -fashion. - -If an error is encountered during an anti-entropy run, the error is logged and -the agent continues to run. The anti-entropy mechanism is run periodically to -automatically recover from these types of transient failures. - -### Enable Tag Override - -Synchronization of service registration can be partially modified to -allow external agents to change the tags for a service. This can be -useful in situations where an external monitoring service needs to be -the source of truth for tag information. For example, the Redis -database and its monitoring service Redis Sentinel have this kind of -relationship. Redis instances are responsible for much of their -configuration, but Sentinels determine whether the Redis instance is a -primary or a secondary. 
Enable the -[`enable_tag_override`](/consul/docs/services/configuration/services-configuration-reference#enable_tag_override) parameter in your service definition file to tell the Consul agent where the Redis database is running to bypass -tags during anti-entropy synchronization. Refer to -[Modify anti-entropy synchronization](/consul/docs/services/usage/define-services#modify-anti-entropy-synchronization) for additional information. diff --git a/website/content/docs/architecture/backend.mdx b/website/content/docs/architecture/backend.mdx new file mode 100644 index 000000000000..d765ece700d1 --- /dev/null +++ b/website/content/docs/architecture/backend.mdx @@ -0,0 +1,31 @@ +--- +layout: docs +page_title: Persistent backend architecture +description: >- + Consul persists the Raft index, which logs cluster activities, with the Write-ahead log (WAL) LogStore backend. Consul saves the Raft index in the server's data directory. +--- + +# Persistent data backend architecture + +This page introduces the architecture of the backend that Consul server agents use to store Raft index data. + +## Raft index + +Consul uses the Raft protocol to manage [server consensus](/consul/docs/concept/consensus), to maintain a [reliable, fault-tolerant](/consul/docs/concept/reliability) state across all servers. This consensus mechanism ensures [consistent service discovery and health monitoring](/consul/docs/concept/consistency), even when individual servers fail or become temporarily disconnected from the cluster. + +The Raft index provides a record of the cluster's state. It tracks interactions between Consul servers as they conduct elections, register service instances into [the Consul catalog](/consul/docs/concept/catalog), and update the catalog with the results of service node health checks. + +You can also use Consul's [snapshot agent](/consul/commands/snapshot/agent) to save a copy of the entire Raft index. This snapshot lets you restore a datacenter from a backup in the event of an outage or catastrophic failure. You can save snapshot to a cloud storage bucket to ensure data persistence. + +## Data directory + +Consul writes the Raft index to the data directory specified in the agent configuration with the `data_dir` parameter or `-data_dir` CLI flag. The data directory is a requirement for all agents, and should be durable across reboots. + +## Write-ahead log (WAL) LogStore backend + +Consul logs the Raft index with the write-ahead log (WAL) LogStore backend. The WAL backend implements a traditional log with rotating, append-only log files, and it retains logs without affecting a cluster's write performance at scale. + +Previous versions of Consul used BoltDB as the default LogStore backend. Refer +to the [WAL LogStore backend overview](/consul/docs/deploy/server/wal) for more +information. To use BoltDB instead of WAL, refer to [Revert storage backend to +BoltDB from WAL](/consul/docs/deploy/server/wal/revert-boltdb). diff --git a/website/content/docs/architecture/capacity-planning.mdx b/website/content/docs/architecture/capacity-planning.mdx deleted file mode 100644 index 2f80c4cf289a..000000000000 --- a/website/content/docs/architecture/capacity-planning.mdx +++ /dev/null @@ -1,188 +0,0 @@ ---- -layout: docs -page_title: Consul capacity planning -description: >- - Learn how to maintain your Consul cluster in a healthy state by provisioning the correct resources. 
---- - -# Consul capacity planning - -This page describes our capacity planning recommendations when deploying and maintaining a Consul cluster in production. When your organization designs a production environment, you should consider your available resources and their impact on network capacity. - -## Introduction - -It is important to select the correct size for your server instances. Consul server environments have a standard set of minimum requirements. However, these requirements may vary depending on what you are using Consul for. - -Insufficient resource allocations may cause network issues or degraded performance in general. When a slowdown in performance results in a Consul leader node that is unable to respond to requests in sufficient time, the Consul cluster triggers a new leader election. Consul pauses all network requests and Raft updates until the election ends. - -## Hardware requirements - -The minimum hardware requirements for Consul servers in production clusters as recommended by the [reference architecture](/consul/tutorials/production-deploy/reference-architecture#hardware-sizing-for-consul-servers) are: - -| CPU | Memory | Disk Capacity | Disk IO | Disk Throughput | Avg Round-Trip-Time | 99% Round-Trip-Time | -| --------- | ------------ | ------------- | ----------- | --------------- | ------------------- | ------------------- | -| 8-16 core | 32-64 GB RAM | 200+ GB | 7500+ IOPS | 250+ MB/s | Lower than 50ms | Lower than 100ms | - -For the major cloud providers, we recommend starting with one of the following instances that meet the minimum requirements. Then scale up as needed. We also recommend avoiding "burstable" CPU and storage options where performance may drop after a consistent load. - -| Provider | Size | Instance/VM Types | Disk Volume Specs | -| --------- | ----- | ------------------------------------- | --------------------------------- | -| **AWS** | Large | `m5.2xlarge`, `m5.4xlarge` | 200+GB `gp3`, 10000 IOPS, 250MB/s | -| **Azure** | Large | `Standard_D8s_v3`, `Standard_D16s_v3` | 2048GB `Premium SSD`, 7500 IOPS, 200MB/s | -| **GCP** | Large | `n2-standard-8`, `n2-standard-16` | 1000GB `pd-ssd`, 30000 IOPS, 480MB/s | - - -For HCP Consul Dedicated, cluster size is measured in the number of service instances supported. Find out more information in the [HCP Consul Dedicated pricing page](https://cloud.hashicorp.com/products/consul/pricing). - -## Workload input and output requirements - -Workloads are any actions that interact with the Consul cluster. These actions consist of key/value reads and writes, service registrations and deregistrations, adding or removing Consul client agents, and more. - -Input/output operations per second (IOPS) is a unit of measurement for the amount of reads and writes to non-adjacent storage locations. -For high workloads, ensure that the Consul server disks support a [high number of IOPS](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-io-characteristics.html#ebs-io-iops) to keep up with the rapid Raft log update rate. -Unlike bare-metal environments, IOPS for virtual instances in cloud environments is often tied to storage sizing. More storage GBs typically grants you more IOPS. Therefore, we recommend deploying on [IOPS-optimized instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/provisioned-iops.html). - -Consul server agents are generally I/O bound for writes and CPU bound for reads. For additional tuning recommendations, refer to [raft tuning](#raft-tuning). 
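Before placing a server under production load, it can help to confirm that the volume backing the Consul data directory actually delivers the IOPS and throughput recommended above. The following sketch uses the open source `fio` tool against a hypothetical `/opt/consul/data` directory; the path and job parameters are illustrative, so adjust them for your environment.

```shell-session
$ fio --name=consul-data-dir-check \
    --directory=/opt/consul/data \
    --rw=randwrite --bs=4k --size=1g \
    --ioengine=libaio --direct=1 --iodepth=64 \
    --runtime=60 --time_based --group_reporting
```

Compare the reported random-write IOPS against the 7500+ IOPS guidance in the table above before relying on the volume.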
- -## Memory requirements - -You should allocate RAM for server agents so that they contain 2 to 4 times the working set size. You can determine the working set size of a running cluster by noting the value of `consul.runtime.alloc_bytes` in the leader node's telemetry data. Inspect your monitoring solution for the telemetry value, or run the following commands with the [jq](https://stedolan.github.io/jq/download/) tool installed on your Consul leader instance. - - - -For Kubernetes, execute the command from the leader pod. `jq` is available in the Consul server containers. - - - -Set `$CONSUL_HTTP_TOKEN` to an ACL token with valid permissions, then retrieve the working set size. - -```shell-session -$ curl --silent --header "X-Consul-Token: $CONSUL_HTTP_TOKEN" http://127.0.0.1:8500/v1/agent/metrics | jq '.Gauges[] | select(.Name=="consul.runtime.alloc_bytes") | .Value'` -616017920 -``` - -## Kubernetes storage requirements - -When you set up persistent volumes (PV) resources, you should define the correct server storage class parameter because the defaults are likely insufficient in performance. To set the [storageClass Helm chart parameter](/consul/docs/k8s/helm#v-server-storageclass), refer to the [Kubernetes documentation on storageClasses](https://kubernetes.io/docs/concepts/storage/storage-classes/) for more information about your specific cloud provider. - -## Read and write heavy workload recommendations - -In production, your use case may lead to Consul performing read-heavy workloads, write-heavy workloads, or both. Refer to the following table for specific resource recommendations for these types of workloads. - -| Workload type | Instance Recommendations | Workload element examples | Enterprise Feature Recommendations | -| ------------- | ------------------------- | ------------------------ | ------------------------ | -| Read-heavy | Instances of type `m5.4xlarge (AWS)`, `Standard_D16s_v3 (Azure)`, `n2-standard-16 (GCP)` | Raft RPCs calls, DNS queries, key/value retrieval | [Read replicas](/consul/docs/enterprise/read-scale) | -| Write-heavy | IOPS performance of `10 000+` | Consul agent joins and leaves, services registration and deregistration, key/value writes | [Network segments](/consul/docs/enterprise/network-segments/network-segments-overview) | - -For recommendations on troubleshooting issues with read-heavy or write-heavy workloads, refer to [Consul at Scale](/consul/docs/architecture/scale#resource-usage-and-metrics-recommendations). - -## Monitor performance - -Monitoring is critical to ensure that your Consul datacenter has sufficient resources to continue operations. A proactive monitoring strategy helps you find problems in your network before they impact your deployments. - -We recommend completing the [Monitor Consul server health and performance with metrics and logs](/consul/tutorials/observe-your-network/server-metrics-and-logs) tutorial as a starting point for Consul metrics and telemetry. The following tutorials guide you through specific monitoring solutions for your Consul cluster. - -- [Monitor Consul server health and performance with metrics and logs](/consul/tutorials/observe-your-network/server-metrics-and-logs) -- [Observe Consul service mesh traffic](/consul/tutorials/get-started-kubernetes/kubernetes-gs-observability) - -### Important metrics - -In production environments, create baselines for your Consul cluster's metrics. 
After you discover the baselines, you will be able to define alerts and receive notifications when there are unexpected values. For a detailed explanation on the metrics and their values, refer to [Consul Agent telemetry](/consul/docs/agent/telemetry). - -### Transaction metrics - -These metrics indicate how long it takes to complete write operations in various parts of the Consul cluster. - -- [`consul.kvs.apply`](/consul/docs/agent/monitor/telemetry#transaction-timing) measures the time it takes to complete an update to the KV store. -- [`consul.txn.apply`](/consul/docs/agent/monitor/telemetry#transaction-timing) measures the time spent applying a transaction operation. -- [`consul.raft.apply`](/consul/docs/agent/monitor/telemetry#transaction-timing) counts the number of Raft transactions applied during the measurement interval. This metric is only reported on the leader. -- [`consul.raft.commitTime`](/consul/docs/agent/monitor/telemetry#transaction-timing) measures the time it takes to commit a new entry to the Raft log on disk on the leader. - -### Memory metrics - -These performance indicators can help you diagnose if the current instance sizing is unable to handle the workload. - -- [`consul.runtime.alloc_bytes`](/consul/docs/agent/monitor/telemetry#memory-usage) measures the number of bytes allocated by the Consul process. -- [`consul.runtime.sys_bytes`](/consul/docs/agent/monitor/telemetry#memory-usage) measures the total number of bytes of memory obtained from the OS. -- [`consul.runtime.heap_objects`](/consul/docs/agent/monitor/telemetry#metrics-reference) measures the number of objects allocated on the heap and is a general memory pressure indicator. - -### Leadership metrics - -Leadership changes are not a cause for concern but frequent changes may be a symptom of a deeper problem. Frequent elections or leadership changes may indicate network issues between the Consul servers, or the Consul servers are unable to keep up with the load. - -- [`consul.raft.leader.lastContact`](/consul/docs/agent/monitor/telemetry#leadership-changes) measures the time since the leader was last able to contact the follower nodes when checking its leader lease. -- [`consul.raft.state.candidate`](/consul/docs/agent/monitor/telemetry#leadership-changes) increments whenever a Consul server starts an election. -- [`consul.raft.state.leader`](/consul/docs/agent/monitor/telemetry#leadership-changes) increments whenever a Consul server becomes a leader. -- [`consul.server.isLeader`](/consul/docs/agent/monitor/telemetry#leadership-changes) tracks whether a server is a leader. - -### Network metrics - -Network activity and RPC count measurements indicate the current load created from a Consul agent, including when the load becomes high enough to be rate limited. If an unusually high RPC count occurs, you should investigate before it overloads the cluster. - -- [`consul.client.rpc`](/consul/docs/agent/monitor/telemetry#network-activity-rpc-count) increments whenever a Consul agent in client mode makes an RPC request to a Consul server. -- [`consul.client.rpc.exceeded`](/consul/docs/agent/monitor/telemetry#network-activity-rpc-count) increments whenever a Consul agent in client mode makes an RPC request to a Consul server gets rate limited by that agent's limits configuration. -- [`consul.client.rpc.failed`](/consul/docs/agent/monitor/telemetry#network-activity-rpc-count) increments whenever a Consul agent in client mode makes an RPC request to a Consul server and fails. 
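If `consul.client.rpc.exceeded` increments, client agents are throttling requests according to their own limits configuration. The following sketch shows the client agent's `limits` block; the values are illustrative examples only, not recommendations.

```hcl
limits {
  # Maximum number of RPC requests this client agent makes to Consul servers per second.
  rpc_rate      = 100

  # Maximum burst of RPC requests allowed above the steady-state rate.
  rpc_max_burst = 1000
}
```

Raising these values lets a busy client make more RPC calls before it is throttled, at the cost of additional load on the Consul servers.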
- -## Network constraints and alternate approaches - -If it is impossible for you to allocate the required resources, you can make changes to Consul's performance so that it operates with lower speed or resilience. These changes ensure that your cluster remains within its resource capacity. - -- Soft limits prevent your cluster from degrading due to overload. -- Raft tuning lets you compensate for unfavorable environments. - -### Soft limits - -The recommended maximum size for a single datacenter is 5,000 Consul client agents. This recommendation is based on a standard, non-tuned environment and considers a blast radius's risk management factor. The maximum number of agents may be lower, depending on how you use Consul. - -If you require more than 5,000 client agents, you should break up the single Consul datacenter into multiple smaller datacenters. - -- When the nodes are spread across separate physical locations such as different regions, you can model multiple datacenter structures based on physical locations. -- Use [network segments](/consul/docs/enterprise/network-segments/network-segments-overview) in a single available zone or region to lower overall resource usage in a single datacenter. - -When deploying [Consul in Kubernetes](/consul/docs/k8s), we recommend you set both _requests_ and _limits_ in the Helm chart. Refer to the [Helm chart documentation](/consul/docs/k8s/helm#v-server-resources) for more information. - -- Requests allocate the required resources for your Consul workloads. -- Limits prevent your pods from being terminated and restarted if they consume more resources than requested and Kubernetes needs to reclaim these resources. Limits can prevent outage situations where the Consul leader's container gets terminated and redeployed due to resource constraints. - -The following is an example Helm configuration that allocates 16 CPU cores and 64 gigabytes of memory: - - - -```yaml -global: - image: "hashicorp/consul" -## ... -resources: - requests: - memory: '64G' - cpu: '16000m' - limits: - memory: '64G' - cpu: '16000m' -``` - - - -### Raft tuning - -Consul uses the [Raft consensus algorithm](/consul/docs/architecture/consensus) to provide consistency. -You may need to adjust Raft to suit your specific environment. Adjust the [`raft_multiplier` configuration](/consul/docs/agent/config/config-files#raft_multiplier) to define the trade-off between leader stability and time to recover from a leader failure. - -- A lower multiplier minimizes failure detection and election time, but it may trigger frequently in high latency situations. -- A higher multiplier reduces the chances that failures cause leadership churn, but your cluster takes longer to detect real failures and restore availability. - -The value of `raft_multiplier` has a default value of 5. It is a scaling factor setting that directly affects the following parameters: - -| Parameter name | Default value | Derived from | -| --- | --- | --- | -| HeartbeatTimeout | 5000ms | 5 x 1000ms | -| ElectionTimeout | 5000ms | 5 x 1000ms | -| LeaderLeaseTimeout | 2500ms | 5 x 500ms | - -You can use the telemetry from [`consul.raft.leader.lastContact`](/consul/docs/agent/telemetry#leadership-changes) to observe Raft timing performance. - -Wide networks with more latency perform better with larger values of `raft_multiplier`, but cluster failure detection will take longer. If your network operates with low latency, we recommend that you do not set the Raft multiplier higher than 5. 
Instead, you should either replace the servers with more powerful ones or minimize the network latency between nodes. - -We recommend you start from a baseline and perform [chaos engineering testing](/consul/tutorials/resiliency/introduction-chaos-engineering?in=consul%2Fresiliency) with different values for the Raft multiplier to find the acceptable time for problem detection and recovery for the cluster. Then scale the cluster and its dedicated resources with the number of workloads handled. This approach gives you the best balance between pure resource growth and pure Raft tuning strategies because it lets you use Raft tuning as a backup plan if you cannot scale your resources. - -The types of workloads the Consul cluster handles also play an important role in Raft tuning. For example, if your Consul clusters are mostly static and do not handle many events, you should increase your Raft multiplier instead of scaling your resources because the risk of an important event happening while the cluster is converging or re-electing a leader is lower. diff --git a/website/content/docs/architecture/catalog.mdx b/website/content/docs/architecture/catalog.mdx deleted file mode 100644 index dad1ef9aceb7..000000000000 --- a/website/content/docs/architecture/catalog.mdx +++ /dev/null @@ -1,39 +0,0 @@ ---- -layout: docs -page_title: v1 Catalog API -description: Learn about version 1 of the Consul catalog, including what Consul servers record when they register a service. ---- - -# v1 Catalog API - -This topic provides conceptual information about version 1 (v1) of the Consul catalog API. The catalog tracks registered services and their locations for both service discovery and service mesh use cases. - -For more information about the information returned when querying the catalog, including filtering options when querying the catalog for a list of nodes, services, or gateways, refer to the [`/catalog` endpoint reference in the HTTP API documentation](/consul/api-docs/catalog). - -## Introduction - -Consul tracks information about registered services through its catalog API. This API records user-defined information about the external services, such as their partitions and required health checks. It also records information that Consul assigns for its own operations, such as an ID for each service instance and the [Raft indices](/consul/docs/architecture/consensus) when the instance is registered and modified. - -### v2 Catalog - -Consul introduced an experimental v2 Catalog API in v1.17.0. This API supported multi-port Service configurations on Kubernetes, and it was made available for testing and development purposes. The v2 catalog and its support for multiport Kubernetes Services were deprecated in the v1.19.0 release. - -## Catalog structure - -When Consul registers a service instance using the v1 catalog API, it records the following information about each instance: - -| v1 Catalog field | Description | Source | -| :--------------- | :---------- | :----- | -| ID | A unique identifier for a service instance. | Defined by user in [service definition](/consul/docs/services/configuration/services-configuration-reference#id). | -| Node | The connection point where the service is available. | On VMs, defined by user.

On Kubernetes, computed by Consul according to [Kubernetes Nodes](https://kubernetes.io/docs/concepts/architecture/nodes/). | -| Address | The registered address of the service instance. | Defined by user in [service definition](/consul/docs/services/configuration/services-configuration-reference#address). | -| Tagged Addresses | User-defined labels for addresses. | Defined by user in [service definition](/consul/docs/services/configuration/services-configuration-reference#tagged_addresses). | -| NodeMeta | User-defined metadata about the node. | Defined by user | -| Datacenter | The name of the datacenter the service is registered in. | Defined by user | -| Service | The name of the service Consul registers the service instance under. | Defined by user | -| Agent Check | The health checks defined for a service instance managed by a Consul client agent. | Computed by Consul | -| Health Checks | The health checks defined for the service. Refer to [define health checks](/consul/docs/services/usage/checks) for more information. | Defined by user | -| Partition | The name of the admin partition the service is registered in. Refer to [admin partitions](/consul/docs/enterprise/admin-partitions) for more information. | Defined by user | -| Locality | Region and availability zone of the service. Refer to [`locality`](/consul/docs/agent/config/config-files#locality) for more information. | Defined by user | - -Depending on the configuration entries or custom resource definitions you apply to your Consul installation, additional information such as [proxy default behavior](/consul/docs/connect/config-entries/proxy-defaults) is automatically recorded to the catalog for services. You can return this information using the [`/catalog` HTTP API endpoint](/consul/api-docs/catalog). diff --git a/website/content/docs/architecture/consensus.mdx b/website/content/docs/architecture/consensus.mdx deleted file mode 100644 index 9e10a05e571d..000000000000 --- a/website/content/docs/architecture/consensus.mdx +++ /dev/null @@ -1,223 +0,0 @@ ---- -layout: docs -page_title: Consensus Protocol | Raft -description: >- - Consul ensures a consistent state using the Raft protocol. A quorum, or a majority of server agents with one leader, agree to state changes before committing to the state log. Learn how Raft works in Consul to ensure state consistency and how that state can be read with different consistency modes to balance read latency and consistency. ---- - -# Consensus Protocol - -Consul uses a [consensus protocol]() -to provide [Consistency (as defined by CAP)](https://en.wikipedia.org/wiki/CAP_theorem). -The consensus protocol is based on -["Raft: In search of an Understandable Consensus Algorithm"](https://raft.github.io/raft.pdf). -For a visual explanation of Raft, see [The Secret Lives of Data](http://thesecretlivesofdata.com/raft). - -## Raft Protocol Overview - -Raft is a consensus algorithm that is based on -[Paxos](https://en.wikipedia.org/wiki/Paxos_%28computer_science%29). Compared -to Paxos, Raft is designed to have fewer states and a simpler, more -understandable algorithm. - -There are a few key terms to know when discussing Raft: - -- Log - The primary unit of work in a Raft system is a log entry. The problem - of consistency can be decomposed into a _replicated log_. A log is an ordered - sequence of entries. Entries includes any cluster change: adding nodes, adding services, new key-value pairs, etc. We consider the log consistent - if all members agree on the entries and their order. 
- -- FSM - [Finite State Machine](https://en.wikipedia.org/wiki/Finite-state_machine). - An FSM is a collection of finite states with transitions between them. As new logs - are applied, the FSM is allowed to transition between states. Application of the - same sequence of logs must result in the same state, meaning behavior must be deterministic. - -- Peer set - The peer set is the set of all members participating in log replication. - For Consul's purposes, all server nodes are in the peer set of the local datacenter. - -- Quorum - A quorum is a majority of members from a peer set: for a set of size `N`, - quorum requires at least `(N/2)+1` members. - For example, if there are 5 members in the peer set, we would need 3 nodes - to form a quorum. If a quorum of nodes is unavailable for any reason, the - cluster becomes _unavailable_ and no new logs can be committed. - -- Committed Entry - An entry is considered _committed_ when it is durably stored - on a quorum of nodes. Once an entry is committed it can be applied. - -- Leader - At any given time, the peer set elects a single node to be the leader. - The leader is responsible for ingesting new log entries, replicating to followers, - and managing when an entry is considered committed. - -Raft is a complex protocol and will not be covered here in detail (for those who -desire a more comprehensive treatment, the full specification is available in this -[paper](https://raft.github.io/raft.pdf)). -We will, however, attempt to provide a high level description which may be useful -for building a mental model. - -Raft nodes are always in one of three states: follower, candidate, or leader. All -nodes initially start out as a follower. In this state, nodes can accept log entries -from a leader and cast votes. If no entries are received for some time, nodes -self-promote to the candidate state. In the candidate state, nodes request votes from -their peers. If a candidate receives a quorum of votes, then it is promoted to a leader. -The leader must accept new log entries and replicate to all the other followers. -In addition, if stale reads are not acceptable, all queries must also be performed on -the leader. - -Once a cluster has a leader, it is able to accept new log entries. A client can -request that a leader append a new log entry (from Raft's perspective, a log entry -is an opaque binary blob). The leader then writes the entry to durable storage and -attempts to replicate to a quorum of followers. Once the log entry is considered -_committed_, it can be _applied_ to a finite state machine. The finite state machine -is application specific; in Consul's case, we use -[MemDB](https://github.com/hashicorp/go-memdb) to maintain cluster state. Consul's writes -block until it is both _committed_ and _applied_. This achieves read after write semantics -when used with the [consistent](/consul/api-docs/features/consistency#consistent) mode for queries. - -Obviously, it would be undesirable to allow a replicated log to grow in an unbounded -fashion. Raft provides a mechanism by which the current state is snapshotted and the -log is compacted. Because of the FSM abstraction, restoring the state of the FSM must -result in the same state as a replay of old logs. This allows Raft to capture the FSM -state at a point in time and then remove all the logs that were used to reach that -state. This is performed automatically without user intervention and prevents unbounded -disk usage while also minimizing time spent replaying logs. 
One of the advantages of -using MemDB is that it allows Consul to continue accepting new transactions even while -old state is being snapshotted, preventing any availability issues. - -Consensus is fault-tolerant up to the point where quorum is available. -If a quorum of nodes is unavailable, it is impossible to process log entries or reason -about peer membership. For example, suppose there are only 2 peers: A and B. The quorum -size is also 2, meaning both nodes must agree to commit a log entry. If either A or B -fails, it is now impossible to reach quorum. This means the cluster is unable to add -or remove a node or to commit any additional log entries. This results in -_unavailability_. At this point, manual intervention would be required to remove -either A or B and to restart the remaining node in bootstrap mode. - -A Raft cluster of 3 nodes can tolerate a single node failure while a cluster -of 5 can tolerate 2 node failures. The recommended configuration is to either -run 3 or 5 Consul servers per datacenter. This maximizes availability without -greatly sacrificing performance. The [deployment table](#deployment_table) below -summarizes the potential cluster size options and the fault tolerance of each. - -In terms of performance, Raft is comparable to Paxos. Assuming stable leadership, -committing a log entry requires a single round trip to half of the cluster. -Thus, performance is bound by disk I/O and network latency. Although Consul is -not designed to be a high-throughput write system, it should handle on the order -of hundreds to thousands of transactions per second depending on network and -hardware configuration. - -## Raft in Consul - -Only Consul server nodes participate in Raft and are part of the peer set. All -client nodes forward requests to servers. Part of the reason for this design is -that, as more members are added to the peer set, the size of the quorum also increases. -This introduces performance problems as you may be waiting for hundreds of machines -to agree on an entry instead of a handful. - -When getting started, a single Consul server is put into "bootstrap" mode. This mode -allows it to self-elect as a leader. Once a leader is elected, other servers can be -added to the peer set in a way that preserves consistency and safety. Eventually, -once the first few servers are added, bootstrap mode can be disabled. See [this -document](/consul/docs/install/bootstrapping) for more details. - -Since all servers participate as part of the peer set, they all know the current -leader. When an RPC request arrives at a non-leader server, the request is -forwarded to the leader. If the RPC is a _query_ type, meaning it is read-only, -the leader generates the result based on the current state of the FSM. If -the RPC is a _transaction_ type, meaning it modifies state, the leader -generates a new log entry and applies it using Raft. Once the log entry is committed -and applied to the FSM, the transaction is complete. - -Because of the nature of Raft's replication, performance is sensitive to network -latency. For this reason, each datacenter elects an independent leader and maintains -a disjoint peer set. Data is partitioned by datacenter, so each leader is responsible -only for data in their datacenter. When a request is received for a remote datacenter, -the request is forwarded to the correct leader. This design allows for lower latency -transactions and higher availability without sacrificing consistency. 
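As a quick way to inspect the peer set and current leader described above, query any server with `consul operator raft list-peers`. The output below is illustrative; node names, IDs, and addresses depend on your deployment.

```shell-session
$ consul operator raft list-peers
Node     ID            Address         State     Voter  RaftProtocol
server1  5d24c3ab-...  10.0.1.10:8300  leader    true   3
server2  a1f4c3d2-...  10.0.1.11:8300  follower  true   3
server3  9c2b7e61-...  10.0.1.12:8300  follower  true   3
```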
- -## Consistency Modes - -Although all writes to the replicated log go through Raft, reads are more -flexible. To support various trade-offs that developers may want, Consul -supports 3 different consistency modes for reads. - -The three read modes are: - -- `default` - Raft makes use of leader leasing, providing a time window - in which the leader assumes its role is stable. However, if a leader - is partitioned from the remaining peers, a new leader may be elected - while the old leader is holding the lease. This means there are 2 leader - nodes. There is no risk of a split-brain since the old leader will be - unable to commit new logs. However, if the old leader services any reads, - the values are potentially stale. The default consistency mode relies only - on leader leasing, exposing clients to potentially stale values. We make - this trade-off because reads are fast, usually strongly consistent, and - only stale in a hard-to-trigger situation. The time window of stale reads - is also bounded since the leader will step down due to the partition. - -- `consistent` - This mode is strongly consistent without caveats. It requires - that a leader verify with a quorum of peers that it is still leader. This - introduces an additional round-trip to all server nodes. The trade-off is - always consistent reads but increased latency due to the extra round trip. - -- `stale` - This mode allows any server to service the read regardless of whether - it is the leader. This means reads can be arbitrarily stale but are generally - within 50 milliseconds of the leader. The trade-off is very fast and scalable - reads but with stale values. This mode allows reads without a leader meaning - a cluster that is unavailable will still be able to respond. - -For more documentation about using these various modes, see the -[HTTP API](/consul/api-docs/features/consistency). - -## Deployment Table ((#deployment_table)) - -Below is a table that shows quorum size and failure tolerance for various -cluster sizes. The recommended deployment is either 3 or 5 servers. A single -server deployment is _**highly**_ discouraged as data loss is inevitable in a -failure scenario. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Servers | Quorum Size | Failure Tolerance |
| ------- | ----------- | ----------------- |
| 1       | 1           | 0                 |
| 2       | 2           | 0                 |
| 3       | 2           | 1                 |
| 4       | 3           | 1                 |
| 5       | 3           | 2                 |
| 6       | 4           | 2                 |
| 7       | 4           | 3                 |
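The consistency modes described above are selected per request through the HTTP API. The following sketch assumes a local agent and an existing KV key named `foo`; it only demonstrates the query parameters.

```shell-session
$ # Default mode: fast reads backed by leader leasing
$ curl "http://127.0.0.1:8500/v1/kv/foo"

$ # Consistent mode: the leader verifies leadership with a quorum first
$ curl "http://127.0.0.1:8500/v1/kv/foo?consistent"

$ # Stale mode: any server may answer, even without a leader
$ curl "http://127.0.0.1:8500/v1/kv/foo?stale"
```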
diff --git a/website/content/docs/architecture/control-plane/dataplane.mdx b/website/content/docs/architecture/control-plane/dataplane.mdx
new file mode 100644
index 000000000000..dfaaa142fa3e
--- /dev/null
+++ b/website/content/docs/architecture/control-plane/dataplane.mdx
@@ -0,0 +1,163 @@
+---
+layout: docs
+page_title: Consul dataplane
+description: >-
+  Consul Dataplane removes the need to run a client agent for service discovery and service mesh by leveraging orchestrator functions. Learn about Consul Dataplane, how it can lower latency for Consul on Kubernetes and AWS ECS, and how it enables Consul support for AWS Fargate and GKE Autopilot.
+---
+
+# Consul dataplane
+
+This topic provides an overview of Consul dataplane, a lightweight process for managing Envoy proxies. Consul dataplanes remove the need to run client agents on every node in a cluster for service discovery and service mesh. Instead, Consul deploys sidecar proxies that provide lower latency, support additional runtimes, and integrate with cloud infrastructure providers.
+
+## Supported environments
+
+- Dataplanes can connect to Consul servers v1.14.0 and newer.
+- Dataplanes on Kubernetes require Consul K8s v1.0.0 and newer.
+- Dataplanes on AWS Elastic Container Service (ECS) require Consul ECS v0.7.0 and newer.
+
+## What is Consul dataplane?
+
+When deployed to virtual machines or bare metal environments, the Consul control plane requires _server agents_ and _client agents_. Server agents maintain the service catalog and service mesh, including its security and consistency, while client agents manage communications between service instances, their sidecar proxies, and the servers. While this model is optimal for applications deployed on virtual machines or bare metal servers, orchestrators such as Kubernetes and ECS have native components that support health checking and service location functions typically provided by the client agent.
+
+Consul dataplane manages Envoy proxies and leaves responsibility for other functions to the orchestrator. As a result, it removes the need to run client agents on every node. In addition, services no longer need to be reregistered to a local client agent after restarting a service instance, as a client agent’s lack of access to persistent data storage in container-orchestrated deployments is no longer an issue.
+
+The following diagram shows how Consul dataplanes facilitate service mesh in a Kubernetes-orchestrated environment.
+
+![Diagram of Consul Dataplanes in Kubernetes deployment](/img/k8s-dataplanes-architecture.png)
+
+### Impact on performance
+
+Consul dataplanes replace node-level client agents and function as sidecars attached to each service instance. Dataplanes handle communication between Consul servers and Envoy proxies, using fewer resources than client agents. Consul servers need to consume additional resources in order to generate xDS resources for Envoy proxies.
+
+As a result, small deployments require fewer overall resources. For especially large deployments or deployments that expect to experience high levels of churn, consider the following impacts to your network's performance:
+
+1. In our internal tests, which used 5000 proxies and services flapping every 2 seconds, additional CPU utilization remained under 10% on the control plane.
+1. As you deploy more services, the resource usage for dataplanes grows on a linear scale.
+1. 
Envoy reconfigurations are rate limited to prevent excessive configuration changes from generating significant load on the servers. +1. To avoid generating significant load on an individual server, proxy configuration is load balanced proactively. +1. The frequency of the orchestrator's liveness and readiness probes determine how quickly Consul's control plane can become aware of failures. There is no impact on service mesh applications, however, as Envoy proxies have a passive ability to detect endpoint failure and steer traffic to healthy instances. + +## Benefits + +**Fewer networking requirements**: Without client agents, Consul does not require bidirectional network connectivity across multiple protocols to enable gossip communication. Instead, it requires a single gRPC connection to the Consul servers, which significantly simplifies requirements for the operator. + +**Simplified set up**: Because there are no client agents to engage in gossip, you do not have to generate and distribute a gossip encryption key to agents during the initial bootstrapping process. Securing agent communication also becomes simpler, with fewer tokens to track, distribute, and rotate. + +**Additional environment and runtime support**: Consul on Kubernetes versions _prior_ to v1.0 (Consul v1.14) require the use of hostPorts and DaemonSets for client agents, which limits Consul’s ability to be deployed in environments where those features are not supported. +As of Consul on Kubernetes version 1.0 (Consul 1.14), `hostPorts` are no longer required and Consul now supports AWS Fargate and GKE Autopilot. + +**Easier upgrades**: With Consul dataplane, updating Consul to a new version no longer requires upgrading client agents. Consul Dataplane also has better compatibility across Consul server versions, so the process to upgrade Consul servers becomes easier. + +## Get started + +To get started with Consul dataplane, use the following reference resources: + +- For `consul-dataplane` commands and usage examples, including required flags for startup, refer to the [`consul-dataplane` CLI reference](/consul/docs/reference/dataplane/cli). +- For Helm chart information, refer to the [Helm Chart reference](/consul/docs/reference/k8s/helm). +- For Envoy, Consul, and Consul Dataplane version compatibility, refer to the [Envoy compatibility matrix](/consul/docs/reference/proxy/envoy). +- For Consul on ECS workloads, refer to [Consul on AWS Elastic Container Service (ECS) Overview](/consul/docs/ecs). + +## Installation + + + + + +To install Consul dataplane, set `VERSION` to `1.0.0` or higher and then follow the instructions to install a specific version of Consul [with the Helm Chart](/consul/docs/k8s/installation/install#install-consul) or [with the Consul-k8s CLI](/consul/docs/k8s/installation/install-cli#install-a-previous-version). 
+ +### Helm + +```shell-session +$ export VERSION=1.0.0 +$ helm install consul hashicorp/consul --set global.name=consul --version ${VERSION} --create-namespace --namespace consul +``` + +### Consul-k8s CLI + +```shell-session +$ export VERSION=1.0.0 && \ + curl --location "https://releases.hashicorp.com/consul-k8s/${VERSION}/consul-k8s_${VERSION}_darwin_amd64.zip" --output consul-k8s-cli.zip +``` + + + + +Refer to the following documentation for Consul on ECS workloads: + +- [Deploy Consul with the Terraform module](/consul/docs/register/service/ecs/) +- [Deploy Consul manually](/consul/docs/register/service/ecs/manual) + + + + + +### Namespace ACL permissions + +If ACLs are enabled, exported services between partitions that use dataplanes may experience errors when you define namespace partitions with the `*` wildcard. Consul dataplanes use a token with the `builtin/service` policy attached, but this policy does not include access to all namespaces. + +Add the following policies to the service token attached to Consul dataplanes to grant Consul access to exported services across all namespaces: + +```hcl +partition "default" { + namespace "default" { + query_prefix "" { + policy = "read" + } + } +} + +partition_prefix "" { + namespace_prefix "" { + node_prefix "" { + policy = "read" + } + service_prefix "" { + policy = "read" + } + } +} +``` + +## Upgrade dataplane version + + + + + +Before you upgrade Consul to a version that uses Consul dataplane, you must edit your Helm chart so that client agents are removed from your deployments. Refer to [upgrading to Consul Dataplane](/consul/docs/k8s/upgrade#upgrading-to-consul-dataplanes) for more information. + + + + + +Refer to [Upgrade to dataplane architecture](/consul/docs/upgrade/ecs/dataplane) for instructions. + + + + + +## Feature support + +Consul dataplanes on Kubernetes supports the following features: + +- Single and multi-cluster installations, including those with WAN federation, cluster peering, and admin partitions are supported. +- Ingress, terminating, and mesh gateways are supported. +- Running Consul service mesh in AWS Fargate and GKE Autopilot is supported. +- xDS load balancing is supported. +- Servers running in Kubernetes and servers external to Kubernetes are both supported. +- HCP Consul Dedicated is supported. +- Consul API Gateway + +Consul dataplanes on ECS support the following features: + +- Single and multi-cluster installations, including those with WAN federation, cluster peering, and admin partitions +- Mesh gateways +- Running Consul service mesh in AWS Fargate and EC2 +- xDS load balancing +- Self-managed Enterprise and HCP Consul Dedicated servers + +## Technical constraints and limitations + +- Consul Dataplane is not supported on Windows. +- Consul Dataplane requires the `NET_BIND_SERVICE` capability. Refer to [Set capabilities for a Container](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-capabilities-for-a-container) in the Kubernetes Documentation for more information. +- When ACLs are enabled, dataplanes use the [service token](/consul/docs/security/acl/tokens/create/create-a-service-token) and the `builtin/service` policy for their default permissions. 
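When the default `builtin/service` permissions are not sufficient, as in the exported services case described in the namespace ACL permissions section above, you can attach an additional policy to the dataplane's service token. The following is a sketch that assumes the policy rules were saved to a local file named `dataplane-cross-ns.hcl` and that `<accessor_id>` is the accessor ID of the existing service token; both names are placeholders.

```shell-session
$ # Create a policy from the rules file
$ consul acl policy create -name "dataplane-cross-ns" -rules @dataplane-cross-ns.hcl

$ # Attach the policy to the service token, keeping its existing policies
$ consul acl token update -id <accessor_id> -policy-name "dataplane-cross-ns" -merge-policies
```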
diff --git a/website/content/docs/architecture/control-plane/index.mdx b/website/content/docs/architecture/control-plane/index.mdx new file mode 100644 index 000000000000..6c0e54bb5714 --- /dev/null +++ b/website/content/docs/architecture/control-plane/index.mdx @@ -0,0 +1,115 @@ +--- +layout: docs +page_title: Consul control plane architecture +description: >- + Consul datacenters consist of clusters of server agents (control plane) and client agents deployed alongside service instances (data plane). Learn how these components and their different communication methods make Consul possible. +--- + +# Consul control plane architecture + +This topic provides an overview of the Consul architecture. We recommend reviewing the [Consul glossary](/consul/docs/glossary) as a companion to this topic to help you become familiar with HashiCorp terms. + +> Refer to the [Reference Architecture tutorial](/consul/tutorials/production-deploy/reference-architecture) for hands-on guidance about deploying Consul in production. + +## Introduction + +Consul provides a control plane that enables you to register, access, and secure services deployed across your network. The _control plane_ is the part of the network infrastructure that maintains a central registry to track services and their respective IP addresses. + +When using Consul’s service mesh capabilities, Consul dynamically configures sidecar and gateway proxies in the request path, which enables you to authorize service-to-service connections, route requests to healthy service instances, and enforce mTLS encryption without modifying your service’s code. This ensures that communication remains performant and reliable. Refer to [Service Mesh Proxy Overview](/consul/docs/connect/proxy) for an overview of sidecar proxies. + +![Diagram of the Consul control plane](/img/consul-arch/consul-arch-overview-control-plane.svg) + +## Datacenters + +@include 'text/descriptions/datacenter.mdx' + +### Clusters + +@include 'text/descriptions/cluster.mdx' + +## Agents + +You can run the Consul binary to start Consul _agents_, which are daemons that implement Consul control plane functionality. You can start agents as servers or clients. Refer to [Consul agent](/consul/docs/fundamentals/agent) for additional information. + +### Server agents + +Consul server agents store all state information, including service and node IP addresses, health checks, and configuration. We recommend deploying three or five servers in a cluster. The more servers you deploy, the greater the resilience and availability in the event of a failure. More servers, however, slow down cluster consensus, which is a critical server function that enables Consul to efficiently and effectively process information. + +#### Consensus protocol + +Consul clusters elect a single server to be the _leader_ through a process called _consensus_. The leader processes all queries and transactions, which prevents conflicting updates in clusters containing multiple servers. + +Servers that are not currently acting as the cluster leader are called _followers_. Followers forward requests from client agents to the cluster leader. The leader replicates the requests to all other servers in the cluster. Replication ensures that if the leader is unavailable, other servers in the cluster can elect another leader without losing any data. + +Consul servers establish consensus using the Raft algorithm on port `8300`. Refer to [Consensus Protocol](/consul/docs/concept/consensus) for more information. 
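You can inspect a server's view of the consensus state with the `consul info` command. The output below is truncated to the `raft` block and the values are illustrative.

```shell-session
$ consul info
...
raft:
	applied_index = 102
	commit_index = 102
	last_log_index = 102
	num_peers = 2
	state = Leader
	term = 2
...
```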
+ +![Diagram of the Consul control plane consensus traffic](/img/consul-arch/consul-arch-overview-consensus.svg) + +### Client agents + +Consul clients report node and service health status to the Consul cluster. In a typical deployment, you must run client agents on every compute node in your datacenter. Clients use remote procedure calls (RPC) to interact with servers. By default, clients send RPC requests to the servers on port `8300`. + +There are no limits to the number of client agents or services you can use with Consul, but production deployments should distribute services across multiple Consul datacenters. Using a multi-datacenter deployment enhances infrastructure resilience and limits control plane issues. We recommend deploying a maximum of 5,000 client agents per datacenter. Some large organizations have deployed tens of thousands of client agents and hundreds of thousands of service instances across a multi-datacenter deployment. Refer to [Cross-datacenter requests](#cross-datacenter-requests) for additional information. + +You can also run Consul with an alternate service mesh configuration that deploys Envoy proxies but not client agents. Refer to [Simplified Service Mesh with Consul Dataplanes](/consul/docs/architecture/control-plane/dataplane) for more information. + +## LAN gossip pool + +Client and server agents participate in a LAN gossip pool so that they can distribute and perform node [health checks](/consul/docs/register/health-check/vm). Agents in the pool propagate the health check information across the cluster. Agent gossip communication occurs on port `8301` using UDP. Agent gossip falls back to TCP if UDP is not available. Refer to [gossip protocol](/consul/docs/concept/gossip) for additional information. + +The following simplified diagram shows the interactions between servers and clients. + + + + + +![Diagram of the Consul LAN gossip pool](/img/consul-arch/consul-arch-overview-lan-gossip-pool.svg) + + + + +![Diagram of RPC communication between Consul agents](/img/consul-arch/consul-arch-overview-rpc.svg) + + + + +## Cross-datacenter requests + +Each Consul datacenter maintains its own catalog of services and their health. By default, the information is not replicated across datacenters. WAN federation and cluster peering are two multi-datacenter deployment models that enable service connectivity across datacenters. + +### WAN federation + +WAN federation is an approach for connecting multiple Consul datacenters. It requires you to designate a _primary datacenter_ that contains authoritative information about all datacenters, including service mesh configurations and access control list (ACL) resources. + +In this model, when a client agent requests a resource in a remote secondary datacenter, a local Consul server forwards the RPC request to a remote Consul server that has access to the resource. A remote server sends the results to the local server. If the remote datacenter is unavailable, its resources are also unavailable. By default, WAN-federated servers send cross-datacenter requests over TCP on port `8300`. + +You can configure control plane and data plane traffic to go through mesh gateways, which simplifies networking requirements. + +> **Hands-on**: To enable services to communicate across datacenters when the ACL system is enabled, refer to the [ACL Replication for Multiple Datacenters](/consul/tutorials/security-operations/access-control-replication-multiple-datacenters) tutorial. 
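Cross-datacenter request forwarding is transparent to callers. For example, assuming a federated datacenter named `dc2`, the following command is answered by forwarding the request to the servers in `dc2`; the datacenter name and service names are placeholders.

```shell-session
$ consul catalog services -datacenter dc2
consul
web
```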
+ +#### WAN gossip pool + +Servers may also participate in a WAN gossip pool, which is optimized for greater latency imposed by the Internet. The pool enables servers to exchange information, such as their addresses and health, and gracefully handle loss of connectivity in the event of a failure. + +In the following diagram, the servers in each data center participate in a WAN gossip pool by sending data over TCP/UDP on port `8302`. Refer to [Gossip Protocol](/consul/docs/concept/gossip) for additional information. + + + + + +![Diagram of the Consul LAN gossip pool](/img/consul-arch/consul-arch-overview-wan-gossip-cross-cluster.svg) + + + + +![Diagram of RPC communication between Consul agents](/img/consul-arch/consul-arch-overview-remote-dc-forwarding-cross-cluster.svg) + + + + +### Cluster peering + +You can create peering connections between two or more independent clusters so that services deployed to different datacenters or admin partitions can communicate. An [admin partition](/consul/docs/multi-tenant/admin-partition) is a feature in Consul Enterprise that enables you to define isolated network regions that use the same Consul servers. In the cluster peering model, you create a token in one of the datacenters or partitions and configure another datacenter or partition to present the token to establish the connection. + +Refer to [cluster peering overview](/consul/docs/east-west/cluster-peering) for +additional information. diff --git a/website/content/docs/architecture/control-plane/k8s.mdx b/website/content/docs/architecture/control-plane/k8s.mdx new file mode 100644 index 000000000000..4c18c1c2cd2b --- /dev/null +++ b/website/content/docs/architecture/control-plane/k8s.mdx @@ -0,0 +1,40 @@ +--- +layout: docs +page_title: Consul on Kubernetes architecture +description: >- + When running on Kubernetes, Consul’s control plane architecture does not change significantly. Server agents are deployed as a StatefulSet with a persistent volume, while client agents can run as a k8s DaemonSet with an exposed API port or be omitted with Consul Dataplanes. +--- + +# Consul on Kubernetes architecture + +This topic describes the architecture, components, and resources associated with Consul deployments to Kubernetes. Consul employs the same architectural design on Kubernetes as it does with other platforms, but Kubernetes provides additional benefits that make operating a Consul cluster easier. Refer to [Consul control plane architecture](/consul/docs/architecture/control-plane) for more general information on Consul's architecture. + +> **More specific guidance:** +> - For guidance on datacenter design, refer to [Consul and Kubernetes Reference Architecture](/consul/tutorials/kubernetes-production/kubernetes-reference-architecture). +> - For step-by-step deployment guidance, refer to [Consul and Kubernetes Deployment Guide](/consul/tutorials/kubernetes-production/kubernetes-deployment-guide). +> - For non-Kubernetes guidance, refer to the standard [production deployment guide](/consul/tutorials/production-deploy/deployment-guide). + +## Server agents on Kubernetes + +The server agents are deployed as a `StatefulSet` and use persistent volume claims to store the server state. This state ensures that the [node ID](/consul/docs/reference/agent/configuration-file/node#node_id) is persisted so that servers can be rescheduled onto new IP addresses without causing issues. 
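When Consul is installed with the official Helm chart, the server `StatefulSet` and its persistent volume claims are visible through `kubectl`. This sketch assumes the `consul` namespace and the default release name; object names and sizes in your cluster may differ.

```shell-session
$ kubectl get statefulset,pvc --namespace consul
NAME                             READY   AGE
statefulset.apps/consul-server   3/3     2d

NAME                                                STATUS   VOLUME    CAPACITY   ACCESS MODES   AGE
persistentvolumeclaim/data-consul-consul-server-0   Bound    pvc-...   10Gi       RWO            2d
```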
+ +The server agents are configured with [anti-affinity rules](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity) so that they are placed on different nodes. A readiness probe is also configured to mark the pod as ready only when it has established a leader. + +A Kubernetes `Service` is registered to represent each Consul server and Kubernetes exposes ports that are required to communicate to the Consul server pods. The servers use the DNS address of this service to join a Consul cluster, without requiring any other access to the Kubernetes cluster. Additional Consul servers may also utilize non-ready endpoints that are published by the Kubernetes Service so that servers can use the service for joining during bootstrap and upgrades. + +A **PodDisruptionBudget** is configured so the Consul server cluster maintains quorum during voluntary operational events. The maximum unavailable is `(n/2)-1` where `n` is the number of server agents. + +-> **Note:** Kubernetes and Helm do not delete Persistent Volumes or Persistent Volume Claims when a [StatefulSet is deleted](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-storage). You must perform this action manually when removing servers. + +## Consul dataplane on Kubernetes + +By default, Consul on Kubernetes uses an alternate service mesh configuration that injects sidecars without client agents. _Consul dataplane_ manages Envoy proxies and leaves responsibility for other functions to the orchestrator, which removes the need to run client agents on every node. + +![Diagram of Consul Dataplanes in Kubernetes deployment](/img/k8s-dataplanes-architecture.png) + +Refer to [Simplified Service Mesh with Consul dataplanes](/consul/docs/architecture/control-plane/dataplane) for more information. + +Consul dataplane is the default proxy manager in Consul on Kubernetes 1.14 and +later. If you are on Consul 1.13 or older, refer to [upgrading to Consul +Dataplane](/consul/docs/k8s/upgrade#upgrading-to-consul-dataplanes) for specific +upgrade instructions. diff --git a/website/content/docs/architecture/coordinates.mdx b/website/content/docs/architecture/coordinates.mdx index 7bc37cc9c029..eb169a3e1aff 100644 --- a/website/content/docs/architecture/coordinates.mdx +++ b/website/content/docs/architecture/coordinates.mdx @@ -1,11 +1,11 @@ --- layout: docs -page_title: Network Coordinates +page_title: Network coordinates description: >- Network coordinates are node locations in network tomography used to estimate round trip time (RTT). Learn how network coordinates manifest in Consul, how it calculates RTT, and how to work with coordinates to sort catalog information by nearness to a given node. --- -# Network Coordinates +# Network coordinates Consul uses a [network tomography](https://en.wikipedia.org/wiki/Network_tomography) system to compute network coordinates for nodes in the cluster. These coordinates diff --git a/website/content/docs/architecture/cts.mdx b/website/content/docs/architecture/cts.mdx new file mode 100644 index 000000000000..f30e56de7720 --- /dev/null +++ b/website/content/docs/architecture/cts.mdx @@ -0,0 +1,79 @@ +--- +layout: docs +page_title: Architecture +description: >- + Learn about the Consul-Terraform-Sync architecture and high-level CTS components, such as the Terraform driver and tasks. +--- + +# Consul-Terraform-Sync Architecture + +Consul-Terraform-Sync (CTS) is a service-oriented tool for managing network infrastructure near real-time. 
CTS runs as a daemon and integrates the network topology maintained by your Consul cluster with your network infrastructure to dynamically secure and connect services. + +## CTS workflow + +The following diagram shows the CTS workflow as it monitors the Consul service catalog for updates. + +[![Consul-Terraform-Sync Architecture](/img/nia-highlevel-diagram.svg)](/img/nia-highlevel-diagram.svg) + +1. CTS monitors the state of Consul’s service catalog and its KV store. This process is described in [Watcher and views](#watcher-and-views). +1. CTS detects a change. +1. CTS prompts Terraform to update the state of the infrastructure. + + +## Watcher and views + +CTS uses Consul's [blocking queries](/consul/api-docs/features/blocking) functionality to monitor Consul for updates. If an endpoint does not support blocking queries, CTS uses polling to watch for changes. These mechanisms are referred to in CTS as *watchers*. + +The watcher maintains a separate thread for each value monitored and runs any tasks that depend on the watched value whenever it is updated. These threads are referred to as _views_. For example, a thread may run a task to update a proxy when the watcher detects that an instance has become unhealthy . + +## Tasks + +A task is the action triggered by the updated data monitored in Consul. It +takes that dynamic service data and translates it into a call to the +infrastructure application to configure it with the updates. It uses a driver +to push out these updates, the initial driver being a local Terraform run. An +example of a task is to automate a firewall security policy rule with +discovered IP addresses for a set of Consul services. + +## Drivers + +A driver encapsulates the resources required to communicate the updates to the +network infrastructure. The following [drivers](/consul/docs/nia/network-drivers#terraform) are supported: + +- Terraform driver +- HCP Terraform driver + +Each driver includes a set of providers that [enables support](/consul/docs/automate/infrastructure/module) for a wide variety of infrastructure applications. + +## State storage and persistence + +The following types of state information are associated with CTS. + +### Terraform state information + +By default, CTS stores [Terraform state data](/terraform/language/state) in the Consul KV, but you can specify where this information is stored by configuring the `backend` setting in the [Terraform driver configuration](/consul/docs/nia/configuration#backend). The data persists if CTS stops and the backend is configured to a remote location. + +### CTS task and event data + +By default, CTS stores task and event data in memory. This data is transient and does not persist. If you configure [CTS to run with high availability enabled](/consul/docs/automate/infrastructure/high-availability), CTS stores the data in the Consul KV. High availability is an enterprise feature that promotes CTS resiliency. When high availability is enabled, CTS stores and persists task changes and events that occur when an instance stops. + +The data stored when operating in high availability mode includes task changes made using the task API or CLI. Examples of task changes include creating a new task, deleting a task, and enabling or disabling a task. You can empty the leader’s stored state information by starting CTS with the [`-reset-storage` flag](/consul/docs/nia/cli/start#options). 
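The `-reset-storage` flag is passed when starting the CTS daemon. The following sketch assumes a configuration file named `cts-config.hcl`; adjust the path for your deployment.

```shell-session
$ # Start CTS and clear the leader's stored state
$ consul-terraform-sync start -config-file=cts-config.hcl -reset-storage
```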
+ +## Instance compatibility checks (high availability) + +If you [run CTS with high availability enabled](/consul/docs/automate/infrastructure/high-availability), CTS performs instance compatibility checks to ensure that all instances in the cluster behave consistently. Consistent instance behavior enables CTS to properly perform automations configured in the state storage. + +The CTS instance compatibility check reports an error if the task [module](/consul/docs/nia/configuration#module) is configured with a local module, but the module does not exist on the CTS instance. Refer to the [Terraform documentation](/terraform/language/modules/sources#module-sources) for additional information about module sources. Example log: + +```shell-session +[ERROR] ha.compat: error="compatibility check failure: stat ./example-module: no such file or directory" +``` +Refer to [Error Messages](/consul/docs/error-messages/cts) for additional information. + +CTS instances perform a compatibility check on start-up based on the stored state and every five minutes after starting. If the check detects an incompatible CTS instance, it generates a log so that an operator can address it. + +CTS logs the error message and continues to run when it finds an incompatibility. CTS can still elect an incompatible instance to be the leader, but tasks affected by the incompatibility do not run successfully. This can happen when all active CTS instances enter [`once-mode`](/consul/docs/nia/cli/start#modes) and run the tasks once when initially elected. + +## Security guidelines + +We recommend following the network security guidelines described in the [Secure Consul-Terraform-Sync for Production](/consul/tutorials/network-infrastructure-automation/consul-terraform-sync-secure?utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS) tutorial. The tutorial contains a checklist of best practices to secure your CTS installation for a production environment. \ No newline at end of file diff --git a/website/content/docs/architecture/data-plane/connect.mdx b/website/content/docs/architecture/data-plane/connect.mdx new file mode 100644 index 000000000000..9816910a743a --- /dev/null +++ b/website/content/docs/architecture/data-plane/connect.mdx @@ -0,0 +1,74 @@ +--- +layout: docs +page_title: Consul service mesh +description: >- + Consul's service mesh enforces secure service communication using mutual TLS (mTLS) encryption and explicit authorization. Learn how service mesh certificate authorities, intentions, and agents work together to provide Consul's service mesh capabilities. +--- + +# Consul service mesh + +This topic describes how the core features of Consul's service mesh work. + +This document uses _connect_ to refer to the subsystem that provides Consul's service mesh capabilities. We use this word because you define the service mesh capabilities in the `connect` stanza of Consul and Nomad agent configurations. + +## Mutual transport layer security (mTLS) + +The core of Consul service mesh is based on [mutual TLS](https://en.wikipedia.org/wiki/Mutual_authentication). + +Consul service mesh secures service-to-service communication using TLS certificates for identity. These certificates comply with the [SPIFFE X.509 standard](https://github.com/spiffe/spiffe/blob/master/standards/X509-SVID.md), ensuring interoperability with other SPIFFE-compliant systems. 
Consul includes a built-in certificate authority (CA) for generating and distributing these certificates, and also integrates with [Vault](/consul/docs/secure-mesh/certificate/vault). Consul's PKI system is designed to be extendable to support any system by adding CA providers. + +During the connection attempt, the client service first verifies the destination service's certificate using the [public CA bundle](/consul/api-docs/connect/ca#list-ca-root-certificates). The client also presents its own certificate to authenticate its identity to the destination service. The destination service, in turn, verifies the client's certificate against the same public CA bundle. If this mutual certificate validation is successful, an encrypted and authenticated TLS connection is established. + +Once the secure connection is in place, the destination service proceeds with authorization based on its configured application protocol: + +- TCP (L4) services must authorize _incoming connections_ against the configured set of [service intentions](/consul/docs/secure-mesh/intention). +- HTTP (L7) services must authorize _incoming requests_ against those same intentions. + +If the intention check is successful, the connection (for TCP) or the specific request (for HTTP) is permitted. Otherwise, it is rejected. + +All APIs required for Consul service mesh typically respond in microseconds and impose minimal overhead to existing services. To ensure this, Consul service mesh-related API calls +are all made to the local Consul agent over a loopback interface, and all [agent `/connect` endpoints](/consul/api-docs/agent/connect) implement local caching, background +updating, and support blocking queries. Most API calls operate on purely local in-memory data. + +## Agent caching and performance + +To enable fast responses on endpoints such as the [agent connect API](/consul/api-docs/agent/connect), the Consul agent locally caches most Consul service mesh-related +data and sets up background [blocking queries](/consul/api-docs/features/blocking) against the server to update the cache in the background. This setup allows most API calls to use in-memory data and respond quickly. + +All data cached locally by the agent is populated on demand. Therefore, if Consul service mesh is not used at all, the cache does not store any data. On first request, the following data is loaded from the server and cached: + +- public CA root certificates +- leaf certificates +- service intentions +- service discovery results for upstreams + +For leaf certificates and service intentions, the agent only caches data related to the service requested, not the full set of data. + +The cache is partitioned by ACL token and datacenters. This partition minimizes the complexity of the cache and prevents an ACL token from accessing data it should not have access to. This partition results in higher memory usage for cached data since it is duplicated per ACL token. + +With Consul service mesh enabled, you are likely to observe increased memory usage by the local Consul agent. Memory usage scales with the number of service intentions associated with the registered services on the agent. The other data, including leaf certificates and public CA certificates, is a relatively fixed size per service. In most cases, the overhead per service should be relatively small and measure in single-digit kilobytes at most. + +The cache does not evict entries due to memory pressure. If memory capacity is reached, the process will attempt to swap. 
If swap is disabled, the Consul agent may begin failing and eventually crash. Each cache entry has a default time-to-live (TTL) of 3 days and is automatically removed if not accessed during that period. + +## Connections across datacenters + +A [sidecar proxy's upstream configuration](/consul/docs/connect/proxies/proxy-config-reference#upstream-configuration-reference) may specify an alternative datacenter or a prepared query that can address services in multiple datacenters. + +[Service intentions](/consul/docs/secure-mesh/intention) verify connections between services by source and destination name seamlessly across datacenters. + +You can make connections with gateways to enable communication across network topologies, which enables connections between services in each datacenter without externally routable IPs at the service level. + +### Service intention replication + +You can specify a datacenter that is authoritative for intentions by setting the [`primary_datacenter`](/consul/docs/reference/agent/configuration-file#primary_datacenter) configuration. When you do this, Consul automatically replicates intentions from the primary datacenter to the secondary datacenters. + +In production setups with ACLs enabled, you must also set the [replication token](/consul/docs/reference/agent/configuration-file#acl_tokens_replication) in the secondary datacenter server's configuration. + +### Certificate authority federation + +The primary datacenter also acts as the root certificate authority (CA) for Consul service mesh. The primary datacenter generates a trust-domain UUID and obtains a root certificate +from the configured CA provider which defaults to the built-in one. + +Secondary datacenters retrieve the root CA public key and trust-domain ID from the primary datacenter. They then create their own private key and generate a certificate signing request (CSR) to obtain an intermediate CA certificate. The primary datacenter's root CA signs this CSR and returns the signed intermediate certificate. With this intermediate certificate in place, the secondary datacenter can independently issue new certificates for its Consul service mesh without requiring WAN communication to the primary. For security, private CA keys remain isolated within their respective datacenters and are never shared between them. + +Secondary datacenters continuously monitor the root CA certificate in the primary datacenter. When the primary's root CA changes, whether due to planned rotation or CA migration, the secondary datacenter automatically generates new keys, gets them signed by the primary's updated root CA, and then systematically rotates all issued certificates within the secondary datacenter. This makes CA root key rotation fully automatic with zero downtime across multiple datacenters. diff --git a/website/content/docs/architecture/data-plane/gateway.mdx b/website/content/docs/architecture/data-plane/gateway.mdx new file mode 100644 index 000000000000..9f47b74b341a --- /dev/null +++ b/website/content/docs/architecture/data-plane/gateway.mdx @@ -0,0 +1,81 @@ +--- +layout: docs +page_title: Gateways +description: >- + Gateways are proxies that direct traffic into, out of, and inside of Consul's service mesh. They secure communication with external or non-mesh network resources and enable services on different runtimes, cloud providers, or with overlapping IP addresses to communicate with each other. +--- + +# Gateways + +This topic provides an overview of the gateway features shipped with Consul. 
Gateways provide connectivity into, out of, and between Consul service meshes. You can configure the following types of gateways: + +- [API gateways](/consul/docs/north-south/api-gateway) handle and secure incoming requests from external clients, routing them to services within the mesh. They offer advanced Layer 7 features like authentication and routing. +- [Ingress gateways](#ingress-gateways) (deprecated) handle incoming traffic from external clients to services inside the mesh. API gateway is the recommended alternative. +- [Terminating gateways](#terminating-gateways) enable services within the mesh to securely communicate with external services outside the mesh — such as legacy systems or third-party APIs. +- [Mesh gateways](#mesh-gateways) enable service-to-service traffic between Consul datacenters or between Consul admin partitions. They also enable datacenters to be federated across wide area networks. + +[![Gateway Architecture](/img/consul-connect/svgs/consul_gateway_overview.svg)](/img/consul-connect/svgs/consul_gateway_overview.svg) + +## API gateways + +API gateways enable network access, from outside a service mesh, to services running in a Consul service mesh. The systems accessing the services in the mesh may be within your organizational network or external to it. This type of network traffic is commonly called _north-south_ network traffic because it refers to the flow of data into and out of a specific environment. + +API gateways solve the following primary use cases: + +- **Control access at the point of entry**: Set the protocols of external connection requests and secure inbound connections with TLS certificates from trusted providers, such as Verisign and Let's Encrypt. + +- **Simplify traffic management**: Load balance requests across services and route traffic to the appropriate service by matching one or more criteria, such as hostname, path, header presence or value, and HTTP method. + +Refer to the following documentation for information on how to configure and deploy API gateways: +- [API Gateways on VMs](/consul/docs/north-south/api-gateway/vm/listener) +- [API Gateways for Kubernetes](/consul/docs/north-south/api-gateway/k8s/listener). + +## Ingress gateways + + + +Ingress gateway is deprecated and will not be enhanced beyond its current capabilities. Ingress gateway is fully supported in this version but will be removed in a future release of Consul. + +Consul's API gateway is the recommended alternative to ingress gateway. + + + +Ingress gateways enable connectivity within your organizational network from services outside the Consul service mesh to services in the mesh. To accept ingress traffic from the public internet, use Consul's [API Gateway](/consul/docs/north-south/api-gateway) instead. + +Ingress gateways let you define what services should be exposed, on what port, and by what hostname. You configure an ingress gateway by defining a set of listeners that can map to different sets of backing services. + +Ingress gateways are tightly integrated with Consul's L7 configuration and enable dynamic routing of HTTP requests by attributes like the request path. + +For more information about ingress gateways, review the [complete documentation](/consul/docs/north-south/ingress-gateway) and the [ingress gateway tutorial](/consul/tutorials/developer-mesh/service-mesh-ingress-gateways). 
+ +![Ingress Gateway Architecture](/img/ingress-gateways.png) + +## Terminating gateways + +Terminating gateways enable services within the mesh to securely communicate with external services outside the mesh — such as legacy systems or third-party APIs. + +Services outside the mesh do not have sidecar proxies or are not [integrated natively](/consul/docs/automate/native). These may be services running on legacy infrastructure or managed cloud services running on infrastructure you do not control. + +Terminating gateways effectively act as egress proxies that can represent one or more services. They terminate service mesh mTLS connections, enforce Consul intentions, and forward requests to the appropriate destination. + +These gateways also simplify authorization from dynamic service addresses. Consul's intentions determine whether connections through the gateway are authorized. Then traditional tools like firewalls or IAM roles can authorize the connections from the known gateway nodes to the destination services. + +For more information about terminating gateways, review the [complete documentation](/consul/docs/north-south/terminating-gateway) and the [terminating gateway tutorial](/consul/tutorials/developer-mesh/terminating-gateways-connect-external-services). + +![Terminating gateway architecture](/img/terminating-gateways.png) + +## Mesh gateways + +Mesh gateways enable service mesh traffic to be routed between different Consul datacenters and admin partitions. The datacenters or partitions can reside in different clouds or runtime environments where general interconnectivity between all services in all datacenters is not feasible. + +They operate by sniffing and extracting the server name indication (SNI) header from the service mesh session and routing the connection to the appropriate destination based on the server name requested. + +Mesh gateways enable the following scenarios: + +- **Federate multiple datacenters across a WAN.** Since Consul 1.8.0, mesh gateways can forward gossip and RPC traffic between Consul servers. See [WAN federation via mesh gateways](/consul/docs/east-west/mesh-gateway/enable) for additional information. +- **Service-to-service communication across WAN-federated datacenters.** Refer to [Enabling Service-to-service Traffic Across Datacenters](/consul/docs/east-west/mesh-gateway/federation) for additional information. +- **Service-to-service communication across admin partitions.** Since Consul 1.11.0, you can create administrative boundaries for single Consul deployments called "admin partitions". You can use mesh gateways to facilitate cross-partition communication. Refer to [Enabling Service-to-service Traffic Across Admin Partitions](/consul/docs/east-west/mesh-gateway/admin-partition) for additional information. +- **Bridge multiple datacenters using Cluster Peering.** Since Consul 1.14.0, mesh gateways can be used to route peering control-plane traffic between peered Consul Servers. See [Mesh Gateways for Peering Control Plane Traffic](/consul/docs/east-west/mesh-gateway/cluster-peer) for more information. +- **Service-to-service communication across peered datacenters.** Refer to [Establish cluster peering connections](/consul/docs/east-west/cluster-peering/establish/vm) for more information. + +-> **Mesh gateway tutorial**: Follow the [mesh gateway tutorial](/consul/tutorials/developer-mesh/service-mesh-gateways) to learn concepts associated with mesh gateways. 
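+As a minimal sketch, the following `proxy-defaults` configuration entry sets the default mesh gateway mode to `local` for all proxies in the mesh. The `local` mode shown here is an assumption; choose the mode that matches your topology.
+
+```hcl
+Kind = "proxy-defaults"
+Name = "global"
+
+MeshGateway {
+  # Proxies dial the mesh gateway in their own datacenter for cross-datacenter traffic
+  Mode = "local"
+}
+```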
\ No newline at end of file diff --git a/website/content/docs/architecture/data-plane/index.mdx b/website/content/docs/architecture/data-plane/index.mdx new file mode 100644 index 000000000000..6abb30c3c807 --- /dev/null +++ b/website/content/docs/architecture/data-plane/index.mdx @@ -0,0 +1,39 @@ +--- +layout: docs +page_title: Consul data plane architecture +description: >- + Consul provides features that help you manage your application's data plane. Learn about Consul's data plane, including its architectural components. +--- + +# Consul data plane architecture + +This topic describes Consul's architecture and operations in an application's data plane. Consul can deploy gateways and sidecar proxies to help you secure, observe, and manage application traffic. + +For information about the lightweight workload agents Consul uses for container-based applications on Kubernetes and AWS ECS, refer to [Consul dataplanes](/consul/docs/architecture/control-plane/dataplane). + +## Introduction + +Consul provides control plane features that help you manage your application's data plane, but the Consul process does not run directly in the data plane. + +When you use Consul for service discovery, the data plane requires no additional components or configuration. + +When you use Consul's service mesh features, you can create sidecar proxies and gateways to manage, secure, and observe service-to-service traffic. + +## Sidecar proxies + +Consul uses proxies to secure, manage, and observe all service-to-service communication. The primary mechanism is sidecar proxies, which are deployed alongside each service instance to handle all incoming and outgoing traffic. Consul includes native support for Envoy proxies, but can be configured to work with other proxy implementations. + +## Gateways + +Gateways are specialized proxies that manage specific types of traffic into, out of, or across your service mesh. There are four kinds of gateways: + +1. **API gateways** handle and secure incoming requests from external clients, routing them to services within the mesh. They offer advanced Layer 7 features like authentication and routing. +2. **Ingress gateways** (deprecated) handle incoming traffic from external clients to services inside the mesh. API gateway is the recommended alternative. +3. **Terminating gateways** enable services within the mesh to securely communicate with external services outside the mesh — such as legacy systems or third-party APIs. +4. **Mesh gateways** enable service-to-service traffic between Consul datacenters or between Consul admin partitions. They also enable datacenters to be federated across wide area networks. + +For more information about each type of gateway, refer to [gateways](/consul/docs/architecture/data-plane/gateway). + +## Next steps + +Refer to [Consul's security architecture](/consul/docs/architecture/security) to learn about the encryption systems and verification protocols Consul uses to secure data plane operations. \ No newline at end of file diff --git a/website/content/docs/architecture/data-plane/service.mdx b/website/content/docs/architecture/data-plane/service.mdx new file mode 100644 index 000000000000..fcde59174308 --- /dev/null +++ b/website/content/docs/architecture/data-plane/service.mdx @@ -0,0 +1,48 @@ +--- +layout: docs +page_title: Services overview +description: >- + Learn about services and service discovery workflows and concepts for virtual machine environments.
+--- + +# Services + +This topic provides overview information about services and how to make them discoverable in Consul when your network operates on virtual machines. If service mesh is enabled in your network, refer to the following articles for additional information about connecting services in a mesh: + +- [How Service Mesh Works](/consul/docs/architecture/data-plane/connect) +- [How Consul Service Mesh Works on Kubernetes](/consul/docs/k8s/connect) + +## Introduction + +A _service_ is an entity in your network that performs a specialized operation or set of related operations. In many contexts, a service is software that you want to make available to users or other programs with access to your network. Services can also refer to native Consul functionality, such as _service mesh proxies_ and _gateways_, that enable you to establish connections between different parts of your network. + +You can define and register services with Consul, which makes them discoverable to other services in the network. You can also configure health checks for these services to enable automated failover and load balancing. For example, health checks can help load balancers automatically remove unhealthy service instances, or trigger the promotion of a new database primary when the current one fails. + +## Workflow + +For service discovery, the core Consul workflow for services consists of three stages: + +1. **Define services and health checks:** A service definition lets you define various aspects of the service, including how it is discovered by other services in the network. You can define health checks in the service definitions to verify the health of the service. Refer to [Define Services](/consul/docs/register/service/vm/define) and [Define Health Checks](/consul/docs/register/health-check/vm) for additional information. + +1. **Register services and health checks:** After defining your services and health checks, you must register them with a Consul agent. Refer to [Register Services and Health Checks](/consul/docs/register/service/vm) for additional information. + +1. **Query for services:** After registering your services and health checks, other services in your network can use the DNS to perform static or dynamic lookups to access your service. Refer to [DNS Usage Overview](/consul/docs/discover/dns) for additional information about the different ways to discover services in your datacenters. + +## Service mesh use cases + +Consul routes service traffic through sidecar proxies if you use Consul service mesh. As a result, you must specify upstream configurations in service definitions. The service mesh experience is different for virtual machine (VM) and Kubernetes environments. + +### Virtual machines + +You must define upstream services in the service definition. Consul uses the upstream configuration to bind the service with its upstreams. After registering the service, you must start a sidecar proxy on the VM to enable mesh connectivity. Refer to [Deploy sidecar services](/consul/docs/connect/proxy/sidecar) for additional information. + +### Kubernetes + +If you use Consul on Kubernetes, enable the service mesh injector in your Consul Helm chart to have Consul automatically add a sidecar to each of your pods using the Kubernetes `Service` definition as a reference. You can specify upstream annotations in the `Deployment` definition to bind upstream services to the pods. 
+Refer to [`connectInject`](/consul/docs/k8s/connect#installation-and-configuration) and [the upstreams annotation documentation](/consul/docs/k8s/annotations-and-labels#consul-hashicorp-com-connect-service-upstreams) for additional information. + +### Multiple services + +You can define common characteristics for services in your mesh, such as the admin partition, namespace, or upstreams, by creating and applying a `service-defaults` configuration entry. You can also define override configurations for specific upstreams or service instances. To use `service-defaults` configuration entries, you must enable Consul service mesh in your network. + +Refer to [Define Service Defaults](/consul/docs/services/usage/define-services#define-service-defaults) for additional information. diff --git a/website/content/docs/architecture/gossip.mdx b/website/content/docs/architecture/gossip.mdx deleted file mode 100644 index 12a4ef8de7ac..000000000000 --- a/website/content/docs/architecture/gossip.mdx +++ /dev/null @@ -1,56 +0,0 @@ ---- -layout: docs -page_title: Gossip Protocol | Serf -description: >- - Consul agents manage membership in datacenters and WAN federations using the Serf protocol. Learn about the differences between LAN and WAN gossip pools and how `serfHealth` affects health checks. ---- - -# Gossip Protocol - -Consul uses a [gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) -to manage membership and broadcast messages to the cluster. The protocol, membership management, and message broadcasting is provided -through the [Serf library](https://github.com/hashicorp/serf/). The gossip protocol -used by Serf is based on a modified version of the -[SWIM (Scalable Weakly-consistent Infection-style Process Group Membership)](https://www.cs.cornell.edu/projects/Quicksilver/public_pdfs/SWIM.pdf) protocol. -Refer to the [Serf documentation](https://github.com/hashicorp/serf/blob/master/docs/internals/gossip.html.markdown) for additional information about the gossip protocol. - -## Gossip in Consul - -Consul uses a LAN gossip pool and a WAN gossip pool to perform different functions. The pools -are able to perform their functions by leveraging an embedded [Serf](https://github.com/hashicorp/serf/) -library. The library is abstracted and masked by Consul to simplify the user experience, -but developers may find it useful to understand how the library is leveraged. - -### LAN Gossip Pool - -Each datacenter that Consul operates in has a LAN gossip pool containing all members -of the datacenter (clients _and_ servers). Membership information provided by the -LAN pool allows clients to automatically discover servers, reducing the amount of -configuration needed. Failure detection is also distributed and shared by the entire cluster, -instead of concentrated on a few servers. Lastly, the gossip pool allows for fast and -reliable event broadcasts. - -### WAN Gossip Pool - -The WAN pool is globally unique. All servers should participate in the WAN pool, -regardless of datacenter. Membership information provided by the WAN pool allows -servers to perform cross-datacenter requests. The integrated failure detection -allows Consul to gracefully handle loss of connectivity--whether the loss is for -an entire datacenter, or a single server in a remote datacenter. - -## Lifeguard Enhancements ((#lifeguard)) - -SWIM assumes that the local node is healthy, meaning that soft real-time packet -processing is possible. 
The assumption may be violated, however, if the local node -experiences CPU or network exhaustion. In these cases, the `serfHealth` check status -can flap. This can result in false monitoring alarms, additional telemetry noise, and -CPU and network resources being wasted as they attempt to diagnose non-existent failures. - -Lifeguard completely resolves this issue with novel enhancements to SWIM. - -For more details about Lifeguard, please see the -[Making Gossip More Robust with Lifeguard](https://www.hashicorp.com/blog/making-gossip-more-robust-with-lifeguard/) -blog post, which provides a high level overview of the HashiCorp Research paper -[Lifeguard : SWIM-ing with Situational Awareness](https://arxiv.org/abs/1707.00788). The -[Serf gossip protocol guide](https://github.com/hashicorp/serf/blob/master/docs/internals/gossip.html.markdown#lifeguard-enhancements) -also provides some lower-level details about the gossip protocol and Lifeguard. diff --git a/website/content/docs/architecture/improving-consul-resilience.mdx b/website/content/docs/architecture/improving-consul-resilience.mdx deleted file mode 100644 index aea40f3558e9..000000000000 --- a/website/content/docs/architecture/improving-consul-resilience.mdx +++ /dev/null @@ -1,177 +0,0 @@ ---- -layout: docs -page_title: Fault Tolerance in Consul -description: >- - Fault tolerance is a system's ability to operate without interruption despite component failure. Learn how a set of Consul servers provide fault tolerance through use of a quorum, and how to further improve control plane resilience through use of infrastructure zones and Enterprise redundancy zones. ---- - -# Fault tolerance - - -You must give careful consideration to reliability in the architecture frameworks that you build. When you build a resilient platform, it minimizes the remediation actions you need to take when a failure occurs. This document provides useful information on how to design and operate a resilient Consul cluster, including the methods and functionalities for this goal. - -Consul has many features that operate both locally and remotely that can help you offer a resilient service across multiple datacenters. - - -## Introduction - -Fault tolerance is the ability of a system to continue operating without interruption -despite the failure of one or more components. In Consul, the number of server agents determines the fault tolerance. - - -Each Consul datacenter depends on a set of Consul voting server agents. -The voting servers ensure Consul has a consistent, fault-tolerant state -by requiring a majority of voting servers, known as a quorum, to agree upon any state changes. -Examples of state changes include: adding or removing services, -adding or removing nodes, and changes in service or node health status. - -Without a quorum, Consul experiences an outage: -it cannot provide most of its capabilities because they rely on -the availability of this state information. -If Consul has an outage, normal operation can be restored by following the -[Disaster recovery for Consul clusters guide](/consul/tutorials/datacenter-operations/recovery-outage). - -If Consul is deployed with 3 servers, the quorum size is 2. The deployment can lose 1 -server and still maintain quorum, so it has a fault tolerance of 1. -If Consul is instead deployed with 5 servers, the quorum size increases to 3, so -the fault tolerance increases to 2. 
-To learn more about the relationship between the -number of servers, quorum, and fault tolerance, refer to the -[consensus protocol documentation](/consul/docs/architecture/consensus#deployment_table). - -Effectively mitigating your risk is more nuanced than just increasing the fault tolerance -because the infrastructure costs can outweigh the improved resiliency. You must also consider correlated risks at the infrastructure-level. There are occasions when multiple servers fail at the same time. That means that a single failure could cause a Consul outage, even if your server-level fault tolerance is 2. - -Different options for your resilient datacenter present trade-offs between operational complexity, computing cost, and Consul request performance. Consider these factors when designing your resilient architecture. - -## Fault tolerance - -The following sections explore several options for increasing Consul's fault tolerance. For enhanced reliability, we recommend taking a holistic approach by layering these multiple functionalities together. - -- Spread servers across infrastructure [availability zones](#availability-zones). -- Use a [minimum quorum size](#quorum-size) to avoid performance impacts. -- Use [redundancy zones](#redundancy-zones) to improve fault tolerance. -- Use [Autopilot](#autopilot) to automatically prune failed servers and maintain quorum size. -- Use [cluster peering](#cluster-peering) to provide service redundancy. - -### Availability zones - - -The cloud or on-premise infrastructure underlying your [Consul datacenter](/consul/docs/install/glossary#datacenter) can run across multiple availability zones. - -An availability zone is meant to share no points of failure with other zones by: -- Having power, cooling, and networking systems independent from other zones -- Being physically distant enough from other zones so that large-scale disruptions - such as natural disasters (flooding, earthquakes) are very unlikely to affect multiple zones - -Availability zones are available in the regions of most cloud providers and in some on-premise installations. -If possible, spread your Consul voting servers across 3 availability zones -to protect your Consul datacenter from a single zone-level failure. -For example, if deploying 5 Consul servers across 3 availability zones, place no more than 2 servers in each zone. -If one zone fails, at most 2 servers are lost and quorum will be maintained by the 3 remaining servers. - -To distribute your Consul servers across availability zones, modify your infrastructure configuration with your infrastructure provider. No change is needed to your Consul server's agent configuration. - -Additionally, you should leverage resources that can automatically restore your compute instance, -such as autoscaling groups, virtual machine scale sets, or compute engine autoscaler. -Customize autoscaling resources to re-deploy servers into specific availability zones and ensure the desired numbers of servers are available at all times. - -### Quorum size - -For most production use cases, we recommend using a minimum quorum of either 3 or 5 voting servers, -yielding a server-level fault tolerance of 1 or 2 respectively. - -Even though it would improve fault tolerance, -adding voting servers beyond 5 is **not recommended** because it decreases Consul's performance— -it requires Consul to involve more servers in every state change or consistent read. 
- -Consul Enterprise users can use redundancy zones to improve fault tolerance without this performance penalty. - -### Redundancy zones - -Use Consul Enterprise [redundancy zones](/consul/docs/enterprise/redundancy) to improve fault tolerance without the performance penalty of increasing the number of voting servers. - -![Reference architecture diagram for Consul Redundancy zones](/img/architecture/consul-redundancy-zones-light.png#light-theme-only) -![Reference architecture diagram for Consul Redundancy zones](/img/architecture/consul-redundancy-zones-dark.png#dark-theme-only) - -Each redundancy zone should be assigned 2 or more Consul servers. -If all servers are healthy, only one server per redundancy zone will be an active voter; -all other servers will be backup voters. -If a zone's voter is lost, it will be replaced by: -- A backup voter within the same zone, if any. Otherwise, -- A backup voter within another zone, if any. - -Consul can replace lost voters with backup voters within 30 seconds in most cases. -Because this replacement process is not instantaneous, -redundancy zones do not improve immediate fault tolerance— -the number of healthy voting servers that can fail at once without causing an outage. -Instead, redundancy zones improve optimistic fault tolerance: -the number of healthy active and back-up voting servers that can fail gradually without causing an outage. - -The relationship between these two types of fault tolerance is: - -_Optimistic fault tolerance = immediate fault tolerance + the number of healthy backup voters_ - -For example, consider a Consul datacenter with 3 redundancy zones and 2 servers per zone. -There will be 3 voting servers (1 per zone), meaning a quorum size of 2 and an immediate fault tolerance of 1. -There will also be 3 backup voters (1 per zone), each of which increase the optimistic fault tolerance. -Therefore, the optimistic fault tolerance is 4. -This provides performance similar to a 3 server setup with fault tolerance similar to a 7 server setup. - -We recommend associating each Consul redundancy zone with an infrastructure availability zone -to also gain the infrastructure-level fault tolerance benefits provided by availability zones. -However, Consul redundancy zones can be used even without the backing of infrastructure availability zones. - -For more information on redundancy zones, refer to: -- [Redundancy zone documentation](/consul/docs/enterprise/redundancy) - for a more detailed explanation -- [Redundancy zone tutorial](/consul/tutorials/enterprise/redundancy-zones) - to learn how to use them - -### Autopilot - -Autopilot is a set of functions that introduce servers to a cluster, cleans up dead servers, and monitors the state of the Raft protocol in the Consul cluster. - -When you enable Autopilot's dead server cleanup, Autopilot marks failed servers as `Left` and removes them from the Raft peer set to prevent them from interfering with the quorum size. Autopilot does that as soon as a replacement Consul server comes online. This behavior is beneficial when server nodes failed and have been redeployed but Consul considers them as new nodes because their IP address and hostnames have changed. Autopilot keeps the cluster peer set size correct and the quorum requirement simple. - -To illustrate the Autopilot advantage, consider a scenario where Consul has a cluster of five server nodes. The quorum is three, which means the cluster can lose two server nodes before the cluster fails. The following events happen: - -1. 
Two server nodes fail. -1. Two replacement nodes are deployed with new hostnames and IPs. -1. The two replacement nodes rejoin the Consul cluster. -1. Consul treats the replacement nodes as extra nodes, unrelated to the previously failed nodes. - -_With Autopilot not enabled_, the following happens: - -1. Consul does not immediately clean up the failed nodes when the replacement nodes join the cluster. -1. The cluster now has the three surviving nodes, the two failed nodes, and the two replacement nodes, for a total of seven nodes. - - The quorum is increased to four, which means the cluster can only afford to lose one node until after the two failed nodes are deleted in seventy-two hours. - - The redundancy level has decreased from its initial state. - -_With Autopilot enabled_, the following happens: - -1. Consul immediately cleans up the failed nodes when the replacement nodes join the cluster. -1. The cluster now has the three surviving nodes and the two replacement nodes, for a total of five nodes. - - The quorum stays at three, which means the cluster can afford to lose two nodes before it fails. - - The redundancy level remains the same. - -### Cluster peering - -Linking multiple Consul clusters together to provide service redundancy is the most effective method to prevent disruption from failure. This method is enhanced when you design individual Consul clusters with resilience in mind. Consul clusters interconnect in two ways: WAN federation and cluster peering. We recommend using cluster peering whenever possible. - -Cluster peering lets you connect two or more independent Consul clusters using mesh gateways, so that services can communicate between non-identical partitions in different datacenters. - -![Reference architecture diagram for Consul cluster peering](/img/architecture/cluster-peering-diagram-light.png#light-theme-only) -![Reference architecture diagram for Consul cluster peering](/img/architecture/cluster-peering-diagram-dark.png#dark-theme-only) - -Cluster peering is the preferred way to interconnect clusters because it is operationally easier to configure and manage than WAN federation. Cluster peering communication between two datacenters runs only on one port on the related Consul mesh gateway, which makes it operationally easy to expose for routing purposes. - -When you use cluster peering to connect admin partitions between datacenters, use Consul’s dynamic traffic management functionalities `service-splitter`, `service-router` and `service-failover` to configure your service mesh to automatically forward or failover service traffic between peer clusters. Consul can then manage the traffic intended for the service and do [failover](/consul/docs/connect/config-entries/service-resolver#spec-failover), [load-balancing](/consul/docs/connect/config-entries/service-resolver#spec-loadbalancer), or [redirection](/consul/docs/connect/config-entries/service-resolver#spec-redirect). - -Cluster peering also extends service discovery across different datacenters independent of service mesh functions. After you peer datacenters, you can refer to services between datacenters with `.virtual.peer.consul` in Consul DNS. For Consul Enterprise, your query string may need to include the namespace, partition, or both. Refer to the [Consul DNS documentation](/consul/docs/services/discovery/dns-static-lookups#service-virtual-ip-lookups) for details on building virtual service lookups. 
- -For more information on cluster peering, refer to: -- [Cluster peering documentation](/consul/docs/connect/cluster-peering) - for a more detailed explanation -- [Cluster peering tutorial](/consul/tutorials/implement-multi-tenancy/cluster-peering) - to learn how to implement cluster peering diff --git a/website/content/docs/architecture/index.mdx b/website/content/docs/architecture/index.mdx deleted file mode 100644 index dc3f7954bdd0..000000000000 --- a/website/content/docs/architecture/index.mdx +++ /dev/null @@ -1,114 +0,0 @@ ---- -layout: docs -page_title: Consul Architecture -description: >- - Consul datacenters consist of clusters of server agents (control plane) and client agents deployed alongside service instances (data plane). Learn how these components and their different communication methods make Consul possible. ---- - -# Consul Architecture - -This topic provides an overview of the Consul architecture. We recommend reviewing the Consul [glossary](/consul/docs/install/glossary) as a companion to this topic to help you become familiar with HashiCorp terms. - -> Refer to the [Reference Architecture tutorial](/consul/tutorials/production-deploy/reference-architecture) for hands-on guidance about deploying Consul in production. - -## Introduction - -Consul provides a control plane that enables you to register, access, and secure services deployed across your network. The _control plane_ is the part of the network infrastructure that maintains a central registry to track services and their respective IP addresses. - -When using Consul’s service mesh capabilities, Consul dynamically configures sidecar and gateway proxies in the request path, which enables you to authorize service-to-service connections, route requests to healthy service instances, and enforce mTLS encryption without modifying your service’s code. This ensures that communication remains performant and reliable. Refer to [Service Mesh Proxy Overview](/consul/docs/connect/proxies) for an overview of sidecar proxies. - -![Diagram of the Consul control plane](/img/consul-arch/consul-arch-overview-control-plane.svg) - -## Datacenters - -The Consul control plane contains one or more _datacenters_. A datacenter is the smallest unit of Consul infrastructure that can perform basic Consul operations. A datacenter contains at least one [Consul server agent](#server-agents), but a real-world deployment contains three or five server agents and several [Consul client agents](#client-agents). You can create multiple datacenters and allow nodes in different datacenters to interact with each other. Refer to [Bootstrap a Datacenter](/consul/docs/install/bootstrapping) for information about how to create a datacenter. - -### Clusters - -A collection of Consul agents that are aware of each other is called a _cluster_. The terms _datacenter_ and _cluster_ are often used interchangeably. In some cases, however, _cluster_ refers only to Consul server agents, such as in [HCP Consul Dedicated](https://cloud.hashicorp.com/products/consul). In other contexts, such as the [_admin partitions_](/consul/docs/enterprise/admin-partitions) feature included with Consul Enterprise, a cluster may refer to collection of client agents. - -## Agents - -You can run the Consul binary to start Consul _agents_, which are daemons that implement Consul control plane functionality. You can start agents as servers or clients. Refer to [Consul Agent](/consul/docs/agent) for additional information. 
- -### Server agents - -Consul server agents store all state information, including service and node IP addresses, health checks, and configuration. We recommend deploying three or five servers in a cluster. The more servers you deploy, the greater the resilience and availability in the event of a failure. More servers, however, slow down [consensus](#consensus-protocol), which is a critical server function that enables Consul to efficiently and effectively process information. - -#### Consensus protocol - -Consul clusters elect a single server to be the _leader_ through a process called _consensus_. The leader processes all queries and transactions, which prevents conflicting updates in clusters containing multiple servers. - -Servers that are not currently acting as the cluster leader are called _followers_. Followers forward requests from client agents to the cluster leader. The leader replicates the requests to all other servers in the cluster. Replication ensures that if the leader is unavailable, other servers in the cluster can elect another leader without losing any data. - -Consul servers establish consensus using the Raft algorithm on port `8300`. Refer to [Consensus Protocol](/consul/docs/architecture/consensus) for additional information. - -![Diagram of the Consul control plane consensus traffic](/img/consul-arch/consul-arch-overview-consensus.svg) - -### Client agents - -Consul clients report node and service health status to the Consul cluster. In a typical deployment, you must run client agents on every compute node in your datacenter. Clients use remote procedure calls (RPC) to interact with servers. By default, clients send RPC requests to the servers on port `8300`. - -There are no limits to the number of client agents or services you can use with Consul, but production deployments should distribute services across multiple Consul datacenters. Using a multi-datacenter deployment enhances infrastructure resilience and limits control plane issues. We recommend deploying a maximum of 5,000 client agents per datacenter. Some large organizations have deployed tens of thousands of client agents and hundreds of thousands of service instances across a multi-datacenter deployment. Refer to [Cross-datacenter requests](#cross-datacenter-requests) for additional information. - -You can also run Consul with an alternate service mesh configuration that deploys Envoy proxies but not client agents. Refer to [Simplified Service Mesh with Consul Dataplanes](/consul/docs/connect/dataplane) for more information. - -## LAN gossip pool - -Client and server agents participate in a LAN gossip pool so that they can distribute and perform node [health checks](/consul/docs/services/usage/checks). Agents in the pool propagate the health check information across the cluster. Agent gossip communication occurs on port `8301` using UDP. Agent gossip falls back to TCP if UDP is not available. Refer to [Gossip Protocol](/consul/docs/architecture/gossip) for additional information. - -The following simplified diagram shows the interactions between servers and clients. - - - - - -![Diagram of the Consul LAN gossip pool](/img/consul-arch/consul-arch-overview-lan-gossip-pool.svg) - - - - -![Diagram of RPC communication between Consul agents](/img/consul-arch/consul-arch-overview-rpc.svg) - - - - -## Cross-datacenter requests - -Each Consul datacenter maintains its own catalog of services and their health. By default, the information is not replicated across datacenters. 
WAN federation and cluster peering are two multi-datacenter deployment models that enable service connectivity across datacenters. - -### WAN federation - -WAN federation is an approach for connecting multiple Consul datacenters. It requires you to designate a _primary datacenter_ that contains authoritative information about all datacenters, including service mesh configurations and access control list (ACL) resources. - -In this model, when a client agent requests a resource in a remote secondary datacenter, a local Consul server forwards the RPC request to a remote Consul server that has access to the resource. A remote server sends the results to the local server. If the remote datacenter is unavailable, its resources are also unavailable. By default, WAN-federated servers send cross-datacenter requests over TCP on port `8300`. - -You can configure control plane and data plane traffic to go through mesh gateways, which simplifies networking requirements. - -> **Hands-on**: To enable services to communicate across datacenters when the ACL system is enabled, refer to the [ACL Replication for Multiple Datacenters](/consul/tutorials/security-operations/access-control-replication-multiple-datacenters) tutorial. - -#### WAN gossip pool - -Servers may also participate in a WAN gossip pool, which is optimized for greater latency imposed by the Internet. The pool enables servers to exchange information, such as their addresses and health, and gracefully handle loss of connectivity in the event of a failure. - -In the following diagram, the servers in each data center participate in a WAN gossip pool by sending data over TCP/UDP on port `8302`. Refer to [Gossip Protocol](/consul/docs/architecture/gossip) for additional information. - - - - - -![Diagram of the Consul LAN gossip pool](/img/consul-arch/consul-arch-overview-wan-gossip-cross-cluster.svg) - - - - -![Diagram of RPC communication between Consul agents](/img/consul-arch/consul-arch-overview-remote-dc-forwarding-cross-cluster.svg) - - - - -### Cluster peering - -You can create peering connections between two or more independent clusters so that services deployed to different datacenters or admin partitions can communicate. An [admin partition](/consul/docs/enterprise/admin-partitions) is a feature in Consul Enterprise that enables you to define isolated network regions that use the same Consul servers. In the cluster peering model, you create a token in one of the datacenters or partitions and configure another datacenter or partition to present the token to establish the connection. - -Refer to [What is Cluster Peering?](/consul/docs/connect/cluster-peering) for additional information. diff --git a/website/content/docs/architecture/jepsen.mdx b/website/content/docs/architecture/jepsen.mdx deleted file mode 100644 index 44a433c8d35b..000000000000 --- a/website/content/docs/architecture/jepsen.mdx +++ /dev/null @@ -1,118 +0,0 @@ ---- -layout: docs -page_title: Consistency Verification | Jepsen Testing Results -description: >- - Jepsen is a tool to measure the reliability and consistency of distributed systems across network partitions. Learn about the Jepsen testing performed on Consul to ensure it gracefully recovers from partitions and maintains consistent state. ---- - -# Jepsen Testing Results - -[Jepsen](http://aphyr.com/posts/281-call-me-maybe-carly-rae-jepsen-and-the-perils-of-network-partitions) -is a tool, written by Kyle Kingsbury, designed to test the partition -tolerance of distributed systems. 
It creates network partitions while fuzzing -the system with random operations. The results are analyzed to see if the system -violates any of the consistency properties it claims to have. - -As part of our Consul testing, we ran a Jepsen test to determine if -any consistency issues could be uncovered. In our testing, Consul -gracefully recovered from partitions without introducing any consistency -issues. - -## Running the tests - -At the moment, testing with Jepsen is rather complex as it requires -setting up multiple virtual machines, SSH keys, DNS configuration, -and a working Clojure environment. We hope to contribute our Consul -testing code upstream and to provide a Vagrant environment for Jepsen -testing soon. - -## Output - -Below is the output captured from Jepsen. We ran Jepsen multiple times, -and it passed each time. This output is only representative of a single -run and has been edited for length. Please reach out on [Consul's Discuss](https://discuss.hashicorp.com/c/consul) -if you would like to reproduce the Jepsen results. - - - -```shell-session -$ lein test :only jepsen.system.consul-test - -lein test jepsen.system.consul-test -INFO jepsen.os.debian - :n5 setting up debian -INFO jepsen.os.debian - :n3 setting up debian -INFO jepsen.os.debian - :n4 setting up debian -INFO jepsen.os.debian - :n1 setting up debian -INFO jepsen.os.debian - :n2 setting up debian -INFO jepsen.os.debian - :n4 debian set up -INFO jepsen.os.debian - :n5 debian set up -INFO jepsen.os.debian - :n3 debian set up -INFO jepsen.os.debian - :n1 debian set up -INFO jepsen.os.debian - :n2 debian set up -INFO jepsen.system.consul - :n1 consul nuked -INFO jepsen.system.consul - :n4 consul nuked -INFO jepsen.system.consul - :n5 consul nuked -INFO jepsen.system.consul - :n3 consul nuked -INFO jepsen.system.consul - :n2 consul nuked -INFO jepsen.system.consul - Running nodes: {:n1 false, :n2 false, :n3 false, :n4 false, :n5 false} -INFO jepsen.system.consul - :n2 consul nuked -INFO jepsen.system.consul - :n3 consul nuked -INFO jepsen.system.consul - :n4 consul nuked -INFO jepsen.system.consul - :n5 consul nuked -INFO jepsen.system.consul - :n1 consul nuked -INFO jepsen.system.consul - :n1 starting consul -INFO jepsen.system.consul - :n2 starting consul -INFO jepsen.system.consul - :n4 starting consul -INFO jepsen.system.consul - :n5 starting consul -INFO jepsen.system.consul - :n3 starting consul -INFO jepsen.system.consul - :n3 consul ready -INFO jepsen.system.consul - :n2 consul ready -INFO jepsen.system.consul - Running nodes: {:n1 true, :n2 true, :n3 true, :n4 true, :n5 true} -INFO jepsen.system.consul - :n5 consul ready -INFO jepsen.system.consul - :n1 consul ready -INFO jepsen.system.consul - :n4 consul ready -INFO jepsen.core - Worker 0 starting -INFO jepsen.core - Worker 2 starting -INFO jepsen.core - Worker 1 starting -INFO jepsen.core - Worker 3 starting -INFO jepsen.core - Worker 4 starting -INFO jepsen.util - 2 :invoke :read nil -INFO jepsen.util - 3 :invoke :cas [4 4] -INFO jepsen.util - 0 :invoke :write 4 -INFO jepsen.util - 1 :invoke :write 1 -INFO jepsen.util - 4 :invoke :cas [4 0] -INFO jepsen.util - 2 :ok :read nil -INFO jepsen.util - 4 :fail :cas [4 0] -(Log Truncated...) 
-INFO jepsen.util - 4 :invoke :cas [3 3] -INFO jepsen.util - 4 :fail :cas [3 3] -INFO jepsen.util - :nemesis :info :stop nil -INFO jepsen.util - :nemesis :info :stop "fully connected" -INFO jepsen.util - 0 :fail :read nil -INFO jepsen.util - 1 :fail :write 0 -INFO jepsen.util - :nemesis :info :stop nil -INFO jepsen.util - :nemesis :info :stop "fully connected" -INFO jepsen.core - nemesis done -INFO jepsen.core - Worker 3 done -INFO jepsen.util - 1 :invoke :read nil -INFO jepsen.core - Worker 2 done -INFO jepsen.core - Worker 4 done -INFO jepsen.core - Worker 0 done -INFO jepsen.util - 1 :ok :read 3 -INFO jepsen.core - Worker 1 done -INFO jepsen.core - Run complete, writing -INFO jepsen.core - Analyzing -(Log Truncated...) -INFO jepsen.core - Analysis complete -INFO jepsen.system.consul - :n3 consul nuked -INFO jepsen.system.consul - :n2 consul nuked -INFO jepsen.system.consul - :n4 consul nuked -INFO jepsen.system.consul - :n1 consul nuked -INFO jepsen.system.consul - :n5 consul nuked -1964 element history linearizable. :D - -Ran 1 tests containing 1 assertions. -0 failures, 0 errors. -``` - - diff --git a/website/content/docs/architecture/scale.mdx b/website/content/docs/architecture/scale.mdx deleted file mode 100644 index 119e05454abf..000000000000 --- a/website/content/docs/architecture/scale.mdx +++ /dev/null @@ -1,283 +0,0 @@ ---- -layout: docs -page_title: Recommendations for operating Consul at scale -description: >- - When using Consul for large scale deployments, you can ensure network resilience by tailoring your network to your needs. Learn more about HashiCorp's recommendations for deploying Consul at scale. ---- - -# Operating Consul at Scale - -This page describes how Consul's architecture impacts its performance with large scale deployments and shares recommendations for operating Consul in production at scale. - -## Overview - -Consul is a distributed service networking system deployed as a centralized set of servers that coordinate network activity using sidecars that are located alongside user workloads. When Consul is used for its service mesh capabilities, servers also generate configurations for Envoy proxies that run alongside service instances. These proxies support service mesh capabilities like end-to-end mTLS and progressive deployments. - -Consul can be deployed in either a single datacenter or across multiple datacenters by establishing WAN federation or peering connections. In this context, a datacenter refers to a named environment whose hosts can communicate with low networking latency. Typically, users map a Consul datacenter to a cloud provider region such as AWS `us-east-1` or Azure `East US`. - -To ensure consistency and high availability, Consul servers share data using the [Raft consensus protocol](/consul/docs/architecture/consensus). When persisting data, Consul uses BoltDB to store Raft logs and a custom file format for state snapshots. For more information, refer to [Consul architecture](/consul/docs/architecture). - -## General deployment recommendations - -This section provides general configuration and monitoring recommendations for operating Consul at scale. - -### Data plane resiliency - -To make service-to-service communication resilient against outages and failures, we recommend spreading multiple service instances for a service across fault domains. 
Resilient deployments spread services across multiples of the following: - -- Infrastructure-level availability zones -- Runtime platform instances, such as Kubernetes clusters -- Consul datacenters - -In the event that any individual domain experiences a failure, service failover ensures that healthy instances in other domains remain discoverable. Consul automatically provides service failover between instances within a single [admin partition](/consul/docs/enterprise/admin-partitions) or datacenter. - -Service failover across Consul datacenters must be configured in the datacenters before you can use it. Use one of the following methods to configure failover across datacenters: - -- **If you are using Consul service mesh**: Implement failover using [service-resolver configuration entries](/consul/docs/connect/config-entries/service-resolver#failover). -- **If you are using Consul service discovery without service mesh**: Implement [geo-redundant failover using prepared queries](/consul/tutorials/developer-discovery/automate-geo-failover). - -### Control plane resiliency - -When a large number services are deployed to a single datacenter, the Consul servers may experience slower network performance. To make the control plane more resilient against slowdowns and outages, limit the size of individual datacenters by spreading deployments across availability zones, runtimes, and datacenters. - -#### Datacenter size - -To ensure resiliency, we recommend limiting deployments to a maximum of 5,000 Consul client agents per Consul datacenter. There are two reasons for this recommendation: - -1. **Blast radius reduction**: When Consul suffers a server outage in a datacenter or region, _blast radius_ refers to the number of Consul clients or dataplanes attached to that datacenter that can no longer communicate as a result. We recommend limiting the total number of clients attached to a single Consul datacenter in order to reduce the size of its blast radius. Even though Consul is able to run clusters with 10,000 or more nodes, it takes longer to bring larger deployments back online after an outage, which impacts time to recovery. -1. **Agent gossip management**: Consul agents use the [gossip protocol](/consul/docs/architecture/gossip) to share membership information in a gossip pool. By default, all client agents in a single Consul datacenter are in a single gossip pool. Whenever an agent joins or leaves the gossip pool, the other agents propagate that event throughout the pool. If a Consul datacenter experiences _agent churn_, or a consistently high rate of agents joining and leaving a single pool, cluster performance may be affected by gossip messages being generated faster than they can be transmitted. The result is an ever-growing message queue. - -To mitigate these risks, we recommend a maximum of 5,000 Consul client agents in a single gossip pool. There are several strategies for making gossip pools smaller: - -1. Run exactly one Consul agent per host in the infrastructure. -1. Break up the single Consul datacenter into multiple smaller datacenters. -1. Enterprise users can define [network segments](/consul/docs/enterprise/network-segments/network-segments-overview) to divide the single gossip pool in the Consul datacenter into multiple smaller pools. - -If appropriate for your use case, we recommend breaking up a single Consul datacenter into multiple smaller datacenters. Running multiple datacenters reduces your network’s blast radius more than applying network segments. 
- -Be aware that the number 5,000 is a heuristic for deployments. The number of agents you deploy per datacenter is limited by performance, not Consul itself. Because gossip stability risk is determined by _the rate of agent churn_ rather than _the number of nodes_, a gossip pool with mostly static nodes may be able to operate effectively with more than 5,000 agents. Meanwhile, a gossip pool with highly dynamic agents, such as spot fleet instances and serverless functions where 10% of agents are replaced each day, may need to be smaller than 5,000 agents. - -For additional information about the specific tests we conducted on Consul deployments at scale in order to generate these recommendations, refer to [Consul Scale Test Report to Observe Gossip Stability](https://www.hashicorp.com/blog/consul-scale-test-report-to-observe-gossip-stability) on the HashiCorp blog. - -For most use cases, a limit of 5,000 agents is appropriate. When the `consul.serf.queue.Intent` metric is consistently high, it is an indication that the gossip pool cannot keep up with the sustained level of churn. In this situation, reduce the churn by lowering the number agents per datacenter. - -#### Kubernetes-specific guidance - -In Kubernetes, even though it is possible to deploy Consul agents inside pods alongside services running in the same pod, this unsupported deployment pattern has known performance issues at scale. At large volumes, pod registration and deregistration in Kubernetes causes gossip instability that can lead to cascading failures as services are marked unhealthy, resulting in further cluster churn. - -In Consul v1.14 and higher, Consul on Kubernetes does not need to run client agents on every node in a cluster for service discovery and service mesh. This deployment configuration lowers Consul’s resource usage in the data plane, but requires additional resources in the control plane to process [xDS resources](/consul/docs/agent/config/config-files#xds-server-parameters). To learn more, refer to [simplified service mesh with Consul Dataplane](/consul/docs/connect/dataplane). - -**If you use Kubernetes and Consul as a backend for Vault**: Use Vault’s integrated storage backend instead of Consul. A runtime dependency conflict prevents Consul dataplanes from being compatible with Vault. If you need to use Consul v1.14 and higher as a backend for Vault in your Kubernetes deployment, create a separate Consul datacenter that is not federated or peered to your other Consul servers. You can size this datacenter according to your needs and use it exclusively for backend storage for Vault. - -## Consul server deployment recommendations - -Consul server agents are an important part of Consul’s architecture. This section summarizes the differences between running managed and self-managed servers, as well as recommendations on the number of servers to run, how to deploy servers across redundancy zones, hardware requirements, and cloud provider integrations. - -### Consul server runtimes - -Consul servers can be deployed on a few different runtimes: - -- **HashiCorp Cloud Platform (HCP) Consul Dedicated**. These Consul servers are deployed in a hosted environment managed by HCP. To get started with HCP Consul Dedicated servers in Kubernetes or VM deployments, refer to the [Deploy HCP Consul Dedicated tutorial](/consul/tutorials/get-started-hcp/hcp-gs-deploy). -- **VMs or bare metal servers (Self-managed)**. 
To get started with Consul on VMs or bare metal servers, refer to the [Deploy Consul server tutorial](/consul/tutorials/get-started-vms/virtual-machine-gs-deploy). For a full list of configuration options, refer to [Agents Overview](/consul/docs/agent). -- **Kubernetes (Self-managed)**. To get started with Consul on Kubernetes, refer to the [Deploy Consul on Kubernetes tutorial](/consul/tutorials/get-started-kubernetes/kubernetes-gs-deploy). -- **Other container environments, including Docker, Rancher, and Mesos (Self-managed)**. - -When operating Consul at scale, self-managed VM or bare metal server deployments offer the most flexibility. Some Consul Enterprise features that can enhance fault tolerance and read scalability, such as [redundancy zones](/consul/docs/enterprise/redundancy) and [read replicas](/consul/docs/enterprise/read-scale), are not available to server agents on Kubernetes runtimes. To learn more, refer to [Consul Enterprise feature availability by runtime](/consul/docs/enterprise#feature-availability-by-runtime). - -### Number of Consul servers - -Determining the number of Consul servers to deploy on your network has two key considerations: - -1. **Fault tolerance**: The number of server outages your deployment can tolerate while maintaining quorum. Additional servers increase a network’s fault tolerance. -1. **Performance scalability**: To handle more requests, additional servers produce latency and slow the quorum process. Having too many servers impedes your network instead of helping it. - -Fault tolerance should determine your initial decision for how many Consul server agents to deploy. Our recommendation for the number of servers to deploy depends on whether you have access to Consul Enterprise redundancy zones: - -- **With redundancy zones**: Deploy 6 Consul servers across 3 availability zones. This deployment provides the performance of a 3 server deployment with the fault tolerance of a 7 server deployment. -- **Without redundancy zones**: Deploy 5 Consul servers across 3 availability zones. All 5 servers should be voting servers, not [read replicas](/consul/docs/enterprise/read-scale). - -For more details, refer to [Improving Consul Resilience](/consul/docs/architecture/improving-consul-resilience). - -### Server requirements - -To ensure your server nodes are a sufficient size, we recommend reviewing [hardware sizing for Consul servers](/consul/tutorials/production-deploy/reference-architecture#hardware-sizing-for-consul-servers). If your network needs to handle heavy workloads, refer to our recommendations in [read-heavy workload sources and solutions](#read-heavy-workload-sources-and-solutions) and [write-heavy workload sources and solutions](#write-heavy-workload-sources-and-solutions). - -#### File descriptors - -Consul's agents use network sockets for gossip communication with the other nodes and agents. As a result, servers create file descriptors for connections from clients, connections from other servers, watch handlers, health checks, and log files. For write-heavy clusters, you must increase the size limit for the number of file descriptions from the default value, 1024. We recommend using a number that is two times higher than your expected number of clients in the cluster. - -#### Auto scaling groups - -Auto scaling groups (ASGs) are infrastructure associations in cloud providers used to ensure a specific number of replicas are available for a deployment. 
When using ASGs for Consul servers, there are specific requirements and processes for bootstrapping Raft and maintaining quorum. - -We recommend using the [`bootstrap-expect` command-line flag](/consul/docs/agent/config/cli-flags#_bootstrap_expect) during cluster creation. However, if you spawn new servers to add to a cluster or upgrade servers, do not configure them to automatically bootstrap. If `bootstrap-expect` is set on these replicas, it is possible for them to create a separate Raft system, which causes a _split brain_ and leads to errors and general cluster instability. - -#### NUMA architecture awareness - -Some cloud providers offer extremely large instance sizes with Non-Uniform Memory Access (NUMA) architectures. Because the Go runtime is not NUMA aware, Consul is not NUMA aware. Even though you can run Consul on NUMA architecture, it will not take advantage of the multiprocessing capabilities. - -### Consistency modes - -Consul offers different [consistency modes](/consul/api-docs/features/consistency#stale) for both its DNS and HTTP APIs. - -#### DNS - -We strongly recommend using [stale consistency mode for DNS lookups](/consul/api-docs/features/consistency#consul-dns-queries) to optimize for performance over consistency when operating at scale. It is enabled by default and configured with `dns_config.allow_stale`. - -We also recommend that you do not configure [`dns_config.max_stale` to limit the staleness of DNS responses](/consul/api-docs/features/consistency#limiting-staleness-advanced-usage), as it may result in a prolonged outage if your Consul servers become overloaded. If bounded result consistency is required by a service, consider modifying the service to use consistent service discovery HTTP API queries instead of DNS lookups. - -Avoid using [`dns_config.use_cache`](/consul/docs/agent/config/config-files#dns_use_cache) when operating Consul at scale. Because the Consul agent cache allocates memory for each requested route and each allocation can live up to 3 days, severe memory issues may occur. To implement DNS caching, we instead recommend that you [configure TTLs for services and nodes](/consul/docs/services/discovery/dns-cache#ttl) to enable the DNS client to cache responses from Consul. - -#### HTTP API - -By default, all HTTP API read requests use the [`default` consistency mode](/consul/api-docs/features/consistency#default-1) unless overridden on a per-request basis. We do not recommend changing the default consistency mode for HTTP API requests. - -We also recommend that you do not configure [`http_config.discovery_max_stale`](/consul/api-docs/features/consistency#changing-the-default-consistency-mode-advanced-usage) to limit the staleness of HTTP responses. - -## Resource usage and metrics recommendations - -While operating Consul, monitor the CPU load on the Consul server agents and use metrics from agent telemetry to figure out the cause. Procedures for mitigating heavy resource usage depend on whether the load is caused by read operations, write operations, or Consul’s consensus protocol. - -### Read-heavy workload sources and solutions - -The highest CPU load usually belongs to the current leader. If the CPU load is high, request load is likely a major contributor. Check the following [server health metrics](/consul/docs/agent/telemetry#server-health): - -- `consul.rpc.*` - Traditional RPC metrics. The most relevant metrics for understanding server CPU load in read-heavy workloads are `consul.rpc.query` and `consul.rpc.queries_blocking`. 
-- `consul.grpc.server.*` - Metrics for the number of streams being processed by the server. -- `consul.xds.server.*` - Metrics for the Envoy xDS resources being processed by the server. In Consul v1.14 and higher, these metrics have the potential to become a significant source of read load. Refer to [Consul dataplanes](/consul/docs/connect/dataplane) for more information. - -Depending on your needs, choose one of the following strategies to mitigate server CPU load: - -- The fastest mitigation strategy is to vertically scale servers. However, this strategy increases compute costs and does not scale indefinitely. -- The most effective long term mitigation strategy is to use [stale consistency mode](/consul/api-docs/features/consistency#stale) for as many read requests as possible. In Consul v1.12 and higher, operators can use the [`consul.rpc.server.call` metric](/consul/docs/agent/telemetry#server-workload) to identify the most frequent type of read requests made to the Consul servers. Cross reference the results with each endpoint’s [HTTP API documentation](/consul/api-docs) and use stale consistency for endpoints that support it. -- If most read requests already use stale consistency mode and you still need to reduce your request load, add more non-voting servers to your deployment. You can use either [redundancy zones](/consul/docs/enterprise/redundancy) or [read replicas](/consul/docs/enterprise/read-scale) to scale reads without impacting write latency. We recommend adding more servers to redundancy zones because they improve both fault tolerance and stale read scalability. -- In Consul v1.14 and higher, servers handle Envoy XDS streams for [Consul Dataplane deployments](/consul/docs/connect/dataplane) in stale consistency mode. As a result, server consistency mode is not configurable. Use the `consul.xds.server.*` metrics to identify issues related to XDS streams. - -### Write-heavy workload sources and solutions - -Consul is write-limited by disk I/O. For write-heavy workloads, we recommend using NVMe disks. - -As a starting point, you should make sure your hardware meets the requirements for [large size server clusters](/consul/tutorials/production-deploy/reference-architecture#hardware-sizing-for-consul-servers), which has 7500+ IOps and 250+ MB/s disk throughput. IOps should be around 5 to 10 times the expected write rate. Conduct further analysis around disk sizing and your expected write rates to understand your network’s specific needs. - -If you use network storage, such as AWS EBS, we recommend provisioned I/O volumes. While general purpose volumes function properly, their burstable IOps make it harder to capacity plan. A small peak in writes may not trigger alerts, but as usage grows you may reach a point where the burst limit runs out and workload performance worsens. - -For more information, refer to the [server performance read/write tuning](/consul/docs/install/performance#read-write-tuning). - -### Raft database performance sources and solutions - -Consul servers use the [Raft consensus protocol](/consul/docs/architecture/consensus) to maintain a consistent and fault-tolerant state. Raft stores most Consul data in a MemDB database, which is an in-memory database with indexing. In order to tolerate restarts and power outages, Consul writes Raft logs to disk using BoltDB. Refer to [Agent telemetry](/consul/docs/agent/telemetry) for more information on metrics for detecting write health. 
- -To monitor overall transaction performance, check for spikes in the [Transaction timing metrics](/consul/docs/agent/telemetry#transaction-timing). You can also use the [Raft replication capacity issues metrics](/consul/docs/agent/telemetry#raft-replication-capacity-issues) to monitor Raft log snapshots and restores, as spikes and longer durations can be symptoms of overall write and disk contention issues. - -In Consul v1.11 and higher, you can also monitor Raft performance with the [`consul.raft.boltdb.*` metrics](/consul/docs/agent/telemetry#bolt-db-performance). We recommend monitoring `consul.raft.boltdb.storeLogs` for increased activity above normal operating patterns. - -Refer to [Consul agent telemetry](/consul/docs/agent/telemetry#bolt-db-performance) for more information on agent metrics and how to use them. - -#### Raft database size - -Raft writes logs to BoltDB, which is designed as a single grow-only file. As a result, if you add 1GB of log entries and then you take a snapshot, only a small number of recent log entries may appear in the file. However, the actual file on disk never shrinks smaller than the 1GB size it grew. - -If you need to reclaim disk space, use the `bbolt` CLI to copy the data to a new database and repoint to the new database in the process. However, be aware that the `bbolt compact` command requires the database to be offline while being pointed to the new database. - -In many cases, including in large clusters, disk space is not a primary concern because Raft logs rarely grow larger than a small number of GiB. However, an inflated file with lots of free space significantly degrades write performance overall due to _freelist management_. - -After they are written to disk, Raft logs are eventually captured in a snapshot and log nodes are removed from BoltDB. BoltDB keeps track of the pages for the removed nodes in its freelist. BoltDB also writes this freelist to disk every time there is a Raft write. When the Raft log grows large quickly and then gets truncated, the size of the freelist can become very large. In the worst case reported to us, the freelist was over 10MB. When this large freelist is written to disk on every Raft commit, the result is a large write amplification for what should be a small Raft commit. - -To figure out if a Consul server’s disk performance issues are the result of BoldDB’s freelist, try the following strategies: - -- Compare network bandwidth inbound to the server against disk write bandwidth. If _disk write bandwidth_ is greater than or equal to 5 times the _inbound network bandwidth_, the disks are likely experiencing freelist management performance issues. While BoltDB freelist may cause problems at ratios lower than 5 to 1, high write bandwidth to inbound bandwidth ratios are a reliable indicator that BoltDB freelist is causing a problem. -- Use the [`consul.raft.leader.dispatchLog` metric](/consul/docs/agent/telemetry#server-health) to get information about how long it takes to write a batch of logs to disk. -- In Consul v1.13 and higher, you can use [Raft thread saturation metrics](/consul/docs/agent/telemetry#raft-thread-saturation) to figure out if Raft is experiencing back pressure and is unable to accept new work due disk limitations. - -In Consul v1.11 and higher, you can prevent BoltDB from writing the freelist to disk by setting [`raftboltdb.NoFreelistSync`](/consul/docs/agent/config/config-files#NoFreelistSync) to `true`. This setting causes BoltDB to retain the freelist in memory instead. 
However, be aware that when BoltDB restarts, it needs to scan the database file to manually create the freelist. Small delays in startup may occur. On a fast disk, we measured these delays at the order of tens of seconds for a raft.db file that was 5GiB in size with only 250MiB of used pages. - -In general, set [`raftboltdb.NoFreelistSync`](/consul/docs/agent/config/config-files#NoFreelistSync) to `true` to produce the following effects: - -- Reduce the amount of data written to disk -- Increase the amount of time it takes to load the raft.db file on startup - -We recommend operators optimize networks according to their individual concerns. For example, if your server runs into disk performance issues but Consul servers do not restart often, setting [`raftboltdb.NoFreelistSync`](/consul/docs/agent/config/config-files#NoFreelistSync) to `true` may solve your problems. However, the same action causes issues for deployments with large database files and frequent server restarts. - -#### Raft snapshots - -Each state change produces a Raft log entry, and each Consul server receives the same sequence of log entries, which results in servers sharing the same state. The sequence of Raft logs is periodically compacted by the leader into a _snapshot_ of state history. These snapshots are internal to Raft and are not the same as the snapshots generated through Consul's API, although they contain the same data. Raft snapshots are stored in the server's data directory in the `raft/` folder, alongside the logs in `raft.db`. - -When you add a new Consul server, it must catch up to the current state. It receives the latest snapshot from the leader followed by the sequence of logs between that snapshot and the leader’s current state. Each Raft log has a sequence number and each snapshot contains the last sequence number included in the snapshot. A combination of write-heavy workloads, a large state, congested networks, or busy servers makes it possible for new servers to struggle to catch up to the current state before the next log they need from the leader has already been truncated. The result is a _snapshot install loop_. - -For example, if snapshot A on the leader has an index of 99 and the current index is 150, then when a new server comes online the leader streams snapshot A to the new server for it to restore. However, this snapshot only enables the new server to catch up to index 99. Not only does the new server still need to catch up to index 150, but the leader continued to commit Raft logs in the meantime. - -When the leader takes snapshot B at index 199, it truncates the logs that accumulated between snapshot A and snapshot B, which means it truncates Raft logs with indexes between 100 and 199. - -Because the new server restored snapshot A, the new server has a current index of 99. It requests logs 100 to 150 because index 150 was the current index when it started the replication restore process. At this point, the leader recognizes that it only has logs 200 and higher, and does not have logs for indexes 100 to 150. The leader determines that the new server’s state is stale and starts the process over by sending the new server the latest snapshot, snapshot B. - -Consul keeps a configurable number of [Raft trailing logs](/consul/docs/agent/config/config-files#raft_trailing_logs) to prevent the snapshot install loop from repeating. The trailing logs are the last logs that went into the snapshot, and the new server can more easily catch up to the current state using these logs. 
The default Raft trailing logs configuration value is suitable for most deployments. - -In Consul v1.10 and higher, operators can try to prevent a snapshot install loop by monitoring and comparing Consul servers’ `consul.raft.rpc.installSnapshot` and `consul.raft.leader.oldestLogAge` timing metrics. Monitor these metrics for the following situations: - -- After truncation, the lowest number on `consul.raft.leader.oldestLogAge` should always be at least two times higher than the lowest number for `consul.raft.rpc.installSnapshot`. -- If these metrics are too close, increase the number of Raft trailing logs, which increases `consul.raft.leader.oldestLogAge`. Do not set the Raft trailing logs higher than necessary, as it can negatively affect write throughput and latency. - -For more information, refer to [Raft Replication Capacity Issues](/consul/docs/agent/telemetry#raft-replication-capacity-issues). - -## Performance considerations for specific use cases - -This section provides configuration and monitoring recommendations for Consul deployments according to the features you prioritize and their use cases. - -### Service discovery - -To optimize performance for service discovery, we recommend deploying multiple small clusters with consistent numbers of service instances and watches. - -Several factors influence Consul performance at scale when used primarily for its service discovery and health check features. The factors you have control over include: - -- The overall number of registered service instances -- The use of [stale reads](/consul/api-docs/features/consistency#consul-dns-queries) for DNS queries -- The number of entities, such as Consul client agents or dataplane components, that are monitoring Consul for changes in a service's instances, including registration and health status. When any service change occurs, all of those entities incur a computational cost because they must process the state change and reconcile it with previously known data for the service. In addition, the Consul server agents also incur a computational cost when sending these updates. -- Number of [watches](/consul/docs/dynamic-app-config/watches) monitoring for changes to a service. -- Rate of catalog updates, which is affected by the following events: - - A service instance’s health check status changes - - A service instance’s node loses connectivity to Consul servers - - The contents of the [service definition file](/consul/docs/services/configuration/services-configuration-reference) changes - - Service instances are registered or deregistered - - Orchestrators such as Kubernetes or Nomad move a service to a new node - -These factors can occur in combination with one another. Overall, the amount of work the servers complete for service discovery is the product of these factors: - -- Data size, which changes as the number of services and service instances increases -- The catalog update rate -- The number of active watches - -Because it is typical for these factors to increase in number as clusters grow, the CPU and network resources the servers require to distribute updates may eventually exceed linear growth. - -In situations where you can’t run a Consul client agent alongside the service instance you want to register with Consul, such as instances hosted externally or on legacy infrastructure, we recommend using [Consul ESM](https://github.com/hashicorp/consul-esm). - -Consul ESM enables health checks and monitoring for external services. 
When using Consul ESM, we recommend running multiple instances to ensure redundancy. - -### Service mesh - -Because Consul’s service mesh uses service discovery subsystems, service mesh performance is also optimized by deploying multiple small clusters with consistent numbers of service instances and watches. Service mesh performance is influenced by the following additional factors: - -- The [transparent proxy](/consul/docs/connect/transparent-proxy) feature causes client agents to listen for service instance updates across all services instead of a subset. To prevent performance issues, we recommend that you do not use the permissive intention, `default: allow`, with the transparent proxy feature. When combined, every service instance update propagates to every proxy, which causes additional server load. -- When you use the [built-in service mesh CA provider](/consul/docs/connect/ca/consul#built-in-ca), Consul leaders are responsible for signing certificates used for mTLS across the service mesh. The impact on CPU utilization depends on the total number of service instances and configured certificate TTLs. You can use the [CA provider configuration options](/consul/docs/agent/config/config-files#common-ca-config-options) to control the number of requests a server processes. We recommend adjusting [`csr_max_concurrent`](/consul/docs/agent/config/config-files#ca_csr_max_concurrent) and [`csr_max_per_second`](/consul/docs/agent/config/config-files#ca_csr_max_concurrent) to suit your environment. - -### K/V store - -While the K/V store in Consul has some similarities to object stores we recommend that you do not use it as a primary application data store. - -When using Consul's K/V store for application configuration and metadata, we recommend the following to optimize performance: - -- Values must be below 512 KB and transactions should be below 64 operations. -- The keyspace must be well bound. While 10,000 keys may not affect performance, millions of keys are more likely to cause performance issues. -- Total data size must fit in memory, with additional room for indexes. We recommend that the in-memory size is 3 times the raw key value size. -- Total data size should remain below 1 GB. Larger snapshots are possible on suitably fast hardware, but they significantly increase recovery times and the operational complexity needed for replication. We recommend limiting data size to keep the cluster healthy and able to recover during maintenance and outages. -- The K/V store is optimized for reading. To know when you need to make changes to server resources and capacity, we recommend carefully monitoring update rates after they exceed more than a hundred updates per second across the cluster. -- We recommend that you do not use the K/V store as a general purpose database or object store. - -In addition, we recommend that you do not use the [blocking query mechanism](/consul/api-docs/features/blocking) to listen for updates when your K/V store’s update rate is high. When a K/V result is updated too fast, blocking query loops degrade into busy loops. These loops consume excessive client CPU and cause high server load until appropriately throttled. Watching large key prefixes is unlikely to solve the issue because returning the entire key prefix every time it updates can quickly consume a lot of bandwidth. - -### Backend for Vault - -At scale, using Consul as a backend for Vault results in increased memory and CPU utilization on Consul servers. 
It also produces unbounded growth in Consul’s data persistence layer that is proportional to both the amount of data being stored in Vault and the rate the data is updated.
-
-In situations where Consul handles large amounts of data and has high write throughput, we recommend adding monitoring for the [capacity and health of raft replication on servers](/consul/docs/agent/telemetry#raft-replication-capacity-issues). If the server experiences heavy load when the size of its stored data is large enough, a follower may be unable to catch up on replication and become a voter after restarting. This situation occurs when the time it takes for a server to restore from disk takes longer than it takes for the leader to write a new snapshot and truncate its logs. Refer to [Raft snapshots](#raft-snapshots) for more information.
-
-Vault v1.4 and higher provides [integrated storage](/vault/docs/concepts/integrated-storage) as its recommended storage option. If you currently use Consul as a storage backend for Vault, we recommend switching to integrated storage. For a comparison between Vault's integrated storage and Consul as a backend for Vault, refer to [storage backends in the Vault documentation](/vault/docs/configuration/storage#integrated-storage-vs-consul-as-vault-storage). For detailed guidance on migrating the Vault backend from Consul to Vault's integrated storage, refer to the [storage migration tutorial](/vault/docs/configuration/storage#integrated-storage-vs-consul-as-vault-storage). Integrated storage improves resiliency by preventing a Consul outage from also affecting Vault functionality.
diff --git a/website/content/docs/architecture/security.mdx b/website/content/docs/architecture/security.mdx
new file mode 100644
index 000000000000..32b588b3d459
--- /dev/null
+++ b/website/content/docs/architecture/security.mdx
@@ -0,0 +1,22 @@
+---
+layout: docs
+page_title: Consul security architecture
+description: >-
+  Consul includes built-in security features and options to integrate existing security features to secure communication between users, agents in the control plane, and services in the data plane.
+---
+
+# Consul security architecture
+
+This page introduces the parts of Consul's architecture that secure communication between users, between Consul agents in the control plane, and between services in your application's data plane.
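+For illustration, a minimal server agent configuration that enables several of these controls might look like the following sketch. The certificate paths and gossip key are placeholders, and depending on your Consul version the TLS options may instead be grouped under a `tls` stanza.
+
+```hcl
+# Illustrative settings only; generate your own key with `consul keygen`
+# and supply real certificate paths.
+encrypt = "<base64-gossip-encryption-key>"
+
+verify_incoming = true
+verify_outgoing = true
+ca_file         = "/etc/consul.d/tls/consul-ca.pem"
+cert_file       = "/etc/consul.d/tls/server.pem"
+key_file        = "/etc/consul.d/tls/server-key.pem"
+
+acl {
+  enabled        = true
+  default_policy = "deny"
+}
+```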
+ +## Control plane + +@include 'tables/compare/architecture/security/control-plane.mdx' + +## Data plane + +@include 'tables/compare/architecture/security/data-plane.mdx' + +## Best practices + +@include 'text/best-practice/architecture/security.mdx' \ No newline at end of file diff --git a/website/content/docs/dynamic-app-config/sessions/application-leader-election.mdx b/website/content/docs/automate/application-leader-election.mdx similarity index 94% rename from website/content/docs/dynamic-app-config/sessions/application-leader-election.mdx rename to website/content/docs/automate/application-leader-election.mdx index 5b14bcdc9e10..796706f429cd 100644 --- a/website/content/docs/dynamic-app-config/sessions/application-leader-election.mdx +++ b/website/content/docs/automate/application-leader-election.mdx @@ -7,16 +7,17 @@ description: >- # Application leader election -This topic describes the process for building client-side leader elections for service instances using Consul's [session mechanism for building distributed locks](/consul/docs/dynamic-app-config/sessions) and the [Consul key/value store](/consul/docs/dynamic-app-config/kv), which is Consul's key/value datastore. +This topic describes the process for building client-side leader elections for service instances using Consul's [session mechanism for building distributed locks](/consul/docs/automate/session) and the [Consul key/value store](/consul/docs/automate/kv), which is Consul's key/value datastore. -This topic is not related to Consul's leader election. For more information about the Raft leader election used internally by Consul, refer to -[consensus protocol](/consul/docs/architecture/consensus) documentation. +This topic is not related to Consul's leader election. For more information +about the Raft leader election used internally by Consul, refer to the +[consensus protocol](/consul/docs/concept/consensus) documentation. ## Background Some distributed applications, like HDFS or ActiveMQ, require setting up one instance as a leader to ensure application data is current and stable. -Consul's support for [sessions](/consul/docs/dynamic-app-config/sessions) and [watches](/consul/docs/dynamic-app-config/watches) allows you to build a client-side leader election process where clients use a lock on a key in the KV datastore to ensure mutual exclusion and to gracefully handle failures. +Consul's support for [sessions](/consul/docs/automate/session) and [watches](/consul/docs/automate/watch) allows you to build a client-side leader election process where clients use a lock on a key in the KV datastore to ensure mutual exclusion and to gracefully handle failures. All service instances that are participating should coordinate on a key format. We recommend the following pattern: @@ -32,7 +33,7 @@ service//leader - `session:write` permissions over the service session name - `key:write` permissions over the key - The `curl` command - + Expose the token using the `CONSUL_HTTP_TOKEN` environment variable. ## Client-side leader election procedure @@ -54,7 +55,7 @@ The workflow for building a client-side leader election process has the followin ## Create a new session -Create a configuration for the session. +Create a configuration for the session. The minimum viable configuration requires that you specify the session name. The following example demonstrates this configuration. @@ -203,7 +204,7 @@ Error! Did not acquire lock This example used the node's `hostname` as the key data. 
This data can be used by the other services to create configuration files. -Be aware that this locking system has no enforcement mechanism that requires clients to acquire a lock before they perform an operation. Any client can read, write, and delete a key without owning the corresponding lock. +Be aware that this locking system has no enforcement mechanism that requires clients to acquire a lock before they perform an operation. Any client can read, write, and delete a key without owning the corresponding lock. ## Watch the KV key for locks @@ -393,4 +394,3 @@ Success! Lock released on: service/leader After a lock is released, the key data do not show a value for `Session` in the results. Other clients can use this as a way to coordinate their lock requests. - diff --git a/website/content/docs/automate/consul-template/configure.mdx b/website/content/docs/automate/consul-template/configure.mdx new file mode 100644 index 000000000000..21982dca1b08 --- /dev/null +++ b/website/content/docs/automate/consul-template/configure.mdx @@ -0,0 +1,54 @@ +--- +layout: docs +page_title: Configure Consul Template +description: >- + Consul Template is a tool available as a distinct binary that enables dynamic application configuration and secrets rotation for Consul deployments based on Go templates. +--- + +# Configure Consul Template + +You can configure Consul Template with a configuration file written in the [HashiCorp Configuration Language](https://github.com/hashicorp/hcl). The configuration is also JSON compatible. +For a full list of configuration options, refer to the [Consul Template configuration reference](/consul/docs/reference/consul-template/configuration). + +To start using Consul Template, run the Consul Template CLI with the `-config` flag pointing at the configuration file: + +```shell-session +$ consul-template -config "/my/config.hcl" +``` + +Specify this argument multiple times to load multiple configuration files. The right-most configuration takes the highest precedence. If you provide the path to a directory instead of a file, all of the files in the given directory are merged in [lexical order](http://golang.org/pkg/path/filepath/#Walk), recursively. Be aware that symbolic links are not followed. + +The full list of available commands and options is available in the [Consul Template CLI reference](/consul/docs/reference/consul-template/cli). + +~> **Note** Commands specified on the CLI take precedence over a config file. + +Not all available fields for the configuration file are required. For example, if you are not retrieving secrets from Vault, you do not need to specify a Vault configuration section. Similarly, if you are not logging to syslog, you do not need to specify a syslog configuration. + +For additional security, tokens may also be read from the environment using the `CONSUL_TOKEN` or `VAULT_TOKEN` environment variables. We recommend that you do not include plain-text tokens in a configuration file. + +## Example + +The following example configures Consul Template for a Consul agent and renders a template with a value from [the Consul KV store](/consul/docs/automate/kv). It writes the output to a file on disk. 
+
+```hcl
+consul {
+  address = "127.0.0.1:8500"
+
+  auth {
+    enabled  = true
+    username = "test"
+    password = "test"
+  }
+}
+
+log_level = "warn"
+
+template {
+  contents    = "{{key \"hello\"}}"
+  destination = "out.txt"
+  exec {
+    command = "cat out.txt"
+  }
+}
+```
+
diff --git a/website/content/docs/automate/consul-template/index.mdx b/website/content/docs/automate/consul-template/index.mdx
new file mode 100644
index 000000000000..519b8035b7c1
--- /dev/null
+++ b/website/content/docs/automate/consul-template/index.mdx
@@ -0,0 +1,173 @@
+---
+layout: docs
+page_title: Consul Template
+description: >-
+  Consul Template is a tool available as a distinct binary that enables dynamic application configuration and secrets rotation for Consul deployments based on Go templates.
+---
+
+# Consul Template
+
+This topic provides an overview of the Consul Template tool, which enables a programmatic method for rendering configuration files from a variety of locations, including the Consul KV store. It is an effective workflow option for replacing complicated API queries that often require custom formatting.
+
+For more information about the KV store and using it to automatically configure application deployments, refer to [Consul key/value (KV) store overview](/consul/docs/automate/kv).
+
+## Introduction
+
+The Consul Template tool is not part of the Consul binary. It has a [dedicated GitHub repo](https://github.com/hashicorp/consul-template) and you must [install Consul Template](/consul/docs/automate/consul-template/install) before running it on the command line.
+
+Consul Template is based on Go templates and shares many of the same attributes. When initiated, Consul Template reads one or more template files and queries Consul for data to render the full configuration.
+
+In a typical scenario, you run Consul Template as a daemon that fetches the initial values and then continues to watch for updates. The template re-renders whenever there are relevant changes in the datacenter. The template can also run arbitrary commands after the update process completes. For example, it can send the HUP signal to the load balancer service after a configuration change has been made.
+
+The Consul Template tool is flexible, and it can fit into many different environments and workflows. Depending on the use case, you may have a single Consul Template instance on a handful of hosts, or you may need to run several instances on every host. Each Consul Template process can manage multiple unrelated files and removes duplicated information as needed when files share data dependencies.
+
+Use Consul Template in the following situations:
+
+1. **Update configuration files**. The Consul Template tool can be used to update service configuration files. A common use case is managing load balancer configuration files that need to be updated regularly in a dynamic infrastructure.
+
+1. **Update configuration secrets**. You can configure Consul Template to use Vault secret engines to generate secrets for your application's configuration, including encryption keys, TLS certificates, and passwords. Consul Template renders these secrets into files or environment variables for secure consumption by your application.
+
+1. **Discover data about the Consul datacenter and service**. It is possible to collect information about the services in your Consul datacenter. For example, you could
+   collect a list of all services running in the datacenter or you could discover all service addresses for the Redis service.
Be aware that this use case has limited scope for production. + +## Workflow + +A typical workflow to add Consul Template to your datacenter consists of the following steps: + +1. [Install Consul Template](/consul/docs/automate/consul-template/install) on nodes in your datacenter. +1. [Configure Consul Template](/consul/docs/automate/consul-template/configure) on nodes on your datacenter. +1. Create a template for the data you need to retrieve. There are different options in order of complexity: + 1. Use an inline template for simple requests using the [`contents`](/consul/docs/reference/consul-template/configuration#contents) parameter. + 1. Create a template file using the [available functions](/consul/docs/reference/consul-template/go). + 1. [Define a custom plugin](/consul/docs/automate/consul-template/plugins) to execute custom data manipulation. +1. [Render the template](/consul/docs/automate/consul-template/render) and apply it to your datacenter. +1. [View Consul Template logs](/consul/docs/automate/consul-template/log) to monitor execution and debug issues. + +## Use case: Consul KV + +In this example, you render a template that pulls the HashiCorp address from Consul KV. To do this, you create a template that contains the HashiCorp address, run the `consul-template` command, add a value to Consul KV for HashiCorp's address, and finally view the rendered file. + + + +First, create a template file `find_address.tpl` to query Consul's KV store. + + + +```go +{{ key "/hashicorp/street_address" }} +``` + + + +Next, run `consul-template` specifying both the template to use and the file to update. + +```shell-session +$ consul-template -template "find_address.tpl:hashicorp_address.txt" +``` + +The `consul-template` process will continue to run until you kill it with `CTRL+C`. For now, leave it running. + +Finally, open a new terminal so you can write data to the KV store in Consul using the command line interface. + +```shell-session +$ consul kv put hashicorp/street_address "101 2nd St" + +Success! Data written to: hashicorp/street_address +``` + +Verify the data was written by viewing the `hashicorp_address.txt` file. This file is located in the same directory where Consul Template is running. + +```shell-session +$ cat hashicorp_address.txt + +101 2nd St +``` + +If you update the key `hashicorp/street_address`, you can observe the changes to the file immediately. + +Change the content of the key in Consul to observe the process. + +```shell-session +$ consul kv put hashicorp/street_address "22b Baker ST" + +Success! Data written to: hashicorp/street_address +``` + +Verify the new value was written by viewing the `hashicorp_address.txt` file. + +```shell-session +$ cat hashicorp_address.txt + +22b Baker ST +``` + +You can now kill the `consul-template` process with `CTRL+c`. + +## Use case: discover all services + +In this example, you use Consul Template to discover all the services running in the Consul datacenter. + +First, create a new template `all-services.tpl` to query all services. + + + +```go +{{ range services -}} +# {{ .Name }} +{{- range service .Name }} +{{ .Address }} +{{- end }} + +{{ end -}} +``` + + + +Next, run Consul Template and specify the template you just created and the `-once` flag. The `-once` flag tells the process to run once and then quit. 
+
+```shell-session
+$ consul-template -template="all-services.tpl:all-services.txt" -once
+```
+
+If you complete this command with a [local development agent](/consul/docs/fundamentals/install/dev), the output contains only the default `consul` service when viewing `all-services.txt`.
+
+```plaintext hideClipboard
+# consul
+127.0.0.1
+```
+
+On a development or production datacenter, you would get a list of all the services.
+
+For example:
+
+```plaintext hideClipboard
+# consul
+104.131.121.232
+
+# redis
+104.131.86.92
+104.131.109.224
+104.131.59.59
+
+# web
+104.131.86.92
+104.131.109.224
+104.131.59.59
+```
+
+## Production scenarios
+
+The previous examples demonstrate how to use Consul Template to render files dynamically with data in the Consul KV store or in the Consul catalog.
+
+You can extend this process to build more complex configurations.
+
+For example, you can use Consul Template to [automate an NGINX reverse proxy configuration](/consul/tutorials/network-automation/consul-template-load-balancing). This process uses the Consul catalog as a source for the upstream IPs.
+
+You can also use Consul Template to automate Consul operations using Vault as secret management. For example, you can:
+
+- [Generate and rotate gossip encryption keys for Consul](/consul/docs/automate/consul-template/vault/gossip)
+- [Generate and rotate mTLS certificates for Consul](/consul/docs/automate/consul-template/vault/mtls)
+
+For additional Consul Template examples, refer to [the examples folder in the `consul-template` GitHub repository](https://github.com/hashicorp/consul-template/tree/master/examples).
+
+To explore the available functions to use in your templates, refer to the [Consul Template Go language reference](/consul/docs/reference/consul-template/go).
\ No newline at end of file
diff --git a/website/content/docs/automate/consul-template/install.mdx b/website/content/docs/automate/consul-template/install.mdx
new file mode 100644
index 000000000000..bf645ab3bf0f
--- /dev/null
+++ b/website/content/docs/automate/consul-template/install.mdx
@@ -0,0 +1,234 @@
+---
+layout: docs
+page_title: Install Consul Template
+description: >-
+  Consul Template is a tool available as a distinct binary that enables dynamic application configuration and secrets rotation for Consul deployments based on Go templates.
+---
+
+# Install Consul Template
+
+Consul Template is available as a pre-compiled binary or as a package for several operating systems. You can also build Consul Template from source.
+
+## Precompiled Binaries
+
+
+
+
+
+
+
+Install the required packages.
+
+```shell-session
+$ sudo apt-get update && \
+  sudo apt-get install wget gpg coreutils
+```
+
+Add the HashiCorp [GPG key][gpg-key].
+
+```shell-session
+$ wget -O- https://apt.releases.hashicorp.com/gpg | \
+  sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
+```
+
+Add the official HashiCorp Linux repository.
+
+```shell-session
+$ echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" \
+| sudo tee /etc/apt/sources.list.d/hashicorp.list
+```
+
+Update and install.
+
+```shell-session
+$ sudo apt-get update && sudo apt-get install consul-template
+```
+
+
+
+
+
+Install `yum-config-manager` to manage your repositories.
+
+```shell-session
+$ sudo yum install -y yum-utils
+```
+
+Use `yum-config-manager` to add the official HashiCorp Linux repository.
+ +```shell-session +$ sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo +``` + +Install. + +```shell-session +$ sudo yum -y install consul-template +``` + + + + +Install `dnf config-manager` to manage your repositories. + +```shell-session +$ sudo dnf install -y dnf-plugins-core +``` + +Use `dnf config-manager` to add the official HashiCorp Linux repository. + +```shell-session +$ sudo dnf config-manager addrepo --from-repofile=https://rpm.releases.hashicorp.com/fedora/hashicorp.repo +``` + +Install. + +```shell-session +$ sudo dnf -y install consul-template +``` + + + + +Install `yum-config-manager` to manage your repositories. + +```shell-session +$ sudo yum install -y yum-utils +``` + +Use `yum-config-manager` to add the official HashiCorp Linux repository. + +```shell-session +$ sudo yum-config-manager \ + --add-repo https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo +``` + +Install. + +```shell-session +$ sudo yum -y install consul-template +``` + + + + +Download a [precompiled binary](https://releases.hashicorp.com/consul-template/), verify the binary using the available `SHA-256` sums, and unzip the package to a location on your machine. Make sure that the location of the `consul-template` binary is available on your `PATH` before continuing with the other guides. + + + + + + + + + + + +[Homebrew](https://brew.sh) is a free and open source package management system +for Mac OS X. Install the official [Consul Template +formula](https://github.com/hashicorp/homebrew-tap) from the terminal. + +First, install the HashiCorp tap, a repository of all of the HashiCorp Homebrew +packages. + +```shell-session +$ brew tap hashicorp/tap +``` + +Now, install Consul Template with `hashicorp/tap/consul-template`. + +```shell-session +$ brew install hashicorp/tap/consul-template +``` + +-> This command installs a signed binary and is automatically updated with +every new official release. + +To update to the latest, run + +```shell-session +$ brew upgrade hashicorp/tap/consul-template +``` + + + +Download a [precompiled binary](https://releases.hashicorp.com/consul-template/), verify the binary using the available `SHA-256` sums, and unzip the package to a location on your machine. Make sure that the location of the `consul-template` binary is available on your `PATH` before continuing with the other guides. + + + + + + + + + + + +[Chocolatey](https://chocolatey.org/) is a free and open-source package +management system for Windows. Install the [Consul Template +package](https://chocolatey.org/packages/consul-template) from the command-line. + +```shell-session +$ choco install consul-template +``` + +-> Chocolatey and the Consul Template package are **NOT** directly maintained +by HashiCorp. The latest version of Consul Template is always available by manual +installation. + + + + +Download a [precompiled binary](https://releases.hashicorp.com/consul-template/), verify the binary using the available `SHA-256` sums, and unzip the package to a location on your machine. Make sure that the location of the `consul-template` binary is available on your `PATH` before continuing with the other guides. + + + + + + + + +## Compile from the source + +Clone the repository from GitHub [`hashicorp/consul-terraform-sync`](https://github.com/hashicorp/consul-terraform-sync) to build and install Consul Template binary in your path `$GOPATH/bin`. Building from source requires `git` and [Golang](https://go.dev/). 
+ +```shell-session +$ git clone https://github.com/hashicorp/consul-template.git +``` + +Enter the repository directory. + +```shell-session +$ cd consul-template +``` + +Select the release you want to compile. + +```shell-session +$ git checkout tags/ +``` + +Build Consul Template for your system. The binary will be placed in `./bin`. + +```shell-session +$ make dev +``` + +Once installed, verify the installation works by prompting the `-version` or `-help` option. + +```shell-session +$ consul-template -version +``` + +## Run Consul Template as a Docker container + +Install and run Consul Template as a [Docker container](https://hub.docker.com/r/hashicorp/consul-template). + +```shell-session +$ docker pull hashicorp/consul-template +``` + +Once installed, verify the installation works by prompting the `-version` or `-help` option. + +```shell-session +$ docker run --rm hashicorp/consul-template -version +``` diff --git a/website/content/docs/automate/consul-template/log.mdx b/website/content/docs/automate/consul-template/log.mdx new file mode 100644 index 000000000000..3fd4ca1e9c0b --- /dev/null +++ b/website/content/docs/automate/consul-template/log.mdx @@ -0,0 +1,83 @@ +--- +layout: docs +page_title: Consul Template logs +description: >- + Consul Template is a tool available as a distinct binary that enables dynamic application configuration and secrets rotation for Consul deployments based on Go templates. +--- + +# Consul Template logs + +This page describes the logging process for Consul Template. + +## Set log level + +To set the log level for Consul Template, use the `-log-level` flag: + +```shell-session +$ consul-template -log-level info ... +``` + +You can also use the `CONSUL_TEMPLATE_LOG_LEVEL` environment variable to set the log level. + +```shell-session +$ export CONSUL_TEMPLATE_LOG_LEVEL=info && consul-template ... +``` + +The command outputs the log in the standard output. + + + +```log +# ... +[INFO] (cli) received redis from Watcher +[INFO] (cli) invoking Runner +# ... +``` + + + +When debugging, you can also specify the level as debug: + +```shell-session +$ consul-template -log-level debug ... +``` + +The command outputs the log in the standard output. + + + +```log +# ... +[DEBUG] (cli) creating Runner +[DEBUG] (cli) creating Consul API client +[DEBUG] (cli) creating Watcher +[DEBUG] (cli) looping for data +[DEBUG] (watcher) starting watch +[DEBUG] (watcher) all pollers have started, waiting for finish +[DEBUG] (redis) starting poll +[DEBUG] (service redis) querying Consul with &{...} +[DEBUG] (service redis) Consul returned 2 services +[DEBUG] (redis) writing data to channel +[DEBUG] (redis) starting poll +[INFO] (cli) received redis from Watcher +[INFO] (cli) invoking Runner +[DEBUG] (service redis) querying Consul with &{...} +# ... +``` + + + +## Log to file + +Consul Template can log to file as well. Logging to file is particularly useful in use cases where it is not trivial to capture *stdout* and/or *stderr*. For example, we recommend logging to file when Consul Template is deployed as a long running service. + +These are the relevant CLI flags: + +- `-log-file` - writes all the Consul Template log messages to a file. This value is used as a prefix for the log file name. The current timestamp + is appended to the file name. If the value ends in a path separator, `consul-template-` will be appended to the value. If the file name is missing an extension, `.log` is appended. 
For example, setting `log-file` to `/var/log/` would result in a log file path of `/var/log/consul-template-{timestamp}.log`. `log-file` can be combined with `-log-rotate-bytes` and `-log-rotate-duration` for a fine-grained log rotation experience. + +- `-log-rotate-bytes` - to specify the number of bytes that should be written to a log before it needs to be rotated. Unless specified, there is no limit to the number of bytes that can be written to a log file. + +- `-log-rotate-duration` - to specify the maximum duration a log should be written to before it needs to be rotated. Must be a duration value such as `30s`. Defaults to `24h`. + +- `-log-rotate-max-files` - to specify the maximum number of older log file archives to keep. Defaults to 0 (no files are ever deleted). Set to `-1` to discard old log files when a new one is created. \ No newline at end of file diff --git a/website/content/docs/automate/consul-template/mode.mdx b/website/content/docs/automate/consul-template/mode.mdx new file mode 100644 index 000000000000..30045e651f22 --- /dev/null +++ b/website/content/docs/automate/consul-template/mode.mdx @@ -0,0 +1,235 @@ +--- +layout: docs +page_title: Consul Template modes +description: >- + Consul Template is a tool available as a distinct binary that enables dynamic application configuration and secrets rotation for Consul deployments based on Go templates. +--- + +# Consul Template modes + +Consul Template can run in different modes that change its runtime behavior and process lifecycle. + +- [Once Mode](#once-mode) +- [De-Duplication Mode](#de-duplication-mode) +- [Exec Mode](#exec-mode) + +## Once Mode + +When running in Once Mode, Consul template will execute each template exactly once and exit. + +In Once Mode, Consul Template waits for all dependencies to be rendered. If +a template specifies a dependency that does not exist in Consul, +once mode waits until Consul returns data for that dependency. Be aware +that "returned data" and "empty data" are not mutually exclusive. + +To run in Once mode, include the `-once` flag or enable it in the [configuration file](/consul/docs/reference/consul-template/configuration). + +When you run the query `{{ service "foo" }}` to return all healthy services named "foo", you +are asking Consul to return all the healthy services named "foo." If there are +no services with that name, the response is the empty array. This response is identical to the +response if there are no _healthy_ services named "foo." + +Consul Template processes input templates multiple times because the first result could impact later dependencies: + +```go +{{ range services }} +{{ range service .Name }} +{{ end }} +{{ end }} +``` + +In this example, we have to process the output of `services` before we can +lookup each `service`, since the inner loops cannot be evaluated until the outer +loop returns a response. Consul Template waits until it gets a response from +Consul for all dependencies before rendering a template. It does not wait until +that response is non-empty though. + + + +Once mode implicitly disables any wait or quiescence timers specified in configuration files or passed on the command line. + + + +## De-Duplication Mode + +Consul Template works by parsing templates to determine what data is needed and then watching Consul for any changes to that data. This process allows Consul Template to efficiently re-render templates when a change occurs. 
However, if there are many instances of Consul Template rendering a common template, you may encounter a linear duplication of work as each instance is querying the same data.
+
+To make this pattern more efficient, Consul Template supports work de-duplication across instances. You can enable this feature with the `-dedup` flag or in the top-level [`deduplicate` configuration block](/consul/docs/reference/consul-template/configuration#de-duplication-mode). Once enabled, Consul Template uses leader election on a per-template basis so that only a single node performs the queries. Results are shared among other instances rendering the same template by passing compressed data through the Consul K/V store.
+
+Be aware that no Vault data is stored in the compressed template. Because ACLs around Vault are typically more closely controlled than those ACLs around Consul's KV, Consul Template still requests the secret from Vault on each iteration.
+
+When running in de-duplication mode, it is important that local template functions resolve correctly.
+
+For example, you may have a local template function that relies on the `env` helper like this:
+
+```hcl
+{{ key (env "KEY") }}
+```
+
+It is crucial that the environment variable `KEY` in this example is consistent across all machines engaged in de-duplicating this template. If the values are different, Consul Template will be unable to resolve the template, and the template will not render successfully.
+
+## Exec Mode
+
+As of version `0.16.0`, Consul Template has the ability to maintain an arbitrary child process, similar to [envconsul](https://github.com/hashicorp/envconsul).
+
+This mode is most beneficial when running Consul Template in a container or on a scheduler like Nomad or Kubernetes. When activated, Consul Template will spawn and manage the lifecycle of the child process.
+
+Configuration options for running Consul Template in exec mode can be found in the [configuration documentation](/consul/docs/reference/consul-template/configuration#exec-mode).
+
+This mode is best explained through an example. Consider a simple application that reads a configuration file from disk and spawns a server from that configuration.
+
+```shell-session
+$ consul-template \
+    -template "/tmp/config.ctmpl:/tmp/server.conf" \
+    -exec "/bin/my-server -config /tmp/server.conf"
+```
+
+When Consul Template starts, it will pull the required dependencies and populate the `/tmp/server.conf`, which the `my-server` binary consumes. After that template is rendered completely the first time, Consul Template spawns and manages a child process. When any of the templates changes, Consul Template will send a configurable reload signal to the child process. Additionally, Consul Template will proxy any signals it receives to the child process. This enables a scheduler to control the lifecycle of the process and also eases the friction of running inside a container.
+
+The same rules that apply to template commands apply here: if you want to use a complex, shell-like command, you need to be running on a system with `sh` on your PATH. These commands are run using `sh -c`, with the shell handling all shell parsing. Otherwise, the command must be a single command, or a list-formatted command with arguments.
+
+
+
+On supporting systems (*nix, with `sh`), the [`setpgid`](https://man7.org/linux/man-pages/man2/setpgid.2.html) flag is set on the execution, which ensures all signals are sent to all processes.
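+The same exec setup can also be defined in a configuration file instead of CLI flags. The following sketch mirrors the CLI example above; the paths are illustrative, and `reload_signal` and `kill_signal` are optional settings shown with commonly used values.
+
+```hcl
+template {
+  source      = "/tmp/config.ctmpl"
+  destination = "/tmp/server.conf"
+}
+
+exec {
+  # Managed child process; it receives the reload signal when templates re-render.
+  command       = "/bin/my-server -config /tmp/server.conf"
+  reload_signal = "SIGHUP"
+  kill_signal   = "SIGTERM"
+}
+```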
+ + + +There are some additional caveats with Exec Mode, which should be considered +carefully before use: + +- If the child process dies, the Consul Template process will also die. Consul + Template **does not supervise the process.**. Supervision is generally the + responsibility of the scheduler or init system. + +- The child process must remain in the foreground. This is a requirement for + Consul Template to manage the process and send signals. + +- The exec command will only start after _all_ templates have been rendered at + least once. One may have multiple templates for a single Consul Template + process, all of which must be rendered before the process starts. Consider + something like an nginx or apache configuration where both the process + configuration file and individual site configuration must be written in order + for the service to successfully start. + +- After the child process is started, any change to any dependent template + causes the reload signal to be sent to the child process. If no reload signal + is provided, Consul Template will kill the process and spawn a new instance. + The reload signal can be specified and customized using the CLI or configuration + file. + +- When Consul Template is stopped gracefully, it will send the configurable kill + signal to the child process. The default value is SIGTERM, but it can be + customized in the CLI or configuration file. + +- Consul Template will forward all signals it receives to the child process + **except** its defined `reload_signal` and `kill_signal`. If you disable these + signals, Consul Template will forward them to the child process. + +- It is not possible to have more than one exec command, although each template + can still have its own reload command. + +- Individual template reload commands still fire independently of the exec + command. + +## Commands + +You can render templates with commands to execute on the host. + +### Environment + +The current process environment is used when executing commands with the following additional environment variables: + +- `CONSUL_HTTP_ADDR` +- `CONSUL_HTTP_TOKEN` +- `CONSUL_HTTP_TOKEN_FILE` +- `CONSUL_HTTP_AUTH` +- `CONSUL_HTTP_SSL` +- `CONSUL_HTTP_SSL_VERIFY` +- `NOMAD_ADDR` +- `NOMAD_NAMESPACE` +- `NOMAD_TOKEN` + +These environment variables are exported with their current values when the command executes. Other Consul tooling reads these environment variables, providing smooth integration with other Consul tools like `consul maint` or `consul lock`. Additionally, exposing these environment variables gives power users the ability to further customize their command script. + +### Multiple Commands + +The command configured for running on template rendering must take one of two forms. + +The first is as a list of the command and arguments split at spaces. The command can use an absolute path or be found on the execution environment's PATH and must be the first item in the list. This form allows for single or multi-word commands that can be executed directly with a system call. + +**Examples** + +```hcl +command = ["echo", "hello"] +## ... +command = ["/opt/foo-package/bin/run-foo"] +## ... +command = ["foo"] +``` + +~> **Note** If you provide a single command without the list denoting square brackets (`[]`), it is converted into a list with a single argument. For example `command = "foo"` gets converted into `command = ["foo"]` + +The second form is as a single quoted command using system shell features. 
This form **requires** a shell named `sh` be on the executable search path (e.g. PATH on \*nix). This is the standard on all \*nix systems and should work out of the box on those systems. This won't work on, for example, Docker images with only the executable and without a minimal system like Alpine. Using this form you can join multiple commands with logical operators, `&&` and `||`, use pipelines with `|`, conditionals, etc. Note that the shell `sh` is normally `/bin/sh` on \*nix systems and is either a POSIX shell or a shell running in POSIX compatible mode, so it is best to stick to POSIX shell syntax in this command.
+
+**Examples**
+
+```hcl
+command = "/opt/foo && /opt/bar"
+##...
+command = "if /opt/foo ; then /opt/bar ; fi"
+```
+
+Using this method, you can run as many shell commands as you need, with whatever logic you need. However, if the command gets too long, consider wrapping it in a shell script that you deploy and run instead.
+
+### Shell Commands and Exec Mode
+
+Using the system shell-based command form has one additional caveat when it is used for the Exec Mode process (the managed, executed process to which Consul Template propagates signals). For signals to work correctly, not only must anything the shell runs handle signals, but the shell itself must handle them as well. You need to manage this yourself, because shells exit upon receiving most signals.
+
+A common example configures the `SIGHUP` signal to trigger a reload of the underlying process and to be ignored by the shell process. There are two options:
+
+- Use `trap` to ignore the signal.
+
+- Use `exec` to replace the shell with another process.
+
+To use `trap` to ignore the signal, you call `trap` to catch the signal in the shell with no action. For example, if you have an underlying nginx process and you want to run it with a shell command and have the shell ignore the HUP signals, you can use the following command:
+
+```hcl
+command = "trap '' HUP; /usr/sbin/nginx -c /etc/nginx/nginx.conf"
+```
+
+The `trap '' HUP;` bit is enough to get the shell to ignore the HUP signal. If you left off the `trap` command, nginx would reload but the shell command would exit, leaving nginx running but unmanaged.
+
+Alternatively, using `exec` replaces the shell's process with a sub-process, keeping the same PID and process grouping (allowing the sub-process to be managed). This approach is simpler, but a bit less flexible than `trap`, and looks like the following:
+
+```hcl
+command = "exec /usr/sbin/nginx -c /etc/nginx/nginx.conf"
+```
+
+Here the nginx process replaces the enclosing shell process and is managed by consul-template, receiving the signals directly. Basically, `exec` eliminates the shell from the equation.
+
+Refer to your shell's documentation on `trap` and `exec` for more specific details.
+
+
diff --git a/website/content/docs/automate/consul-template/plugins.mdx b/website/content/docs/automate/consul-template/plugins.mdx
new file mode 100644
index 000000000000..dee13ea862a2
--- /dev/null
+++ b/website/content/docs/automate/consul-template/plugins.mdx
@@ -0,0 +1,92 @@
+---
+layout: docs
+page_title: Execute custom scripts with Consul Template plugins
+description: >-
+  Consul Template is a tool available as a distinct binary that enables dynamic application configuration and secrets rotation for Consul deployments based on Go templates.
+---
+
+# Execute custom scripts with Consul Template plugins
+
+For some use cases, it may be necessary to write a plugin that offloads work to another system.
This is especially useful for things that may not fit in the "standard library" of Consul Template, but still need to be shared across multiple instances. + +Consul Template plugins must have the following API: + + + +```shell-session +$ NAME [INPUT...] +``` + + + +- `NAME` - the name of the plugin - this is also the name of the binary, either a full path or just the program name. It will be executed in a shell with the inherited `PATH` so e.g. the plugin `cat` will run the first executable `cat` that is found on the `PATH`. + +- `INPUT` - input from the template. There will be one INPUT for every argument passed to the `plugin` function. If the arguments contain whitespace, that whitespace will be passed as if the argument were quoted by the shell. + +## Important notes + +- Plugins execute user-provided scripts and pass in potentially sensitive data from Consul or Vault. Nothing is validated or protected by Consul Template, so all necessary precautions and considerations should be made by template authors + +- Plugin output must be returned as a string on `stdout`. Only `stdout` will be parsed for output. Be sure to log all errors and debugging messages onto `stderr` to avoid errors when Consul Template returns the value. Note that output to `stderr` will only be output if the plugin returns a non-zero exit code. + +- Always `exit 0` or Consul Template will assume the plugin failed to execute. + +- Ensure the empty input case is handled correctly (see [Multi-phase execution](https://github.com/hashicorp/consul-template/blob/main/README.md#multi-phase-execution)) + +- Data piped into the plugin is appended after any parameters given explicitly. For example, {{ "sample-data" | plugin "my-plugin" "some-parameter"}}` will call `my-plugin some-parameter sample-data`. + +## Examples + +The following example plugin removes any JSON keys that start with an underscore and returns the JSON string: + + + + +```go +func main() { + arg := []byte(os.Args[1]) + + var parsed map[string]interface{} + if err := json.Unmarshal(arg, &parsed); err != nil { + fmt.Fprintln(os.Stderr, fmt.Sprintf("err: %s", err)) + os.Exit(1) + } + + for k, _ := range parsed { + if string(k[0]) == "_" { + delete(parsed, k) + } + } + + result, err := json.Marshal(parsed) + if err != nil { + fmt.Fprintln(os.Stderr, fmt.Sprintf("err: %s", err)) + os.Exit(1) + } + + fmt.Fprintln(os.Stdout, fmt.Sprintf("%s", result)) + os.Exit(0) +} +``` + + + + +```ruby +#! /usr/bin/env ruby +require "json" + +if ARGV.empty? + puts JSON.fast_generate({}) + Kernel.exit(0) +end + +hash = JSON.parse(ARGV.first) +hash.reject! { |k, _| k.start_with?("_") } +puts JSON.fast_generate(hash) +Kernel.exit(0) +``` + + + + diff --git a/website/content/docs/automate/consul-template/render.mdx b/website/content/docs/automate/consul-template/render.mdx new file mode 100644 index 000000000000..4e781808bff7 --- /dev/null +++ b/website/content/docs/automate/consul-template/render.mdx @@ -0,0 +1,52 @@ +--- +layout: docs +page_title: Render templates +description: >- + Consul Template is a tool available as a distinct binary that enables dynamic application configuration and secrets rotation for Consul deployments based on Go templates. +--- + +# Render templates + +This page describes to process to render templates, including common integrations on the command line. 
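+
+Templates are written in Consul Template's Go template syntax. As a point of reference, a minimal template might look like the following sketch, which lists the address and port of every instance of a hypothetical service named `web`:
+
+```go
+{{ range service "web" }}
+server {{ .Name }} {{ .Address }}:{{ .Port }}
+{{ end }}
+```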
+ +Render the template `/tmp/template.ctmpl` to `/tmp/result` on disk: + +```shell-session +$ consul-template \ + -template "/tmp/template.ctmpl:/tmp/result" +``` + +Render multiple templates in the same process. The optional third argument to the template is a command that will execute each time the template changes. + +```shell-session +$ consul-template \ + -template "/tmp/nginx.ctmpl:/var/nginx/nginx.conf:nginx -s reload" \ + -template "/tmp/redis.ctmpl:/var/redis/redis.conf:service redis restart" \ + -template "/tmp/haproxy.ctmpl:/var/haproxy/haproxy.conf" +``` + +Render a template using a custom Consul and Vault address: + +```shell-session +$ consul-template \ + -consul-addr "10.4.4.6:8500" \ + -vault-addr "https://10.5.32.5:8200" \ + -template "/tmp/template.ctmpl:/tmp/result" +``` + +Render all templates and then spawn and monitor a child process as a supervisor: + +```shell-session +$ consul-template \ + -template "/tmp/in.ctmpl:/tmp/result" \ + -exec "/sbin/my-server" +``` + +For more information on supervising, refer to the [Consul Template Exec Mode documentation](/consul/docs/automate/consul-template/mode#exec-mode). + +Instruct Consul Template to use a configuration file with the `-config` flag: + +```shell +$ consul-template -config "/my/config.hcl" +``` + diff --git a/website/content/docs/automate/consul-template/vault/gossip.mdx b/website/content/docs/automate/consul-template/vault/gossip.mdx new file mode 100644 index 000000000000..dd8f1c68acb3 --- /dev/null +++ b/website/content/docs/automate/consul-template/vault/gossip.mdx @@ -0,0 +1,287 @@ +--- +layout: docs +page_title: Generate and manage gossip encryption for Consul with Vault and Consul Template +description: >- + Use Vault's secure secrets management and Consul Template to create and manage gossip key rotation for your Consul datacenter. +--- + +# Generate and manage gossip encryption for Consul with Vault and Consul Template + +This page describes the process to use HashiCorp Vault and Consul Template to automate the process to create and manage gossip encryption keys for your Consul datacenter. + +## Overview + +To configure your Consul datacenter for production use, one of the necessary steps is to enable gossip encryption for all the agents in the datacenter. The process is explained in detail in the [Manage gossip encryption](/consul/docs/secure/encryption/gossip/enable) documentation. + +Once the gossip communication is secured with a symmetric key, our recommended best practice is to define a policy for rotating the gossip keys based on a defined interval that meets your needs. You can review how to perform this process manually in the [Rotate Gossip Encryption Keys in Consul](/consul/docs/secure/encryption/gossip/rotate/vm) documentation. + +In this guide, you will integrate Consul with HashiCorp's Vault and Consul Template to securely store and rotate your encryption key. The process includes the following steps: + +1. Create the gossip encryption key. +1. Connect to a Vault instance. +1. Initialize a key value store in Vault. +1. Store and retrieve the encryption key from Vault. +1. Configure Consul Template to retrieve the key and then use a script to rotate it. +1. Start Consul Template. + +-> The guide provides an example custom script to automate the encryption key rotation. You can use the script as a starting point to create your own rotation automation. + +## Prerequisites + +- **Consul:** to complete this guide, you need a Consul datacenter configured with gossip encryption enabled. 
Follow [Deploy Consul on VMs](/consul/tutorials/get-started-vms/virtual-machine-gs-deploy) to learn how to deploy a Consul datacenter with gossip encryption enabled. + +- **Vault:** this guide assumes you have a running Vault cluster in your network. You can use a [local Vault dev server](/vault/tutorials/get-started/setup#set-up-the-lab) or an existing Vault deployment. + +- **Consul Template:** to interact with your Consul agent you will need to [install the `consul-template` binary](/consul/docs/automate/consul-template/install) on a node. To rotate gossip keys you need the binary to be installed on one node only; changes will be automatically propagated across the Consul datacenter. + +The diagram below shows the minimal architecture needed to demonstrate the functionality. + +![Architectural diagram showing a Client server and a Vault server with an operator issuing a command to start an automation](/img/consul-template/consul-vault-gossip.png) + +## Generate a encryption key + +You can use Consul's `consul keygen` command to generate the encryption key. + +```shell-session +$ consul keygen | tee encryption.key + +T6kFttAkS3oSCS/nvlK8ONmfESmtKhCpRA2pc20RBcA= +``` + +## Configure Vault + +Vault provides a `kv` secrets engine that can be used to store arbitrary secrets. You will use this engine to store the encryption key. + +Before you can initialize the secrets engine, you need to set the `VAULT_ADDR` and `VAULT_TOKEN` environment variables so that you can connect to your local instance of Vault. + +```shell-session +$ export VAULT_ADDR='http://127.0.0.1:8200' +``` + +The `VAULT_ADDR` environment variable should be set to the target Vault server address you want to connect to. + +```shell-session +$ export VAULT_TOKEN="root" +``` + +The `VAULT_TOKEN` environment variable should store your client token (e.g. `root`). + +### Initialize the Vault secrets engine + +Enable key/value v2 secrets engine (`kv-v2`). + +```shell-session +$ vault secrets enable kv-v2 + +Success! Enabled the kv-v2 secrets engine at: kv-v2/ +``` + +Once the secret engine is enabled, verify it is functioning properly using the following command. + +```shell-session +$ vault secrets list + +Path Type Accessor Description +---- ---- -------- ----------- +... +kv-v2/ kv kv_5e0867f7 n/a +secret/ kv kv_b5027aee key/value secret storage +... +``` + +### Store the encryption key in Vault + +With the secret engine correctly initialized, you can store the gossip encryption key in it. + +```shell-session +$ vault kv put kv-v2/consul/config/encryption key=$(cat encryption.key) ttl=1h + +Success! Data written to: kv-v2/consul/config/encryption +``` + + + + Vault's KV secrets engine does not enforce TTLs for expiration. Instead, the `lease_duration` value can be used as a hint to consumers for how often they should check back for a new value. You can change the `ttl` value or not use it, however, if you want to integrate with Consul Template, you must define a TTL so that Consul Template can know when to check for new versions of the key. + + + + + + In this tutorial, the TTL for the encryption key is being set to 1 hour, meaning that the key will be valid only for 1 hour before expiring. You can try using a shorter TTL in a test environment to ensure keys are revoked properly after the TTL has expired. + + + +### Retrieve the gossip encryption key from Vault + +Once the key is stored in Vault, you can retrieve it from any machine that has access to Vault. 
+ +From your Consul server node, use the `vault kv get` command with the `-field` parameter to retrieve the key value only. + +```shell-session +$ vault kv get -field=key kv-v2/consul/config/encryption | tee encryption.key + +T6kFttAkS3oSCS/nvlK8ONmfESmtKhCpRA2pc20RBcA= +``` + +Once you retrieved the encryption key from Vault you can use it to configure your new Consul datacenter or to rotate the key to an already existing datacenter with gossip encryption enabled. + +The process of key rotation should be automated when possible and the next paragraph will show you how to use Consul Template` to automate the process. + +## Rotate the gossip encryption key with Consul Template + +You can use Consul Template in your Consul datacenter to integrate with Vault's KV secrets engine and dynamically rotate Consul's gossip encryption keys. + + +### Create a template file + +Create a Go template for Consul Template to retrieve the key from Vault. + +In this example, you will place these templates under `/opt/consul/templates`. + +```shell-session +$ mkdir -p /opt/consul/templates +``` + +Create a file named `gossip.key.tpl` under `/opt/consul/templates` with the following content. + + + +```go +{{ with secret "kv-v2/data/consul/config/encryption" }} +{{ .Data.data.key}} +{{ end }} +``` + + + +The template will interact with Vault using the `kv-v2/data/consul/config/encryption` path and will only retrieve the `key` value for the secret at that path. + +### Create the Consul Template configuration + +Write a configuration file for Consul Template that uses the template you created. The configuration will execute the template and render a file locally on your machine. + +Create a file named `consul_template.hcl` under `/opt/consul/templates` with the following content. + + + +```hcl +# This denotes the start of the configuration section for Vault. All values +# contained in this section pertain to Vault. +vault { + # This is the address of the Vault leader. The protocol (http(s)) portion + # of the address is required. + address = "http://localhost:8200" + + # This value can also be specified via the environment variable VAULT_TOKEN. + token = "root" + + unwrap_token = false + + renew_token = false +} + +# This block defines the configuration for a template. Unlike other blocks, +# this block may be specified multiple times to configure multiple templates. +template { + # This is the source file on disk to use as the input template. This is often + # called the "consul-template template". + source = "/opt/consul/templates/gossip.key.tpl" + + # This is the destination path on disk where the source template will render. + # If the parent directories do not exist, consul-template will attempt to + # create them, unless create_dest_dirs is false. + destination = "/opt/consul/gossip/gossip.key" + + # This is the permission to render the file. If this option is left + # unspecified, consul-template will attempt to match the permissions of the + # file that already exists at the destination path. If no file exists at that + # path, the permissions are 0644. + perms = 0700 + + # This is the optional command to run when the template is rendered. The + # command will only run if the resulting template changes. + command = "/opt/rotate_key.sh" +} +``` + + + +### Write a rotation script + +The last line of the configuration file refers to a script, located at `/opt/rotate_key.sh`, that will run every time a new key is retrieved from Vault. + +You can use the following example to create your own rotation script. 
+ + + +```shell +#!/usr/bin/env bash + +# Setup Consul address info +export CONSUL_HTTP_ADDR="http://localhost:8500" + +# The new key will be in a file generated by consul-template +# the script retrieves the key from the file +NEW_KEY=`cat /opt/consul/gossip/gossip.key | sed -e '/^$/d'` + +# Install the key +consul keyring -install ${NEW_KEY} + +# Set as primary +consul keyring -use ${NEW_KEY} + +# Retrieve all keys used by Consul +KEYS=`curl -s ${CONSUL_HTTP_ADDR}/v1/operator/keyring` + +ALL_KEYS=`echo ${KEYS} | jq -r '.[].Keys| to_entries[].key' | sort | uniq` + +for i in `echo ${ALL_KEYS}`; do + if [ $i != ${NEW_KEY} ] ; then + consul keyring -remove $i + fi +done +``` + + + +### Start Consul Template + +Start Consul Template using the `-config` parameter to provide the configuration file. + +```shell-session +$ consul-template -config "consul_template.hcl" +``` + +The command will start Consul Template as a long running daemon and it will keep listening for changes on Vault. + +## Rotate the Consul encryption key + +Once you have started the process, every time you update the key value in Vault, Consul Template will make sure that the new key is installed in Consul too. + +You can now use the `vault kv put` command to change the encryption key. + +```shell-session +$ vault kv put kv-v2/consul/config/encryption key=$(consul keygen) ttl=1s +``` + +The script will pick up the `gossip.key` file containing the new key and use it to rotate the Consul gossip encryption key. + +It should output the following lines. + +```plaintext hideClipboard +==> Installing new gossip encryption key... +==> Changing primary gossip encryption key... +==> Removing gossip encryption key... +``` + +You can test the key is actually changed in Consul using the `consul keyring` command: + +```shell-session +$ consul keyring -list + +==> Gathering installed encryption keys... +WAN: + ROfcQ/QLUgvBpIsWCCY9MtNqIyV7r3SS5eJmNZ6vUEA= [1/1] +dc1 (LAN): + ROfcQ/QLUgvBpIsWCCY9MtNqIyV7r3SS5eJmNZ6vUEA= [1/1] +``` diff --git a/website/content/docs/automate/consul-template/vault/mtls.mdx b/website/content/docs/automate/consul-template/vault/mtls.mdx new file mode 100644 index 000000000000..f6327757e042 --- /dev/null +++ b/website/content/docs/automate/consul-template/vault/mtls.mdx @@ -0,0 +1,602 @@ +--- +layout: docs +page_title: Generate mTLS Certificates for Consul with Vault and Consul Template +description: >- + Use Vault and Consul Template to create and configure Vault-managed mTLS certificates for Consul's API and RPC traffic. +--- + +# Generate mTLS Certificates for Consul with Vault and Consul Template + +This page describes the process to use Vault's [PKI Secrets Engine](/vault/docs/secrets/pki) to generate and renew dynamic X.509 certificates, using [Consul Template](/consul/docs/automate/consul-template) to rotate your certificates. + +This method enables each agent in the Consul datacenter to have a unique certificate, with a relatively short time-to-live (TTL), that is automatically rotated, which allows you to safely and securely scale your datacenter while using mutual TLS (mTLS). + +## Prerequisites + +- **Consul:** to complete this guide, you need at least a node to install a Consul server agent and ideally another node to install a Consul client agent. Follow [Deploy Consul on VMs](/consul/tutorials/get-started-vms/virtual-machine-gs-deploy) to learn how to deploy a Consul agent. This page will provide you with the necessary specific configuration to apply for the scenario. 
+ +- **Vault:** this guide assumes you have a running Vault cluster in your network. You can use a [local Vault dev server](/vault/tutorials/get-started/setup#set-up-the-lab) or an existing Vault deployment. + +- **Consul Template:** to interact with your Consul agent you will need to [install the `consul-template` binary](/consul/docs/automate/consul-template/install) on a node. To rotate gossip keys you need the binary to be installed on one node only; changes will be automatically propagated across the Consul datacenter. + +The diagram below shows the minimal architecture needed to demonstrate the functionality. + +![Architectural diagram showing a Client server and a Vault server with an operator issuing a command to start an automation](/img/consul-template/consul-vault-tls.png) + +## Configure Vault's PKI secrets engine + +Before you can initialize the secrets engine, you need to set the `VAULT_ADDR` and `VAULT_TOKEN` environment variables so that you can connect to your local instance of Vault. + +The `VAULT_ADDR` environment variable should be set to the target Vault server address you want to connect to. + +```shell-session +$ export VAULT_ADDR='http://127.0.0.1:8200' +``` + +The `VAULT_TOKEN` environment variable should store your client token (e.g. `root`). + +```shell-session +$ export VAULT_TOKEN="root" +``` + +Enable Vault's PKI secrets engine at the `pki` path. + +```shell-session +$ vault secrets enable pki + +Success! Enabled the pki secrets engine at: pki/ +``` + +Tune the PKI secrets engine to issue certificates with a maximum time-to-live (TTL) of `87600` hours. + +```shell-session +$ vault secrets tune -max-lease-ttl=87600h pki + +Success! Tuned the secrets engine at: pki/ +``` + + + +This tutorial uses a common and recommended pattern which is to have one mount act as the root CA and to use this CA only to sign intermediate CA CSRs from other PKI secrets engines (which you will create in the next few steps). For tighter security, you can store your CA outside of Vault and use the PKI engine only as an intermediate CA. + + + +## Configure Vault as Consul's CA + +Consul requires that all servers and clients have key pairs that are generated by a single Certificate Authority (CA). + +You will use Vault's PKI secrets engine to generate the necessary CA and certificates. + +#### 1. Generate the root CA + +Generate the root certificate and save the certificate in `CA_cert.crt`. + +```shell-session +$ vault write -field=certificate pki/root/generate/internal \ + common_name="dc1.consul" \ + ttl=87600h > CA_cert.crt +``` + +This generates a new self-signed CA certificate and private key. Vault will automatically revoke the generated root at the end of its lease period (TTL); the CA certificate will sign its own Certificate Revocation List (CRL). + + + +You can adapt the TTL to comply with your internal policies on certificate lifecycle. + + + +You can inspect the certificate created using `openssl x509 -text -noout -in CA_cert.crt` + +Configure the CA and CRL URLs. + +```shell-session +$ vault write pki/config/urls \ + issuing_certificates="http://127.0.0.1:8200/v1/pki/ca" \ + crl_distribution_points="http://127.0.0.1:8200/v1/pki/crl" +``` + +Example output: + + + +```plaintext +Success! Data written to: pki/config/urls +``` + + + +#### 2. Generate an intermediate CA + +Enable the PKI secrets engine at the `pki_int` path. + +```shell-session +$ vault secrets enable -path=pki_int pki + +Success! 
Enabled the pki secrets engine at: pki_int/ +``` + +Tune the `pki_int` secrets engine to issue certificates with a maximum time-to-live (TTL) of `43800` hours. + +```shell-session +$ vault secrets tune -max-lease-ttl=43800h pki_int + +Success! Tuned the secrets engine at: pki_int/ +``` + + + +You can adapt the TTL to comply with your internal policies on certificate lifecycle. + + + +Request an intermediate certificate signing request (CSR) and save request as `pki_intermediate.csr`. + +```shell-session +$ vault write -format=json pki_int/intermediate/generate/internal \ + common_name="dc1.consul Intermediate Authority" \ + | jq -r '.data.csr' > pki_intermediate.csr +``` + +The command has no output. + +#### 3. Sign the CSR and import the certificate into Vault + +```shell-session +$ vault write -format=json pki/root/sign-intermediate csr=@pki_intermediate.csr \ + format=pem_bundle ttl="43800h" \ + | jq -r '.data.certificate' > intermediate.cert.pem +``` + +The command has no output. + +Once the CSR is signed, and the root CA returns a certificate, it can be imported back into Vault. + +```shell-session +$ vault write pki_int/intermediate/set-signed certificate=@intermediate.cert.pem + +Success! Data written to: pki_int/intermediate/set-signed +``` + +#### 4. Create a Vault role + +A role is a logical name that maps to a policy used to generate credentials. + +```shell-session +$ vault write pki_int/roles/consul-dc1 \ + allowed_domains="dc1.consul" \ + allow_subdomains=true \ + generate_lease=true \ + max_ttl="720h" +``` + +Example output: + + + +```plaintext +Success! Data written to: pki_int/roles/consul-dc1 +``` + + + +For this guide, you are using the following options for the role: + +- `allowed_domains`: Specifies the domains of the role. The command uses `dc1.consul` as the domain, which is the default configuration you are going to use for Consul. +- `allow_subdomains`: Specifies if clients can request certificates with CNs that are subdomains of the CNs allowed by the other role options + + + + This includes wildcard subdomains. + + + +- `generate_lease`: Specifies if certificates issued/signed against this role will have Vault leases attached to them. Certificates can be added to the CRL by Vault revoke `` when certificates are associated with leases. + +This completes the Vault configuration as a CA. + +## Generate a server certificate + +You can test the `pki` engine is configured correctly by generating your first certificate. + +```shell-session +$ vault write pki_int/issue/consul-dc1 \ + common_name="server.dc1.consul" \ + ttl="24h" | tee certs.txt +``` + + + +The TTL for the certificate is being set to 24 hours in this guide, meaning that this certificate will be valid only for 24 hours before expiring. You can try using a shorter TTL on a test environment to ensure certificates are revoked properly after TTL is expired. + + + +Example output: + + + +```plaintext +Key Value +--- ----- +lease_id pki_int/issue/consul-dc1/lFfKfpxtM0xY0AHDlr9pJ2GM +lease_duration 23h59m59s +lease_renewable false +ca_chain [-----BEGIN CERTIFICATE----- +##... +-----END CERTIFICATE-----] +certificate -----BEGIN CERTIFICATE----- +##... +-----END CERTIFICATE----- +expiration 1599645187 +issuing_ca -----BEGIN CERTIFICATE----- +##... +-----END CERTIFICATE----- +private_key -----BEGIN RSA PRIVATE KEY----- +##... 
+-----END RSA PRIVATE KEY----- +private_key_type rsa +serial_number 3f:ec:bd:ea:01:a6:35:49:a7:6d:17:ba:13:88:c1:b8:35:b4:fc:4c +``` + + + +## Configure Consul + + + + +Configure Consul TLS using the following configuration: + + + +```hcl +tls { + defaults { + verify_incoming = true + verify_outgoing = true + verify_server_hostname = true + ca_file = "/opt/consul/agent-certs/ca.crt" + cert_file = "/opt/consul/agent-certs/agent.crt" + key_file = "/opt/consul/agent-certs/agent.key" + } +} + +auto_encrypt { + allow_tls = true +} +``` + +```json +{ + "tls": { + "defaults": { + "verify_incoming": true, + "verify_outgoing": true, + "verify_server_hostname": true, + "ca_file": "/opt/consul/agent-certs/ca.crt", + "cert_file": "/opt/consul/agent-certs/agent.crt", + "key_file": "/opt/consul/agent-certs/agent.key" + } + }, + "auto_encrypt": { + "allow_tls": true + } +} +``` + + + + +To configure TLS encryption for Consul servers, three files are required: + +- `ca_file` - CA (or intermediate) certificate to verify the identity of the other nodes. +- `cert_file` - Consul agent public certificate +- `key_file` - Consul agent private key + +For the first Consul startup, you will use the certificate generated earlier. + +Use the following commands to extract the two certificates and private key from the `certs.txt` and place them into the right file and location. + +Create the certificates folder. + +```shell-session +$ mkdir -p /opt/consul/agent-certs +``` + +Extract the root CA certificate. + +```shell-session +$ grep -Pzo "(?s)(?<=issuing_ca)[^\-]*.*?END CERTIFICATE[^\n]*\n" certs.txt | sed 's/^\s*-/-/g' > /opt/consul/agent-certs/ca.crt +``` + +Extract the agent certificate. + +```shell-session +$ grep -Pzo "(?s)(?<=certificate)[^\-]*.*?END CERTIFICATE[^\n]*\n" certs.txt | sed 's/^\s*-/-/g' > /opt/consul/agent-certs/agent.crt +``` + +Extract the agent key. + +```shell-session +$ grep -Pzo "(?s)(?<=private_key)[^\-]*.*?END RSA PRIVATE KEY[^\n]*\n" certs.txt | sed 's/^\s*-/-/g' > /opt/consul/agent-certs/agent.key +``` + + + + + + +This section describes the automated [client certificate deployment process](/consul/docs/reference/agent/configuration-file/encryption#auto_encrypt) available in Consul 1.5.2 and newer. + + + +With auto-encryption, you can configure the Consul servers to automatically distribute certificates to the clients. To use this feature, you will need to configure clients to automatically get the certificates from the server. + +Configure Consul client TLS using the following configuration: + + + +```hcl +tls { + defaults { + verify_incoming = true + verify_outgoing = true + verify_server_hostname = true + ca_file = "/opt/consul/agent-certs/ca.crt" + } +} + +auto_encrypt { + tls = true +} +``` + +```json +{ + "tls": { + "defaults": { + "verify_incoming": true, + "verify_outgoing": true, + "verify_server_hostname": true, + "ca_file": "/opt/consul/agent-certs/ca.crt", + } + }, + "auto_encrypt": { + "tls": true + } +} +``` + + + + +To configure TLS encryption for Consul clients only one file is required: + +- `ca_file` - CA (or intermediate) certificate to verify the identity of the other nodes. + +Use the following commands to extract the certificate from the `certs.txt` and place them into the right file and location. + +Create the certificates folder. + +```shell-session +$ mkdir -p /opt/consul/agent-certs +``` + +Extract the root CA certificate. 
+ +```shell-session +$ grep -Pzo "(?s)(?<=certificate)[^\-]*.*?END CERTIFICATE[^\n]*\n" certs.txt | sed 's/^\s*-/-/g' > /opt/consul/agent-certs/agent.crt +``` + + + + +## Configure Consul Template + +The guide steps used `ttl="24h"` ad a parameter during certificate creation, meaning that this certificates will be valid only for 24 hours before expiring. + +Deciding the right trade-off for certificate lifespan is always a compromise between security and agility. A possible third way that does not require you to lower your security is to use Consul Template to automate certificate renewal for Consul when the TTL is expired. + +### Create template files + +You can instruct Consul Template to generate and retrieve those files from Vault using the following templates: + + + +```go +{{ with secret "pki_int/issue/consul-dc1" "common_name=server.dc1.consul" "ttl=24h" "alt_names=localhost" "ip_sans=127.0.0.1"}} +{{ .Data.certificate }} +{{ end }} +``` + + + +The template uses the `pki_int/issue/consul-dc1` endpoint that Vault exposes to generate new certificates. It also mentions the common name and alternate names for the certificate. + + + +```go +{{ with secret "pki_int/issue/consul-dc1" "common_name=server.dc1.consul" "ttl=24h" "alt_names=localhost" "ip_sans=127.0.0.1"}} +{{ .Data.private_key }} +{{ end }} +``` + + + +The same endpoint also exposes the CA certificate under the `.Data.issuing_ca` parameter. + + + +```go +{{ with secret "pki_int/issue/consul-dc1" "common_name=server.dc1.consul" "ttl=24h"}} +{{ .Data.issuing_ca }} +{{ end }} +``` + + + + +Copy the newly created files into `/opt/consul/templates`. + +```shell-session +$ cp *.tpl /opt/consul/templates/ +``` + +### Create Consul Template configuration + +Create a configuration file, `consul_template.hcl`, that will instruct Consul Template to retrieve the files needed for the Consul agents, client and server, to configure TLS encryption. + + + + + + +```hcl +# This denotes the start of the configuration section for Vault. All values +# contained in this section pertain to Vault. +vault { + # This is the address of the Vault leader. The protocol (http(s)) portion + # of the address is required. + address = "http://localhost:8200" + + # This value can also be specified via the environment variable VAULT_TOKEN. + token = "root" + + unwrap_token = false + + renew_token = false +} + +# This block defines the configuration for a template. Unlike other blocks, +# this block may be specified multiple times to configure multiple templates. +template { + # This is the source file on disk to use as the input template. This is often + # called the "consul-template template". + source = "agent.crt.tpl" + + # This is the destination path on disk where the source template will render. + # If the parent directories do not exist, consul-template will attempt to + # create them, unless create_dest_dirs is false. + destination = "/opt/consul/agent-certs/agent.crt" + + # This is the permission to render the file. If this option is left + # unspecified, consul-template will attempt to match the permissions of the + # file that already exists at the destination path. If no file exists at that + # path, the permissions are 0644. + perms = 0700 + + # This is the optional command to run when the template is rendered. The + # command will only run if the resulting template changes. 
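+  # Running `date && consul reload` through `sh -c` prints a timestamp in the
+  # Consul Template output each time the certificate is re-rendered, which makes
+  # rotations easier to spot, and then reloads the Consul agent configuration.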
+ command = "sh -c 'date && consul reload'" +} + +template { + source = "agent.key.tpl" + destination = "/opt/consul/agent-certs/agent.key" + perms = 0700 + command = "sh -c 'date && consul reload'" +} + +template { + source = "ca.crt.tpl" + destination = "/opt/consul/agent-certs/ca.crt" + command = "sh -c 'date && consul reload'" +} +``` + + + +The configuration file for the server contains the information to retrieve the +CA certificate as well as the certificate/key pair for the server agent. + + + + + + + +```hcl +# This denotes the start of the configuration section for Vault. All values +# contained in this section pertain to Vault. +vault { + # This is the address of the Vault leader. The protocol (http(s)) portion + # of the address is required. + address = "http://localhost:8200" + + # This value can also be specified via the environment variable VAULT_TOKEN. + token = "root" + + unwrap_token = false + + renew_token = false +} + +template { + source = "ca.crt.tpl" + destination = "/opt/consul/agent-certs/ca.crt" + command = "sh -c 'date && consul reload'" +} +``` + + + +The configuration file for the client contains the information to retrieve the CA certificate only, the certificates for client agents are automatically +generated from Consul when using the `auto_encrypt` setting. + + + + + +To allow Consul Template to communicate with Vault, define the following parameters: + +- `address` : the address of your Vault server. If Vault runs on the same node as Consul, you can use `http://localhost:8200`. + +- `token` : a valid Vault ACL token with appropriate permissions. You can use Vault root token for this example. + + + +The use of Vault root token is not recommended for production use; the recommended security approach is to create a new token based on a specific policy with limited privileges. + + + +### Start Consul Template + +Start Consul Template using the `-config` parameter to provide the configuration file. + +```shell-session +$ consul-template -config "consul_template.hcl" + +Configuration reload triggered +``` + +## Verify certificate rotation + +The certificate you created manually for the Consul server had a TTL of 24 hours. + +This means that after the certificate expires Vault will renew it and Consul Template will update the files on your agent it reload Consul configuration automatically to make it pick up the new files. + +You can verify the rotation by checking that Consul Template keeps listing, every 24 hours, a timestamp and the log line: + +```plaintext hideClipboard +Configuration reload triggered +``` + +You can also use `openssl` to verify the certificate content: + +```shell-session +$ openssl x509 -text -noout -in /opt/consul/agent-certs/agent.crt + +Certificate: + Data: + Version: 3 (0x2) + Serial Number: + 1b:2d:d6:5d:63:9b:aa:05:84:7b:be:3b:6f:e1:95:bb:1c:36:8c:a4 + Signature Algorithm: sha256WithRSAEncryption + Issuer: CN=dc1.consul Intermediate Authority + Validity + Not Before: Sep 16 16:03:45 2020 GMT + Not After : Sep 16 16:06:15 2020 GMT + Subject: CN=server.dc1.consul +... +``` + +and verify that the `Not Before` and `Not After` values are being updated to reflect the new certificate. 
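+
+If you only need the validity window, the `-dates` flag prints just the `notBefore` and `notAfter` fields. The output below reuses the dates from the example certificate above:
+
+```shell-session
+$ openssl x509 -noout -dates -in /opt/consul/agent-certs/agent.crt
+
+notBefore=Sep 16 16:03:45 2020 GMT
+notAfter=Sep 16 16:06:15 2020 GMT
+```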
diff --git a/website/content/docs/automate/index.mdx b/website/content/docs/automate/index.mdx
new file mode 100644
index 000000000000..a55ce8aec0b4
--- /dev/null
+++ b/website/content/docs/automate/index.mdx
@@ -0,0 +1,44 @@
+---
+layout: docs
+page_title: Dynamically configure applications
+description: >-
+  This topic provides an overview of deployment strategies that use Consul's key/value (KV) store to dynamically update applications and Consul configurations in response to deployment changes.
+---
+
+# Dynamically configure applications
+
+This topic provides an overview of dynamically configuring applications when Consul detects certain changes in your network. Many of these operations rely on Consul's key/value (KV) store.
+
+For information about automatically deploying infrastructure when Consul detects failed health checks or increased network traffic, refer to [Consul-Terraform-Sync](/consul/docs/automate/infrastructure).
+
+## Introduction
+
+Platform operators managing a Consul deployment at scale require automated processes for generating and distributing updated configurations dynamically as applications change. You can use Consul's KV store to store data for generating Consul agent configurations, and set up the agent to invoke custom scripts in response to changes.
+
+## Key/value store
+
+@include 'text/descriptions/kv/store.mdx'
+
+## Sessions
+
+@include 'text/descriptions/kv/session.mdx'
+
+## Watches
+
+@include 'text/descriptions/kv/watch.mdx'
+
+## Consul template
+
+@include 'text/descriptions/consul-template.mdx'
+
+## Network infrastructure automation
+
+@include 'text/descriptions/network-infrastructure-automation.mdx'
+
+## Guidance
+
+@include 'text/guidance/automate.mdx'
+
+### Constraints, limitations, and troubleshooting
+
+@include 'text/limitations/kv.mdx'
\ No newline at end of file
diff --git a/website/content/docs/automate/infrastructure/configure.mdx b/website/content/docs/automate/infrastructure/configure.mdx
new file mode 100644
index 000000000000..ad95ddbb78ec
--- /dev/null
+++ b/website/content/docs/automate/infrastructure/configure.mdx
@@ -0,0 +1,107 @@
+---
+layout: docs
+page_title: Configure Consul-Terraform-Sync
+description: >-
+  A high-level guide to configuring Consul-Terraform-Sync.
+---
+
+# Configure Consul-Terraform-Sync
+
+This page covers the main components for configuring network infrastructure automation with Consul at a high level. For the full list of configuration options, visit the [Consul-Terraform-Sync (CTS) configuration page](/consul/docs/reference/cts).
+
+## Tasks
+
+A task captures a network automation process by defining which network resources to update on a given condition. Configure CTS with one or more tasks that contain a list of Consul services, a Terraform module, and various Terraform providers.
+
+Within the [`task` block](/consul/docs/nia/configuration#task), the list of services for a task represents the service layer that drives network automation. The `module` is the discovery location of the Terraform module that defines the network automation process for the task. The `condition`, not shown below, defaults to the services condition when unconfigured such that network resources are updated on changes to the list of services over time.
+
+Review the Terraform module to be used for network automation and identify the Terraform providers required by the module.
If the module depends on a set of providers, include the list of provider names in the `providers` field to associate the corresponding provider configuration with the task. These providers will need to be configured later in a separate block. + +```hcl +task { + name = "website-x" + description = "automate services for website-x" + module = "namespace/example/module" + version = "1.0.0" + providers = ["myprovider"] + condition "services" { + names = ["web", "api"] + } +} +``` + +## Terraform Providers + +Configuring Terraform providers within CTS requires 2 config components. The first component is required within the [`driver.terraform` block](/consul/docs/nia/configuration#terraform-driver). All providers configured for CTS must be listed within the `required_providers` stanza to satisfy a [Terraform v0.13+ requirement](/terraform/language/providers/requirements#requiring-providers) for Terraform to discover and install them. The providers listed are later organized by CTS to be included in the appropriate Terraform configuration files for each task. + +```hcl +driver "terraform" { + required_providers { + myprovider = { + source = "namespace/myprovider" + version = "1.3.0" + } + } +} +``` + +The second component for configuring a provider is the [`terraform_provider` block](/consul/docs/nia/configuration#terraform-provider). This block resembles [provider blocks for Terraform configuration](/terraform/language/providers/configuration) and has the same responsibility for understanding API interactions and exposing resources for a specific infrastructure platform. + +Terraform modules configured for task automation may require configuring the referenced providers. For example, configuring the host address and authentication to interface with your network infrastructure. Refer to the Terraform provider documentation hosted on the [Terraform Registry](https://registry.terraform.io/browse/providers) to find available options. The `terraform_provider` block is loaded by CTS during runtime and processed to be included in [autogenerated Terraform configuration files](/consul/docs/nia/network-drivers#provider) used for task automation. Omitting the `terraform_provider` block for a provider will defer to the Terraform behavior assuming an empty default configuration. + +```hcl +terraform_provider "myprovider" { + address = "myprovider.example.com" +} +``` + +## Summary + +Piecing it all together, the configuration file for CTS will have several HCL blocks in addition to other options for configuring the CTS daemon: `task`, `driver.terraform`, and `terraform_provider` blocks. + +An example HCL configuration file is shown below to automate one task to execute a Terraform module on the condition when there are changes to two services. 
+ + + +```hcl +log_level = "info" + +syslog { + enabled = true +} + +consul { + address = "consul.example.com" +} + +task { + name = "website-x" + description = "automate services for website-x" + module = "namespace/example/module" + version = "1.0.0" + providers = ["myprovider"] + condition "services" { + names = ["web", "api"] + } + buffer_period { + min = "10s" + } +} + +driver "terraform" { + log = true + + required_providers { + myprovider = { + source = "namespace/myprovider" + version = "1.3.0" + } + } +} + +terraform_provider "myprovider" { + address = "myprovider.example.com" +} +``` + + \ No newline at end of file diff --git a/website/content/docs/automate/infrastructure/high-availability.mdx b/website/content/docs/automate/infrastructure/high-availability.mdx new file mode 100644 index 000000000000..1fdfcdafe7c9 --- /dev/null +++ b/website/content/docs/automate/infrastructure/high-availability.mdx @@ -0,0 +1,183 @@ +--- +layout: docs +page_title: Run Consul-Terraform-Sync with high availability +description: >- + Improve network automation resiliency by enabling high availability for Consul-Terraform-Sync. HA enables persistent task and event data so that CTS functions as expected during a failover event. +--- + +# Run Consul-Terraform-Sync with high availability + + + An enterprise license is only required for enterprise distributions of Consul-Terraform-Sync (CTS). + + +This topic describes how to run Consul-Terraform-Sync (CTS) configured for high availability. High availability is an enterprise capability that ensures that all changes to Consul that occur during a failover transition are processed and that CTS continues to operate as expected. + +## Introduction + +A network always has exactly one instance of the CTS cluster that is the designated leader. The leader is responsible for monitoring and running tasks. If the leader fails, CTS triggers the following process when it is configured for high availability: + +1. The CTS cluster promotes a new leader from the pool of followers in the network. +1. The new leader begins running all existing tasks in `once-mode` in order to process changes that occurred during the failover transition period. In this mode, CTS runs all existing tasks one time. +1. The new leader logs any errors that occur during `once-mode` operation and the new leader continues to monitor Consul for changes. + +In a standard configuration, CTS exits if errors occur when the CTS instance runs tasks in `once-mode`. In a high availability configuration, CTS logs the errors and continues to operate without interruption. + +The following diagram shows operating state when high availability is enabled. CTS Instance A is the current leader and is responsible for monitoring and running tasks: + +![Consul-Terraform-Sync architecture configured for high availability before a shutdown event](/img/nia/cts-ha-before.svg) + +The following diagram shows the CTS cluster state after the leader stops. CTS Instance B becomes the leader responsible for monitoring and running tasks. + +![Consul-Terraform-Sync architecture configured for high availability before a shutdown event](/img/nia/cts-ha-after.svg) + +### Failover details + +- The time it takes for a new leader to be elected is determined by the `high_availability.cluster.storage.session_ttl` configuration. The minimum failover time is equal to the `session_ttl` value. The maximum failover time is double the `session_ttl` value. +- If failover occurs during task execution, a new leader is elected. 
The new leader will attempt to run all tasks once before continuing to monitor for changes. +- If using the [HCP Terraform driver](/consul/docs/nia/network-drivers/terraform-cloud), the task finishes and CTS starts a new leader that attempts to queue a run for each task in HCP Terraform in once-mode. +- If using [Terraform driver](/consul/docs/automate/infrastructure/network-driver/terraform), the task may complete depending on the cause of the failover. The new leader starts and attempts to run each task in [once-mode](/consul/docs/nia/cli/start#modes). Depending on the module and provider, the task may require manual intervention to fix any inconsistencies between the infrastructure and Terraform state. +- If failover occurs when no task is executing, CTS elects a new leader that attempts to run all tasks in once-mode. + +Note that driver behavior is consistent whether or not CTS is running in high availability mode. + +## Requirements + +Verify that you have met the [basic requirements](/consul/docs/automate/infrastructure/requirements) for running CTS. + +* CTS Enterprise 0.7 or later +* Terraform CLI 0.13 or later +* All instances in a cluster must be in the same datacenter. + +You must configure appropriate ACL permissions for your cluster. Refer to [ACL permissions](#) for details. + +We recommend specifying the [HCP Terraform driver](/consul/docs/nia/network-drivers/terraform-cloud) in your CTS configuration if you want to run in high availability mode. + +## Configuration + +Add the `high_availability` block in your CTS configuration and configure the required settings to enable high availability. Refer to the [Configuration reference](/consul/docs/nia/configuration#high-availability) for details about the configuration fields for the `high_availability` block. + +The following example configures high availability functionality for a cluster named `cts-cluster`: + + + +```hcl +high_availability { + cluster { + name = "cts-cluster" + storage "consul" { + parent_path = "cts" + namespace = "ns" + session_ttl = "30s" + } + } + + instance { + address = "cts-01.example.com" + } +} +``` + + +### ACL permissions + +The `session` and `keys` resources in your Consul environment must have `write` permissions. Refer to the [ACL documentation](/consul/docs/secure/acl) for details on how to define ACL policies. + +If the `high_availability.cluster.storage.namespace` field is configured, then your ACL policy must also enable `write` permissions for the `namespace` resource. + +## Start a new CTS cluster + +We recommend deploying a cluster that includes three CTS instances. This is so that the cluster has one leader and two followers. + +1. Create an HCL configuration file that includes the settings you want to include, including the `high_availability` block. Refer to [Configuration Options for Consul-Terraform-Sync](/consul/docs/reference/cts) for all configuration options. +1. Issue the startup command and pass the configuration file. Refer to the [`start` command reference](/consul/docs/nia/cli/start#modes) for additional information about CTS startup modes. + ```shell-session + $ consul-terraform-sync start -config-file ha-config.hcl + ``` +1. You can call the `/status` API endpoint to verify the status of tasks CTS is configured to monitor. Only the leader of the cluster will return a successful response. Refer to the [`/status` API reference documentation](/consul/docs/reference/cts/api/status) for information about usage and responses. 
+ + ```shell-session + $ curl localhost:/status/tasks + ``` + +Repeat the procedure to start the remaining instances for your cluster. We recommend using near-identical configurations for all instances in your cluster. You may not be able to use exact configurations in all cases, but starting instances with the same configuration improves consistency and reduces confusion if you need to troubleshoot errors. + +## Modify an instance configuration + +You can implement a rolling update to update a non-task configuration for a CTS instance, such as the Consul connection settings. If you need to update a task in the instance configuration, refer to [Modify tasks](#modify-tasks). + +1. Identify the leader CTS instance by either making a call to the [`status/cluster` API endpoint](/consul/docs/nia/api/status#cluster-status) or by checking the logs for the following entry: + ```shell-session + [INFO] ha: acquired leadership lock: id= + ``` +1. Stop one of the follower CTS instances and apply the new configuration. +1. Restart the follower instance. +1. Repeat steps 2 and 3 for other follower instances in your cluster. +1. Stop the leader instance. One of the follower instances becomes the leader. +1. Apply the new configuration to the former leader instance and restart it. + +## Modify tasks + +When high availability is enabled, CTS persists task and event data. Refer to [State storage and persistence](/consul/docs/nia/architecture#state-storage-and-persistence) for additional information. + +You can use the following methods for modifying tasks when high availability is enabled. We recommend choosing a single method to make all task configuration changes because inconsistencies between the state and the configuration can occur when mixing methods. + +### Delete and recreate the task + +We recommend deleting and recreating a task if you need to make a modification. Use the CTS API to identify the CTS leader instance and replace a task. + +1. Identify the leader CTS instance by either making a call to the [`status/cluster` API endpoint](/consul/docs/nia/api/status#cluster-status) or by checking the logs for the following entry: + + ```shell-session + [INFO] ha: acquired leadership lock: id= + ``` +1. Send a `DELETE` call to the [`/task/` endpoint](/consul/docs/nia/api/tasks#delete-task) to delete the task. In the following example, the leader instance is at `localhost:8558`: + + ```shell-session + $ curl --request DELETE localhost:8558/v1/tasks/task_a + ``` + + You can also use the [`task delete` command](/consul/docs/nia/cli/task#task-delete) to complete this step. + +1. Send a `POST` call to the `/task/` endpoint and include the updated task in your payload. + ```shell-session + $curl --header "Content-Type: application/json" \ + --request POST \ + --data @payload.json \ + localhost:8558/v1/tasks + ``` + + You can also use the [`task-create` command](/consul/docs/nia/cli/task#task-create) to complete this step. + +### Discard data with the `-reset-storage` flag + +You can restart the CTS cluster using the [`-reset-storage` flag](/consul/docs/nia/cli/start#options) to discard persisted data if you need to update a task. + +1. Stop a follower instance. +1. Update the instance’s task configuration. +1. Restart the instance and include the `-reset-storage` flag. +1. Stop all other instances so that the updated instance becomes the leader. +1. Start all other instances again. +1. Restart the instance you restarted in step 3 without the `-reset-storage` flag so that it starts up with the current state. 
If you continue to run an instance with the `-reset-storage` flag enabled, then CTS will reset the state data whenever the instance becomes the leader. + +## Troubleshooting + +Use the following troubleshooting procedure if a previous leader had been running a task successfully but the new leader logs an error after a failover: + +1. Check the logs printed to the console for errors. Refer to the [`syslog` configuration](/consul/docs/nia/configuration#syslog) for information on how to locate the logs. In the following example output, CTS reported a `401: Bad credentials` error: + ```shell-session + 2022-08-23T09:25:09.501-0700 [ERROR] tasksmanager: error applying task: task_name=config-task + error= + | error tf-apply for 'config-task': exit status 1 + | + | Error: GET https://api.github.com/user: 401 Bad credentials [] + | + | with module.config-task.provider["registry.terraform.io/integrations/github"], + | on .terraform/modules/config-task/main.tf line 11, in provider "github": + | 11: provider "github" { + | + ``` +1. Check for differences between the previous leader and new leader, such as differences in configurations, environment variables, and local resources. +1. Start a new instance with the fix that resolves the issue. +1. Tear down the leader instance that has the issue and any other instances that may have the same issue. +1. Restart the affected instances to implement the fix. \ No newline at end of file diff --git a/website/content/docs/automate/infrastructure/index.mdx b/website/content/docs/automate/infrastructure/index.mdx new file mode 100644 index 000000000000..ca7537e44cc9 --- /dev/null +++ b/website/content/docs/automate/infrastructure/index.mdx @@ -0,0 +1,76 @@ +--- +layout: docs +page_title: Network infrastructure automation +description: >- + Network infrastructure automation (NIA) is the concept of dynamically updating infrastructure devices triggered by service changes. Consul-Terraform-Sync is a tool that performs NIA and utilizes Consul as a data source that contains networking information about services and monitors those services. Terraform is used as the underlying automation tool and leverages the Terraform provider ecosystem to drive relevant changes to the network infrastructure. +--- + +# Network infrastructure automation + +Network infrastructure automation (NIA) enables dynamic updates to network infrastructure devices triggered by service changes. Consul-Terraform-Sync (CTS) utilizes Consul as a data source that contains networking information about services and monitors those services. Terraform is used as the underlying automation tool and leverages the Terraform provider ecosystem to drive relevant changes to the network infrastructure. + +CTS executes one or more automation tasks with the most recent service variable values from the Consul service catalog. Each task consists of a runbook automation written as a CTS compatible Terraform module using resources and data sources for the underlying network infrastructure. The `consul-terraform-sync` daemon runs on the same node as a Consul agent. + +CTS is available as an open source and enterprise distribution. Follow the [Automate your network configuration with Consul-Terraform-Sync tutorial](/consul/tutorials/network-infrastructure-automation/consul-terraform-sync-intro?utm_source=docs) to get started with CTS OSS or read more about [CTS Enterprise](/consul/docs/enterprise/cts). 
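+
+As a quick illustration, the following sketch shows the shape of a single task definition. The module source and service names are placeholders; refer to the [CTS configuration page](/consul/docs/reference/cts) for the full set of options:
+
+```hcl
+task {
+  name    = "example-task"
+  module  = "namespace/example/module"
+  version = "1.0.0"
+  condition "services" {
+    names = ["web", "api"]
+  }
+}
+```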
+ +## Use Cases + +**Application teams must wait for manual changes in the network to release, scale up/down and re-deploy their applications.** This creates a bottleneck, especially in frequent workflows related to scaling up/down the application, breaking the DevOps goal of self-service enablement. CTS automates this process, thus decreasing the possibility of human error in manually editing configuration files, as well as decreasing the overall time taken to push out configuration changes. + +**Networking and security teams cannot scale processes to the speed and changes needed.** Manual approaches don't scale well, causing backlogs in network and security teams. Even in organizations that have some amount of automation (such as scripting), there is a need for an accurate, real-time source of data to trigger and drive their network automation workflows. CTS runs in near real-time to keep up with the rate of change. + +## Glossary + +- `Condition` - A task-level defined environmental requirement. When a task's condition is met, CTS executes that task to update network infrastructure. Depending on the condition type, the condition definition may also define and enable the module input that the task provides to the configured Terraform Module. + +- `Consul objects` - Consul objects are the response request objects returned from the Consul API that CTS monitor for changes. Examples of Consul objects can be service instance information, Consul key-value pairs, and service registration. The Consul objects are used to inform a task's condition and/or module input. + +- `Consul-Terraform-Sync (CTS)` - [GitHub repo](https://github.com/hashicorp/consul-terraform-sync) and binary/CLI name for the project that is used to perform Network Infrastructure Automation. + +- `Dynamic Tasks` - A dynamic task is a type of task that is dynamically triggered on a change to any relevant Consul catalog values e.g. service instances, Consul KV, catalog-services. See scheduled tasks for a type of non-dynamic task. + + -> **Note:** The terminology "tasks" used throughout the documentation refers to all types of tasks except when specifically stated otherwise. + +- `Network Drivers` - CTS uses [network drivers](/consul/docs/automate/infrastructure/network-driver) to execute and update network infrastructure. Drivers transform Consul service-level information into downstream changes by processing and abstracting API and resource details tied to specific network infrastructure. + +- `Network Infrastructure Automation (NIA)` - Enables dynamic updates to network infrastructure devices triggered when specific conditions, such as service changes and registration, are met. + +- `Scheduled Tasks` - A scheduled task is a type of task that is triggered only on a schedule. It is configured with a [schedule condition](/consul/docs/nia/configuration#schedule-condition). + +- `Services` - A service in CTS represents a service that is registered with Consul for service discovery. Services are grouped by their service names. There may be more than one instance of a particular service, each with its own unique ID. CTS monitors services based on service names and can provide service instance details to a Terraform module for network automation. + +- `Module Input` - A module input defines objects that provide values or metadata to the Terraform module. See [module input](/consul/docs/nia/terraform-modules#module-input) for the supported metadata and values. 
For example, a user can configure a Consul KV module input to provide KV pairs as variables to their respective Terraform Module. + + The module input can be configured in a couple of ways: + - Setting the `condition` block's `use_as_module_input` field to true + - Field was previously named `source_includes_var` (deprecated) + - Configuring `module_input` block(s) + - Block was previously named `source_input` (deprecated) + + ~> "Module input" was renamed from "source input" in CTS 0.5.0 due to updates to the configuration names seen above. + + -> **Note:** The terminology "tasks" used throughout the documentation refers to all types of tasks except when specifically stated otherwise. + +- `Tasks` - A task is the translation of dynamic service information from the Consul Catalog into network infrastructure changes downstream. + +- `HCP Terraform` - Per the [Terraform documentation](/terraform/cloud-docs), "HCP Terraform" describes both HCP Terraform and Terraform Enterprise, which are different distributions of the same application. Documentation will apply to both distributions unless specifically stated otherwise. + +- `Terraform Module` - A [Terraform module](/terraform/language/modules) is a container for multiple Terraform resources that are used together. + +- `Terraform Provider` - A [Terraform provider](/terraform/language/providers) is responsible for understanding API interactions and exposing resources for an infrastructure type. + +## Getting Started With Network Infrastructure Automation + +The [Network Infrastructure Automation (NIA)](/consul/tutorials/network-infrastructure-automation?utm_source=docs) +collection contains examples on how to configure CTS to +perform Network Infrastructure Automation. The collection contains also a +tutorial to secure your CTS configuration for a production +environment and one to help you build you own CTS compatible +module. + +## Community + +- [Contribute](https://github.com/hashicorp/consul-terraform-sync) to the open source project +- [Report](https://github.com/hashicorp/consul-terraform-sync/issues) bugs or request enhancements +- [Discuss](https://discuss.hashicorp.com/tags/c/consul/29/consul-terraform-sync) with the community or ask questions +- [Build integrations](/consul/docs/automate/infrastructure/module) for CTS \ No newline at end of file diff --git a/website/content/docs/automate/infrastructure/install.mdx b/website/content/docs/automate/infrastructure/install.mdx new file mode 100644 index 000000000000..0fbecedf675a --- /dev/null +++ b/website/content/docs/automate/infrastructure/install.mdx @@ -0,0 +1,124 @@ +--- +layout: docs +page_title: Install Consul and Consul-Terraform-Sync +description: >- + Consul-Terraform-Sync is a daemon that runs alongside Consul. Consul-Terraform-Sync is not included with the Consul binary and will need to be installed separately. +--- + +# Install Consul-Terraform-Sync + +Refer to the [introduction](/consul/tutorials/network-infrastructure-automation/consul-terraform-sync-intro?utm_source=docs) tutorial for details about installing, configuring, and running Consul-Terraform-Sync (CTS) on your local machine with the Terraform driver. + +## Install Consul-Terraform-Sync + + + + +To install CTS, find the [appropriate package](https://releases.hashicorp.com/consul-terraform-sync/) for your system and download it as a zip archive. For the CTS Enterprise binary, download a zip archive with the `+ent` metadata. 
[CTS Enterprise requires a Consul Enterprise license](/consul/docs/enterprise/license/cts) to run. + +Unzip the package to extract the binary named `consul-terraform-sync`. Move the `consul-terraform-sync` binary to a location available on your `PATH`. + +Example: + +```shell-session +$ echo $PATH +/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin +$ mv ./consul-terraform-sync /usr/local/bin/consul-terraform-sync +``` + +Once installed, verify the installation works by prompting the `-version` or `-help` option. The version outputted for the CTS Enterprise binary includes the `+ent` metadata. + +```shell-session +$ consul-terraform-sync -version +``` + + + + +Install and run CTS as a [Docker container](https://hub.docker.com/r/hashicorp/consul-terraform-sync). + +For the CTS Enterprise, use the Docker image [`hashicorp/consul-terraform-sync-enterprise`](https://hub.docker.com/r/hashicorp/consul-terraform-sync-enterprise). + +```shell-session +$ docker pull hashicorp/consul-terraform-sync +``` + +Once installed, verify the installation works by prompting the `-version` or `-help` option. The version outputted for the CTS Enterprise image includes the `+ent` metadata. + +```shell-session +$ docker run --rm hashicorp/consul-terraform-sync -version +``` + + + + +The CTS OSS binary is available in the HashiCorp tap, which is a repository of all our Homebrew packages. + +```shell-session +$ brew tap hashicorp/tap +$ brew install hashicorp/tap/consul-terraform-sync +``` + +Run the following command to update to the latest version: + +```shell-session +$ brew upgrade hashicorp/tap/consul-terraform-sync +``` + +Once installed, verify the installation works by prompting the `-version` or `-help` option. + +```shell-session +$ consul-terraform-sync -version +``` + + + + +Clone the repository from GitHub [`hashicorp/consul-terraform-sync`](https://github.com/hashicorp/consul-terraform-sync) to build and install the CTS OSS binary in your path `$GOPATH/bin`. Building from source requires `git` and [Golang](https://go.dev/). + +```shell-session +$ git clone https://github.com/hashicorp/consul-terraform-sync.git +$ cd consul-terraform-sync +$ git checkout tags/ +$ go install +``` + +Once installed, verify the installation works by prompting the `-version` or `-help` option. + +```shell-session +$ consul-terraform-sync -version +``` + + + + +## Connect your Consul Cluster + +CTS connects with your Consul cluster in order to monitor the Consul catalog for service changes. These service changes lead to downstream updates to your network devices. You can configure your Consul cluster in CTS with the [Consul block](/consul/docs/nia/configuration#consul). Below is an example: + +```hcl +consul { + address = "localhost:8500" + token = "my-consul-acl-token" +} +``` + +## Connect your Network Device + +CTS interacts with your network device through a network driver. For the Terraform network driver, CTS uses Terraform providers to make changes to your network infrastructure resources. You can reference existing provider docs on the Terraform Registry to configure each provider or create a new Terraform provider. + +Once you have identified a Terraform provider for all of your network devices, you can configure them in CTS with a [`terraform_provider` block](/consul/docs/nia/configuration#terraform-provider) for each network device. 
Below is an example: + +```hcl +terraform_provider "fake-firewall" { + address = "10.10.10.10" + username = "admin" + password = "password123" +} +``` + +This provider is then used by task(s) to execute a Terraform module that will update the related network device. + +### Multiple Instances per Provider + +You might have multiple instances of the same type of network device; for example, multiple instances of a firewall or load balancer. You can configure each instance with its own provider block and distinguish it by the `alias` meta-argument. See [multiple provider configurations](/consul/docs/nia/configuration#multiple-provider-configurations) for more details and an example of the configuration. \ No newline at end of file diff --git a/website/content/docs/automate/infrastructure/module.mdx b/website/content/docs/automate/infrastructure/module.mdx new file mode 100644 index 000000000000..60340d4ecb02 --- /dev/null +++ b/website/content/docs/automate/infrastructure/module.mdx @@ -0,0 +1,384 @@ +--- +layout: docs +page_title: Compatible Terraform Modules for NIA +description: >- + Consul-Terraform-Sync automates execution Terraform modules for network infrastructure automation. +--- + +# Compatible Terraform Modules for Network Infrastructure Automation + +Consul-Terraform-Sync (CTS) automates execution of Terraform modules through tasks. A task is a construct in CTS that defines the automation of Terraform and the module. + +## Module Specifications + +Compatible modules for CTS follow the [standard module](/terraform/language/modules/develop#module-structure) structure. Modules can use syntax supported by Terraform version 0.13 and newer. + +### Compatibility Requirements + +Below are the two required elements for module compatibility with CTS + +1. **Root module** - Terraform has one requirement for files in the root directory of the repository to function as the primary entrypoint for the module. It should encapsulate the core logic to be used by CTS for task automation. `main.tf` is the recommended filename for the main file where resources are created. +2. [**`services` input variable**](#services-variable) - CTS requires all modules to have the following input variable declared within the root module. The declaration of the `services` variable can be included at the top of the suggested `variables.tf` file where other input variables are commonly declared. This variable functions as the response object from the Consul catalog API and surfaces network information to be consumed by the module. It is structured as a map of objects. + +### Optional Input Variables + +In addition to the required `services` input variable, CTS provides additional, optional input variables to be used within your module. Support for an optional input variable requires two changes: + +1. Updating the Terraform Module to declare the input variable in the suggested `variables.tf` +1. Adding configuration to the CTS task block to define the module input values that should be provided to the input variables + +See below sections for more information on [defining module input](#module-input) and [declaring optional input variables](#how-to-create-a-compatible-terraform-module) in your Terraform module. + +### Module Input ((#source-input)) + +A task monitors [Consul objects](/consul/docs/nia#consul-objects) that are defined by the task's configuration. 
The Consul objects can be used for the module input that satisfies the requirements defined by the task's Terraform module's [input variables](/terraform/language/values/variables). + +A task's module input is slightly different from the task's condition, even though both monitor defined objects. The task's condition monitors defined objects with a configured criteria. When this criteria is satisfied, the task will trigger. + +The module input, however, monitors defined objects with the intent of providing values or metadata about these objects to the Terraform module. The monitored module input and condition objects can be the same object, such as a task configured with a `condition "services"` block and `use_as_module_input` set to `true`. The module input and condition can also be different objects and configured separately, such as a task configured with a `condition "catalog-services` and `module_input "consul-kv"` block. As a result, the monitored module input is decoupled from the provided condition in order to satisfy the Terraform module. + +Each type of object that CTS monitors can only be defined through one configuration within a task definition. For example, if a task monitors services, the task cannot have both `condition "services"` and `module_input "services"` configured. See [Task Module Input configuration](/consul/docs/nia/configuration#task-module-input) for more details. + +There are a few ways that a module input can be defined: + +- [**`services` list**](/consul/docs/nia/configuration#services) (deprecated) - The list of services to use as module input. +- **`condition` block's `use_as_module_input` field** - When set to true, the condition's objects are used as module input. + - Field was previously named `source_includes_var` (deprecated) +- [**`module_input` blocks**](/consul/docs/nia/configuration#module-input) - This block can be configured multiple times to define objects to use as module input. + - Block was previously named `source_input` (deprecated) + +Multiple ways of defining a module input adds configuration flexibility, and allows for optional additional input variables to be supported by CTS alongside the `services` input variable. + +Additional optional input variable types: + +- [**`catalog_services` variable**](#catalog-services-variable) +- [**`consul_kv` variable**](#consul-kv-variable) + +#### Services Module Input ((#services-source-input)) + +Tasks configured with a services module input monitor for changes to services. Monitoring is either performed on a configured list of services or on any services matching a provided regex. 
+ +Sample rendered services input: + + + +```hcl +services = { + "web.test-server.dc1" = { + id = "web" + name = "web" + kind = "" + address = "127.0.0.1" + port = 80 + meta = {} + tags = ["example"] + namespace = "" + status = "passing" + node = "pm8902" + node_id = "307625d3-a1cf-9e85-ff81-12017ca4d848" + node_address = "127.0.0.1" + node_datacenter = "dc1" + node_tagged_addresses = { + lan = "127.0.0.1" + lan_ipv4 = "127.0.0.1" + wan = "127.0.0.1" + wan_ipv4 = "127.0.0.1" + } + node_meta = { + consul-network-segment = "" + } + }, +} +``` + + + +In order to configure a task with the services module input, the list of services that will be used for the input must be configured in one of the following ways: + +- the task's [`services`](/consul/docs/nia/configuration#services) (deprecated) +- a [`condition "services"` block](/consul/docs/nia/configuration#services-condition) configured with `use_as_module_input` field set to true + - Field was previously named `source_includes_var` (deprecated) +- a [`module_input "services"` block](/consul/docs/nia/configuration#services-module-input) + - Block was previously named `source_input "services"` (deprecated) + +The services module input operates by monitoring the [Health List Nodes For Service API](/consul/api-docs/health#list-nodes-for-service) and provides the latest service information to the input variables. A complete list of service information that would be provided to the module is expanded below: + +| Attribute | Description | +| ----------------------- | ------------------------------------------------------------------------------------------------- | +| `id` | A unique Consul ID for this service. The service id is unique per Consul agent. | +| `name` | The logical name of the service. Many service instances may share the same logical service name. | +| `address` | IP address of the service host -- if empty, node address should be used. | +| `port` | Port number of the service | +| `meta` | List of user-defined metadata key/value pairs for the service | +| `tags` | List of tags for the service | +| `namespace` | Consul Enterprise namespace of the service instance | +| `status` | Representative status for the service instance based on an aggregate of the list of health checks | +| `node` | Name of the Consul node on which the service is registered | +| `node_id` | ID of the node on which the service is registered. | +| `node_address` | The IP address of the Consul node on which the service is registered. | +| `node_datacenter` | Data center of the Consul node on which the service is registered. | +| `node_tagged_addresses` | List of explicit LAN and WAN IP addresses for the agent | +| `node_meta` | List of user-defined metadata key/value pairs for the node | + +Below is an example configuration for a task that will execute on a schedule and provide information about the services matching the `regexp` parameter to the task's module. + +```hcl +task { + name = "services_condition_task" + description = "execute on changes to services whose name starts with web" + providers = ["my-provider"] + module = "path/to/services-condition-module" + condition "schedule" { + cron = "* * * * Mon" + } + module_input "services" { + regexp = "^web.*" + } +} +``` + +#### Consul KV Module Input ((#consul-kv-source-input)) + +Tasks configured with a Consul KV module input monitor Consul KV for changes to KV pairs that satisfy the provided configuration. 
The Consul KV module input operates by monitoring the [Consul KV API](/consul/api-docs/kv#read-key) and provides these key values to the task's module. + +Sample rendered consul KV input: + + + +```hcl +consul_kv = { + "my-key" = "some value" +} +``` + + + +To configure a task with the Consul KV module input, the KVs which will be used for the input must be configured in one of the following ways: + +- a [`condition "consul-kv"` block](/consul/docs/nia/configuration#consul-kv-condition) configured with the `use_as_module_input` field set to true. + - Field was previously named `source_includes_var` (deprecated) +- a [`module_input "consul-kv"` block](/consul/docs/nia/configuration#consul-kv-module-input). + - Block was previously named `source_input "consul-kv"` (deprecated) + +Below is a similar example to the one provided in the [Consul KV Condition](/consul/docs/nia/tasks#consul-kv-condition) section. However, the difference in this example is that instead of triggering based on a change to Consul KV, this task will instead execute on a schedule. Once execution is triggered, Consul KV information is then provided to the task's module. + +```hcl +task { + name = "consul_kv_schedule_task" + description = "executes on Monday monitoring Consul KV" + module = "path/to/consul-kv-module" + + condition "schedule" { + cron = "* * * * Mon" + } + + module_input "consul-kv" { + path = "my-key" + recurse = true + datacenter = "dc1" + namespace = "default" + } +} +``` + +#### Catalog Services Module Input ((#catalog-services-source-input)) + +Tasks configured with a Catalog Services module input monitors for service and tag information provided by the [Catalog List Services API](/consul/api-docs/catalog#list-services). The module input is a map of service names to a list of tags. + +Sample rendered catalog-services input: + + + +```hcl +catalog_services = { + "api" = ["prod", "staging"] + "consul" = [] + "web" = ["blue", "green"] +} +``` + + + +To configure a task with the Catalog Services module input, the catalog services which will be used for the input must be configured in one of the following ways: + +- a [`condition "catalog-services"` block](/consul/docs/nia/configuration#consul-kv-condition) configured with `use_as_module_input` field. + - Field was previously named `source_includes_var` (deprecated) + +-> **Note:** Currently there is no support for a `module_input "catalog-services"` block. + +Example of a catalog-services condition which supports module input through `use_as_module_input`: + +```hcl +task { + name = "catalog_services_condition_task" + description = "execute on registration/deregistration of services" + providers = ["my-provider"] + module = "path/to/catalog-services-module" + condition "catalog-services" { + datacenter = "dc1" + namespace = "default" + regexp = "web.*" + use_as_module_input = true + node_meta { + key = "value" + } + } +} +``` + +## How to Create a Compatible Terraform Module + +You can read more on how to [create a module](/terraform/language/modules/develop) or work through a [tutorial to build a module](/terraform/tutorials/modules/module-create?utm_source=docs). CTS is designed to integrate with any module that satisfies the specifications in the following section. + +The repository [hashicorp/consul-terraform-sync-template-module](https://github.com/hashicorp/consul-terraform-sync-template-module) can be cloned and used as a starting point for structuring a compatible Terraform module. 
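For example, you can clone the template repository to begin working on your own module; the destination directory name below is only an example:

```shell-session
$ git clone https://github.com/hashicorp/consul-terraform-sync-template-module.git my-cts-module
$ cd my-cts-module
```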
The template repository has the files described in the next steps prepared. + +First, create a directory to organize Terraform configuration files that make up the module. You can start off with creating two files `main.tf` and `variables.tf` and expand from there based on your module and network infrastructure automation needs. + +The `main.tf` is the entry point of the module and this is where you can begin authoring your module. It can contain multiple Terraform resources related to an automation task that uses Consul service discovery information, particularly the required [`services` input variable](#services-variable). The code example below shows a resource using the `services` variable. When this example is used in automation with CTS, the content of the local file would dynamically update as Consul service discovery information changes. + + + +```hcl +# Create a file with service names and their node addresses +resource "local_file" "consul_services" { + content = join("\n", [ + for _, service in var.services : "${service.name} ${service.id} ${service.node_address}" + ]) + filename = "consul_services.txt" +} +``` + + + +Something important to consider before authoring your module is deciding the [condition under which it will execute](/consul/docs/nia/tasks#task-execution). This will allow you to potentially use other types of CTS provided input variables in your module. It will also help inform your documentation and how users should configure their task for your module. + +### Services Variable + +To satisfy the specification requirements for a compatible module, copy the `services` variable declaration to the `variables.tf` file. Your module can optionally have other [variable declarations](#module-input-variables) and [CTS provided input variables](/consul/docs/nia/terraform-modules#optional-input-variables) in addition to `var.services`. + + + +```hcl +variable "services" { + description = "Consul services monitored by Consul-Terraform-Sync" + type = map( + object({ + id = string + name = string + kind = string + address = string + port = number + meta = map(string) + tags = list(string) + namespace = string + status = string + + node = string + node_id = string + node_address = string + node_datacenter = string + node_tagged_addresses = map(string) + node_meta = map(string) + + cts_user_defined_meta = map(string) + }) + ) +} +``` + + + +Keys of the `services` map are unique identifiers of the service across Consul agents and data centers. Keys follow the format `service-id.node.datacenter` (or `service-id.node.namespace.datacenter` for Consul Enterprise). A complete list of attributes available for the `services` variable is included in the [documentation for CTS tasks](/consul/docs/nia/tasks#services-condition). + +Terraform variables when passed as module arguments can be [lossy for object types](/terraform/language/expressions/type-constraints#conversion-of-complex-types). This allows CTS to declare the full variable with every object attribute in the generated root module, and pass the variable to a child module that contains a subset of these attributes for its variable declaration. Modules compatible with CTS may simplify the `var.services` declaration within the module by omitting unused attributes. For example, the following services variable has 4 attributes with the rest omitted. 
+ + + +```hcl +variable "services" { + description = "Consul services monitored by Consul-Terraform-Sync" + type = map( + object({ + id = string + name = string + node_address = string + port = number + status = string + }) + ) +} +``` + + + +### Catalog Services Variable + +If you are creating a module for a [catalog-services condition](/consul/docs/nia/tasks#catalog-services-condition), then you have the option to add the `catalog_services` variable, which contains service registration and tag information. If your module would benefit from consuming this information, you can copy the `catalog_services` variable declaration to your `variables.tf` file in addition to the other variables. + + + +```hcl +variable "catalog_services" { + description = "Consul catalog service names and tags monitored by Consul-Terraform-Sync" + type = map(list(string)) +} +``` + + + +The keys of the `catalog_services` map are the names of the services that are registered with Consul at the given datacenter. The value for each service name is a list of all known tags for that service. + +We recommend that if you make a module with with a catalog-services condition, that you document this in the README. This way, users that want to configure a task with your module will know to configure a catalog-services [condition](/consul/docs/nia/configuration#condition) block. + +Similarly, if you use the `catalog_services` variable in your module, we recommend that you also document this usage in the README. Users of your module will then know to set the catalog-services condition [`use_as_module_input`](/consul/docs/nia/configuration#catalog-services-condition) configuration to be true. When this field is set to true, CTS will declare the `catalog_services` variable in the generated root module, and pass the variable to a child module. Therefore, if this field is configured inconsistently, CTS will error and exit. + +### Consul KV Variable + +If you are creating a module for a [consul-kv condition](/consul/docs/nia/tasks#consul-kv-condition), then you have the option to add the `consul_kv` variable, which contains a map of the keys and values for the Consul KV pairs. If your module would benefit from consuming this information, you can copy the `consul_kv` variable declaration to your `variables.tf` file in addition to the other variables. + + + +```hcl +variable "consul_kv" { + description = "Keys and values of the Consul KV pairs monitored by Consul-Terraform-Sync" + type = map(string) +} +``` + + + +If your module contains the `consul_kv` variable, we recommend documenting the usage in the README file so that users know to set the [`use_as_module_input`](/consul/docs/nia/configuration#consul-kv-condition) configuration to `true` in the `consul-kv` condition. Setting the field to `true` instructs CTS to declare the `consul_kv` variable in the generated root module and pass the variable to a child module. Therefore, if this field is configured inconsistently, CTS will error and exit. + +### Module Input Variables + +Network infrastructure differs vastly across teams and organizations, and the automation needs of practitioners are unique based on their existing setup. [Input variables](/terraform/language/values/variables) can be used to serve as customization parameters to the module for practitioners. + +1. Identify areas in the module where practitioners could tailor the automation to fit their infrastructure. +2. 
Declare input variables and insert the use of variables throughout module resources to expose these options to practitioners. +3. Include descriptions to capture what the variables are and how they are used, and specify [custom validation rules for variables](/terraform/language/values/variables#custom-validation-rules) to provide context to users the expected format and conditions for the variables. +4. Set reasonable default values for variables that are optional, or omit default values for variables that are required module arguments. +5. Set the [sensitive argument](/terraform/language/values/variables) for variables that contain secret or sensitive values. When set, Terraform will redact the value from output when Terraform commands are run. + +Terraform is an explicit configuration language and requires variables to be declared, typed, and passed explicitly through as module arguments. CTS abstracts this by creating intermediate variables at the root level from the module input. These values are configured by practitioners within the [`task` block](/consul/docs/nia/configuration#variable_files). Value assignments are parsed to interpolate the corresponding variable declaration and are written to the appropriate Terraform files. A few assumptions are made for the intermediate variables: the variables users provide CTS are declared and supported by the module, matching name and type. + +### Module Guidelines + +This section covers guidelines for authoring compatible CTS modules. + +#### Scope + +We recommend scoping the module to a few related resources for a provider. Small modules are easier and more flexible for end users to adopt for CTS. It allows them to iteratively combine different modules and use them as building blocks to meet their unique network infrastructure needs. + +#### Complexity + +Consider authoring modules with low complexity to reduce the run time for Terraform execution. Complex modules that have a large number of dependencies may result in longer runs, which adds delay to the near real time network updates. + +#### Providers + +The Terraform module must declare which providers it requires within the [`terraform.required_providers` block](/terraform/language/providers/requirements#requiring-providers). We suggest to also include a version constraint for the provider to specify which versions the module is compatible with. + +Aside from the `required_providers` block, provider configurations should not be included within the sharable module for network integrations. End users will configure the providers through CTS, and CTS will then translate provider configuration to the generated root module appropriately. + +#### Documentation + +Modules for CTS are Terraform modules and can effectively run independently from the `consul-terraform-sync` daemon and Consul environment. They should be written and designed with Terraform best practices and should be clear to a Terraform user what the module does and how to use it. Module documentation should be named `README` or `README.md`. The description should capture what the module should be used for and the implications of running it in automation with CTS. 
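As a closing illustration of the provider and input-variable guidance above, a compatible module might include declarations along these lines. The provider name, version constraint, and `port` variable are illustrative placeholders rather than requirements:

```hcl
terraform {
  required_providers {
    # Declare the provider the module depends on; end users configure
    # the provider itself through CTS `terraform_provider` blocks.
    fakewebservices = {
      source  = "hashicorp/fakewebservices"
      version = ">= 0.1.0"
    }
  }
}

variable "port" {
  description = "Port that the generated network policy should allow"
  type        = number
  default     = 8080

  validation {
    condition     = var.port > 0 && var.port < 65536
    error_message = "The port must be between 1 and 65535."
  }
}
```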
\ No newline at end of file diff --git a/website/content/docs/nia/network-drivers/hcp-terraform.mdx b/website/content/docs/automate/infrastructure/network-driver/hcp-terraform.mdx similarity index 99% rename from website/content/docs/nia/network-drivers/hcp-terraform.mdx rename to website/content/docs/automate/infrastructure/network-driver/hcp-terraform.mdx index 4deb27ead9cf..05b501aeb919 100644 --- a/website/content/docs/nia/network-drivers/hcp-terraform.mdx +++ b/website/content/docs/automate/infrastructure/network-driver/hcp-terraform.mdx @@ -88,7 +88,7 @@ sync-tasks/ - `provider` blocks - The provider blocks generated in the root module resemble the `terraform_provider` blocks from the configuration for CTS. They have identical arguments present and are set from the intermediate variable created per provider. - `module` block - The module block is where the task's module is called as a [child module](/terraform/language/modules#calling-a-child-module). The child module contains the core logic for automation. Required and optional input variables are passed as arguments to the module. - `variables.tf` - This file contains three types of variable declarations: - - `services` input variable (required) determines module compatibility with Consul-Terraform Sync (read more on [compatible Terraform modules](/consul/docs/nia/terraform-modules) for more details). + - `services` input variable (required) determines module compatibility with Consul-Terraform Sync (read more on [compatible Terraform modules](/consul/docs/automate/infrastructure/module) for more details). - Any additional [optional input variables](/consul/docs/nia/terraform-modules#optional-input-variables) provided by CTS that the module may use. - Various intermediate variables used to configure providers. Intermediate provider variables are interpolated from the provider blocks and arguments configured in the CTS configuration. - `variables.module.tf` - This file is created if there are [variables configured for the task](/consul/docs/nia/configuration#variable_files) and contains the interpolated variable declarations that match the variables from configuration. These are then used to proxy the configured variables to the module through explicit assignment in the module block. diff --git a/website/content/docs/automate/infrastructure/network-driver/index.mdx b/website/content/docs/automate/infrastructure/network-driver/index.mdx new file mode 100644 index 000000000000..5a84ac4c33ff --- /dev/null +++ b/website/content/docs/automate/infrastructure/network-driver/index.mdx @@ -0,0 +1,33 @@ +--- +layout: docs +page_title: Network Drivers +description: >- + Consul-Terraform-Sync Network Drivers with Terraform and HCP Terraform +--- + +# Network Drivers + +Consul-Terraform-Sync (CTS) uses network drivers to execute and update network infrastructure. Drivers transform Consul service-level information into downstream changes by processing and abstracting API and resource details tied to specific network infrastructure. + +CTS is a HashiCorp solution to Network Infrastructure Automation. It bridges Consul's networking features and Terraform infrastructure management capabilities. The solution seamlessly embeds Terraform as network drivers to manage automation of Terraform modules. This expands the Consul ecosystem and taps into the rich features and community of Terraform and Terraform providers. + +The following table highlights some of the additional features Terraform and HCP Terraform offer when used as a network driver for CTS. 
Visit the [Terraform product page](https://www.hashicorp.com/products/terraform) or [contact our sales team](https://www.hashicorp.com/contact-sales) for a comprehensive list of features.

| Network Driver | Description | Features |
| -------------- | ----------- | -------- |
| [Terraform driver](/consul/docs/automate/infrastructure/network-driver/terraform) | CTS automates a local installation of the [Terraform CLI](https://www.terraform.io/) | - Local Terraform execution<br/>- Local workspace directories<br/>- [Backend options](/consul/docs/nia/configuration#backend) available for state storage |
| [HCP Terraform driver](/consul/docs/nia/network-drivers/terraform-cloud) | CTS Enterprise automates remote workspaces on [HCP Terraform](/terraform/cloud-docs) | - [Remote Terraform execution](/terraform/cloud-docs/run/remote-operations)<br/>- Concurrent runs<br/>- [Secured variables](/terraform/cloud-docs/workspaces/variables)<br/>- [State versions](/terraform/cloud-docs/workspaces/state)<br/>- [Sentinel](/terraform/cloud-docs/policy-enforcement) to enforce governance policies as code<br/>- Audit [logs](/terraform/enterprise/admin/infrastructure/logging) and [trails](/terraform/cloud-docs/api-docs/audit-trails)<br/>- Run [history](/terraform/cloud-docs/run/manage), [triggers](/terraform/cloud-docs/workspaces/settings/run-triggers), and [notifications](/terraform/cloud-docs/workspaces/settings/notifications)<br/>- [Terraform Cloud Agents](/terraform/cloud-docs/agents) |

+ +## Understanding Terraform Automation + +CTS automates Terraform execution using a templated configuration to carry out infrastructure changes. The auto-generated configuration leverages input variables sourced from Consul and builds on top of reusable Terraform modules published and maintained by HashiCorp partners and the community. CTS can also run your custom built modules that suit your team's specific network automation needs. + +The network driver for CTS determines how the Terraform automation operates. Visit the driver pages to read more about the [Terraform driver](/consul/docs/automate/infrastructure/network-driver/terraform) and the [HCP Terraform driver](/consul/docs/nia/network-drivers/terraform-cloud). + +### Upgrading Terraform + +Upgrading the Terraform version used by CTS may introduce breaking changes that can impact the Terraform modules. Refer to the Terraform [upgrade guides](/terraform/language/upgrade-guides) for details before upgrading. + +The following versions were identified as containing changes that may impact Terraform modules. + +- [Terraform v0.15](/terraform/language/v1.1.x/upgrade-guides/0-15) diff --git a/website/content/docs/automate/infrastructure/network-driver/terraform.mdx b/website/content/docs/automate/infrastructure/network-driver/terraform.mdx new file mode 100644 index 000000000000..57447cf998b1 --- /dev/null +++ b/website/content/docs/automate/infrastructure/network-driver/terraform.mdx @@ -0,0 +1,61 @@ +--- +layout: docs +page_title: Terraform Driver +description: >- + Consul-Terraform-Sync Network Drivers with Terraform +--- + +# Terraform Driver + +Consul-Terraform-Sync (CTS) extends the Consul ecosystem to include Terraform as an officially supported tooling project. With the Terraform driver, CTS installs the [Terraform CLI](/terraform/downloads) locally and runs Terraform commands based on monitored Consul changes. This page details how the Terraform driver operates using local workspaces and templated files. + +## Terraform CLI Automation + +On startup, CTS: +1. Downloads and installs Terraform +2. Prepares local workspace directories. Terraform configuration and execution for each task is organized as separate [Terraform workspaces](/terraform/language/state/workspaces). The state files for tasks are independent of each other. +3. Generates Terraform configuration files that make up the root module for each task. + +Once all workspaces are set up, CTS monitors the Consul catalog for service changes. When relevant changes are detected, the Terraform driver dynamically updates input variables for that task using a template to render them to a file named [`terraform.tfvars`](/consul/docs/nia/network-drivers#terraform-tfvars). This file is passed as a parameter to the Terraform CLI when executing `terraform plan` and `terraform apply` to update your network infrastructure with the latest Consul service details. + +### Local Workspaces + +Within the CTS configuration for a task, practitioners can select the desired module to run for the task as well as set the condition to execute the task. Each task executed by the Terraform driver corresponds to an automated root module that calls the selected module in an isolated Terraform environment. CTS will manage concurrent execution of these tasks. + +Autogenerated root modules for tasks are maintained in local subdirectories of the CTS working directory. Each subdirectory represents the local workspace for a task.
By default, the working directory `sync-tasks` is created in the current directory. To configure where Terraform configuration files are stored, set [`working_dir`](/consul/docs/nia/configuration#working_dir) to the desired path or configure the [`task.working_dir`](/consul/docs/nia/configuration#working_dir-1) individually. + +~> **Note:** Although Terraform state files for task workspaces are independent, this does not guarantee the infrastructure changes from concurrent task executions are independent. Ensure that modules across all tasks are not modifying the same resource objects or have overlapping changes that may result in race conditions during automation. + +### Root Module + +The root module proxies Consul information, configuration, and other variables to the Terraform module for the task. The content of the files that make up the root module are sourced from CTS configuration, information for task's module to use as the automation playbook, and information from Consul such as service information. + +A working directory with one task named "cts-example" would have the folder structure below when running with the Terraform driver. + +```shell-session +$ tree sync-tasks/ + +sync-tasks/ +└── cts-example/ + ├── main.tf + ├── variables.tf + ├── terraform.tfvars + └── terraform.tfvars.tmpl +``` + +The following files of the root module are generated for each task. An [example of a root module created by CTS](https://github.com/hashicorp/consul-terraform-sync/tree/master/examples) can be found in the project repository. + +- `main.tf` - The main file contains the terraform block, provider blocks, and a module block calling the module configured for the task. + - `terraform` block - The corresponding provider source and versions for the task from the configuration files are placed into this block for the root module. The Terraform backend from the configuration is also templated here. + - `provider` blocks - The provider blocks generated in the root module resemble the `terraform_provider` blocks from the configuration for CTS. They have identical arguments present and are set from the intermediate variable created per provider. + - `module` block - The module block is where the task's module is called as a [child module](/terraform/language/modules). The child module contains the core logic for automation. Required and optional input variables are passed as arguments to the module. +- `variables.tf` - This file contains three types of variable declarations. + - `services` input variable (required) determines module compatibility with Consul-Terraform Sync (read more on [compatible Terraform modules](/consul/docs/automate/infrastructure/module) for more details). + - Any additional [optional input variables](/consul/docs/nia/terraform-modules#optional-input-variables) provided by CTS that the module may use. + - Various intermediate variables used to configure providers. Intermediate provider variables are interpolated from the provider blocks and arguments configured in the CTS configuration. +- `variables.module.tf` - This file is created if there are [variables configured for the task](/consul/docs/nia/configuration#variable_files) and contains the interpolated variable declarations that match the variables from configuration. These are then used to proxy the configured variables to the module through explicit assignment in the module block. 
+- `providers.tfvars` - This file is created if there are [providers configured for the task](/consul/docs/nia/configuration#providers) and defined [`terraform_provider` blocks](/consul/docs/nia/configuration#terraform-provider). This file may contain sensitive information. To omit sensitive information from this file, you can [securely configure Terraform providers for CTS](/consul/docs/nia/configuration#securely-configure-terraform-providers) using environment variables or templating. +- `terraform.tfvars` - The variable definitions file is where the services input variable and any optional CTS input variables are assigned values from Consul. It is periodically updated, typically when the task condition is met, to reflect the current state of Consul. +- `terraform.tfvars.tmpl` - The template file is used by CTS to template information from Consul by using the HashiCorp configuration and templating library ([hashicorp/hcat](https://github.com/hashicorp/hcat)). + +~> **Note:** Generated template and Terraform configuration files are crucial for the automation of tasks. Any manual changes to the files may not be preserved and could be overwritten by a subsequent update. Unexpected manual changes to the format of the files may cause automation to error. diff --git a/website/content/docs/automate/infrastructure/requirements.mdx b/website/content/docs/automate/infrastructure/requirements.mdx new file mode 100644 index 000000000000..f10d7ad35b13 --- /dev/null +++ b/website/content/docs/automate/infrastructure/requirements.mdx @@ -0,0 +1,136 @@ +--- +layout: docs +page_title: Requirements +description: >- + Consul-Terraform-Sync requires a Terraform Provider, a Terraform Module, and a running Consul cluster outside of the `consul-terraform-sync` daemon. +--- + +# Requirements + +The following components are required to run Consul-Terraform-Sync (CTS): + +- A Terraform provider +- A Terraform module +- A Consul cluster running outside of the `consul-terraform-sync` daemon + +You can add support for your network infrastructure through Terraform providers so that you can apply Terraform modules to implement network integrations. + +The following guidance is for running CTS using the Terraform driver. The HCP Terraform driver has [additional prerequisites](/consul/docs/nia/network-drivers/terraform-cloud#setting-up-terraform-cloud-driver). + +## Run a Consul cluster + +Below are several steps towards a minimum Consul setup required for running CTS. + +### Install Consul + +CTS is a daemon that runs alongside Consul, similar to other Consul ecosystem tools like Consul Template. CTS is not included with the Consul binary and needs to be installed separately. + +To install a local Consul agent, refer to the [Getting Started: Install Consul Tutorial](/consul/tutorials/get-started-vms?utm_source=docs). + +For information on compatible Consul versions, refer to the [Consul compatibility matrix](/consul/docs/nia/compatibility#consul). + +### Run an agent + +The Consul agent must be running in order to dynamically update network devices. Refer to the [Consul agent documentation](/consul/docs/fundamentals/agent) for information about configuring and starting a Consul agent. + +When running a Consul agent with CTS in production, consider that CTS uses [blocking queries](/consul/api-docs/features/blocking) to monitor task dependencies, such as changes to registered services. This results in multiple long-running TCP connections between CTS and the agent to poll changes for each dependency. 
Consul may quickly reach the agent connection limits if CTS is monitoring a high number of services. + +To avoid reaching the limit prematurely, we recommend using HTTP/2 (requires HTTPS) to communicate between CTS and the Consul agent. When using HTTP/2, CTS establishes a single connection and reuses it for all communication. Refer to the [Consul Configuration section](/consul/docs/nia/configuration#consul) for details. + +Alternatively, you can configure the [`limits.http_max_conns_per_client`](/consul/docs/reference/agent/configuration-file/general#http_max_conns_per_client) option to set a maximum number of connections to meet your needs. + +### Register services + +CTS monitors the Consul catalog for service changes that lead to downstream changes to your network devices. Without services, your CTS daemon is operational but idle. You can register services with your Consul agent by either loading a service definition or by sending an HTTP API request. + +The following HTTP API request example registers a service named `web` with your Consul agent: + +```shell-session +$ echo '{ + "ID": "web", + "Name": "web", + "Address": "10.10.10.10", + "Port": 8000 +}' > payload.json + +$ curl --request PUT --data @payload.json http://localhost:8500/v1/agent/service/register +``` + +The example represents a non-existent web service running at `10.10.10.10:8000` that is now available for CTS to consume. + +You can configure CTS to monitor the web service, execute a task, and update network device(s) by configuring `web` in the [`condition "services"`](/consul/docs/nia/configuration#services-condition) task block. If the web service has any non-default values, it can also be configured in `condition "services"`. + +For more details on registering a service using the HTTP API endpoint, refer to the [register service API docs](/consul/api-docs/agent/service#register-service). + +For hands-on instructions on registering a service by loading a service definition, refer to the [Getting Started: Register a Service with Consul Service Discovery Tutorial](/consul/tutorials/get-started-vms/virtual-machine-gs-service-discovery). + +### Run a cluster + +For production environments, we recommend operating a Consul cluster rather than a single agent. Refer to [Getting Started: Deploy a Consul Datacenter Tutorial](/consul/tutorials/get-started-vms/virtual-machine-gs-deploy) for instructions on starting multiple Consul agents and joining them into a cluster. + +## Network infrastructure using a Terraform provider + +CTS integrations for the Terraform driver use Terraform providers as plugins to interface with specific network infrastructure platforms. The Terraform driver for CTS inherits the expansive collection of Terraform providers to integrate with. You can also specify a provider `source` in the [`required_providers` configuration](/terraform/language/providers/requirements#requiring-providers) to use providers written by the community (requires Terraform 0.13 or later). + +### Finding Terraform providers + +To find providers for the infrastructure platforms you use, browse the providers section of the [Terraform Registry](https://registry.terraform.io/browse/providers). + +### How to create a provider + +If a Terraform provider does not exist for your environment, you can create a new Terraform provider and publish it to the registry so that you can use it within a network integration task or create a compatible Terraform module. 
Refer to the following Terraform tutorial and documentation for additional information on creating and publishing providers: + +- [Setup and Implement Read](/terraform/tutorials/providers/provider-setup) +- [Publishing Providers](/terraform/registry/providers/publishing). + +## Network integration using a Terraform module + +The Terraform module for a task in CTS is the core component of the integration. It declares which resources to use and how your infrastructure is dynamically updated. The module, along with how it is configured within a task, determines the conditions under which your infrastructure is updated. + +Working with a Terraform provider, you can write an integration task for CTS by [creating a Terraform module](/consul/docs/automate/infrastructure/module) that is compatible with the Terraform driver. You can also use a [module built by partners](#partner-terraform-modules). + +Refer to [Configuration](/consul/docs/reference/cts) for information about configuring CTS and how to use Terraform providers and modules for tasks. + +### Partner Terraform Modules + +The modules listed below are available to use and are compatible with CTS. + +#### A10 Networks + +- Dynamic Load Balancing with Group Member Updates: [Terraform Registry](https://registry.terraform.io/modules/a10networks/service-group-sync-nia/thunder/latest) / [GitHub](https://github.com/a10networks/terraform-thunder-service-group-sync-nia) + +#### Avi Networks + +- Scale Up and Scale Down Pool and Pool Members (Servers): [GitHub](https://github.com/vmware/terraform-provider-avi/tree/20.1.5/modules/nia/pool) + +#### AWS Application Load Balancer (ALB) + +- Create Listener Rule and Target Group for an AWS ALB, Forward Traffic to Consul Ingress Gateway: [Terraform Registry](https://registry.terraform.io/modules/aws-quickstart/cts-alb_listener-nia/hashicorp/latest) / [GitHub](https://github.com/aws-quickstart/terraform-hashicorp-cts-alb_listener-nia) + +#### Checkpoint + +- Dynamic Firewalling with Address Object Updates: [Terraform Registry](https://registry.terraform.io/modules/CheckPointSW/dynobj-nia/checkpoint/latest) / [GitHub](https://github.com/CheckPointSW/terraform-checkpoint-dynobj-nia) + +#### Cisco ACI + +- Policy Based Redirection: [Terraform Registry](https://registry.terraform.io/modules/CiscoDevNet/autoscaling-nia/aci/latest) / [GitHub](https://github.com/CiscoDevNet/terraform-aci-autoscaling-nia) +- Create and Update Cisco ACI Endpoint Security Groups: [Terraform Registry](https://registry.terraform.io/modules/CiscoDevNet/esg-nia/aci/latest) / [GitHub](https://github.com/CiscoDevNet/terraform-aci-esg-nia) + +#### Citrix ADC + +- Create, Update, and Delete Service Groups in Citrix ADC: [Terraform Registry](https://registry.terraform.io/modules/citrix/servicegroup-consul-sync-nia/citrixadc/latest) / [GitHub](https://github.com/citrix/terraform-citrixadc-servicegroup-consul-sync-nia) + +#### F5 + +- Dynamic Load Balancing with Pool Member Updates: [Terraform Registry](https://registry.terraform.io/modules/f5devcentral/app-consul-sync-nia/bigip/latest) / [GitHub](https://github.com/f5devcentral/terraform-bigip-app-consul-sync-nia) + +#### NS1 + +- Create, Delete, and Update DNS Records and Zones: [Terraform Registry](https://registry.terraform.io/modules/ns1-terraform/record-sync-nia/ns1/latest) / [GitHub](https://github.com/ns1-terraform/terraform-ns1-record-sync-nia) + +#### Palo Alto Networks + +- Dynamic Address Group (DAG) Tags: [Terraform 
Registry](https://registry.terraform.io/modules/PaloAltoNetworks/dag-nia/panos/latest) / [GitHub](https://github.com/PaloAltoNetworks/terraform-panos-dag-nia) +- Address Group and Dynamic Address Group (DAG) Tags: [Terraform + Registry](https://registry.terraform.io/modules/PaloAltoNetworks/ag-dag-nia/panos/latest) + / [GitHub](https://github.com/PaloAltoNetworks/terraform-panos-ag-dag-nia) diff --git a/website/content/docs/automate/infrastructure/run.mdx b/website/content/docs/automate/infrastructure/run.mdx new file mode 100644 index 000000000000..531197d819aa --- /dev/null +++ b/website/content/docs/automate/infrastructure/run.mdx @@ -0,0 +1,38 @@ +--- +layout: docs +page_title: Run Consul-Terraform-Sync +description: >- + Consul-Terraform-Sync requires a Terraform Provider, a Terraform Module and a running Consul Cluster outside of the `consul-terraform-sync` daemon. +--- + +# Run Consul-Terraform-Sync + +This topic describes the basic procedure for running Consul-Terraform-Sync (CTS). Verify that you have met the [basic requirements](/consul/docs/automate/infrastructure/requirements) before attempting to run CTS. + +1. Move the `consul-terraform-sync` binary to a location available on your `PATH`. + + ```shell-session + $ mv ~/Downloads/consul-terraform-sync /usr/local/bin/consul-terraform-sync + ``` + +2. Create the config.hcl file and configure the options for your use case. Refer to the [configuration reference](/consul/docs/reference/cts) for details about all CTS configurations. + +3. Run Consul-Terraform-Sync (CTS). + + ```shell-session + $ consul-terraform-sync start -config-file + ``` + +4. Check status of tasks. Replace port number if configured in Step 2. Refer to [Consul-Terraform-Sync API](/consul/docs/reference/cts/api) for additional information. + + ```shell-session + $ curl localhost:8558/status/tasks + ``` + +## Other Run modes + +You can [configure CTS for high availability](/consul/docs/automate/infrastructure/high-availability), which is an enterprise capability that ensures that all changes to Consul that occur during a failover transition are processed and that CTS continues to operate as expected. + +You can start CTS in [inspect mode](/consul/docs/nia/cli/start#modes) to review and test your configuration before applying any changes. Inspect mode allows you to verify that the changes work as expected before running them in an unsupervised daemon mode. + +For hands-on instructions on using inspect mode, refer to the [Consul-Terraform-Sync Run Modes and Status Inspection](/consul/tutorials/network-infrastructure-automation/consul-terraform-sync-run-and-inspect?utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS) tutorial. \ No newline at end of file diff --git a/website/content/docs/automate/infrastructure/task.mdx b/website/content/docs/automate/infrastructure/task.mdx new file mode 100644 index 000000000000..ec58559b2231 --- /dev/null +++ b/website/content/docs/automate/infrastructure/task.mdx @@ -0,0 +1,272 @@ +--- +layout: docs +page_title: Tasks +description: >- + Consul-Terraform-Sync Tasks +--- + +# Tasks + +A task is the translation of dynamic service information from the Consul Catalog into network infrastructure changes downstream. Consul-Terraform-Sync (CTS) carries out automation for executing tasks using network drivers. For a Terraform driver, the scope of a task is a Terraform module. 
+ +Below is an example task configuration: + +```hcl +task { + name = "frontend-firewall-policies" + description = "Add firewall policy rules for frontend services" + providers = ["fake-firewall", "null"] + module = "example/firewall-policy/module" + version = "1.0.0" + condition "services" { + names = ["web", "image"] + } +} +``` + +In the example task above, the "fake-firewall" and "null" providers, listed in the `providers` field, are used. These providers themselves should be configured in their own separate [`terraform_provider` blocks](/consul/docs/nia/configuration#terraform-provider). These providers are used in the Terraform module "example/firewall-policy/module", configured in the `module` field, to create, update, and destroy resources. This module may do something like use the providers to create and destroy firewall policy objects based on IP addresses. The IP addresses come from the "web" and "image" service instances configured in the `condition "services"` block. This service-level information is retrieved by CTS which watches Consul catalog for changes. + +See [task configuration](/consul/docs/nia/configuration#task) for more details on how to configure a task. + +A task can be either enabled or disabled using the [task cli](/consul/docs/reference/cli/cts/task). When enabled, tasks are executed and automated as described in sections below. However, disabled tasks do not execute when changes are detected from Consul catalog. Since disabled tasks do not execute, they also do not store [events](/consul/docs/nia/tasks#event) until re-enabled. + +## Task Execution + +An enabled task can be configured to monitor and execute on different types of conditions, such as changes to services ([services condition](/consul/docs/nia/tasks#services-condition)) or service registration and deregistration ([catalog-services condition](/consul/docs/nia/tasks#catalog-services-condition)). + +A task can also monitor, but not execute on, other variables that provide additional information to the task's module. For example, a task with a catalog-services condition may execute on registration changes and additionally monitor service instances for IP information. + +All configured monitored information, regardless if it's used for execution or not, can be passed to the task's module as module input. Below are details on the types of execution conditions that CTS supports and their module inputs. + +### Services Condition + +Tasks with the services condition monitor and execute on either changes to a list of configured services or changes to any services that match a given regex. + +There are two ways to configure a task with a services condition. Only one of the two options below can be configured for a single task: +1. Configure a task's [`services`](/consul/docs/nia/configuration#services) field (deprecated) to specify the list of services to trigger the task +1. Configure a task's `condition` block with the [services condition](/consul/docs/nia/configuration#services-condition) type to specify services to trigger the task. + +The services condition operates by monitoring the [Health List Nodes For Service API](/consul/api-docs/health#list-nodes-for-service) and executing the task on any change of information for services configured. These changes include one or more changes to service values, like IP address, added or removed service instance, or tags. 
A complete list of values that would cause a task to run are expanded below: + +| Attribute | Description | +| ----------------------- | ------------------------------------------------------------------------------------------------- | +| `id` | A unique Consul ID for this service. This is unique per Consul agent. | +| `name` | The logical name of the service. Many service instances may share the same logical service name. | +| `address` | IP address of the service host -- if empty, node address should be used. | +| `port` | Port number of the service | +| `meta` | List of user-defined metadata key/value pairs for the service | +| `tags` | List of tags for the service | +| `namespace` | Consul Enterprise namespace of the service instance | +| `status` | Representative status for the service instance based on an aggregate of the list of health checks | +| `node` | Name of the Consul node on which the service is registered | +| `node_id` | ID of the node on which the service is registered. | +| `node_address` | The IP address of the Consul node on which the service is registered. | +| `node_datacenter` | Data center of the Consul node on which the service is registered. | +| `node_tagged_addresses` | List of explicit LAN and WAN IP addresses for the agent | +| `node_meta` | List of user-defined metadata key/value pairs for the node | + +Below is an example configuration for a task that will execute when a service with a name that matches the regular expression has a change. + +```hcl +task { + name = "services_condition_task" + description = "execute on changes to services whose name starts with web" + providers = ["my-provider"] + module = "path/to/services-condition-module" + + condition "services" { + regexp = "^web.*" + use_as_module_input = false + } +} +``` + +The services condition can provide input for the [`services` input variable](/consul/docs/nia/terraform-modules#services-variable) that is required for each CTS module. This can be provided depending on how the services condition is configured: +- task's `services` field (deprecated): services object is automatically passed as module input +- task's `condition "services"` block: users can configure the `use_as_module_input` field to optionally use the condition's services object as module input + - Field was previously named `source_includes_var` (deprecated) + +### Catalog-Services Condition + +Tasks with a catalog-services condition monitor and execute on service registration changes for services that satisfy the condition configuration. 'Service registration changes' specifically refers to service registration and deregistration where service registration occurs on the first service instance registration, and service deregistration occurs on the last service instance registration. Tasks with a catalog-services condition may, depending on the module, additionally monitor but not execute on service instance information. + +The catalog-services condition operates by monitoring the [Catalog List Services API](/consul/api-docs/catalog#list-services) and executing the task when services are added or removed in the list of registered services. Note, the task does not execute on changes to the tags of the list of services. This is similar to how changes to service instance information, mentioned above, also does not execute a task. + +Below is an example configuration for a task that will execute when a service with a name that matches the "web.*" regular expression in datacenter "dc1" has a registration change. 
It additionally monitors but does not execute on service instance changes to "web-api" in datacenter "dc2". + +```hcl +task { + name = "catalog_service_condition_task" + module = "path/to/catalog-services-module" + providers = ["my-provider"] + + condition "catalog-services" { + datacenter = "dc1" + regexp = "web.*" + use_as_module_input = false + } + + module_input "services" { + names = ["web-api"] + datacenter = "dc2" + } +} +``` + +Using the condition block's `use_as_module_input` field, users can configure CTS to use the condition's object as module input for the [`catalog_services` input variable](/consul/docs/nia/terraform-modules#catalog-services-variable). Users can refer to the configured module's documentation on how to set `use_as_module_input`. + +See the [Catalog-Services Condition](/consul/docs/nia/configuration#catalog-services-condition) configuration section for further details and additional configuration options. + +### Consul KV Condition + +Tasks with a consul-kv condition monitor and execute on Consul KV changes for KV pairs that satisfy the condition configuration. The consul-kv condition operates by monitoring the [Consul KV API](/consul/api-docs/kv#read-key) and executing the task when a configured KV entry is created, deleted, or updated. + +Based on the `recurse` option, the condition either monitors a single Consul KV pair for a given path or monitors all pairs that are prefixed by that path. In the example below, because `recurse` is set to true, the `path` option is treated as a prefix. Changes to an entry with the key `my-key` and an entry with the key `my-key/another-key` would both trigger the task. If `recurse` were set to false, then only changes to `my-key` would trigger the task. + +```hcl +task { + name = "consul_kv_condition_task" + description = "execute on changes to Consul KV entry" + module = "path/to/consul-kv-module" + providers = ["my-provider"] + + condition "consul-kv" { + path = "my-key" + recurse = true + datacenter = "dc1" + namespace = "default" + use_as_module_input = true + } +} +``` + +Using the condition block's `use_as_module_input` field, users can configure CTS to use the condition's object as module input for the [`consul_kv` input variable](/consul/docs/nia/terraform-modules#consul-kv-variable). Users can refer to the configured module's documentation on how to set `use_as_module_input`. + +See the [Consul-KV Condition](/consul/docs/nia/configuration#consul-kv-condition) configuration section for more details and additional configuration options. + +### Schedule Condition + +All scheduled tasks must be configured with a schedule condition. The schedule condition sets the cadence to trigger a task with a [`cron`](/consul/docs/nia/configuration#cron) configuration. The schedule condition block does not support parameters to configure module input. As a result, inputs must be configured separately. You can configure [`module_input` blocks](/consul/docs/nia/configuration#module_input) to define the module inputs. + +Below is an example configuration for a task that will execute every Monday, which is set by the schedule condition's [`cron`](/consul/docs/nia/configuration#cron) configuration. The module input is defined by the `module_input` block. When the task is triggered on Monday, it will retrieve the latest information on "web" and "db" from Consul and provide this to the module's input variables. 
+ +```hcl +task { + name = "scheduled_task" + description = "execute every Monday using service information from web and db" + module = "path/to/module" + + condition "schedule" { + cron = "* * * * Mon" + } + module_input "services" { + names = ["web", "db"] + } +} +``` + +Below are the available options for module input types and how to configure them: + +- [Services module input](/consul/docs/nia/terraform-modules/#services-module-input): + - [`task.services`](/consul/docs/nia/configuration#services) field (deprecated) + - [`module_input "services"`](/consul/docs/nia/configuration#services-configure-input) block + - Block was previously named `source_input "services"` (deprecated) +- [Consul KV module input](/consul/docs/nia/terraform-modules/#consul-kv-module-input): + - [`module_input "consul-kv"`](/consul/docs/nia/configuration#consul-kv-module-input) + - Block was previously named `source_input "consul-kv"` (deprecated) + +#### Running Behavior + +Scheduled tasks generally run on schedule, but they can be triggered on demand when running CTS in the following ways: + +- [Long-running mode](/consul/docs/nia/cli#long-running-mode): At the beginning of the long-running mode, CTS first passes through a once-mode phase in which all tasks are executed once. Scheduled tasks will trigger once during this once-mode phase. This behavior also applies to tasks that are not scheduled. After once-mode has completed, scheduled tasks subsequently trigger on schedule. + +- [Inspect mode](/consul/docs/nia/cli#inspect-mode): When running in inspect mode, the terminal will output a plan of proposed updates that would be made if the tasks were to trigger at that moment and then exit. No changes are applied in this mode. The outputted plan for a scheduled task is also the proposed updates that would be made if the task was triggered at that moment, even if off-schedule. + +- [Once mode](/consul/docs/nia/cli#once-mode): During the once mode, all tasks are only triggered one time. Scheduled tasks will execute during once mode even if not on the schedule. + +- [Enable CLI](/consul/docs/nia/cli/task#task-enable): When a task is enabled through the CLI, any type of task, including scheduled tasks, will be triggered at that time. + +#### Buffer Period + +Because scheduled tasks trigger on a configured cadence, buffer periods are disabled for scheduled tasks. Any configured `buffer_period` at the global level or task level will only apply to dynamic tasks and not scheduled ones. + +#### Events + +[Events](#event) are stored each time a task executes. For scheduled tasks, an event will be stored each time the task triggers on schedule regardless of if there was a change in Consul catalog. + +## Task Automation + +CTS will attempt to execute each enabled task once upon startup to synchronize infrastructure with the current state of Consul. The daemon will stop and exit if any error occurs while preparing the automation environment or executing a task for the first time. This helps ensure tasks have proper configuration and are executable before the daemon transitions into running tasks in full automation as service changes are discovered over time. As a result, it is not recommended to configure a task as disabled from the start. After all tasks have successfully executed once, task failures during automation will be logged and retried or attempted again after a subsequent change. + +Tasks are executed near-real time when service changes are detected. 
For services or environments that are prone to flapping, it may be useful to configure a [buffer period](/consul/docs/nia/configuration#buffer_period-1) for a task to accumulate changes before it is executed. The buffer period would reduce the number of consecutive network calls to infrastructure by batching changes for a task over a short duration of time. + +## Status Information + +Status-related information is collected and offered via [status API](/consul/docs/nia/api#status) to provide visibility into what and how the tasks are running. Information is offered in three levels (lowest to highest): + +- Event data +- Task status +- Overall status + +These three levels form a hierarchy where each level of data informs the one higher. The lowest-level, event data, is collected each time a task runs to update network infrastructure. This event data is then aggregated to inform individual task statuses. The count distribution of all the task statuses informs the overall status's task summary. + +### Event + +When a task is triggered, CTS takes a series of steps in order to update the network infrastructure. These steps consist of fetching the latest data from Consul for the task's module inputs and then updating the network infrastructure accordingly. An event captures information across this process. It stores information to help you understand whether the update to the network infrastructure was successful, along with any errors that may have occurred. + +A dynamic task will store an event when it is triggered by a change in Consul. A scheduled task will store an event when it is triggered on schedule, regardless of whether there is a change in Consul. A disabled task does not update network infrastructure, so it will not store events until it is re-enabled. + +Sample event: + +```json +{ + "id": "ef202675-502f-431f-b133-ed64d15b0e0e", + "success": false, + "start_time": "2020-11-24T12:05:18.651231-05:00", + "end_time": "2020-11-24T12:05:20.900115-05:00", + "task_name": "task_b", + "error": { + "message": "example error: error while doing terraform-apply" + }, + ... +} +``` + +For complete information on the event structure, see [events in our API documentation](/consul/docs/nia/api#event). Event information can be retrieved by using the [`include=events` parameter](/consul/docs/nia/api#include) with the [task status API](/consul/docs/nia/api#task-status). + +### Task Status + +Each time a task runs to update network infrastructure, event data is stored for that run. The five most recent events are stored for each task, and these stored events are used to determine task status. For example, if the most recent stored event is not successful but the others are, then the task's health status is "errored". + +Sample task status: + +```json +{ + "task_name": "task_b", + "status": "errored", + "providers": ["null"], + "services": ["web"], + "events_url": "/v1/status/tasks/task_b?include=events" +} +``` + +Task status information can be retrieved with [task status API](/consul/docs/nia/api#task-status). The API documentation includes details on what health statuses are available and how the status is calculated based on the success or failure information in events. + +### Overall Status + +Overall status returns a summary of the health statuses across all tasks. The summary is the count of tasks in each health status category.
+ +Sample overall status: + +```json +{ + "task_summary": { + "successful": 28, + "errored": 5, + "critical": 1 + } +} +``` + +Overall status information can be retrieved with [overall status API](/consul/docs/nia/api#overall-status). The API documentation includes details on what health statuses are available and how it is calculated based on task statuses' health status information. \ No newline at end of file diff --git a/website/content/docs/automate/kv/index.mdx b/website/content/docs/automate/kv/index.mdx new file mode 100644 index 000000000000..6568e414ce79 --- /dev/null +++ b/website/content/docs/automate/kv/index.mdx @@ -0,0 +1,111 @@ +--- +layout: docs +page_title: Consul key/value (KV) store overview +description: >- + Consul includes a KV store for indexed objects, configuration parameters, and metadata that you can use to dynamically configure apps. Learn about accessing and using the KV store to extend Consul's functionality through watches, sessions, and Consul Template. +--- + +# Consul key/value (KV) store overview + +Consul KV is a core feature of Consul and is installed with the Consul agent. +Once installed with the agent, Consul KV has reasonable defaults. Consul KV +lets you store indexed objects, though its main uses are storing +configuration parameters and metadata. It is a basic KV store and is not +intended to be a full featured datastore (such as DynamoDB). + +The Consul KV datastore is located on the servers, but any client or server +agent may access it. The natively integrated [RPC +functionality](/consul/docs/architecture/control-plane) lets clients +forward requests to servers, including key/value reads and writes. Part of +Consul's core design allows automatic data replication across all the +servers. Having a quorum of servers decreases the risk of data loss if an +outage occurs. + +If you have not used Consul KV, complete this [Getting Started +tutorial](/consul/tutorials/interactive/get-started-key-value-store?utm_source=docs) +on HashiCorp. + + +The Consul KV API, CLI, and UI are now considered feature complete. No new feature development is planned for future releases. + + +## Accessing the KV store + +Access the KV store with the [consul kv CLI subcommands](/consul/commands/kv), +[HTTP API](/consul/api-docs/kv) and Consul UI. To restrict access, enable and +configure [ACLs](/consul/docs/secure/acl). Once the ACL system has been +bootstrapped, users and services need a valid token with KV +[privileges](/consul/docs/secure/acl/rule#key-value-rules) to access the data +store. This includes read-only access. We recommend creating a token with +limited privileges. For example, you could create a token with write privileges +on one key for developers to update the value related to their application. + +The datastore itself is located on the Consul servers in the [data directory](/consul/docs/architecture/backend). To ensure data is not lost in the event of a complete outage, use the [`consul snapshot`](/consul/commands/snapshot/restore) feature to backup the data. + +## Using Consul KV + +Objects are opaque to Consul, meaning there are no restrictions on the type of +object stored in a key/value entry. The main restriction on an object is a +maximum size of 512 KB. Due to the maximum object size and main use cases, you should +not need extra storage. The general [sizing +recommendations](/consul/docs/reference/agent#kv_max_value_size) are usually +sufficient. + +Keys, like objects, are not restricted by type and can include any character. 
+However, we recommend using URL-safe chars such as `[a-zA-Z0-9-._~]` with the +exception of `/`, which can be used to help organize data. Note, `/` is +treated like any other character and is not fixed to the file system. This means +that including `/` in a key does not fix it to a directory structure. This model is +similar to Amazon S3 buckets. However, `/` is still useful for organizing data +and when recursively searching within the data store. We also recommend that you +avoid the use of `*`, `?`, `'`, and `%` because they can cause issues when using +the API and in shell scripts. + +## Using Sentinel to apply policies for Consul KV + +This feature requires HashiCorp Cloud Platform (HCP) or self-managed Consul Enterprise. + +You can also use Sentinel as a Policy-as-code framework for defining advanced key-value storage access control policies. Sentinel policies extend the ACL system in Consul beyond static "read", "write", and "deny" policies to support full conditional logic and integration with external systems. Reference the [Sentinel documentation](https://docs.hashicorp.com/sentinel/concepts) for high-level Sentinel concepts. + +To get started with Sentinel in Consul, refer to the [Sentinel documentation](https://docs.hashicorp.com/sentinel/consul) or [Consul documentation](/consul/docs/secure/acl/sentinel). + +## Extending Consul KV + +### Consul Template + +If you plan to use Consul KV as part of your configuration management process, review the [Consul Template](/consul/tutorials/developer-configuration/consul-template?utm_source=docs) tutorial on how to update configuration based on value updates in the KV. Consul Template is based on Go Templates and allows for a series of scripted actions to be initiated on value changes to a Consul key. + +### Watches + +Extend Consul KV with the use of Consul watches, +which are a way to monitor data for updates. When an update is detected, an +external handler is invoked. To use watches with the KV store, use the +`key` watch type. + +Refer to the [Consul watches documentation](/consul/docs/automate/watch) for more information. + +### Consul Sessions + +Use Consul sessions to build distributed locks with Consul KV. Sessions act as a +binding layer between nodes, health checks, and key/value data. The KV API +supports an `acquire` and `release` operation. The `acquire` operation acts like +a Check-And-Set operation. On success, Consul updates the key, increments the +`LockIndex` and then updates the session value to reflect the session holding +the lock. Refer to the [K/V integration +documentation](/consul/docs/automate/session#k-v-integration) for more +information. + +Refer to the following tutorials to learn how to use Consul sessions: + +- [Application leader + election](/consul/tutorials/developer-configuration/application-leader-elections) + to learn the process for building client-side leader elections for service + instances using Consul's session mechanism and the Consul key/value store. +- [Sessions and distributed locks + overview](/consul/tutorials/developer-configuration/distributed-semaphore) to build distributed semaphores + +### Vault + +If you plan to use Consul KV as a backend for Vault, refer to the [Configure +Vault cluster with Integrated Storage +tutorial](/vault/tutorials/day-one-consul/ha-with-consul?utm_source=docs). 
diff --git a/website/content/docs/dynamic-app-config/kv/store.mdx b/website/content/docs/automate/kv/store.mdx similarity index 100% rename from website/content/docs/dynamic-app-config/kv/store.mdx rename to website/content/docs/automate/kv/store.mdx diff --git a/website/content/docs/automate/native/go.mdx b/website/content/docs/automate/native/go.mdx new file mode 100644 index 000000000000..a4d6ff1275f0 --- /dev/null +++ b/website/content/docs/automate/native/go.mdx @@ -0,0 +1,253 @@ +--- +layout: docs +page_title: Service Mesh Native App Integration - Go Apps +description: >- + Consul's service mesh supports native integrations of Go applications into the service mesh through a Go library. Example code demonstrates how to connect your Go applications to the service mesh. +--- + +# Service Mesh Native Integration for Go Applications + + + +The Connect Native golang SDK is currently deprecated and will be removed in a future Consul release. +The SDK will be removed when the long term replacement to native application integration (such as a proxyless gRPC service mesh integration) is delivered. Refer to [GH-10339](https://github.com/hashicorp/consul/issues/10339) for additional information and to track progress toward one potential solution that is tracked as replacement functionality. + + + +We provide a library that makes it drop-in simple to integrate Consul service mesh +with most [Go](https://golang.org/) applications. This page shows examples +of integrating this library for accepting or establishing mesh-based +connections. For most Go applications, Consul service mesh can be natively integrated +in just a single line of code excluding imports and struct initialization. + +In addition to this, please read and understand the +[overview of service mesh native integrations](/consul/docs/automate/native). +In particular, after natively integrating applications with Consul service mesh, +they must declare that they accept mesh-based connections via their service definitions. + +The noun _connect_ is used throughout this documentation and the Go API +to refer to the connect subsystem that provides Consul's service mesh capabilities. + +## Accepting Connections + +-> **Note:** When calling `ConnectAuthorize()` on incoming connections this library +will return _deny_ if `Permissions` are defined on the matching intention. +The method is currently only suited for networking layer 4 (e.g. TCP) integration. + +Any server that supports TLS (HTTP, gRPC, net/rpc, etc.) can begin +accepting mesh-based connections in just a few lines of code. For most +existing applications, converting the server to accept mesh-based +connections will require only a one-line change excluding imports and +structure initialization. + +The +Go library exposes a `*tls.Config` that _automatically_ communicates with +Consul to load certificates and authorize inbound connections during the +TLS handshake. This also automatically starts goroutines to update any +changing certs. + +Example, followed by more details: + +```go +import( + "net/http" + + "github.com/hashicorp/consul/api" + "github.com/hashicorp/consul/connect" +) + +func main() { + // Create a Consul API client + client, _ := api.NewClient(api.DefaultConfig()) + + // Create an instance representing this service. "my-service" is the + // name of _this_ service. The service should be cleaned up via Close. 
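+  // NOTE: Error values are discarded throughout this example for brevity;
+  // production code should check the errors returned by NewClient and NewService.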
+ svc, _ := connect.NewService("my-service", client) + defer svc.Close() + + // Creating an HTTP server that serves via service mesh + server := &http.Server{ + Addr: ":8080", + TLSConfig: svc.ServerTLSConfig(), + // ... other standard fields + } + + // Serve! + server.ListenAndServeTLS("", "") +} +``` + +The first step is to create a Consul API client. This is almost always the +default configuration with an ACL token set, since you want to communicate +to the local agent. The default configuration will also read the ACL token +from environment variables if set. The Go library will use this client to request certificates, +authorize connections, and more. + +Next, `connect.NewService` is called to create a service structure representing +the _currently running service_. This structure maintains all the state +for accepting and establishing connections. An application should generally +create one service and reuse that one service for all servers and clients. + +Finally, a standard `*http.Server` is created. The magic line is the `TLSConfig` +value. This is set to a TLS configuration returned by the service structure. +This TLS configuration is configured to automatically load certificates +in the background, cache them, and authorize inbound connections. The service +structure automatically handles maintaining blocking queries to update certificates +in the background if they change. + +Since the service returns a standard `*tls.Config`, _any_ server that supports +TLS can be configured. This includes gRPC, net/rpc, basic TCP, and more. +Another example is shown below with just a plain TLS listener: + +```go +import( + "crypto/tls" + + "github.com/hashicorp/consul/api" + "github.com/hashicorp/consul/connect" +) + +func main() { + // Create a Consul API client + client, _ := api.NewClient(api.DefaultConfig()) + + // Create an instance representing this service. "my-service" is the + // name of _this_ service. The service should be cleaned up via Close. + svc, _ := connect.NewService("my-service", client) + defer svc.Close() + + // Creating an HTTP server that serves via service mesh + listener, _ := tls.Listen("tcp", ":8080", svc.ServerTLSConfig()) + defer listener.Close() + + // Accept + go acceptLoop(listener) +} +``` + +## HTTP Clients + +For Go applications that need to connect to HTTP-based upstream dependencies, +the Go library can construct an `*http.Client` that automatically establishes +mesh-based connections as long as Consul-based service discovery is used. + +Example, followed by more details: + +```go +import( + "github.com/hashicorp/consul/api" + "github.com/hashicorp/consul/connect" +) + +func main() { + // Create a Consul API client + client, _ := api.NewClient(api.DefaultConfig()) + + // Create an instance representing this service. "my-service" is the + // name of _this_ service. The service should be cleaned up via Close. + svc, _ := connect.NewService("my-service", client) + defer svc.Close() + + // Get an HTTP client + httpClient := svc.HTTPClient() + + // Perform a request, then use the standard response + resp, _ := httpClient.Get("https://userinfo.service.consul/user/mitchellh") +} +``` + +The first step is to create a Consul API client and service. These are the +same steps as accepting connections and are explained in detail in the +section above. If your application is both a client and server, both the +API client and service structure can be shared and reused. + +Next, we call `svc.HTTPClient()` to return a specially configured +`*http.Client`. 
This client will automatically establish mesh-based +connections using Consul service discovery. + +Finally, we perform an HTTP `GET` request to a hypothetical userinfo service. +The HTTP client configuration automatically sends the correct client +certificate, verifies the server certificate, and manages background +goroutines for updating our certificates as necessary. + +If the application already uses a manually constructed `*http.Client`, +the `svc.HTTPDialTLS` function can be used to configure the +`http.Transport.DialTLS` field to achieve equivalent behavior. + +### Hostname Requirements + +The hostname used in the request URL is used to identify the logical service +discovery mechanism for the target. **It's not actually resolved via DNS** but +used as a logical identifier for a Consul service discovery mechanism. It has +the following specific limitations: + +- The scheme must be `https://`. +- It must be a Consul DNS name in one of the following forms: + - `<name>.service[.<datacenter>].consul` to discover a healthy service + instance for a given service. + - `<name>.query[.<datacenter>].consul` to discover an instance via + [Prepared Query](/consul/api-docs/query). +- The top-level domain _must_ be `.consul` even if your cluster has a custom + `domain` configured for its DNS interface. This might be relaxed in the + future. +- Tag filters for services are not currently supported (e.g. + `tag1.web.service.consul`); however, the same behavior can be achieved using a + prepared query. +- External DNS names, raw IP addresses, and so on will cause an error and should + be fetched using a separate `HTTPClient`. + +## Raw TLS Connection + +For a raw `net.Conn` TLS connection, the `svc.Dial` function can be used. +This will establish a connection to the desired service via the service mesh and +return the `net.Conn`. This connection can then be used as desired. + +Example: + +```go +import ( + "context" + + "github.com/hashicorp/consul/api" + "github.com/hashicorp/consul/connect" +) + +func main() { + // Create a Consul API client + client, _ := api.NewClient(api.DefaultConfig()) + + // Create an instance representing this service. "my-service" is the + // name of _this_ service. The service should be cleaned up via Close. + svc, _ := connect.NewService("my-service", client) + defer svc.Close() + + // Connect to the "userinfo" Consul service. + conn, _ := svc.Dial(context.Background(), &connect.ConsulResolver{ + Client: client, + Name: "userinfo", + }) +} +``` + +This uses a familiar `Dial`-like function to establish raw `net.Conn` values. +The second parameter to dial is an implementation of the `connect.Resolver` +interface. The example above uses the `*connect.ConsulResolver` implementation +to perform Consul-based service discovery. This also automatically determines +the correct certificate metadata we expect the remote service to serve. + +## Static Addresses, Custom Resolvers + +In the raw TLS connection example, you see the use of a `connect.Resolver` +implementation. This interface can be implemented to perform address +resolution. This must return the address and also the URI SAN expected +in the TLS certificate served by the remote service. + +The Go library provides two built-in resolvers: + +- `*connect.StaticResolver` can be used for static addresses where no + service discovery is required. The expected cert URI SAN must be + manually specified; a sketch follows this list. + +- `*connect.ConsulResolver` resolves services and prepared queries + via the Consul API. This also automatically determines the expected + cert URI SAN.
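As a rough illustration of the first option, the following sketch dials a fixed address with `*connect.StaticResolver`. The address, trust domain, and service identity are placeholders, and the `SpiffeIDService` type from the `agent/connect` package is assumed here as the certificate URI implementation; check the package reference for the exact types and field names before relying on this.

```go
import (
	"context"

	agentconnect "github.com/hashicorp/consul/agent/connect"
	"github.com/hashicorp/consul/api"
	"github.com/hashicorp/consul/connect"
)

func main() {
	// Create a Consul API client and service instance as in the earlier
	// examples. Error handling is omitted for brevity.
	client, _ := api.NewClient(api.DefaultConfig())
	svc, _ := connect.NewService("my-service", client)
	defer svc.Close()

	// Dial a known address directly. CertURI is the identity we expect the
	// remote service to present; the trust domain, datacenter, and service
	// name below are illustrative only.
	conn, _ := svc.Dial(context.Background(), &connect.StaticResolver{
		Addr: "10.0.1.109:8443",
		CertURI: &agentconnect.SpiffeIDService{
			Host:       "11111111-2222-3333-4444-555555555555.consul",
			Namespace:  "default",
			Datacenter: "dc1",
			Service:    "userinfo",
		},
	})
	defer conn.Close()
}
```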
diff --git a/website/content/docs/automate/native/index.mdx b/website/content/docs/automate/native/index.mdx new file mode 100644 index 000000000000..1b0b3f15fd34 --- /dev/null +++ b/website/content/docs/automate/native/index.mdx @@ -0,0 +1,165 @@ +--- +layout: docs +page_title: Service Mesh Native App Integration - Overview +description: >- + When using sidecar proxies is not possible, applications can natively integrate with Consul service mesh, but have reduced access to service mesh features. Learn how "mesh-native" or "connect-native" apps use mTLS to authenticate with Consul and how to add integrations to service registrations. +--- + +# Service Mesh Native App Integration Overview + + + +The Connect Native Golang SDK and `v1/agent/connect/authorize`, `v1/agent/connect/ca/leaf`, +and `v1/agent/connect/ca/roots` APIs are deprecated and will be removed in a future release. Although Connect Native +will still operate as designed, we do not recommend leveraging this feature because it is deprecated and will be removed +when the long-term replacement to native application integration (such as a proxyless gRPC service mesh integration) is delivered. Refer to [GH-10339](https://github.com/hashicorp/consul/issues/10339) for additional information and to track progress toward one potential solution that is tracked as replacement functionality. + +The Native App Integration does not support many of Consul's service mesh features, and is not under active development. +The [Envoy proxy](/consul/docs/reference/proxy/envoy) should be used for most production environments. + + + +Applications can natively integrate with Consul's service mesh API to support accepting +and establishing connections to other mesh services without the overhead of a +[proxy sidecar](/consul/docs/connect/proxy). This option is especially useful +for applications that may be experiencing performance issues with the proxy +sidecar deployment. It is also required if your service +relies on a dynamic set of upstream services. This page covers a high-level overview of +the integration and how to register the service. For language-specific examples, see +the sidebar navigation to the left. + +Service mesh traffic is just basic mutual TLS. This means that almost any application +can easily integrate with Consul service mesh. There is no custom protocol in use; +any language that supports TLS can accept and establish mesh-based +connections. + +We currently provide an easy-to-use [Go integration](/consul/docs/automate/native/go) +to assist with getting the proper certificates, verifying connections, +and so on. We plan to add helper libraries for other languages in the future. +However, without library support, it is still possible for any major language +to integrate with Consul service mesh. + +The noun _connect_ is used throughout this documentation to refer to the connect +subsystem that provides Consul's service mesh capabilities. + +## Overview + +The primary work involved in natively integrating with service mesh is +[acquiring the proper TLS certificate](/consul/api-docs/agent/connect#service-leaf-certificate), +[verifying TLS certificates](/consul/api-docs/agent/connect#certificate-authority-ca-roots), +and [authorizing inbound connections or requests](/consul/api-docs/connect/intentions#list-matching-intentions). + +All of this is done using the Consul HTTP APIs linked above. + +An overview of the sequence is shown below.
The diagram and the following +details may seem complex, but this is a _regular mutual TLS connection_ with +an API call to verify the incoming client certificate. + +![Native Integration Overview](/img/connect-native-overview.png) + +-> **Note:** This diagram depicts the simpler networking layer 4 (e.g. TCP) [integration +mechanism](/consul/api-docs/agent/connect#authorize). + +Details on the steps are below: + +- **Service discovery** - This is normal service discovery using Consul, + a static IP, or any other mechanism. If you're using Consul DNS, use the + [`<name>.connect`](/consul/docs/services/discovery/dns-static-lookups#service-mesh-enabled-service-lookups) + syntax to find mesh-capable endpoints for a service. After service + discovery, choose one address from the list of **service addresses**. + +- **Mutual TLS** - As a client, connect to the discovered service address + over normal TLS. As part of the TLS connection, provide the + [service certificate](/consul/api-docs/agent/connect#service-leaf-certificate) + as the client certificate. Verify the remote certificate against the + [public CA roots](/consul/api-docs/agent/connect#certificate-authority-ca-roots). + As a client, if the connection is established then you've established + a mesh-based connection and there are no further steps! + +- **Authorization** - As a server accepting connections, verify the client + certificate against the [public CA + roots](/consul/api-docs/agent/connect#certificate-authority-ca-roots). After verifying + the certificate, parse some basic fields from it and use those to determine + if the connection should be allowed. How this is done is dependent on + the level of integration desired: + + - **Simple integration (TCP-only)** - Call the [authorizing + API](/consul/api-docs/agent/connect#authorize) against the local agent. If this returns + successfully, complete the TLS handshake and establish the connection. If + authorization fails, close the connection. + + -> **NOTE:** This API call is expected to be called in the connection path, + so if the local Consul agent is down or unresponsive, it will affect the + success rate of new connections. The agent uses locally cached data to + authorize the connection and typically responds in microseconds. Therefore, + the impact on the TLS handshake is typically measured in microseconds. + + - **Complete integration** - Just as the calls to acquire the leaf + certificate and CA roots are expected to be done out of band and reused, so + should calls to the [intention match + API](/consul/api-docs/connect/intentions#list-matching-intentions). With all of the + relevant intentions cached for the destination, all enforcement operations + can be done entirely by the service without calling any Consul APIs in the + connection or request path. If the service is networking layer 7 (e.g. + HTTP) aware, it can safely enforce intentions per _request_ instead of the + coarser per _connection_ model. + +## Update certificates and certificate roots + +The leaf certificate and CA roots can be updated at any time and the +natively integrated application must react to this relatively quickly +so that new connections are not disrupted. This can be done through +Consul blocking queries (HTTP long polling) or through periodic polling.
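For example, a sketch of a blocking read against the service's leaf certificate endpoint is shown below. The service name, index value, and wait duration are illustrative; the index would normally come from the `X-Consul-Index` header of the previous response.

```shell-session
$ curl "http://127.0.0.1:8500/v1/agent/connect/ca/leaf/my-service?index=112&wait=5m"
```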
+ +The API calls for +[acquiring a service mesh TLS certificate](/consul/api-docs/agent/connect#service-leaf-certificate) +and [reading service mesh CA roots](/consul/api-docs/agent/connect#certificate-authority-ca-roots) +both support +[blocking queries](/consul/api-docs/features/blocking). By using blocking +queries, an application can efficiently wait for an updated value. For example, +the leaf certificate API will block until the certificate is near expiration +or the signing certificates have changed and will issue and return a new +certificate. + +In some languages, using blocking queries may not be simple. In that case, +we still recommend using the blocking query parameters but with a very short +`timeout` value set. Doing this is documented with +[blocking queries](/consul/api-docs/features/blocking). The low timeout will +ensure the API responds quickly. We recommend that applications poll the +certificate endpoints frequently, such as multiple times per minute. + +The overhead for the blocking queries (long or periodic polling) is minimal. +The API calls are to the local agent and the local agent uses locally +cached data multiplexed over a single TCP connection to the Consul leader. +Even if a single machine has 1,000 mesh-enabled services all blocking +on certificate updates, this translates to only one TCP connection to the +Consul server. + +Some language libraries such as the +[Go library](/consul/docs/automate/native/go) automatically handle updating +and locally caching the certificates. + +## Service registration + +Mesh-native applications must tell Consul that they support service mesh +natively. This enables the service to be returned as part of service +discovery for service mesh-capable services used by other mesh-native applications +and client [proxies](/consul/docs/connect/proxy). + +You can enable native service mesh support directly in the [service definition](/consul/docs/reference/service#connect) by configuring the `connect` block. In the following example, the `redis` service is configured to support service mesh natively: + +```json +{ + "service": { + "name": "redis", + "port": 8000, + "connect": { + "native": true + } + } +} +``` + +Services that support service mesh natively are still returned through the standard +service discovery mechanisms in addition to the mesh-only service discovery +mechanisms. diff --git a/website/content/docs/automate/session.mdx b/website/content/docs/automate/session.mdx new file mode 100644 index 000000000000..eca6ff487e78 --- /dev/null +++ b/website/content/docs/automate/session.mdx @@ -0,0 +1,144 @@ +--- +layout: docs +page_title: Sessions and distributed locks overview +description: >- + Consul supports sessions that you can use to build distributed locks with granular locking. Learn about sessions, how they can prevent "split-brain" systems by ensuring consistency in deployments, and how they can integrate with the key/value (KV) store. +--- + +# Sessions and distributed locks overview + +Consul provides a session mechanism which can be used to build distributed locks. +Sessions act as a binding layer between nodes, health checks, and key/value data. +They are designed to provide granular locking and are heavily inspired by +[The Chubby Lock Service for Loosely-Coupled Distributed Systems](https://research.google/pubs/the-chubby-lock-service-for-loosely-coupled-distributed-systems/). + +## Session Design + +A session in Consul represents a contract that has very specific semantics. 
+When a session is constructed, a node name, a list of health checks, a behavior, +a TTL, and a `lock-delay` may be provided. The newly constructed session is provided with +a named ID that can be used to identify it. This ID can be used with the KV +store to acquire locks: advisory mechanisms for mutual exclusion. + +Below is a diagram showing the relationship between these components: + +![Consul Sessions](/img/consul-sessions.png) + +The contract that Consul provides is that under any of the following +situations, the session will be _invalidated_: + +- Node is deregistered +- Any of the health checks are deregistered +- Any of the health checks go to the critical state +- Session is explicitly destroyed +- TTL expires, if applicable + +When a session is invalidated, it is destroyed and can no longer +be used. What happens to the associated locks depends on the +behavior specified at creation time. Consul supports a `release` +and `delete` behavior. The `release` behavior is the default +if none is specified. + +If the `release` behavior is being used, any of the locks held in +association with the session are released, and the `ModifyIndex` of +the key is incremented. Alternatively, if the `delete` behavior is +used, the key corresponding to any of the held locks is simply deleted. +This can be used to create ephemeral entries that are automatically +deleted by Consul. + +While this is a simple design, it enables a multitude of usage +patterns. By default, the +[gossip based failure detector](/consul/docs/concept/gossip) +is used as the associated health check. This failure detector allows +Consul to detect when a node that is holding a lock has failed and +to automatically release the lock. This ability provides **liveness** to +Consul locks; that is, under failure the system can continue to make +progress. However, because there is no perfect failure detector, it's possible +to have a false positive (failure detected) which causes the lock to +be released even though the lock owner is still alive. This means +we are sacrificing some **safety**. + +Conversely, it is possible to create a session with no associated +health checks. This removes the possibility of a false positive +and trades liveness for safety. You can be absolutely certain Consul +will not release the lock even if the existing owner has failed. +Since Consul APIs allow a session to be force destroyed, this allows +systems to be built that require an operator to intervene in the +case of a failure while precluding the possibility of a split-brain. + +A third health checking mechanism is session TTLs. When creating +a session, a TTL can be specified. If the TTL interval expires without +being renewed, the session has expired and an invalidation is triggered. +This type of failure detector is also known as a heartbeat failure detector. +It is less scalable than the gossip based failure detector as it places +an increased burden on the servers but may be applicable in some cases. +The contract of a TTL is that it represents a lower bound for invalidation; +that is, Consul will not expire the session before the TTL is reached, but it +is allowed to delay the expiration past the TTL. The TTL is renewed on +session creation, on session renew, and on leader failover. When a TTL +is being used, clients should be aware of clock skew issues: namely, +time may not progress at the same rate on the client as on the Consul servers. 
+It is best to set conservative TTL values and to renew in advance of the TTL +to account for network delay and time skew. + +The final nuance is that sessions may provide a `lock-delay`. This +is a time duration, between 0 and 60 seconds. When a session invalidation +takes place, Consul prevents any of the previously held locks from +being re-acquired for the `lock-delay` interval; this is a safeguard +inspired by Google's Chubby. The purpose of this delay is to allow +the potentially still live leader to detect the invalidation and stop +processing requests that may lead to inconsistent state. While not a +bulletproof method, it does avoid the need to introduce sleep states +into application logic and can help mitigate many issues. While the +default is to use a 15 second delay, clients are able to disable this +mechanism by providing a zero delay value. + +## K/V Integration + +Integration between the KV store and sessions is the primary +place where sessions are used. A session must be created prior to use +and is then referred to by its ID. + +The KV API is extended to support an `acquire` and `release` operation. +The `acquire` operation acts like a Check-And-Set operation except it +can only succeed if there is no existing lock holder (the current lock holder +can re-`acquire`, see below). On success, there is a normal key update, but +there is also an increment to the `LockIndex`, and the `Session` value is +updated to reflect the session holding the lock. + +If the lock is already held by the given session during an `acquire`, then +the `LockIndex` is not incremented but the key contents are updated. This +lets the current lock holder update the key contents without having to give +up the lock and reacquire it. + +Once held, the lock can be released using a corresponding `release` operation, +providing the same session. Again, this acts like a Check-And-Set operation +since the request will fail if given an invalid session. A critical note is +that the lock can be released without being the creator of the session. +This is by design as it allows operators to intervene and force-terminate +a session if necessary. As mentioned above, a session invalidation will also +cause all held locks to be released or deleted. When a lock is released, the `LockIndex` +does not change; however, the `Session` is cleared and the `ModifyIndex` increments. + +These semantics, heavily borrowed from Chubby, allow the tuple of (Key, LockIndex, Session) +to act as a unique "sequencer". This `sequencer` can be passed around and used +to verify whether a request belongs to the current lock holder. Because the `LockIndex` +is incremented on each `acquire`, even if the same session re-acquires a lock, +the `sequencer` will be able to detect a stale request. Similarly, if a session is +invalidated, the Session corresponding to the given `LockIndex` will be blank. + +To be clear, this locking system is purely _advisory_. There is no enforcement +that clients must acquire a lock to perform any operation. Any client can +read, write, and delete a key without owning the corresponding lock. It is not +the goal of Consul to protect against misbehaving clients. + +## Leader Election + +You can use the primitives provided by sessions and the locking mechanisms of the KV +store to build client-side leader election algorithms. +These are covered in more detail in the [Leader Election guide](/consul/docs/automate/application-leader-election).
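As a rough sketch of the flow that underpins such an election, the following creates a session and then attempts to acquire a lock on a key using the HTTP API. The key name, session parameters, and returned ID are illustrative.

```shell-session
$ curl --request PUT --data '{"Name": "leader-election", "TTL": "15s"}' http://127.0.0.1:8500/v1/session/create
{"ID":"adf4238a-882b-9ddc-4a9d-5b6758e4159e"}

$ curl --request PUT --data 'leader-metadata' "http://127.0.0.1:8500/v1/kv/service/leader?acquire=adf4238a-882b-9ddc-4a9d-5b6758e4159e"
true
```

A `true` response indicates the lock was acquired; `false` means another session currently holds it.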
+ +## Prepared Query Integration + +Prepared queries may be attached to a session in order to automatically delete +the prepared query when the session is invalidated. \ No newline at end of file diff --git a/website/content/docs/automate/watch.mdx b/website/content/docs/automate/watch.mdx new file mode 100644 index 000000000000..83b3574e5e88 --- /dev/null +++ b/website/content/docs/automate/watch.mdx @@ -0,0 +1,693 @@ +--- +layout: docs +page_title: Watches overview +description: >- + Watches monitor the key/value (KV) store, services, nodes, health checks, and events for updates. When a watch detects a change, it invokes a handler that can call an HTTP endpoint or runs an executable. Learn how to configure watches to dynamically respond to changes in Consul. +--- + +# Watches overview + +Watches are a way of specifying a view of data (e.g. list of nodes, KV pairs, health +checks) which is monitored for updates. When an update is detected, an external handler +is invoked. A handler can be any executable or HTTP endpoint. As an example, you could watch the status +of health checks and notify an external system when a check is critical. + +Watches are implemented using blocking queries in the [HTTP API](/consul/api-docs). +Agents automatically make the proper API calls to watch for changes +and inform a handler when the data view has updated. + +Watches can be configured as part of the [agent's configuration](/consul/docs/reference/agent/configuration-file/general#watches), +causing them to run once the agent is initialized. Reloading the agent configuration +allows for adding or removing watches dynamically. + +Alternatively, the [watch command](/consul/commands/watch) enables a watch to be +started outside of the agent. This can be used by an operator to inspect data in Consul +or to easily pipe data into processes without being tied to the agent lifecycle. + +In either case, the `type` of the watch must be specified. Each type of watch +supports different parameters, some required and some optional. These options are specified +in a JSON body when using agent configuration or as CLI flags for the watch command. + +## Handlers + +The watch configuration specifies the view of data to be monitored. +Once that view is updated, the specified handler is invoked. Handlers can be either an +executable or an HTTP endpoint. A handler receives JSON formatted data +with invocation info, following a format that depends on the type of the watch. +Each watch type documents the format type. Because they map directly to an HTTP +API, handlers should expect the input to match the format of the API. A Consul +index is also given, corresponding to the responses from the +[HTTP API](/consul/api-docs). + +### Executable + +An executable handler reads the JSON invocation info from stdin. Additionally, +the `CONSUL_INDEX` environment variable will be set to the Consul index. +Anything written to stdout is logged. + +Here is an example configuration, where `handler_type` is optionally set to +`script`: + + + + +```hcl +watches = [ + { + type = "key" + key = "foo/bar/baz" + handler_type = "script" + args = ["/usr/bin/my-service-handler.sh", "-redis"] + } +] +``` + + + + + +```json +{ + "watches": [ + { + "type": "key", + "key": "foo/bar/baz", + "handler_type": "script", + "args": ["/usr/bin/my-service-handler.sh", "-redis"] + } + ] +} +``` + + + + + +Prior to Consul 1.0, watches used a single `handler` field to define the command to run, and +would always run in a shell. 
In Consul 1.0, the `args` array was added so that handlers can be +run without a shell. The `handler` field is deprecated, and you should include the shell in +the `args` to run under a shell, eg. `"args": ["sh", "-c", "..."]`. + +### HTTP endpoint + +An HTTP handler sends an HTTP request when a watch is invoked. The JSON invocation info is sent +as a payload along the request. The response also contains the Consul index as a header named +`X-Consul-Index`. + +The HTTP handler can be configured by setting `handler_type` to `http`. Additional handler options +are set using `http_handler_config`. The only required parameter is the `path` field which specifies +the URL to the HTTP endpoint. Consul uses `POST` as the default HTTP method, but this is also configurable. +Other optional fields are `header`, `timeout` and `tls_skip_verify`. The watch invocation data is +always sent as a JSON payload. + +Here is an example configuration: + + + + +```hcl +watches = [ + { + type = "key" + key = "foo/bar/baz" + handler_type = "http" + http_handler_config { + path = "https://localhost:8000/watch" + method = "POST" + header = { + x-foo = ["bar", "baz"] + } + timeout = "10s" + tls_skip_verify = false + } + } +] +``` + + + + +```json +{ + "watches": [ + { + "type": "key", + "key": "foo/bar/baz", + "handler_type": "http", + "http_handler_config": { + "path": "https://localhost:8000/watch", + "method": "POST", + "header": { "x-foo": ["bar", "baz"] }, + "timeout": "10s", + "tls_skip_verify": false + } + } + ] +} +``` + + + + +## Global Parameters + +In addition to the parameters supported by each option type, there +are a few global parameters that all watches support: + +- `datacenter` - Can be provided to override the agent's default datacenter. +- `token` - Can be provided to override the agent's default ACL token. +- `args` - The handler subprocess and arguments to invoke when the data view updates. +- `handler` - The handler shell command to invoke when the data view updates. + +## Watch Types + +The following types are supported. Detailed documentation on each is below: + +- [`key`](#key) - Watch a specific KV pair +- [`keyprefix`](#keyprefix) - Watch a prefix in the KV store +- [`services`](#services) - Watch the list of available services +- [`nodes`](#nodes) - Watch the list of nodes +- [`service`](#service)- Watch the instances of a service +- [`checks`](#checks) - Watch the value of health checks +- [`event`](#event) - Watch for custom user events + +### Type: key ((#key)) + +The "key" watch type is used to watch a specific key in the KV store. +It requires that the `key` parameter be specified. + +This maps to the `/v1/kv/` API internally. + +Here is an example configuration: + + + +```hcl +{ + type = "key" + key = "foo/bar/baz" + args = ["/usr/bin/my-service-handler.sh", "-redis"] +} +``` + +```json +{ + "type": "key", + "key": "foo/bar/baz", + "args": ["/usr/bin/my-service-handler.sh", "-redis"] +} +``` + + + +Or, using the watch command: + +```shell-session +$ consul watch -type=key -key=foo/bar/baz /usr/bin/my-key-handler.sh +``` + +An example of the output of this command: + +```json +{ + "Key": "foo/bar/baz", + "CreateIndex": 1793, + "ModifyIndex": 1793, + "LockIndex": 0, + "Flags": 0, + "Value": "aGV5", + "Session": "" +} +``` + +### Type: keyprefix ((#keyprefix)) + +The `keyprefix` watch type is used to watch a prefix of keys in the KV store. +It requires that the `prefix` parameter be specified. 
This watch +returns _all_ keys matching the prefix whenever _any_ key matching the prefix +changes. + +This maps to the `/v1/kv/` API internally. + +Here is an example configuration: + + + +```hcl +{ + type = "keyprefix" + prefix = "foo/" + args = ["/usr/bin/my-prefix-handler.sh", "-redis"] +} +``` + +```json +{ + "type": "keyprefix", + "prefix": "foo/", + "args": ["/usr/bin/my-prefix-handler.sh", "-redis"] +} +``` + + + +Or, using the watch command: + +```shell-session +$ consul watch -type=keyprefix -prefix=foo/ /usr/bin/my-prefix-handler.sh +``` + +An example of the output of this command: + +```json +[ + { + "Key": "foo/bar", + "CreateIndex": 1796, + "ModifyIndex": 1796, + "LockIndex": 0, + "Flags": 0, + "Value": "TU9BUg==", + "Session": "" + }, + { + "Key": "foo/baz", + "CreateIndex": 1795, + "ModifyIndex": 1795, + "LockIndex": 0, + "Flags": 0, + "Value": "YXNkZg==", + "Session": "" + }, + { + "Key": "foo/test", + "CreateIndex": 1793, + "ModifyIndex": 1793, + "LockIndex": 0, + "Flags": 0, + "Value": "aGV5", + "Session": "" + } +] +``` + +### Type: services ((#services)) + +The "services" watch type is used to watch the list of available +services. It has no parameters. + +This maps to the `/v1/catalog/services` API internally. + +Below is an example configuration: + + + +```hcl +{ + type = "services" + args = ["/usr/bin/my-services-handler.sh"] +} +``` + +```json +{ + "type": "services", + "args": ["/usr/bin/my-services-handler.sh"] +} +``` + + + +Or, using the watch command: + +```shell-session +$ consul watch -type=services /usr/bin/my-services-handler.sh +``` + +An example of the output of this command: + +```json +{ + "consul": [], + "redis": [], + "web": [] +} +``` + +### Type: nodes ((#nodes)) + +The "nodes" watch type is used to watch the list of available +nodes. It has no parameters. + +This maps to the `/v1/catalog/nodes` API internally. + +Below is an example configuration: + + + +```hcl +{ + type = "nodes" + args = ["/usr/bin/my-nodes-handler.sh"] +} +``` + +```json +{ + "type": "nodes", + "args": ["/usr/bin/my-nodes-handler.sh"] +} +``` + + + +Or, using the watch command: + +```shell-session +$ consul watch -type=nodes /usr/bin/my-nodes-handler.sh +``` + +An example of the output of this command: + +```json +[ + { + "ID": "8d3088b5-ce7d-0b94-f185-ae70c3445642", + "Node": "nyc1-consul-1", + "Address": "192.0.2.10", + "Datacenter": "dc1", + "TaggedAddresses": null, + "Meta": null, + "CreateIndex": 23792324, + "ModifyIndex": 23792324 + }, + { + "ID": "1edb564e-65ee-9e60-5e8a-83eae4637357", + "Node": "nyc1-worker-1", + "Address": "192.0.2.20", + "Datacenter": "dc1", + "TaggedAddresses": { + "lan": "192.0.2.20", + "lan_ipv4": "192.0.2.20", + "wan": "192.0.2.20", + "wan_ipv4": "192.0.2.20" + }, + "Meta": { + "consul-network-segment": "", + "host-ip": "192.0.2.20", + "pod-name": "hashicorp-consul-q7nth" + }, + "CreateIndex": 23792336, + "ModifyIndex": 23792338 + } +] +``` + +### Type: service ((#service)) + +The "service" watch type is used to monitor the providers +of a single service. It requires the `service` parameter +and optionally takes the parameters `tag` and +`passingonly`. The `tag` parameter will filter by one or more tags. +It may be either a single string value or a slice of strings. +The `passingonly` parameter is a boolean that will filter to only the +instances passing all health checks. + +This maps to the `/v1/health/service` API internally. 
+ +Here is an example configuration with a single tag: + + + +```hcl +{ + type = "service" + service = "redis" + args = ["/usr/bin/my-service-handler.sh", "-redis"] + tag = "bar" +} +``` + +```json +{ + "type": "service", + "service": "redis", + "args": ["/usr/bin/my-service-handler.sh", "-redis"], + "tag": "bar" +} +``` + + + +Here is an example configuration with multiple tags: + + + +```hcl +{ + type = "service" + service = "redis" + args = ["/usr/bin/my-service-handler.sh", "-redis"] + tag = ["bar", "foo"] +} +``` + +```json +{ + "type": "service", + "service": "redis", + "args": ["/usr/bin/my-service-handler.sh", "-redis"], + "tag": ["bar", "foo"] +} +``` + + + +Or, using the watch command: + +Single tag: + +```shell-session +$ consul watch -type=service -service=redis -tag=bar /usr/bin/my-service-handler.sh +``` + +Multiple tags: + +```shell-session +$ consul watch -type=service -service=redis -tag=bar -tag=foo /usr/bin/my-service-handler.sh +``` + +An example of the output of this command: + +```json +[ + { + "Node": { + "ID": "f013522f-aaa2-8fc6-c8ac-c84cb8a56405", + "Node": "hashicorp-consul-server-1", + "Address": "192.0.2.50", + "Datacenter": "dc1", + "TaggedAddresses": null, + "Meta": null, + "CreateIndex": 23785783, + "ModifyIndex": 23785783 + }, + "Service": { + "ID": "redis", + "Service": "redis", + "Tags": [], + "Meta": null, + "Port": 6379, + "Address": "", + "Weights": { + "Passing": 1, + "Warning": 1 + }, + "EnableTagOverride": false, + "CreateIndex": 23785794, + "ModifyIndex": 23785794, + "Proxy": { + "MeshGateway": {}, + "Expose": {} + }, + "Connect": {} + }, + "Checks": [ + { + "Node": "hashicorp-consul-server-1", + "CheckID": "serfHealth", + "Name": "Serf Health Status", + "Status": "passing", + "Notes": "", + "Output": "Agent alive and reachable", + "ServiceID": "", + "ServiceName": "", + "ServiceTags": [], + "Type": "", + "Definition": { + "Interval": "0s", + "Timeout": "0s", + "DeregisterCriticalServiceAfter": "0s", + "HTTP": "", + "Header": null, + "Method": "", + "Body": "", + "TLSServerName": "", + "TLSSkipVerify": false, + "TCP": "", + "TCPUseTLS": false, + "GRPC": "", + "GRPCUseTLS": false + }, + "CreateIndex": 23785783, + "ModifyIndex": 23791503 + } + ] + } +] +``` + +### Type: checks ((#checks)) + +The "checks" watch type is used to monitor the checks of a given +service or those in a specific state. It optionally takes the `service` +parameter to filter to a specific service or the `state` parameter to +filter to a specific state. By default, it will watch all checks. + +This maps to the `/v1/health/state/` API if monitoring by state +or `/v1/health/checks/` if monitoring by service. 
+ +Here is an example configuration for monitoring by state: + + + +```hcl +{ + type = "checks" + state = "passing" + args = ["/usr/bin/my-check-handler.sh", "-passing"] +} +``` + +```json +{ + "type": "checks", + "state": "passing", + "args": ["/usr/bin/my-check-handler.sh", "-passing"] +} +``` + + + +Here is an example configuration for monitoring by service: + + + +```hcl +{ + type = "checks" + service = "redis" + args = ["/usr/bin/my-check-handler.sh", "-redis"] +} +``` + +```json +{ + "type": "checks", + "service": "redis", + "args": ["/usr/bin/my-check-handler.sh", "-redis"] +} +``` + + + +Or, using the watch command: + +State: + +```shell-session +$ consul watch -type=checks -state=passing /usr/bin/my-check-handler.sh -passing +``` + +Service: + +```shell-session +$ consul watch -type=checks -service=redis /usr/bin/my-check-handler.sh -redis +``` + +An example of the output of this command: + +```json +[ + { + "Node": "foobar", + "CheckID": "service:redis", + "Name": "Service 'redis' check", + "Status": "passing", + "Notes": "", + "Output": "", + "ServiceID": "redis", + "ServiceName": "redis" + } +] +``` + +### Type: event ((#event)) + +The "event" watch type is used to monitor for custom user +events. These are fired using the [consul event](/consul/commands/event) command. +It takes only a single optional `name` parameter which restricts +the watch to only events with the given name. + +This maps to the `/v1/event/list` API internally. + +Here is an example configuration: + + + +```hcl +{ + type = "event" + name = "web-deploy" + args = ["/usr/bin/my-event-handler.sh", "-web-deploy"] +} +``` + +```json +{ + "type": "event", + "name": "web-deploy", + "args": ["/usr/bin/my-event-handler.sh", "-web-deploy"] +} +``` + + + +Or, using the watch command: + +```shell-session +$ consul watch -type=event -name=web-deploy /usr/bin/my-event-handler.sh -web-deploy +``` + +An example of the output of this command: + +```json +[ + { + "ID": "f07f3fcc-4b7d-3a7c-6d1e-cf414039fcee", + "Name": "web-deploy", + "Payload": "MTYwOTAzMA==", + "NodeFilter": "", + "ServiceFilter": "", + "TagFilter": "", + "Version": 1, + "LTime": 18 + } +] +``` + +To fire a new `web-deploy` event the following could be used: + +```shell-session +$ consul event -name=web-deploy 1609030 +``` diff --git a/website/content/docs/concept/catalog.mdx b/website/content/docs/concept/catalog.mdx new file mode 100644 index 000000000000..07162b3d4cca --- /dev/null +++ b/website/content/docs/concept/catalog.mdx @@ -0,0 +1,39 @@ +--- +layout: docs +page_title: Consul catalog +description: Learn about version 1 of the Consul catalog, including what Consul servers record when they register a service. +--- + +# Consul catalog + +This topic provides conceptual information about the Consul catalog API. The catalog tracks registered services and their locations for both service discovery and service mesh use cases. + +For more information about the information returned when querying the catalog, including filtering options when querying the catalog for a list of nodes, services, or gateways, refer to the [`/catalog` endpoint reference in the HTTP API documentation](/consul/api-docs/catalog). + +## Introduction + +Consul tracks information about registered services through its catalog API. This API records user-defined information about the external services, such as their partitions and required health checks. 
It also records information that Consul assigns for its own operations, such as an ID for each service instance and the [Raft indices](/consul/docs/concept/consensus) when the instance is registered and modified. + +### v2 Catalog + +Consul introduced an experimental v2 Catalog API in v1.17.0. This API supported multi-port Service configurations on Kubernetes, and it was made available for testing and development purposes. The v2 catalog and its support for multiport Kubernetes Services were deprecated in the v1.19.0 release. + +## Catalog structure + +When Consul registers a service instance using the v1 catalog API, it records the following information about each instance: + +| v1 Catalog field | Description | Source | +| :--------------- | :---------- | :----- | +| ID | A unique identifier for a service instance. | Defined by user in [service definition](/consul/docs/reference/service#id). | +| Node | The connection point where the service is available. | On VMs, defined by user.
On Kubernetes, computed by Consul according to [Kubernetes Nodes](https://kubernetes.io/docs/concepts/architecture/nodes/). | +| Address | The registered address of the service instance. | Defined by user in [service definition](/consul/docs/reference/service#address). | +| Tagged Addresses | User-defined labels for addresses. | Defined by user in [service definition](/consul/docs/reference/service#tagged_addresses). | +| NodeMeta | User-defined metadata about the node. | Defined by user | +| Datacenter | The name of the datacenter the service is registered in. | Defined by user | +| Service | The name of the service Consul registers the service instance under. | Defined by user | +| Agent Check | The health checks defined for a service instance managed by a Consul client agent. | Computed by Consul | +| Health Checks | The health checks defined for the service. Refer to [define health checks](/consul/docs/register/health-check/vm) for more information. | Defined by user | +| Partition | The name of the admin partition the service is registered in. Refer to [admin partitions](/consul/docs/multi-tenant/admin-partition) for more information. | Defined by user | +| Locality | Region and availability zone of the service. Refer to [`locality`](/consul/docs/reference/agent/configuration-file/service-mesh#locality) for more information. | Defined by user | + +Depending on the configuration entries or custom resource definitions you apply to your Consul installation, additional information such as [proxy default behavior](/consul/docs/reference/config-entry/proxy-defaults) is automatically recorded to the catalog for services. You can return this information using the [`/catalog` HTTP API endpoint](/consul/api-docs/catalog). diff --git a/website/content/docs/concept/consensus.mdx b/website/content/docs/concept/consensus.mdx new file mode 100644 index 000000000000..dffd48a50c4b --- /dev/null +++ b/website/content/docs/concept/consensus.mdx @@ -0,0 +1,135 @@ +--- +layout: docs +page_title: Consensus +description: >- + Consul ensures a consistent state using the Raft protocol. A quorum, or a majority of server agents with one leader, agree to state changes before committing to the state log. Learn how Raft works in Consul to ensure state consistency and how that state can be read with different consistency modes to balance read latency and consistency. +--- + +# Consensus + +Consul uses a [consensus protocol]() +to provide [Consistency (as defined by CAP)](https://en.wikipedia.org/wiki/CAP_theorem). +The consensus protocol is based on +["Raft: In search of an Understandable Consensus Algorithm"](https://raft.github.io/raft.pdf). +For a visual explanation of Raft, see [The Secret Lives of Data](http://thesecretlivesofdata.com/raft). + +## Raft Protocol Overview + +Raft is a consensus algorithm that is based on +[Paxos](https://en.wikipedia.org/wiki/Paxos_%28computer_science%29). Compared +to Paxos, Raft is designed to have fewer states and a simpler, more +understandable algorithm. + +There are a few key terms to know when discussing Raft: + +- Log - The primary unit of work in a Raft system is a log entry. The problem + of consistency can be decomposed into a _replicated log_. A log is an ordered + sequence of entries. Entries includes any cluster change: adding nodes, adding services, new key-value pairs, etc. We consider the log consistent + if all members agree on the entries and their order. + +- FSM - [Finite State Machine](https://en.wikipedia.org/wiki/Finite-state_machine). 
+ An FSM is a collection of finite states with transitions between them. As new logs + are applied, the FSM is allowed to transition between states. Application of the + same sequence of logs must result in the same state, meaning behavior must be deterministic. + +- Peer set - The peer set is the set of all members participating in log replication. + For Consul's purposes, all server nodes are in the peer set of the local datacenter. + +- Quorum - A quorum is a majority of members from a peer set: for a set of size `N`, + quorum requires at least `(N/2)+1` members. + For example, if there are 5 members in the peer set, we would need 3 nodes + to form a quorum. If a quorum of nodes is unavailable for any reason, the + cluster becomes _unavailable_ and no new logs can be committed. + +- Committed Entry - An entry is considered _committed_ when it is durably stored + on a quorum of nodes. Once an entry is committed it can be applied. + +- Leader - At any given time, the peer set elects a single node to be the leader. + The leader is responsible for ingesting new log entries, replicating to followers, + and managing when an entry is considered committed. + +Raft is a complex protocol and will not be covered here in detail (for those who +desire a more comprehensive treatment, the full specification is available in this +[paper](https://raft.github.io/raft.pdf)). +We will, however, attempt to provide a high level description which may be useful +for building a mental model. + +Raft nodes are always in one of three states: follower, candidate, or leader. All +nodes initially start out as a follower. In this state, nodes can accept log entries +from a leader and cast votes. If no entries are received for some time, nodes +self-promote to the candidate state. In the candidate state, nodes request votes from +their peers. If a candidate receives a quorum of votes, then it is promoted to a leader. +The leader must accept new log entries and replicate to all the other followers. +In addition, if stale reads are not acceptable, all queries must also be performed on +the leader. + +Once a cluster has a leader, it is able to accept new log entries. A client can +request that a leader append a new log entry (from Raft's perspective, a log entry +is an opaque binary blob). The leader then writes the entry to durable storage and +attempts to replicate to a quorum of followers. Once the log entry is considered +_committed_, it can be _applied_ to a finite state machine. The finite state machine +is application specific; in Consul's case, we use +[MemDB](https://github.com/hashicorp/go-memdb) to maintain cluster state. Consul's writes +block until it is both _committed_ and _applied_. This achieves read after write semantics +when used with the [consistent](/consul/api-docs/features/consistency#consistent) mode for queries. + +Obviously, it would be undesirable to allow a replicated log to grow in an unbounded +fashion. Raft provides a mechanism by which the current state is snapshotted and the +log is compacted. Because of the FSM abstraction, restoring the state of the FSM must +result in the same state as a replay of old logs. This allows Raft to capture the FSM +state at a point in time and then remove all the logs that were used to reach that +state. This is performed automatically without user intervention and prevents unbounded +disk usage while also minimizing time spent replaying logs. 
One of the advantages of +using MemDB is that it allows Consul to continue accepting new transactions even while +old state is being snapshotted, preventing any availability issues. + +Consensus is fault-tolerant up to the point where quorum is available. +If a quorum of nodes is unavailable, it is impossible to process log entries or reason +about peer membership. For example, suppose there are only 2 peers: A and B. The quorum +size is also 2, meaning both nodes must agree to commit a log entry. If either A or B +fails, it is now impossible to reach quorum. This means the cluster is unable to add +or remove a node or to commit any additional log entries. This results in +_unavailability_. At this point, manual intervention would be required to remove +either A or B and to restart the remaining node in bootstrap mode. + +A Raft cluster of 3 nodes can tolerate a single node failure while a cluster +of 5 can tolerate 2 node failures. The recommended configuration is to either +run 3 or 5 Consul servers per datacenter. This maximizes availability without +greatly sacrificing performance. The [deployment table](/concept/reliability#deployment-size) +summarizes the potential cluster size options and the fault tolerance of each. + +In terms of performance, Raft is comparable to Paxos. Assuming stable leadership, +committing a log entry requires a single round trip to half of the cluster. +Thus, performance is bound by disk I/O and network latency. Although Consul is +not designed to be a high-throughput write system, it should handle on the order +of hundreds to thousands of transactions per second depending on network and +hardware configuration. + +## Raft in Consul + +Only Consul server nodes participate in Raft and are part of the peer set. All +client nodes forward requests to servers. Part of the reason for this design is +that, as more members are added to the peer set, the size of the quorum also increases. +This introduces performance problems as you may be waiting for hundreds of machines +to agree on an entry instead of a handful. + +When getting started, a single Consul server is put into "bootstrap" mode. This mode +allows it to self-elect as a leader. Once a leader is elected, other servers can be +added to the peer set in a way that preserves consistency and safety. Eventually, +once the first few servers are added, bootstrap mode can be disabled. See [this +document](/consul/docs/deploy/server/vm/bootstrap) for more details. + +Since all servers participate as part of the peer set, they all know the current +leader. When an RPC request arrives at a non-leader server, the request is +forwarded to the leader. If the RPC is a _query_ type, meaning it is read-only, +the leader generates the result based on the current state of the FSM. If +the RPC is a _transaction_ type, meaning it modifies state, the leader +generates a new log entry and applies it using Raft. Once the log entry is committed +and applied to the FSM, the transaction is complete. + +Because of the nature of Raft's replication, performance is sensitive to network +latency. For this reason, each datacenter elects an independent leader and maintains +a disjoint peer set. Data is partitioned by datacenter, so each leader is responsible +only for data in their datacenter. When a request is received for a remote datacenter, +the request is forwarded to the correct leader. This design allows for lower latency +transactions and higher availability without sacrificing consistency. 
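To inspect the Raft peer set of a running cluster, you can use the `consul operator raft list-peers` command. The output below is only a sketch: node names, IDs, addresses, and the exact columns vary by deployment and Consul version.

```shell-session
$ consul operator raft list-peers
Node      ID                                    Address          State     Voter  RaftProtocol
server-1  8d3088b5-ce7d-0b94-f185-ae70c3445642  192.0.2.11:8300  leader    true   3
server-2  1edb564e-65ee-9e60-5e8a-83eae4637357  192.0.2.12:8300  follower  true   3
server-3  f013522f-aaa2-8fc6-c8ac-c84cb8a56405  192.0.2.13:8300  follower  true   3
```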
diff --git a/website/content/docs/concept/consistency.mdx b/website/content/docs/concept/consistency.mdx new file mode 100644 index 000000000000..61ee3ac0f902 --- /dev/null +++ b/website/content/docs/concept/consistency.mdx @@ -0,0 +1,234 @@ +--- +layout: docs +page_title: Consistency +description: >- + Anti-entropy keeps distributed systems consistent. Learn how Consul uses an anti-entropy mechanism to periodically sync agent states with the service catalog to prevent the catalog from becoming stale. Learn about the Jepsen testing performed on Consul to ensure it gracefully recovers from partitions and maintains consistent state. +--- + +# Consistency + +Consul uses an advanced method of maintaining service and health information. +This page details how services and checks are registered, how the catalog is +populated, and how health status information is updated as it changes. + +## Anti-Entropy + +Entropy is the tendency of systems to become increasingly disordered. Consul's +anti-entropy mechanisms are designed to counter this tendency, to keep the +state of the cluster ordered even through failures of its components. + +Consul has a clear separation between the global service catalog and the agent's +local state as discussed above. The anti-entropy mechanism reconciles these two +views of the world: anti-entropy is a synchronization of the local agent state and +the catalog. For example, when a user registers a new service or check with the +agent, the agent in turn notifies the catalog that this new check exists. +Similarly, when a check is deleted from the agent, it is consequently removed from +the catalog as well. + +Anti-entropy is also used to update availability information. As agents run +their health checks, their status may change in which case their new status +is synced to the catalog. Using this information, the catalog can respond +intelligently to queries about its nodes and services based on their +availability. + +During this synchronization, the catalog is also checked for correctness. If +any services or checks exist in the catalog that the agent is not aware of, they +will be automatically removed to make the catalog reflect the proper set of +services and health information for that agent. Consul treats the state of the +agent as authoritative; if there are any differences between the agent +and catalog view, the agent-local view will always be used. + +### Periodic Synchronization + +In addition to running when changes to the agent occur, anti-entropy is also a +long-running process which periodically wakes up to sync service and check +status to the catalog. This ensures that the catalog closely matches the agent's +true state. This also allows Consul to re-populate the service catalog even in +the case of complete data loss. + +To avoid saturation, the amount of time between periodic anti-entropy runs will +vary based on cluster size. The table below defines the relationship between +cluster size and sync interval: + +| Cluster Size | Periodic Sync Interval | +| ------------ | ---------------------- | +| 1 - 128 | 1 minute | +| 129 - 256 | 2 minutes | +| 257 - 512 | 3 minutes | +| 513 - 1024 | 4 minutes | +| ... | ... | + +The intervals above are approximate. Each Consul agent will choose a randomly +staggered start time within the interval window to avoid a thundering herd. 
+ +### Best-effort sync + +Anti-entropy can fail in a number of cases, including misconfiguration of the +agent or its operating environment, I/O problems (full disk, filesystem +permission, etc.), networking problems (agent cannot communicate with server), +among others. Because of this, the agent attempts to sync in best-effort +fashion. + +If an error is encountered during an anti-entropy run, the error is logged and +the agent continues to run. The anti-entropy mechanism is run periodically to +automatically recover from these types of transient failures. + +### Enable Tag Override + +Synchronization of service registration can be partially modified to +allow external agents to change the tags for a service. This can be +useful in situations where an external monitoring service needs to be +the source of truth for tag information. For example, the Redis +database and its monitoring service Redis Sentinel have this kind of +relationship. Redis instances are responsible for much of their +configuration, but Sentinels determine whether the Redis instance is a +primary or a secondary. Enable the +[`enable_tag_override`](/consul/docs/reference/service#enable_tag_override) parameter in your service definition file to tell the Consul agent where the Redis database is running to bypass +tags during anti-entropy synchronization. Refer to +[Modify anti-entropy synchronization](/consul/docs/services/usage/define-services#modify-anti-entropy-synchronization) for additional information. + +## Consistency Modes + +Although all writes to the replicated log go through Raft, reads are more +flexible. To support various trade-offs that developers may want, Consul +supports 3 different consistency modes for reads. + +The three read modes are: + +- `default` - Raft makes use of leader leasing, providing a time window + in which the leader assumes its role is stable. However, if a leader + is partitioned from the remaining peers, a new leader may be elected + while the old leader is holding the lease. This means there are 2 leader + nodes. There is no risk of a split-brain since the old leader will be + unable to commit new logs. However, if the old leader services any reads, + the values are potentially stale. The default consistency mode relies only + on leader leasing, exposing clients to potentially stale values. We make + this trade-off because reads are fast, usually strongly consistent, and + only stale in a hard-to-trigger situation. The time window of stale reads + is also bounded since the leader will step down due to the partition. + +- `consistent` - This mode is strongly consistent without caveats. It requires + that a leader verify with a quorum of peers that it is still leader. This + introduces an additional round-trip to all server nodes. The trade-off is + always consistent reads but increased latency due to the extra round trip. + +- `stale` - This mode allows any server to service the read regardless of whether + it is the leader. This means reads can be arbitrarily stale but are generally + within 50 milliseconds of the leader. The trade-off is very fast and scalable + reads but with stale values. This mode allows reads without a leader meaning + a cluster that is unavailable will still be able to respond. + +For more documentation about using these various modes, see the +[HTTP API](/consul/api-docs/features/consistency). 
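As a short illustration, the mode is selected per request with a query parameter on most read endpoints. Assuming an agent listening on the default local HTTP address, the following requests read the node catalog in each mode:

```shell-session
$ curl 'http://127.0.0.1:8500/v1/catalog/nodes'             # default
$ curl 'http://127.0.0.1:8500/v1/catalog/nodes?consistent'  # strongly consistent
$ curl 'http://127.0.0.1:8500/v1/catalog/nodes?stale'       # any server may answer
```

For `stale` reads, the `X-Consul-LastContact` and `X-Consul-KnownLeader` response headers indicate how long the responding server has been out of contact with the leader and whether it currently knows of a leader.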
+ +## Jepsen Testing Results + +[Jepsen](http://aphyr.com/posts/281-call-me-maybe-carly-rae-jepsen-and-the-perils-of-network-partitions) +is a tool, written by Kyle Kingsbury, designed to test the partition +tolerance of distributed systems. It creates network partitions while fuzzing +the system with random operations. The results are analyzed to see if the system +violates any of the consistency properties it claims to have. + +As part of our Consul testing, we ran a Jepsen test to determine if +any consistency issues could be uncovered. In our testing, Consul +gracefully recovered from partitions without introducing any consistency +issues. + +### Running the tests + +At the moment, testing with Jepsen is rather complex as it requires +setting up multiple virtual machines, SSH keys, DNS configuration, +and a working Clojure environment. We hope to contribute our Consul +testing code upstream and to provide a Vagrant environment for Jepsen +testing soon. + +### Output + +Below is the output captured from Jepsen. We ran Jepsen multiple times, +and it passed each time. This output is only representative of a single +run and has been edited for length. Please reach out on [Consul's Discuss](https://discuss.hashicorp.com/c/consul) +if you would like to reproduce the Jepsen results. + + + +```shell-session +$ lein test :only jepsen.system.consul-test + +lein test jepsen.system.consul-test +INFO jepsen.os.debian - :n5 setting up debian +INFO jepsen.os.debian - :n3 setting up debian +INFO jepsen.os.debian - :n4 setting up debian +INFO jepsen.os.debian - :n1 setting up debian +INFO jepsen.os.debian - :n2 setting up debian +INFO jepsen.os.debian - :n4 debian set up +INFO jepsen.os.debian - :n5 debian set up +INFO jepsen.os.debian - :n3 debian set up +INFO jepsen.os.debian - :n1 debian set up +INFO jepsen.os.debian - :n2 debian set up +INFO jepsen.system.consul - :n1 consul nuked +INFO jepsen.system.consul - :n4 consul nuked +INFO jepsen.system.consul - :n5 consul nuked +INFO jepsen.system.consul - :n3 consul nuked +INFO jepsen.system.consul - :n2 consul nuked +INFO jepsen.system.consul - Running nodes: {:n1 false, :n2 false, :n3 false, :n4 false, :n5 false} +INFO jepsen.system.consul - :n2 consul nuked +INFO jepsen.system.consul - :n3 consul nuked +INFO jepsen.system.consul - :n4 consul nuked +INFO jepsen.system.consul - :n5 consul nuked +INFO jepsen.system.consul - :n1 consul nuked +INFO jepsen.system.consul - :n1 starting consul +INFO jepsen.system.consul - :n2 starting consul +INFO jepsen.system.consul - :n4 starting consul +INFO jepsen.system.consul - :n5 starting consul +INFO jepsen.system.consul - :n3 starting consul +INFO jepsen.system.consul - :n3 consul ready +INFO jepsen.system.consul - :n2 consul ready +INFO jepsen.system.consul - Running nodes: {:n1 true, :n2 true, :n3 true, :n4 true, :n5 true} +INFO jepsen.system.consul - :n5 consul ready +INFO jepsen.system.consul - :n1 consul ready +INFO jepsen.system.consul - :n4 consul ready +INFO jepsen.core - Worker 0 starting +INFO jepsen.core - Worker 2 starting +INFO jepsen.core - Worker 1 starting +INFO jepsen.core - Worker 3 starting +INFO jepsen.core - Worker 4 starting +INFO jepsen.util - 2 :invoke :read nil +INFO jepsen.util - 3 :invoke :cas [4 4] +INFO jepsen.util - 0 :invoke :write 4 +INFO jepsen.util - 1 :invoke :write 1 +INFO jepsen.util - 4 :invoke :cas [4 0] +INFO jepsen.util - 2 :ok :read nil +INFO jepsen.util - 4 :fail :cas [4 0] +(Log Truncated...) 
+INFO jepsen.util - 4 :invoke :cas [3 3] +INFO jepsen.util - 4 :fail :cas [3 3] +INFO jepsen.util - :nemesis :info :stop nil +INFO jepsen.util - :nemesis :info :stop "fully connected" +INFO jepsen.util - 0 :fail :read nil +INFO jepsen.util - 1 :fail :write 0 +INFO jepsen.util - :nemesis :info :stop nil +INFO jepsen.util - :nemesis :info :stop "fully connected" +INFO jepsen.core - nemesis done +INFO jepsen.core - Worker 3 done +INFO jepsen.util - 1 :invoke :read nil +INFO jepsen.core - Worker 2 done +INFO jepsen.core - Worker 4 done +INFO jepsen.core - Worker 0 done +INFO jepsen.util - 1 :ok :read 3 +INFO jepsen.core - Worker 1 done +INFO jepsen.core - Run complete, writing +INFO jepsen.core - Analyzing +(Log Truncated...) +INFO jepsen.core - Analysis complete +INFO jepsen.system.consul - :n3 consul nuked +INFO jepsen.system.consul - :n2 consul nuked +INFO jepsen.system.consul - :n4 consul nuked +INFO jepsen.system.consul - :n1 consul nuked +INFO jepsen.system.consul - :n5 consul nuked +1964 element history linearizable. :D + +Ran 1 tests containing 1 assertions. +0 failures, 0 errors. +``` + + diff --git a/website/content/docs/concept/gossip.mdx b/website/content/docs/concept/gossip.mdx new file mode 100644 index 000000000000..7707cc798956 --- /dev/null +++ b/website/content/docs/concept/gossip.mdx @@ -0,0 +1,56 @@ +--- +layout: docs +page_title: Gossip Protocol | Serf +description: >- + Consul agents manage membership in datacenters and WAN federations using the Serf protocol. Learn about the differences between LAN and WAN gossip pools and how `serfHealth` affects health checks. +--- + +# Gossip Protocol + +Consul uses a [gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) +to manage membership and broadcast messages to the cluster. The protocol, membership management, and message broadcasting is provided +through the [Serf library](https://github.com/hashicorp/serf/). The gossip protocol +used by Serf is based on a modified version of the +[SWIM (Scalable Weakly-consistent Infection-style Process Group Membership)](https://www.cs.cornell.edu/projects/Quicksilver/public_pdfs/SWIM.pdf) protocol. +Refer to the [Serf documentation](https://github.com/hashicorp/serf/blob/master/docs/internals/gossip.html.markdown) for additional information about the gossip protocol. + +## Gossip in Consul + +Consul uses a LAN gossip pool and a WAN gossip pool to perform different functions. The pools +are able to perform their functions by leveraging an embedded [Serf](https://github.com/hashicorp/serf/) +library. The library is abstracted and masked by Consul to simplify the user experience, +but developers may find it useful to understand how the library is leveraged. + +### LAN Gossip Pool + +Each datacenter that Consul operates in has a LAN gossip pool containing all members +of the datacenter (clients _and_ servers). Membership information provided by the +LAN pool allows clients to automatically discover servers, reducing the amount of +configuration needed. Failure detection is also distributed and shared by the entire cluster, +instead of concentrated on a few servers. Lastly, the gossip pool allows for fast and +reliable event broadcasts. + +### WAN Gossip Pool + +The WAN pool is globally unique. All servers should participate in the WAN pool, +regardless of datacenter. Membership information provided by the WAN pool allows +servers to perform cross-datacenter requests. 
The integrated failure detection +allows Consul to gracefully handle loss of connectivity--whether the loss is for +an entire datacenter, or a single server in a remote datacenter. + +## Lifeguard Enhancements ((#lifeguard)) + +SWIM assumes that the local node is healthy, meaning that soft real-time packet +processing is possible. The assumption may be violated, however, if the local node +experiences CPU or network exhaustion. In these cases, the `serfHealth` check status +can flap. This can result in false monitoring alarms, additional telemetry noise, and +CPU and network resources being wasted as they attempt to diagnose non-existent failures. + +Lifeguard completely resolves this issue with novel enhancements to SWIM. + +For more details about Lifeguard, please see the +[Making Gossip More Robust with Lifeguard](https://www.hashicorp.com/blog/making-gossip-more-robust-with-lifeguard/) +blog post, which provides a high level overview of the HashiCorp Research paper +[Lifeguard : SWIM-ing with Situational Awareness](https://arxiv.org/abs/1707.00788). The +[Serf gossip protocol guide](https://github.com/hashicorp/serf/blob/master/docs/internals/gossip.html.markdown#lifeguard-enhancements) +also provides some lower-level details about the gossip protocol and Lifeguard. \ No newline at end of file diff --git a/website/content/docs/concept/reliability.mdx b/website/content/docs/concept/reliability.mdx new file mode 100644 index 000000000000..c339c277daac --- /dev/null +++ b/website/content/docs/concept/reliability.mdx @@ -0,0 +1,228 @@ +--- +layout: docs +page_title: Fault Tolerance in Consul +description: >- + Fault tolerance is a system's ability to operate without interruption despite component failure. Learn how a set of Consul servers provide fault tolerance through use of a quorum, and how to further improve control plane resilience through use of infrastructure zones and Enterprise redundancy zones. +--- + +# Fault tolerance + +You must give careful consideration to reliability in the architecture frameworks that you build. When you build a resilient platform, it minimizes the remediation actions you need to take when a failure occurs. This document provides useful information on how to design and operate a resilient Consul cluster, including the methods and functionalities for this goal. + +Consul has many features that operate both locally and remotely that can help you offer a resilient service across multiple datacenters. + +## Introduction + +Fault tolerance is the ability of a system to continue operating without interruption +despite the failure of one or more components. In Consul, the number of server agents determines the fault tolerance. + + +Each Consul datacenter depends on a set of Consul voting server agents. +The voting servers ensure Consul has a consistent, fault-tolerant state +by requiring a majority of voting servers, known as a quorum, to agree upon any state changes. +Examples of state changes include: adding or removing services, +adding or removing nodes, and changes in service or node health status. + +Without a quorum, Consul experiences an outage: +it cannot provide most of its capabilities because they rely on +the availability of this state information. +If Consul has an outage, normal operation can be restored by following the +[Disaster recovery for Consul clusters guide](/consul/tutorials/datacenter-operations/recovery-outage). + +If Consul is deployed with 3 servers, the quorum size is 2. 
The deployment can lose 1 +server and still maintain quorum, so it has a fault tolerance of 1. +If Consul is instead deployed with 5 servers, the quorum size increases to 3, so +the fault tolerance increases to 2. +To learn more about the relationship between the +number of servers, quorum, and fault tolerance, refer to the +[consensus protocol documentation](/consul/docs/concept/reliability#deployment-size). + +Effectively mitigating your risk is more nuanced than just increasing the fault tolerance +because the infrastructure costs can outweigh the improved resiliency. You must also consider correlated risks at the infrastructure-level. There are occasions when multiple servers fail at the same time. That means that a single failure could cause a Consul outage, even if your server-level fault tolerance is 2. + +Different options for your resilient datacenter present trade-offs between operational complexity, computing cost, and Consul request performance. Consider these factors when designing your resilient architecture. + +## Fault tolerance + +The following sections explore several options for increasing Consul's fault tolerance. For enhanced reliability, we recommend taking a holistic approach by layering these multiple functionalities together. + +- Spread servers across infrastructure [availability zones](#availability-zones). +- Use a [minimum quorum size](#quorum-size) to avoid performance impacts. +- Use [redundancy zones](#redundancy-zones) to improve fault tolerance. +- Use [Autopilot](#autopilot) to automatically prune failed servers and maintain quorum size. +- Use [cluster peering](#cluster-peering) to provide service redundancy. + +### Availability zones + +The cloud or on-premise infrastructure underlying your [Consul datacenter](/consul/docs/install/glossary#datacenter) can run across multiple availability zones. + +An availability zone is meant to share no points of failure with other zones by: +- Having power, cooling, and networking systems independent from other zones +- Being physically distant enough from other zones so that large-scale disruptions + such as natural disasters (flooding, earthquakes) are very unlikely to affect multiple zones + +Availability zones are available in the regions of most cloud providers and in some on-premise installations. +If possible, spread your Consul voting servers across 3 availability zones +to protect your Consul datacenter from a single zone-level failure. +For example, if deploying 5 Consul servers across 3 availability zones, place no more than 2 servers in each zone. +If one zone fails, at most 2 servers are lost and quorum will be maintained by the 3 remaining servers. + +To distribute your Consul servers across availability zones, modify your infrastructure configuration with your infrastructure provider. No change is needed to your Consul server's agent configuration. + +Additionally, you should leverage resources that can automatically restore your compute instance, +such as autoscaling groups, virtual machine scale sets, or compute engine autoscaler. +Customize autoscaling resources to re-deploy servers into specific availability zones and ensure the desired numbers of servers are available at all times. + +### Quorum size + +For most production use cases, we recommend using a minimum quorum of either 3 or 5 voting servers, +yielding a server-level fault tolerance of 1 or 2 respectively. 
+ +Even though it would improve fault tolerance, +adding voting servers beyond 5 is **not recommended** because it decreases Consul's performance— +it requires Consul to involve more servers in every state change or consistent read. + +Consul Enterprise users can use redundancy zones to improve fault tolerance without this performance penalty. + +### Redundancy zones + +Use Consul Enterprise [redundancy zones](/consul/docs/manage/scale/redundancy-zone) to improve fault tolerance without the performance penalty of increasing the number of voting servers. + +![Reference architecture diagram for Consul Redundancy zones](/img/architecture/consul-redundancy-zones-light.png#light-theme-only) +![Reference architecture diagram for Consul Redundancy zones](/img/architecture/consul-redundancy-zones-dark.png#dark-theme-only) + +Each redundancy zone should be assigned 2 or more Consul servers. +If all servers are healthy, only one server per redundancy zone will be an active voter; +all other servers will be backup voters. +If a zone's voter is lost, it will be replaced by: +- A backup voter within the same zone, if any. Otherwise, +- A backup voter within another zone, if any. + +Consul can replace lost voters with backup voters within 30 seconds in most cases. +Because this replacement process is not instantaneous, +redundancy zones do not improve immediate fault tolerance— +the number of healthy voting servers that can fail at once without causing an outage. +Instead, redundancy zones improve optimistic fault tolerance: +the number of healthy active and back-up voting servers that can fail gradually without causing an outage. + +The relationship between these two types of fault tolerance is: + +_Optimistic fault tolerance = immediate fault tolerance + the number of healthy backup voters_ + +For example, consider a Consul datacenter with 3 redundancy zones and 2 servers per zone. +There will be 3 voting servers (1 per zone), meaning a quorum size of 2 and an immediate fault tolerance of 1. +There will also be 3 backup voters (1 per zone), each of which increase the optimistic fault tolerance. +Therefore, the optimistic fault tolerance is 4. +This provides performance similar to a 3 server setup with fault tolerance similar to a 7 server setup. + +We recommend associating each Consul redundancy zone with an infrastructure availability zone +to also gain the infrastructure-level fault tolerance benefits provided by availability zones. +However, Consul redundancy zones can be used even without the backing of infrastructure availability zones. + +For more information on redundancy zones, refer to: +- [Redundancy zone documentation](/consul/docs/manage/scale/redundancy-zone) + for a more detailed explanation +- [Redundancy zone tutorial](/consul/tutorials/enterprise/redundancy-zones) + to learn how to use them + +### Autopilot + +Autopilot is a set of functions that introduce servers to a cluster, cleans up dead servers, and monitors the state of the Raft protocol in the Consul cluster. + +When you enable Autopilot's dead server cleanup, Autopilot marks failed servers as `Left` and removes them from the Raft peer set to prevent them from interfering with the quorum size. Autopilot does that as soon as a replacement Consul server comes online. This behavior is beneficial when server nodes failed and have been redeployed but Consul considers them as new nodes because their IP address and hostnames have changed. Autopilot keeps the cluster peer set size correct and the quorum requirement simple. 
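For example, dead server cleanup is controlled through the `autopilot` stanza in the server agent configuration. The following is a minimal sketch; the thresholds shown are illustrative and should be tuned for your environment:

```hcl
autopilot {
  cleanup_dead_servers      = true
  last_contact_threshold    = "200ms"
  server_stabilization_time = "10s"
}
```

These settings can also be updated at runtime with the `consul operator autopilot set-config` command.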
+ +To illustrate the Autopilot advantage, consider a scenario where Consul has a cluster of five server nodes. The quorum is three, which means the cluster can lose two server nodes before the cluster fails. The following events happen: + +1. Two server nodes fail. +1. Two replacement nodes are deployed with new hostnames and IPs. +1. The two replacement nodes rejoin the Consul cluster. +1. Consul treats the replacement nodes as extra nodes, unrelated to the previously failed nodes. + +_With Autopilot not enabled_, the following happens: + +1. Consul does not immediately clean up the failed nodes when the replacement nodes join the cluster. +1. The cluster now has the three surviving nodes, the two failed nodes, and the two replacement nodes, for a total of seven nodes. + - The quorum is increased to four, which means the cluster can only afford to lose one node until after the two failed nodes are deleted in seventy-two hours. + - The redundancy level has decreased from its initial state. + +_With Autopilot enabled_, the following happens: + +1. Consul immediately cleans up the failed nodes when the replacement nodes join the cluster. +1. The cluster now has the three surviving nodes and the two replacement nodes, for a total of five nodes. + - The quorum stays at three, which means the cluster can afford to lose two nodes before it fails. + - The redundancy level remains the same. + +### Cluster peering + +Linking multiple Consul clusters together to provide service redundancy is the most effective method to prevent disruption from failure. This method is enhanced when you design individual Consul clusters with resilience in mind. Consul clusters interconnect in two ways: WAN federation and cluster peering. We recommend using cluster peering whenever possible. + +Cluster peering lets you connect two or more independent Consul clusters using mesh gateways, so that services can communicate between non-identical partitions in different datacenters. + +![Reference architecture diagram for Consul cluster peering](/img/architecture/cluster-peering-diagram-light.png#light-theme-only) +![Reference architecture diagram for Consul cluster peering](/img/architecture/cluster-peering-diagram-dark.png#dark-theme-only) + +Cluster peering is the preferred way to interconnect clusters because it is operationally easier to configure and manage than WAN federation. Cluster peering communication between two datacenters runs only on one port on the related Consul mesh gateway, which makes it operationally easy to expose for routing purposes. + +When you use cluster peering to connect admin partitions between datacenters, use Consul’s dynamic traffic management functionalities `service-splitter`, `service-router` and `service-failover` to configure your service mesh to automatically forward or failover service traffic between peer clusters. Consul can then manage the traffic intended for the service and do [failover](/consul/docs/reference/config-entry/service-resolver#spec-failover), [load-balancing](/consul/docs/reference/config-entry/service-resolver#spec-loadbalancer), or [redirection](/consul/docs/reference/config-entry/service-resolver#spec-redirect). + +Cluster peering also extends service discovery across different datacenters independent of service mesh functions. After you peer datacenters, you can refer to services between datacenters with `.virtual.peer.consul` in Consul DNS. For Consul Enterprise, your query string may need to include the namespace, partition, or both. 
Refer to the [Consul DNS documentation](/consul/docs/services/discovery/dns-static-lookups#service-virtual-ip-lookups) for details on building virtual service lookups. + +For more information on cluster peering, refer to: +- [Cluster peering documentation](/consul/docs/east-west/cluster-peering) + for a more detailed explanation +- [Cluster peering tutorial](/consul/tutorials/implement-multi-tenancy/cluster-peering) + to learn how to implement cluster peering + +## Deployment size + +The following table shows quorum size and failure tolerance for various +cluster sizes. The recommended deployment is either 3 or 5 servers. A single +server deployment is _**highly**_ discouraged as data loss is inevitable in a +failure scenario. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Servers | Quorum Size | Failure Tolerance |
| ------- | ----------- | ----------------- |
| 1       | 1           | 0                 |
| 2       | 2           | 0                 |
| 3       | 2           | 1                 |
| 4       | 3           | 1                 |
| 5       | 3           | 2                 |
| 6       | 4           | 2                 |
| 7       | 4           | 3                 |
\ No newline at end of file diff --git a/website/content/docs/concepts/service-discovery.mdx b/website/content/docs/concepts/service-discovery.mdx deleted file mode 100644 index 44c83b74147d..000000000000 --- a/website/content/docs/concepts/service-discovery.mdx +++ /dev/null @@ -1,95 +0,0 @@ ---- -layout: docs -page_title: Service Discovery Explained -description: >- - Service discovery dynamically tracks and monitors service instances on your network and makes them discoverable through DNS queries. Learn about the benefits of service discovery and how it works. ---- - -# What is service discovery? - -_Service discovery_ helps you discover, track, and monitor the health of services within a network. Service discovery registers and maintains a record of all your services in a _service catalog_. This service catalog acts as a single source of truth that allows your services to query and communicate with each other. - -## Benefits of service discovery - -Service discovery provides benefits for all organizations, ranging from simplified scalability to improved application resiliency. Some of the benefits of service discovery include: - -- Dynamic IP address and port discovery -- Simplified horizontal service scaling -- Abstracts discovery logic away from applications -- Reliable service communication ensured by health checks -- Load balances requests across healthy service instances -- Faster deployment times achieved by high-speed discovery -- Automated service registration and de-registration - -## How does service discovery work? - -Service discovery uses a service's identity instead of traditional access information (IP address and port). This allows you to dynamically map services and track any changes within a service catalog. Service consumers (users or other services) then use DNS to dynamically retrieve other service's access information from the service catalog. The lifecycle of a service may look like the following: - -A service consumer communicates with the "Web" service via a unique Consul DNS entry provided by the service catalog. - -![Example diagram of how service consumers query for services](/img/what_is_service_discovery\_1.png) - -A new instance of the "Web" service registers itself to the service catalog with its IP address and port. As new instances of your services are registered to the service catalog, they will participate in the load balancing pool for handling service consumer requests. - -![Example diagram of how a service is registered to the service catalog](/img/what_is_service_discovery\_2.png) - -The service catalog is dynamically updated as new instances of the service are added and legacy or unhealthy service instances are removed. Removed services will no longer participate in the load balancing pool for handling service consumer requests. - -![Example diagram of how unhealthy services are removed from the service catalog](/img/what_is_service_discovery\_3.png) - -## What is service discovery in microservices? - -In a microservices application, the set of active service instances changes frequently across a large, dynamic environment. These service instances rely on a service catalog to retrieve the most up-to-date access information from the respective services. A reliable service catalog is especially important for service discovery in microservices to ensure healthy, scalable, and highly responsive application operation. - -## What are the two main types of service discovery? 
- -There are two main service‑discovery patterns: _client-side_ discovery and _server-side_ discovery. - -In systems that use client‑side discovery, the service consumer is responsible for determining the access information of available service instances and load balancing requests between them. - -1. The service consumer queries the service catalog -1. The service catalog retrieves and returns all access information -1. The service consumer selects a healthy downstream service and makes requests directly to it - -![Example diagram of client-side discovery concept](/img/what_is_service_discovery\_4.png) - -In systems that use server‑side discovery, the service consumer uses an intermediary to query the service catalog and make requests to them. - -1. The service consumer queries an intermediary (Consul) -1. The intermediary queries the service catalog and routes requests to the available service instances. - -![Example diagram of server-side discovery concept](/img/what_is_service_discovery\_5.png) - -For modern applications, this discovery method is advantageous because developers can make their applications faster and more lightweight by decoupling and centralizing service discovery logic. - -## Service discovery vs load balancing - -Service discovery and load balancing share a similarity in distributing requests to back end services, but differ in many important ways. - -Traditional load balancers are not designed for rapid registration and de-registration of services, nor are they designed for high-availability. By contrast, service discovery systems use multiple nodes that maintain the service registry state and a peer-to-peer state management system for increased resilience across any type of infrastructure. - -For modern, cloud-based applications, service discovery is the preferred method for directing traffic to the right service provider due to its ability to scale and remain resilient, independent of infrastructure. - -## How do you implement service discovery? - -You can implement service discovery systems across any type of infrastructure, whether it is on-premise or in the cloud. Service discovery is a native feature of many container orchestrators such as Kubernetes or Nomad. There are also platform-agnostic service discovery methods available for non-container workloads such as VMs and serverless technologies. Implementing a resilient service discovery system involves creating a set of servers that maintain and facilitate service registry operations. You can achieve this by installing a service discovery system or using a managed service discovery service. - -## What is Consul? - -Consul is a service networking solution that lets you automate network configurations, discover services, and enable secure connectivity across any cloud or runtime. With these features, Consul helps you solve the complex networking and security challenges of operating microservices and cloud infrastructure (multi-cloud and hybrid cloud). You can use these features independently or together to achieve [zero trust](https://www.hashicorp.com/solutions/zero-trust-security) security. - -Consul's service discovery capabilities help you discover, track, and monitor the health of services within a network. Consul acts as a single source of truth that allows your services to query and communicate with each other. - -You can use Consul with virtual machines (VMs), containers, serverless technologies, or with container orchestration platforms, such as [Nomad](https://www.nomadproject.io/) and Kubernetes. 
Consul is platform agnostic which makes it a great fit for all environments, including legacy platforms. - -Consul is available as a [self-managed](/consul/downloads) project or as a fully managed service mesh solution ([HCP Consul Dedicated](https://portal.cloud.hashicorp.com/sign-in?utm_source=consul_docs)). HCP Consul Dedicated enables users to discover and securely connect services without the added operational burden of maintaining a service mesh on their own. - -## Next steps - -Get started with service discovery today by leveraging Consul on HCP, Consul on Kubernetes, or Consul on VMs. Prepare your organization for the future of multi-cloud and embrace a [zero-trust](https://www.hashicorp.com/solutions/zero-trust-security) architecture. - -Feel free to get started with Consul by exploring one of these Consul tutorials: - -- [Get Started with Consul on VMs](/consul/tutorials/get-started-vms) -- [Get Started with Consul on HCP](/consul/tutorials/get-started-hcp) -- [Get Started with Consul on Kubernetes](/consul/tutorials/get-started-kubernetes) diff --git a/website/content/docs/concepts/service-mesh.mdx b/website/content/docs/concepts/service-mesh.mdx deleted file mode 100644 index 33ebf1478d83..000000000000 --- a/website/content/docs/concepts/service-mesh.mdx +++ /dev/null @@ -1,118 +0,0 @@ ---- -layout: docs -page_title: Service Mesh Explained -description: >- - Service mesh is a dedicated network layer for secure, resilient, observable microservice communication. Learn about using Consul's service mesh to solve service networking challenges in application architectures and manage complexity in multi-cloud, hybrid cloud, and multi-platform environments. ---- - -# What is a service mesh? - -A _service mesh_ is a dedicated network layer that provides secure service-to-service communication within and across infrastructure, including on-premises and cloud environments. -Service meshes are often used with a microservice architectural pattern, but can provide value in any scenario where complex networking is involved. - -## Benefits of a service mesh - -A service mesh provides benefits for all organizations, ranging from security to improved application resiliency. -Some of the benefits of a service mesh include; - -- service discovery -- application health monitoring -- load balancing -- automatic failover -- traffic management -- encryption -- observability and traceability -- authentication and authorization -- network automation - -A common use case for leveraging a service mesh is to achieve a [_zero trust_ model](https://www.consul.io/use-cases/zero-trust-networking). -In a zero trust model, applications require identity-based access to ensure all communication within the service mesh is authenticated with TLS certificates and encrypted in transit. - -In traditional security strategies, protection is primarily focused at the perimeter of a network. -In cloud environments, the surface area for network access is much wider than the traditional on-premises networks. -In addition, traditional security practices overlook the fact that many bad actors can originate from within the network walls. -A zero trust model addresses these concerns while allowing organizations to scale as needed. - -## How does a service mesh work? - -A service mesh typically consist of a control plane and a data plane. The control plane maintains a central registry that keeps track of all services and their respective IP addresses. 
This activity is called [service discovery](https://www.hashicorp.com/products/consul/service-discovery-and-health-checking). -As long as the application is registered with the control plane, the control plane will be able to share with other members of the mesh how to communicate with the application and enforce rules for who can communicate with each other. - -The control plane is responsible for securing the mesh, facilitating service discovery, health checking, policy enforcement, and other similar operational concerns. - -The data plane handles communication between services. -Many service mesh solutions employ a sidecar proxy to handle data plane communications, and thus limit the level of awareness the services need to have about the network environment. - -![Overview of a service mesh](/img/what_is_service_mesh\_1.png) - -## API gateway vs service mesh - -An API gateway is a centralized access point for handling incoming client requests and delivering them to services. -The API gateway acts as a control plane that allows operators and developers to manage incoming client requests and apply different handling logic depending on the request. -The API gateway will route the incoming requests to the respective service. The primary function of an API gateway is to handle requests and return the reply from the service back to the client. - -A service mesh specializes in the network management of services and the communication between services. -The mesh is responsible for keeping track of services and their health status, IP address, and traffic routing and ensuring all traffic between services is authenticated and encrypted. -Unlike some API gateways, a service mesh will track all registered services' lifecycle and ensure requests are routed to healthy instances of the service. -API gateways are frequently deployed alongside a load balancer to ensure traffic is directed to healthy and available instances of the service. -The mesh reduces the load balancer footprint as routing responsibilities are handled in a decentralized manner. - -API gateways can be used with a service mesh to bridge external networks (non-mesh) with a service mesh. - --> **API gateways and traffic direction:** API gateways are often used to accept north-south traffic. North-south traffic is networking traffic that either enters or exits a datacenter or a virtual private network (VPC). You can connect API gateways to a service mesh and provide access to it from outside the mesh. -A service mesh is primarily used for handling east-west traffic. East-west traffic traditionally remains inside a data center or a VPC. -A service mesh can be connected to another service mesh in another data center or VPC to form a federated mesh. - -## What problems does a service mesh solve? - -Modern infrastructure is transitioning from being primarily static to dynamic in nature (ephemeral). -This dynamic infrastructure has a short life cycle, meaning virtual machines (VM) and containers are frequently recycled. -It's difficult for an organization to manage and keep track of application services that live on short-lived resources. A service mesh solves this problem by acting as a central registry of all registered services. -As instances of a service (e.g., VM, container, serverless functions) come up and down, the mesh is aware of their state and availability. The ability to conduct _service discovery_ is the foundation to the other problems a service mesh solves. 
- -As a service mesh is aware of the state of a service and its instances, the mesh can implement more intelligent and dynamic network routing. -Many service meshes offer L7 traffic management capabilities. As a result, operators and developers can create powerful rules to direct network traffic as needed, such as load balancing, traffic splitting, dynamic failover, and custom resolvers. -A service mesh's dynamic network behavior allows application owners to improve application resiliency and availability with no application changes. - -Implementing dynamic network behavior is critical as more and more applications are deployed across different cloud providers (multi-cloud) and private data centers. -Organizations may need to route network traffic to other infrastructure environments. Ensuring this traffic is secure is top of mind for all organizations. -Service meshes offer the ability to enforce network traffic encryption (mTLS) and authentication between all services. The service mesh can automatically generate a TLS certificate for each service and its instances. -The certificate authenticates with other services inside the mesh and encrypts the TCP/UDP/gRPC connection with TLS. - -Fine-grained policies that dictate which services are allowed to communicate with each other are another benefit of a service mesh. -Traditionally, services are permitted to communicate with other services through firewall rules. -The traditional firewall (IP-based) model is difficult to enforce for dynamic infrastructure resources with short lifecycles and frequently recycled IP addresses. -As a result, network administrators have to open up network ranges to permit network traffic between services without differentiating the services generating the network traffic. However, a service mesh allows operators and developers to shift away from an IP-based model and focus more on service-to-service permissions. -An operator defines a policy that only allows _service A_ to communicate with _service B_. Otherwise, the default action is to deny the traffic. -This shift from an IP address-based security model to a service-focused model reduces the overhead of securing network traffic and allows an organization to take advantage of multi-cloud environments without sacrificing security due to complexity. - -## How do you implement a service mesh? - -Service meshes are commonly installed in Kubernetes clusters. There are also platform-agnostic service meshes available for non-Kubernetes-based workloads. -For Kubernetes, most service meshes can be installed by operators through a [Helm chart](https://helm.sh/). Additionally, the service mesh may offer a CLI tool that supports the installation and maintenance of the service mesh. -Non-Kubernetes-based service meshes can be installed through infrastructure as code (IaC) products such as [Terraform](https://www.terraform.io/), CloudFormation, ARM Templates, Puppet, Chef, etc. - -## What is a multi-platform service mesh? - -A multi-platform service mesh is capable of supporting various infrastructure environments. -This can range from having the service mesh support Kubernetes and non-Kubernetes workloads, to having a service mesh span various cloud environments (multi-cloud and hybrid cloud). - -## What is Consul? - -Consul is a multi-networking tool that offers a fully featured service mesh solution, solving the networking and security challenges of operating microservices and cloud infrastructure (multi-cloud and hybrid cloud).
-Consul offers a software-driven approach to routing and segmentation. It also brings additional benefits such as failure handling, retries, and network observability. -Each of these features can be used individually as needed or they can be used together to build a full service mesh and achieve [zero trust](https://www.hashicorp.com/solutions/zero-trust-security) security. -In simple terms, Consul is the control plane of the service mesh. The data plane is supported by Consul through its first class support of [Envoy](https://www.envoyproxy.io/) as a proxy. - -You can use Consul with virtual machines (VMs), containers, or with container orchestration platforms, such as [Nomad](https://www.nomadproject.io/) and Kubernetes. -Consul is platform agnostic which makes it a great fit for all environments, including legacy platforms. - -Consul is available as a [self-install](/consul/downloads) project or as a fully managed service mesh solution called [HCP Consul Dedicated](https://portal.cloud.hashicorp.com/sign-in?utm_source=consul_docs). -HCP Consul Dedicated enables users to discover and securely connect services without the added operational burden of maintaining a service mesh on their own. - -You can learn more about Consul by visiting the Consul [tutorials](/consul/tutorials). - -## Next - -Get started today with a service mesh by leveraging [HCP Consul Dedicated](https://portal.cloud.hashicorp.com/sign-in?utm_source=consul_docs). -Prepare your organization for the future of multi-cloud and embrace a [zero-trust](https://www.hashicorp.com/solutions/zero-trust-security) architecture. diff --git a/website/content/docs/connect/ca/aws.mdx b/website/content/docs/connect/ca/aws.mdx deleted file mode 100644 index cac5cb46e650..000000000000 --- a/website/content/docs/connect/ca/aws.mdx +++ /dev/null @@ -1,182 +0,0 @@ ---- -layout: docs -page_title: Service Mesh Certificate Authority - AWS Certificate Manager -description: >- - You can use the AWS Certificate Manager Private Certificate Authority as the Consul service mesh's certificate authority to secure your service mesh. Learn how to configure the AWS ACM Private CA, its limitations in Consul, and cost planning considerations. ---- - -# AWS Certificate Manager as a Service Mesh Certificate Authority - -Consul can be used with [AWS Certificate Manager (ACM) Private Certificate -Authority -(CA)](https://aws.amazon.com/certificate-manager/private-certificate-authority/) -to manage and sign certificates. - --> This page documents the specifics of the AWS ACM Private CA provider. -Please read the [certificate management overview](/consul/docs/connect/ca) -page first to understand how Consul manages certificates with configurable -CA providers. - -## Requirements - -The ACM Private CA Provider was added in Consul 1.7.0. - -The ACM Private CA Provider needs to be authorized via IAM credentials to -perform operations. Every Consul server needs to be running in an environment -where a suitable IAM configuration is present. - -The [standard AWS SDK credential -locations](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-credentials) -are used, which means that suitable credentials and region configuration need to be present in one of the following: - -1. Environment variables -1. Shared credentials file -1. 
Via an EC2 instance role - -The IAM credential provided must have permission for the following actions: - -- CreateCertificateAuthority - assuming an existing CA is not specified in `existing_arn` -- DescribeCertificateAuthority -- GetCertificate -- IssueCertificate - -## Configuration - -The ACM Private CA provider is enabled by setting the CA provider to -`"aws-pca"` in the agent's [`ca_provider`] configuration option, or via the -[`/connect/ca/configuration`] API endpoint. At this time there is only one, -optional configuration value. - -Example configurations are shown below: - - - - - -```hcl -# ... -connect { - enabled = true - ca_provider = "aws-pca" - ca_config { - existing_arn = "arn:aws:acm-pca:region:account:certificate-authority/12345678-1234-1234-123456789012" - } -} -``` - - - - - -```json -{ - "Provider": "aws-pca", - "Config": { - "ExistingARN": "arn:aws:acm-pca:region:account:certificate-authority/12345678-1234-1234-123456789012" - } -} -``` - - - - - -~> **Note**: Suitable AWS IAM credentials are necessary for the provider to -work. However, these are not configured in the Consul config which is typically -on disk, and instead rely on the [standard AWS SDK configuration -locations](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html#specifying-credentials). - -The configuration options are listed below. - --> **Note**: The first key is the value used in API calls, and the second key - (after the `/`) is used if you are adding the configuration to the agent's - configuration file. - -- `ExistingARN` / `existing_arn` (`string: `) - The Amazon Resource - Name (ARN) of an existing private CA in your ACM account. If specified, - Consul will attempt to use the existing CA to issue certificates. - - - In the primary datacenter this ARN **must identify a root CA**. See - [limitations](#limitations). - - In a secondary datacenter, it must identify a subordinate CA signed by - the same root used in the primary datacenter. If it is signed by another - root, Consul will automatically create a new subordinate signed by the - primary's root instead. - - The default behavior with no `ExistingARN` specified is for Consul to - create a new root CA in the primary datacenter and a subordinate CA in - each secondary DC. - -@include 'http_api_connect_ca_common_options.mdx' - -## Limitations - -ACM Private CA has several -[limits](https://docs.aws.amazon.com/acm-pca/latest/userguide/PcaLimits.html) -that restrict how fast certificates can be issued. This may impact how quickly -large clusters can rotate all issued certificates. - -Currently, the ACM Private CA provider for service mesh has some additional -limitations described below. - -### Unable to Cross-sign Other CAs - -It's not possible to cross-sign other CA provider's root certificates during a -migration. ACM Private CA is capable of doing that through a different workflow -but is not able to blindly cross-sign another root certificate without a CSR -being generated. Both Consul's built-in CA and Vault can do this and the current -workflow for managing CAs relies on it. - -For now, the limitation means that once ACM Private CA is configured as the CA -provider, it is not possible to reconfigure a different CA provider, or rotate -the root CA key without potentially observing some transient connection -failures. See the section on [forced rotation without -cross-signing](/consul/docs/connect/ca#forced-rotation-without-cross-signing) for -more details. 
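As a rough sketch of the forced rotation referenced above, an operator who accepts the transient failures could switch providers anyway by acknowledging the risk in the CA configuration update. This example assumes a server reachable at `localhost:8500`, a switch back to the built-in `consul` provider, and that `ForceWithoutCrossSigning` is supplied as a top-level field of the update payload; it is not a recommended procedure.

```shell-session
$ # Sketch only: forces the provider change even though ACM Private CA cannot cross-sign.
$ curl --request PUT --data '
{
  "Provider": "consul",
  "Config": {
    "LeafCertTTL": "72h"
  },
  "ForceWithoutCrossSigning": true
}' http://localhost:8500/v1/connect/ca/configuration
```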
- -### Primary DC Must be a Root CA - -Currently, if an existing ACM Private CA is used, the primary DC must use a Root -CA directly to issue certificates. - -## Cost Planning - -To help estimate costs, the following example illustrates the resources that would -be used. - -~> This is intended to illustrate the behavior of the CA for cost planning -purposes. Please refer to the [pricing for ACM Private -CA](https://aws.amazon.com/certificate-manager/pricing/) for actual cost -information. - -Assume the following Consul datacenters exist and are configured to use ACM -Private CA as their service mesh CA with the default leaf certificate lifetime of -72 hours: - -| Datacenter | Primary | CA Resource Created | Number of service instances | -| ---------- | ------- | ------------------- | --------------------------- | -| dc1 | yes | 1 ROOT | 100 | -| dc2 | no | 1 SUBORDINATE | 50 | -| dc3 | no | 1 SUBORDINATE | 500 | - -Leaf certificates are valid for 72 hours but are refreshed when -between 60% and 90% of their lifetime has elapsed. On average, each certificate -will be reissued every 54 hours, or roughly 13.3 times per month. - -So the monthly cost would be calculated as: - -- 3 ⨉ Monthly CA cost, plus -- 8,645 ⨉ Certificate Issue cost, made up of: - - 100 ⨉ 13.3 = 1,330 certificates issued in dc1 - - 50 ⨉ 13.3 = 665 certificates issued in dc2 - - 500 ⨉ 13.3 = 6,650 certificates issued in dc3 - -The number of certificates issued could be reduced by increasing -[`leaf_cert_ttl`](/consul/docs/agent/config/config-files#ca_leaf_cert_ttl) in the CA Provider -configuration if the longer-lived credentials are an acceptable risk tradeoff -against the cost. - - -[`ca_config`]: /consul/docs/agent/config/config-files#connect_ca_config -[`ca_provider`]: /consul/docs/agent/config/config-files#connect_ca_provider -[`/connect/ca/configuration`]: /consul/api-docs/connect/ca#update-ca-configuration diff --git a/website/content/docs/connect/ca/consul.mdx b/website/content/docs/connect/ca/consul.mdx deleted file mode 100644 index 870bea4fe434..000000000000 --- a/website/content/docs/connect/ca/consul.mdx +++ /dev/null @@ -1,137 +0,0 @@ ---- -layout: docs -page_title: Certificate Authority - Built-in Service Mesh CA -description: >- - Consul has a built-in service mesh certificate authority that can be used to secure your service mesh without needing a separate CA system. Learn how to configure the built-in service mesh CA as a root CA or an intermediate CA connected to an existing PKI system. ---- - -# Built-In Certificate Authority for Service Mesh - -Consul ships with a built-in CA system so that service mesh can be -easily enabled out of the box. The built-in CA generates and stores the -root certificate and private key on Consul servers. It can also be -configured with a custom certificate and private key if needed. - -If service mesh is enabled and no CA provider is specified, the built-in -CA is the default provider used. The provider can be -[updated and rotated](/consul/docs/connect/ca#root-certificate-rotation) -at any point to migrate to a new provider. - --> This page documents the specifics of the built-in CA provider. -Please read the [certificate management overview](/consul/docs/connect/ca) -page first to understand how Consul manages certificates with configurable -CA providers. - -## Configuration - -The built-in CA provider has no required configuration.
Enabling service mesh -alone will configure the built-in CA provider, and will automatically generate -a root certificate and private key: - - - -```hcl -# ... -connect { - enabled = true -} -``` - - - -The configuration options are listed below. - --> **Note**: The first key is the value used in API calls, and the second key -(after the `/`) is used if you are adding the configuration to the agent's -configuration file. - -- `PrivateKey` / `private_key` (`string: ""`) - A PEM-encoded private key - for signing operations. This must match the private key used for the root - certificate if it is manually specified. If this is blank, a private key - is automatically generated. - -- `RootCert` / `root_cert` (`string: ""`) - A PEM-encoded root certificate - to use. If this is blank, a root certificate is automatically generated - using the private key specified. If this is specified, the certificate - must be a valid - [SPIFFE SVID signing certificate](https://github.com/spiffe/spiffe/blob/master/standards/X509-SVID.md) - and the URI in the SAN must match the cluster identifier created at - bootstrap with the ".consul" TLD. The cluster identifier can be found - using the [CA List Roots endpoint](/consul/api-docs/connect/ca#list-ca-root-certificates). - -@include 'http_api_connect_ca_common_options.mdx' - -## Specifying a Custom Private Key and Root Certificate - -By default, a root certificate and private key will be automatically -generated during the cluster's bootstrap. It is possible to configure -the Consul CA provider to use a specific private key and root certificate. -This is particularly useful if you have an external PKI system that doesn't -currently integrate with Consul directly. - -To view the current CA configuration, use the [Get CA Configuration endpoint](/consul/api-docs/connect/ca#get-ca-configuration): - -```shell-session -$ curl localhost:8500/v1/connect/ca/configuration -{ - "Provider": "consul", - "Config": { - "LeafCertTTL": "72h", - "IntermediateCertTTL": "8760h" - }, - "CreateIndex": 5, - "ModifyIndex": 5 -} -``` - -This is the default service mesh CA configuration if nothing is explicitly set when -service mesh is enabled - the PrivateKey and RootCert fields have not been set, so those have -been generated (as seen above in the roots list). - -There are two ways to have the Consul CA use a custom private key and root certificate: -either through the `ca_config` section of the [Agent configuration](/consul/docs/agent/config/config-files#connect_ca_config) (which can only be used during the cluster's -initial bootstrap) or through the [Update CA Configuration endpoint](/consul/api-docs/connect/ca#update-ca-configuration). - -Currently Consul requires that root certificates are valid [SPIFFE SVID Signing certificates](https://github.com/spiffe/spiffe/blob/master/standards/X509-SVID.md) and that the URI encoded -in the SAN is the cluster identifier created at bootstrap with the ".consul" TLD. In this -example, we will set the URI SAN to `spiffe://36cb52cd-4058-f811-0432-6798a240c5d3.consul`. 
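If you need to look up the trust domain for your own cluster instead of the example value, one option (a small sketch that assumes a local agent and `jq`) is to read it from the CA roots endpoint:

```shell-session
$ # TrustDomain holds the cluster identifier used in the SPIFFE URI SAN.
$ curl --silent http://localhost:8500/v1/connect/ca/roots | jq --raw-output '.TrustDomain'
36cb52cd-4058-f811-0432-6798a240c5d3.consul
```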
- -In order to use the Update CA Configuration HTTP endpoint, the private key and certificate -must be passed via JSON: - -```shell-session -$ jq --null-input --rawfile key root.key --rawfile cert root.crt ' -{ - "Provider": "consul", - "Config": { - "LeafCertTTL": "72h", - "PrivateKey": $key | sub("\\n$"; ""), - "RootCert": $cert | sub("\\n$"; ""), - "IntermediateCertTTL": "8760h" - } -}' > ca_config.json -``` - -The resulting `ca_config.json` file can then be used to update the active root certificate: - -```shell-session -$ cat ca_config.json -{ - "Provider": "consul", - "Config": { - "LeafCertTTL": "72h", - "PrivateKey": "-----BEGIN RSA PRIVATE KEY-----\nMIIEpAIBAAKCAQEArqiy1c3pbT3cSkjdEM1APALUareU...", - "RootCert": "-----BEGIN CERTIFICATE-----\nMIIDijCCAnKgAwIBAgIJAOFZ66em1qC7MA0GCSqGSIb3...", - "IntermediateCertTTL": "8760h" - } -} - -$ curl --request PUT --data @ca_config.json localhost:8500/v1/connect/ca/configuration - -... - -[INFO] connect: CA rotated to new root under provider "consul" -``` - -The cluster is now using the new private key and root certificate. Updating the CA config -this way also triggered a certificate rotation. diff --git a/website/content/docs/connect/ca/index.mdx b/website/content/docs/connect/ca/index.mdx deleted file mode 100644 index c49e07516fae..000000000000 --- a/website/content/docs/connect/ca/index.mdx +++ /dev/null @@ -1,263 +0,0 @@ ---- -layout: docs -page_title: Service Mesh Certificate Authority - Overview -description: >- - Consul uses a certificate authority (CA) to generate, use, manage, sign, and store certificates for your service mesh. Learn about certificate management, including configuration, root cert rotation, cross-signing, and regenerating the CA. ---- - -# Service Mesh Certificate Authority Overview - -Service mesh certificate management is done centrally through the Consul -servers using the configured service mesh CA (Certificate Authority) provider. A CA provider -manages root and intermediate certificates and performs certificate signing -operations. The Consul leader orchestrates CA provider operations as necessary, -such as when a service needs a new certificate or during CA rotation events. - -The CA provider abstraction enables Consul to support multiple systems for -storing and signing certificates. Consul ships with a -[built-in CA](/consul/docs/connect/ca/consul) which generates and stores the -root certificate and private key on the Consul servers. Consul also has -support for using -[Vault as a CA](/consul/docs/connect/ca/vault). With Vault, the root certificate -and private key material remain with the Vault cluster. - -## CA and Certificate relationship - -This diagram shows the relationship between the CA certificates in a Consul primary datacenter and a -secondary Consul datacenter. - -![CA relationship](/img/cert-relationship.svg) - -Leaf certificates are created for two purposes: -- the Leaf Cert Service is used by envoy proxies in the mesh to perform mTLS with other -services. -- the Leaf Cert Client Agent is created by auto-encrypt and auto-config. It is used by -client agents for HTTP API TLS, and for mTLS for RPC requests to servers. - -Any secondary datacenters use their CA provider to generate an intermediate certificate -signing request (CSR) to be signed by the primary root CA. They receive an intermediate -CA certificate, which is used to sign leaf certificates in the secondary datacenter. - -You can use different providers across primary and secondary datacenters. 
-For example, an operator may use a Vault CA provider for extra security in the primary -datacenter but choose to use the built-in CA provider in the secondary datacenter, which -may not have a reachable Vault cluster. The following table compares the built-in and Vault providers. - -## CA Provider Comparison - -| | Consul built-in | Vault | -|------------|------------------------------------|-----------------------------------------------------------------------------------| -| Security | CA private keys are stored on disk | CA private keys are stored in Vault and are never exposed to Consul server agents | -| Resiliency | No dependency on external systems. If Consul is available, it can sign certificates | Dependent on Vault availability | -| Latency | Consul signs certificates locally | A network call to Vault is required to sign certificates | - -## CA Bootstrapping - -CA initialization happens automatically when a new Consul leader is elected -as long as -[service mesh is enabled](/consul/docs/connect/configuration#agent-configuration), -and the CA system has not already been initialized. This initialization process -will generate the initial root certificates and setup the internal Consul server -state. - -For the initial bootstrap, the CA provider can be configured through the -[Agent configuration](/consul/docs/agent/config/config-files#connect_ca_config). After -initialization, the CA can only be updated through the -[Update CA Configuration API endpoint](/consul/api-docs/connect/ca#update-ca-configuration). -If a CA is already initialized, any changes to the CA configuration in the -agent configuration file (including removing the configuration completely) -will have no effect. - -If no specific provider is configured when service mesh is enabled, the built-in -Consul CA provider will be used and a private key and root certificate will -be generated automatically. - -## Viewing Root Certificates - -Root certificates can be queried with the -[list CA Roots endpoint](/consul/api-docs/connect/ca#list-ca-root-certificates). -With this endpoint, you can see the list of currently trusted root certificates. -When a cluster first initializes, this will only list one trusted root. Multiple -roots may appear as part of -[rotation](#root-certificate-rotation). 
- -```shell-session -$ curl http://localhost:8500/v1/connect/ca/roots -{ - "ActiveRootID": "31:6c:06:fb:49:94:42:d5:e4:55:cc:2e:27:b3:b2:2e:96:67:3e:7e", - "TrustDomain": "36cb52cd-4058-f811-0432-6798a240c5d3.consul", - "Roots": [ - { - "ID": "31:6c:06:fb:49:94:42:d5:e4:55:cc:2e:27:b3:b2:2e:96:67:3e:7e", - "Name": "Consul CA Root Cert", - "SerialNumber": 7, - "SigningKeyID": "19:45:8b:30:a1:45:84:ae:23:52:db:8d:1b:ff:a9:09:db:fc:2a:72:39:ae:da:11:53:f4:37:5c:de:d1:68:d8", - "ExternalTrustDomain": "a1499528-fbf6-df7b-05e5-ae81e1873fc4", - "NotBefore": "2018-06-06T17:35:25Z", - "NotAfter": "2028-06-03T17:35:25Z", - "RootCert": "-----BEGIN CERTIFICATE-----\nMIICmDCCAj6gAwIBAgIBBzAKBggqhkjOPQQDAjAWMRQwEgYDVQQDEwtDb25zdWwg\nQ0EgNzAeFw0xODA2MDYxNzM1MjVaFw0yODA2MDMxNzM1MjVaMBYxFDASBgNVBAMT\nC0NvbnN1bCBDQSA3MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEgo09lpx63bHw\ncSXeeoSpHpHgyzX1Q8ewJ3RUg6Ie8Howbs/QBz1y/kGxsF35HXij3YrqhgQyPPx4\nbQ8FH2YR4aOCAXswggF3MA4GA1UdDwEB/wQEAwIBhjAPBgNVHRMBAf8EBTADAQH/\nMGgGA1UdDgRhBF8xOTo0NTo4YjozMDphMTo0NTo4NDphZToyMzo1MjpkYjo4ZDox\nYjpmZjphOTowOTpkYjpmYzoyYTo3MjozOTphZTpkYToxMTo1MzpmNDozNzo1Yzpk\nZTpkMTo2ODpkODBqBgNVHSMEYzBhgF8xOTo0NTo4YjozMDphMTo0NTo4NDphZToy\nMzo1MjpkYjo4ZDoxYjpmZjphOTowOTpkYjpmYzoyYTo3MjozOTphZTpkYToxMTo1\nMzpmNDozNzo1YzpkZTpkMTo2ODpkODA/BgNVHREEODA2hjRzcGlmZmU6Ly8zNmNi\nNTJjZC00MDU4LWY4MTEtMDQzMi02Nzk4YTI0MGM1ZDMuY29uc3VsMD0GA1UdHgEB\n/wQzMDGgLzAtgiszNmNiNTJjZC00MDU4LWY4MTEtMDQzMi02Nzk4YTI0MGM1ZDMu\nY29uc3VsMAoGCCqGSM49BAMCA0gAMEUCIHl6UDdouw8Fzn/oDHputAxt3UFbVg/U\nvC6jWPuqqMwmAiEAkvMadtwjtNU7m/AQRJrj1LeG3eXw7dWO8SlI2fEs0yY=\n-----END CERTIFICATE-----\n", - "IntermediateCerts": null, - "Active": true, - "PrivateKeyType": "", - "PrivateKeyBits": 0, - "CreateIndex": 8, - "ModifyIndex": 8 - } - ] -} -``` - -## CA Configuration - -After initialization, the CA provider configuration can be viewed with the -[Get CA Configuration API endpoint](/consul/api-docs/connect/ca#get-ca-configuration). -Consul will filter sensitive values from this endpoint depending on the -provider in use, so the configuration may not be complete. - -```shell-session -$ curl http://localhost:8500/v1/connect/ca/configuration -{ - "Provider": "consul", - "Config": { - "LeafCertTTL": "72h", - "IntermediateCertTTL": "8760h" - }, - "CreateIndex": 5, - "ModifyIndex": 5 -} -``` - -The CA provider can be reconfigured using the -[Update CA Configuration API endpoint](/consul/api-docs/connect/ca#update-ca-configuration). -Specific options for reconfiguration can be found in the specific -CA provider documentation in the sidebar to the left. - -## Root Certificate Rotation - -Whenever the CA's configuration is updated in a way that causes the root key to -change, a special rotation process will be triggered in order to smoothly -transition to the new certificate. This rotation is automatically orchestrated -by Consul. - -~> If the current CA Provider doesn't support cross-signing, this process can't -be followed. See [Forced Rotation Without -Cross-Signing](#forced-rotation-without-cross-signing). - -This also automatically occurs when a completely different CA provider is -configured (since this changes the root key). Therefore, this automatic rotation -process can also be used to cleanly transition between CA providers. For example, -updating the service mesh to use Vault instead of the built-in CA. - -During rotation, an intermediate CA certificate is requested from the new root, -which is then cross-signed by the old root. 
This cross-signed certificate is -then distributed alongside any newly-generated leaf certificates used by the -proxies once the new root becomes active, and provides a chain of trust back to -the old root certificate in the event that a certificate signed by the new root -is presented to a proxy that has not yet updated its bundle of trusted root CA -certificates to include the new root. - -After the cross-signed certificate has been successfully generated and the new root -certificate or CA provider has been set up, the new root becomes the active one -and is immediately used for signing any new incoming certificate requests. - -If we check the [list CA roots -endpoint](/consul/api-docs/connect/ca#list-ca-root-certificates) after updating the -configuration with a new root certificate, we can see both the old and new root -certificates are present, and the currently active root has an intermediate -certificate which has been generated and cross-signed automatically by the old -root during the rotation process: - -```shell-session -$ curl localhost:8500/v1/connect/ca/roots -{ - "ActiveRootID": "d2:2c:41:94:1e:50:04:ea:86:fc:08:d6:b0:45:a4:af:8a:eb:76:a0", - "TrustDomain": "36cb52cd-4058-f811-0432-6798a240c5d3.consul", - "Roots": [ - { - "ID": "31:6c:06:fb:49:94:42:d5:e4:55:cc:2e:27:b3:b2:2e:96:67:3e:7e", - "Name": "Consul CA Root Cert", - "SerialNumber": 7, - "SigningKeyID": "19:45:8b:30:a1:45:84:ae:23:52:db:8d:1b:ff:a9:09:db:fc:2a:72:39:ae:da:11:53:f4:37:5c:de:d1:68:d8", - "ExternalTrustDomain": "a1499528-fbf6-df7b-05e5-ae81e1873fc4", - "NotBefore": "2018-06-06T17:35:25Z", - "NotAfter": "2028-06-03T17:35:25Z", - "RootCert": "-----BEGIN CERTIFICATE-----\nMIICmDCCAj6gAwIBAgIBBzAKBggqhkjOPQQDAjAWMRQwEgYDVQQDEwtDb25zdWwg\nQ0EgNzAeFw0xODA2MDYxNzM1MjVaFw0yODA2MDMxNzM1MjVaMBYxFDASBgNVBAMT\nC0NvbnN1bCBDQSA3MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEgo09lpx63bHw\ncSXeeoSpHpHgyzX1Q8ewJ3RUg6Ie8Howbs/QBz1y/kGxsF35HXij3YrqhgQyPPx4\nbQ8FH2YR4aOCAXswggF3MA4GA1UdDwEB/wQEAwIBhjAPBgNVHRMBAf8EBTADAQH/\nMGgGA1UdDgRhBF8xOTo0NTo4YjozMDphMTo0NTo4NDphZToyMzo1MjpkYjo4ZDox\nYjpmZjphOTowOTpkYjpmYzoyYTo3MjozOTphZTpkYToxMTo1MzpmNDozNzo1Yzpk\nZTpkMTo2ODpkODBqBgNVHSMEYzBhgF8xOTo0NTo4YjozMDphMTo0NTo4NDphZToy\nMzo1MjpkYjo4ZDoxYjpmZjphOTowOTpkYjpmYzoyYTo3MjozOTphZTpkYToxMTo1\nMzpmNDozNzo1YzpkZTpkMTo2ODpkODA/BgNVHREEODA2hjRzcGlmZmU6Ly8zNmNi\nNTJjZC00MDU4LWY4MTEtMDQzMi02Nzk4YTI0MGM1ZDMuY29uc3VsMD0GA1UdHgEB\n/wQzMDGgLzAtgiszNmNiNTJjZC00MDU4LWY4MTEtMDQzMi02Nzk4YTI0MGM1ZDMu\nY29uc3VsMAoGCCqGSM49BAMCA0gAMEUCIHl6UDdouw8Fzn/oDHputAxt3UFbVg/U\nvC6jWPuqqMwmAiEAkvMadtwjtNU7m/AQRJrj1LeG3eXw7dWO8SlI2fEs0yY=\n-----END CERTIFICATE-----\n", - "IntermediateCerts": null, - "Active": false, - "PrivateKeyType": "", - "PrivateKeyBits": 0, - "CreateIndex": 8, - "ModifyIndex": 24 - }, - { - "ID": "d2:2c:41:94:1e:50:04:ea:86:fc:08:d6:b0:45:a4:af:8a:eb:76:a0", - "Name": "Consul CA Root Cert", - "SerialNumber": 16238269036752183483, - "SigningKeyID": "", - "ExternalTrustDomain": "a1499528-fbf6-df7b-05e5-ae81e1873fc4", - "NotBefore": "2018-06-06T17:37:03Z", - "NotAfter": "2028-06-03T17:37:03Z", - "RootCert": "-----BEGIN 
CERTIFICATE-----\nMIIDijCCAnKgAwIBAgIJAOFZ66em1qC7MA0GCSqGSIb3DQEBCwUAMGIxCzAJBgNV\nBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMRYwFAYDVQQHDA1TYW4gRnJhbmNp\nc2NvMRIwEAYDVQQKDAlIYXNoaUNvcnAxEjAQBgNVBAMMCWxvY2FsaG9zdDAeFw0x\nODA2MDYxNzM3MDNaFw0yODA2MDMxNzM3MDNaMGIxCzAJBgNVBAYTAlVTMRMwEQYD\nVQQIDApDYWxpZm9ybmlhMRYwFAYDVQQHDA1TYW4gRnJhbmNpc2NvMRIwEAYDVQQK\nDAlIYXNoaUNvcnAxEjAQBgNVBAMMCWxvY2FsaG9zdDCCASIwDQYJKoZIhvcNAQEB\nBQADggEPADCCAQoCggEBAK6ostXN6W093EpI3RDNQDwC1Gq3lPNoodL5XRaVVIBU\n3X5iC+Ttk02p67cHUguh4ZrWr3o3Dzxm+gKK0lfZLW0nNYNPAIGZWQD9zVSx1Lqt\n8X0pd+fhMV5coQrh3YIG/vy17IBTSBuRUX0mXOKjOeJJlrw1HQZ8pfm7WX6LFul2\nXszvgn5K1XR+9nhPy6K2bv99qsY0sm7AqCS2BjYBW8QmNngJOdLPdhyFh7invyXe\nPqgujc/KoA3P6e3/G7bJZ9+qoQMK8uwD7PxtA2hdQ9t0JGPsyWgzhwfBxWdBWRzV\nRvVi6Yu2tvw3QrjdeKQ5Ouw9FUb46VnTU7jTO974HjkCAwEAAaNDMEEwPwYDVR0R\nBDgwNoY0c3BpZmZlOi8vMzZjYjUyY2QtNDA1OC1mODExLTA0MzItNjc5OGEyNDBj\nNWQzLmNvbnN1bDANBgkqhkiG9w0BAQsFAAOCAQEATHgCro9VXj7JbH/tlB6f/KWf\n7r98+rlUE684ZRW9XcA9uUA6y265VPnemsC/EykPsririoh8My1jVPuEfgMksR39\n9eMDJKfutvSpLD1uQqZE8hu/hcYyrmQTFKjW71CfGIl/FKiAg7wXEw2ljLN9bxNv\nGG118wrJyMZrRvFjC2QKY025QQSJ6joNLFMpftsZrJlELtRV+nx3gMabpiDRXhIw\nJM6ti26P1PyVgGRPCOG10v+OuUtwe0IZoOqWpPJN8jzSuqZWf99uolkG0xuqLNz6\nd8qvTp1YF9tTmysgvdeGALez/02HTF035RVTsQfH9tM/+4yG1UnmjLpz3p4Fow==\n-----END CERTIFICATE-----", - "IntermediateCerts": [ - "-----BEGIN CERTIFICATE-----\nMIIDTzCCAvWgAwIBAgIBFzAKBggqhkjOPQQDAjAWMRQwEgYDVQQDEwtDb25zdWwg\nQ0EgNzAeFw0xODA2MDYxNzM3MDNaFw0yODA2MDMxNzM3MDNaMGIxCzAJBgNVBAYT\nAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMRYwFAYDVQQHDA1TYW4gRnJhbmNpc2Nv\nMRIwEAYDVQQKDAlIYXNoaUNvcnAxEjAQBgNVBAMMCWxvY2FsaG9zdDCCASIwDQYJ\nKoZIhvcNAQEBBQADggEPADCCAQoCggEBAK6ostXN6W093EpI3RDNQDwC1Gq3lPNo\nodL5XRaVVIBU3X5iC+Ttk02p67cHUguh4ZrWr3o3Dzxm+gKK0lfZLW0nNYNPAIGZ\nWQD9zVSx1Lqt8X0pd+fhMV5coQrh3YIG/vy17IBTSBuRUX0mXOKjOeJJlrw1HQZ8\npfm7WX6LFul2Xszvgn5K1XR+9nhPy6K2bv99qsY0sm7AqCS2BjYBW8QmNngJOdLP\ndhyFh7invyXePqgujc/KoA3P6e3/G7bJZ9+qoQMK8uwD7PxtA2hdQ9t0JGPsyWgz\nhwfBxWdBWRzVRvVi6Yu2tvw3QrjdeKQ5Ouw9FUb46VnTU7jTO974HjkCAwEAAaOC\nARswggEXMGgGA1UdDgRhBF8xOTo0NTo4YjozMDphMTo0NTo4NDphZToyMzo1Mjpk\nYjo4ZDoxYjpmZjphOTowOTpkYjpmYzoyYTo3MjozOTphZTpkYToxMTo1MzpmNDoz\nNzo1YzpkZTpkMTo2ODpkODBqBgNVHSMEYzBhgF8xOTo0NTo4YjozMDphMTo0NTo4\nNDphZToyMzo1MjpkYjo4ZDoxYjpmZjphOTowOTpkYjpmYzoyYTo3MjozOTphZTpk\nYToxMTo1MzpmNDozNzo1YzpkZTpkMTo2ODpkODA/BgNVHREEODA2hjRzcGlmZmU6\nLy8zNmNiNTJjZC00MDU4LWY4MTEtMDQzMi02Nzk4YTI0MGM1ZDMuY29uc3VsMAoG\nCCqGSM49BAMCA0gAMEUCIBp46tRDot7GFyDXu7egq7lXBvn+UUHD5MmlFvdWmtnm\nAiEAwKBzEMcLd5kCBgFHNGyksRAMh/AGdEW859aL6z0u4gM=\n-----END CERTIFICATE-----\n" - ], - "Active": true, - "PrivateKeyType": "", - "PrivateKeyBits": 0, - "CreateIndex": 24, - "ModifyIndex": 24 - } - ] -} -``` - -The old root certificate will be automatically removed once enough time has elapsed -for any leaf certificates signed by it to expire. - -### Forced Rotation Without Cross-Signing - -If the CA provider that is currently in use does not support cross-signing, then -attempts to change the root key or CA provider will fail. This is to ensure -operators don't make the change without understanding that there is additional -risk involved. - -It is possible to force the change to happen anyway by setting the -`ForceWithoutCrossSigning` field in the CA configuration to `true`. - -The downside is that all new certificates will immediately start being signed -with the new root key, but it will take some time for agents throughout the -cluster to observe the root CA change and reconfigure applications and proxies -to accept certificates signed by this new root. 
This will mean connections made -with a new certificate may fail for a short period after the CA change. - -Typically all connected agents will have observed the new roots within seconds -even in a large deployment so the impact should be contained. But it is possible -for a disconnected, overloaded or misconfigured agent to not see the new root -for an unbounded amount of time during which new connections to services on that -host will fail. The issue will resolve as soon as the agent can reconnect to -servers. - -Currently both Consul and Vault CA providers _do_ support cross signing. As more -providers are added this documentation will list any that this section applies -to. - -### Recovering From Expired Certificates -If the built-in CA provider is misconfigured or unavailable, Consul service mesh requests eventually -stop functioning due to expiration of intermediate and root certificates. To recover manually, use the -[CLI helper](/consul/commands/tls/ca#consul-tls-ca-create) to generate CA certificates. - - -#### Example - Regenerating the built in CA -```shell-session -$ consul tls ca create -cluster-id test -common-name "Consul Agent CA" -days=365 -domain consul - ==> Saved consul-agent-ca.pem - ==> Saved consul-agent-ca-key.pem -``` -The example above generates a new CA with a validity of 365 days. The cluster-id argument is specific -to each cluster and can be looked up by examining the `TrustDomain` field in -the [List CA Roots](/consul/api-docs/connect/ca#list-ca-root-certificates) endpoint. - -The contents of the generated cert and private key files from the above step should then be used with -the [Update CA Configuration](/consul/api-docs/connect/ca#update-ca-configuration) endpoint. Once the CA configuration is -updated on the primary datacenter, all secondary datacenters will pick up the changes and regenerate their intermediate -and leaf certificates, after which any new requests that require certificate verification will succeed. diff --git a/website/content/docs/connect/cluster-peering/index.mdx b/website/content/docs/connect/cluster-peering/index.mdx deleted file mode 100644 index 83cc4b97e4c4..000000000000 --- a/website/content/docs/connect/cluster-peering/index.mdx +++ /dev/null @@ -1,88 +0,0 @@ ---- -layout: docs -page_title: Cluster Peering Overview -description: >- - Cluster peering establishes communication between independent clusters in Consul, allowing services to interact across datacenters. Learn how cluster peering works, its differences with WAN federation for multi-datacenter deployments, and how to troubleshoot common issues. ---- - -# Cluster peering overview - -This topic provides an overview of cluster peering, which lets you connect two or more independent Consul clusters so that services deployed to different partitions or datacenters can communicate. -Cluster peering is enabled in Consul by default. For specific information about cluster peering configuration and usage, refer to following pages. - -## What is cluster peering? - -Consul supports cluster peering connections between two [admin partitions](/consul/docs/enterprise/admin-partitions) _in different datacenters_. Deployments without an Enterprise license can still use cluster peering because every datacenter automatically includes a default partition. Meanwhile, admin partitions _in the same datacenter_ do not require cluster peering connections because you can export services between them without generating or exchanging a peering token. 
- -The following diagram describes Consul's cluster peering architecture. - -![Diagram of cluster peering with admin partitions](/img/cluster-peering-diagram.png) - -In this diagram, the `default` partition in Consul DC 1 has a cluster peering connection with the `web` partition in Consul DC 2. Enforced by their respective mesh gateways, this cluster peering connection enables `Service B` to communicate with `Service C` as a service upstream. - -Cluster peering leverages several components of Consul's architecture to enforce secure communication between services: - -- A _peering token_ contains an embedded secret that securely establishes communication when shared symmetrically between datacenters. Sharing this token enables each datacenter's server agents to recognize requests from authorized peers, similar to how the [gossip encryption key secures agent LAN gossip](/consul/docs/security/encryption#gossip-encryption). -- A _mesh gateway_ encrypts outgoing traffic, decrypts incoming traffic, and directs traffic to healthy services. Consul's service mesh features must be enabled in order to use mesh gateways. Mesh gateways support the specific admin partitions they are deployed on. Refer to [Mesh gateways](/consul/docs/connect/gateways/mesh-gateway) for more information. -- An _exported service_ communicates with downstreams deployed in other admin partitions. They are explicitly defined in an [`exported-services` configuration entry](/consul/docs/connect/config-entries/exported-services). -- A _service intention_ secures [service-to-service communication in a service mesh](/consul/docs/connect/intentions). Intentions enable identity-based access between services by exchanging TLS certificates, which the service's sidecar proxy verifies upon each request. - -### Compared with WAN federation - -WAN federation and cluster peering are different ways to connect services through mesh gateways so that they can communicate across datacenters. WAN federation connects multiple datacenters to make them function as if they were a single cluster, while cluster peering treats each datacenter as a separate cluster. As a result, WAN federation requires a primary datacenter to maintain and replicate global states such as ACLs and configuration entries, but cluster peering does not. - -WAN federation and cluster peering also treat encrypted traffic differently. While mesh gateways between WAN federated datacenters use mTLS to keep data encrypted, mesh gateways between peers terminate mTLS sessions, decrypt data to HTTP services, and then re-encrypt traffic to send to services. Data must be decrypted in order to evaluate and apply dynamic routing rules at the destination cluster, which reduces coupling between peers. - -Regardless of whether you connect your clusters through WAN federation or cluster peering, human and machine users can use either method to discover services in other clusters or dial them through the service mesh. 
- -| | WAN Federation | Cluster Peering | -| :------------------------------------------------- | :------------: | :-------------: | -| Connects clusters across datacenters | ✅ | ✅ | -| Shares support queries and service endpoints | ✅ | ✅ | -| Connects clusters owned by different operators | ❌ | ✅ | -| Functions without declaring primary datacenter | ❌ | ✅ | -| Can use sameness groups for identical services | ❌ | ✅ | -| Replicates exported services for service discovery | ❌ | ✅ | -| Gossip protocol: Requires LAN gossip only | ❌ | ✅ | -| Forwards service requests for service discovery | ✅ | ❌ | -| Can replicate ACL tokens, policies, and roles | ✅ | ❌ | - -## Guidance - -The following resources are available to help you use Consul's cluster peering features. - -### Tutorials - -- To learn how to peer clusters and connect services across peers in AWS Elastic Kubernetes Service (EKS) and Google Kubernetes Engine (GKE) environments, complete the [Connect services between Consul datacenters with cluster peering tutorial](/consul/tutorials/developer-mesh/cluster-peering). - -### Usage documentation - -- [Establish cluster peering connections](/consul/docs/connect/cluster-peering/usage/establish-cluster-peering) -- [Manage cluster peering connections](/consul/docs/connect/cluster-peering/usage/manage-connections) -- [Manage L7 traffic with cluster peering](/consul/docs/connect/cluster-peering/usage/peering-traffic-management) -- [Create sameness groups](/consul/docs/connect/cluster-peering/usage/create-sameness-groups) - -### Kubernetes documentation - -- [Cluster peering on Kubernetes technical specifications](/consul/docs/k8s/connect/cluster-peering/tech-specs) -- [Establish cluster peering connections on Kubernetes](/consul/docs/k8s/connect/cluster-peering/usage/establish-peering) -- [Manage cluster peering connections on Kubernetes](/consul/docs/k8s/connect/cluster-peering/usage/manage-peering) -- [Manage L7 traffic with cluster peering on Kubernetes](/consul/docs/k8s/connect/cluster-peering/usage/l7-traffic) -- [Create sameness groups on Kubernetes](/consul/docs/k8s/connect/cluster-peering/usage/create-sameness-groups) - -### Reference documentation - -- [Cluster peering technical specifications](/consul/docs/connect/cluster-peering/tech-specs) -- [HTTP API reference: `/peering/` endpoint](/consul/api-docs/peering) -- [CLI reference: `peering` command](/consul/commands/peering). - -## Basic troubleshooting - -If you experience errors when using Consul's cluster peering features, refer to the following list of technical constraints. - -- Peer names can only contain lowercase characters. -- Services with node, instance, and check definitions totaling more than 8MB cannot be exported to a peer. -- Two admin partitions in the same datacenter cannot be peered. Use the [`exported-services` configuration entry](/consul/docs/connect/config-entries/exported-services#exporting-services-to-peered-clusters) instead. -- To manage intentions that specify services in peered clusters, use [configuration entries](/consul/docs/connect/config-entries/service-intentions). The `consul intention` CLI command is not supported. -- The Consul UI does not support exporting services between clusters or creating service intentions. Use either the API or the CLI to complete these required steps when establishing new cluster peering connections. -- Accessing key/value stores across peers is not supported. 
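When working through these constraints, it can also help to confirm the state of the peering itself before inspecting exported services or intentions. The following is a minimal sketch using the `peering` CLI subcommands referenced above; the peer name `cluster-02` is an illustrative placeholder.

```shell-session
$ # List every peering connection known to the local cluster, including its current state.
$ consul peering list

$ # Inspect a single peering connection in more detail.
$ consul peering read -name cluster-02
```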
diff --git a/website/content/docs/connect/cluster-peering/tech-specs.mdx b/website/content/docs/connect/cluster-peering/tech-specs.mdx deleted file mode 100644 index 36c7dc9d9130..000000000000 --- a/website/content/docs/connect/cluster-peering/tech-specs.mdx +++ /dev/null @@ -1,84 +0,0 @@ ---- -layout: docs -page_title: Cluster Peering Technical Specifications -description: >- - Cluster peering connections in Consul interact with mesh gateways, sidecar proxies, exported services, and ACLs. Learn about the configuration requirements for these components. ---- - -# Cluster peering technical specifications - -This reference topic describes the technical specifications associated with using cluster peering in your deployments. These specifications include required Consul components and their configurations. To learn more about Consul's cluster peering feature, refer to [cluster peering overview](/consul/docs/connect/cluster-peering). - -For cluster peering requirements in Kubernetes deployments, refer to [cluster peering on Kubernetes technical specifications](/consul/docs/k8s/connect/cluster-peering/tech-specs). - -## Requirements - -Consul's default configuration supports cluster peering connections directly between clusters. In production environments, we recommend using mesh gateways to securely route service mesh traffic between partitions with cluster peering connections. - -In addition, make sure your Consul environment meets the following prerequisites: - -- Consul v1.14 or higher. -- Use [Envoy proxies](/consul/docs/connect/proxies/envoy). Envoy is the only proxy with mesh gateway capabilities in Consul. -- A local Consul agent is required to manage mesh gateway configurations. - -## Mesh gateway specifications - -To change Consul's default configuration and enable cluster peering through mesh gateways, use a mesh configuration entry to update your network's service mesh proxies globally: - -1. In a `mesh` configuration entry, set `PeerThroughMeshGateways` to `true`: - - - - ```hcl - Kind = "mesh" - Peering { - PeerThroughMeshGateways = true - } - ``` - - - -1. Write the configuration entry to Consul: - - ```shell - $ consul config write mesh-config.hcl - ``` - -When cluster peering through mesh gateways, consider the following deployment requirements: - -- A cluster requires a registered mesh gateway in order to export services to peers in other regions or cloud providers. -- The mesh gateway must also be registered in the same admin partition as the exported services and their `exported-services` configuration entry. An enterprise license is required to use multiple admin partitions with a single cluster of Consul servers. -- To use the `local` mesh gateway mode, you must register a mesh gateway in the importing cluster. -- Define the `Proxy.Config` settings using opaque parameters compatible with your proxy. Refer to the [Gateway options](/consul/docs/connect/proxies/envoy#gateway-options) and [Escape-hatch Overrides](/consul/docs/connect/proxies/envoy#escape-hatch-overrides) documentation for additional Envoy proxy configuration information. - -### Mesh gateway modes - -By default, cluster peering connections use mesh gateways in [remote mode](/consul/docs/connect/gateways/mesh-gateway/service-to-service-traffic-wan-datacenters#remote). Be aware of these additional requirements when changing a mesh gateway's mode. - -- For mesh gateways that connect peered clusters, you can set the `mode` as either `remote` or `local`. 
-- The `none` mode is invalid for mesh gateways with cluster peering connections. - -Refer to [mesh gateway modes](/consul/docs/connect/gateways/mesh-gateway#modes) for more information. - -## Sidecar proxy specifications - -The Envoy proxies that function as sidecars in your service mesh require configuration in order to properly route traffic to peers. Sidecar proxies are defined in the [service definition](/consul/docs/services/usage/define-services). - -- Configure the `proxy.upstreams` parameters to route traffic to the correct service, namespace, and peer. Refer to the [`upstreams`](/consul/docs/connect/proxies/proxy-config-reference#upstream-configuration-reference) documentation for details. -- The `proxy.upstreams.destination_name` parameter is always required. -- The `proxy.upstreams.destination_peer` parameter must be configured to enable cross-cluster traffic. -- The `proxy.upstream/destination_namespace` configuration is only necessary if the destination service is in a non-default namespace. - -## Exported service specifications - -The `exported-services` configuration entry is required in order for services to communicate across partitions with cluster peering connections. Basic guidance on using the `exported-services` configuration entry is included in [Establish cluster peering connections](/consul/docs/connect/cluster-peering/usage/establish-cluster-peering#export-services-between-clusters). - -Refer to the [`exported-services` configuration entry](/consul/docs/connect/config-entries/exported-services) reference for more information. - -## ACL specifications - -If ACLs are enabled, you must add tokens to grant the following permissions: - -- Grant `service:write` permissions to services that define mesh gateways in their server definition. -- Grant `service:read` permissions for all services on the partition. -- Grant `mesh:write` permissions to the mesh gateways that participate in cluster peering connections. This permission allows a leaf certificate to be issued for mesh gateways to terminate TLS sessions for HTTP requests. \ No newline at end of file diff --git a/website/content/docs/connect/cluster-peering/usage/create-sameness-groups.mdx b/website/content/docs/connect/cluster-peering/usage/create-sameness-groups.mdx deleted file mode 100644 index 71537355eb90..000000000000 --- a/website/content/docs/connect/cluster-peering/usage/create-sameness-groups.mdx +++ /dev/null @@ -1,306 +0,0 @@ ---- -page_title: Create sameness groups -description: |- - Learn how to create sameness groups between partitions and cluster peers so that Consul can identify instances of the same service across partitions and datacenters. ---- - -# Create sameness groups - -This topic describes how to create a sameness group, which designates a set of admin partitions as functionally identical in your network. Adding an admin partition to a sameness group enables Consul to recognize services registered to remote partitions with cluster peering connections as instances of the same service when they share a name and namespace. - -For information about configuring a failover strategy using sameness groups, refer to [Failover with sameness groups](/consul/docs/connect/manage-traffic/failover/sameness). - -## Workflow - -Sameness groups are a user-defined set of partitions with identical configurations, including configuration entries for service and proxy defaults. Partitions on separate clusters should have an established cluster peering connection in order to recognize each other. 
- -To create and use sameness groups in your network, complete the following steps: - -- **Create sameness group configuration entries for each member of the group**. For each partition that you want to include in the sameness group, you must write and apply a sameness group configuration entry that defines the group’s members from that partition’s perspective. Refer to the [sameness group configuration entry reference](/consul/docs/connect/config-entries/sameness-group) for details on configuration hierarchy, default values, and specifications. -- **Export services to members of the sameness group**. You must write and apply an exported services configuration entry that makes the partition’s services available to other members of the group. Refer to [exported services configuration entry reference](/consul/docs/connect/config-entries/exported-services) for additional specification information. -- **Create service intentions to authorize other members of the sameness group**. For each partition that you want to include in the sameness group, you must write and apply service intentions configuration entries to authorize traffic to your services from all members of the group. Refer to the [service intentions configuration entry reference](/consul/docs/connect/config-entries/service-intentions) for additional specification information. - -## Requirements - -- All datacenters where you want to create sameness groups must run Consul v1.16 or later. Refer to [upgrade instructions](/consul/docs/upgrading/instructions) for more information about how to upgrade your deployment. -- A [Consul Enterprise license](/consul/docs/enterprise/license/overview) is required. - -### Before you begin - -Before creating a sameness group, take the following actions to prepare your network: - -#### Check namespace and service naming conventions - -Sameness groups are defined at the partition level. Consul assumes all partitions in the group have identical configurations, including identical service names and identical namespaces. This behavior occurs even when partitions in the group contain functionally different services that share a common name and namespace. For example, if distinct services named `api` were registered to different members of a sameness group, it could lead to errors because requests may be sent to the incorrect service. - -To prevent errors, check the names of the services deployed to your network and the namespaces they are deployed in. Pay particular attention to the default namespace to confirm that services have unique names. If different services share a name, you should either change one of the service’s names or deploy one of the services to a different namespace. - -#### Deploy mesh gateways for each partition - -Mesh gateways are required for cluster peering connections and recommended to secure cross-partition traffic in a single datacenter. Therefore, we recommend securing your network, and especially your production environment, by deploying mesh gateways to each datacenter. Refer to [mesh gateways specifications](/consul/docs/connect/cluster-peering/tech-specs#mesh-gateway-specifications) for more information about configuring mesh gateways. - -#### Establish cluster peering relationships between remote partitions - -You must establish connections with cluster peers before you can create a sameness group that includes them. 
A cluster peering connection exists between two admin partitions in different datacenters, and each connection between two partitions must be established separately with each peer. Refer to [establish cluster peering connections](/consul/docs/connect/cluster-peering/usage/establish-cluster-peering) for step-by-step instructions. - -To establish cluster peering connections and define a group as part of the same workflow, follow instructions up to [Export services between clusters](/consul/docs/connect/cluster-peering/usage/establish-cluster-peering#export-services-between-clusters). You can use the same exported services and service intention configuration entries to establish the cluster peering connection and create the sameness group. - -## Create a sameness group - -To create a sameness group, you must write and apply a set of three configuration entries for each partition that is a member of the group: - -- Sameness group configuration entries: Defines the sameness group from each partition’s perspective. -- Exported services configuration entries: Makes services available to other partitions in the group. -- Service intentions configuration entries: Authorizes traffic between services across partitions. - -### Define the sameness group from each partition’s perspective - -To define a sameness group for a partition, create a [sameness group configuration entry](/consul/docs/connect/config-entries/sameness-group) that describes the partitions and cluster peers that are part of the group. Typically, the order follows this pattern: - -1. The local partition -1. Other partitions in the same datacenter -1. Partitions with established cluster peering relationships - -If you want all services to failover to other instances in the sameness group by default, set `DefaultForFailover=true` and list the group members in the order you want to use in a failover scenario. Refer to [failover with sameness groups](/consul/docs/connect/manage-traffic/failover/sameness) for more information. - -Be aware that the sameness group configuration entries are different for each partition. The following example demonstrates how to format three different configuration entries for three partitions that are part of the sameness group `product-group` when Partition 1 and Partition 2 are in DC1, and the third partition is Partition 1 in DC2: - - - - - -```hcl -Kind = "sameness-group" -Name = "product-group" -Partition = "partition-1" -Members = [ - {Partition = "partition-1"}, - {Partition = "partition-2"}, - {Peer = "dc2-partition-1"} - ] -``` - - - - - -```hcl -Kind = "sameness-group" -Name = "product-group" -Partition = "partition-2" -Members = [ - {Partition = "partition-2"}, - {Partition = "partition-1"}, - {Peer = "dc2-partition-1"} - ] -``` - - - - - -```hcl -Kind = "sameness-group" -Name = "product-group" -Partition = "partition-1" -Members = [ - {Partition = "partition-1"}, - {Peer = "dc1-partition-1"}, - {Peer = "dc1-partition-2"} - ] -``` - - - - - -After you create the configuration entry, apply it to the Consul server with the following CLI command: - -```shell-session -$ consul config write product-group.hcl -``` - -Then, repeat the process to create and apply a configuration entry for every partition that is a member of the sameness group. 
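As a quick sanity check, you can read each entry back to confirm that the member order was stored as intended. This is a sketch that assumes the CLI is pointed at a server in the relevant datacenter; the `-partition` flag requires Consul Enterprise.

```shell-session
$ # Confirm the sameness group stored for partition-1 in DC1.
$ consul config read -kind sameness-group -name product-group -partition partition-1
```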
- -### Export services to other partitions in the sameness group - -To make services available to other members of the sameness group, you must write and apply an [exported services configuration entry](/consul/docs/connect/config-entries/exported-services) to each partition in the group. This configuration entry exports the local partition's services to the rest of the group members. In each configuration entry, set the sameness group as the `Consumer` for the exported services. You can export multiple services in a single exported services configuration entry. - -Because you are configuring the consumer to reference the sameness group instead of listing out each partition and cluster peer, you do not need to edit this configuration again when you add a partition or peer to the group. - -The following example demonstrates how to format three different `exported-service` configuration entries to make a service named `api` deployed to the `store` namespace of each partition available to all other group members: - - - - - -```hcl -Kind = "exported-services" -Name = "product-sg-export" -Partition = "partition-1" -Services = [ - { - Name = "api" - Namespace = "store" - Consumers = [ - {SamenessGroup="product-group"} - ] - } - ] -``` - - - - - -```hcl -Kind = "exported-services" -Name = "product-sg-export" -Partition = "partition-2" -Services = [ - { - Name = "api" - Namespace = "store" - Consumers = [ - {SamenessGroup="product-group"} - ] - } - ] -``` - - - - - -```hcl -Kind = "exported-services" -Name = "product-sg-export" -Partition = "partition-1" -Services = [ - { - Name = "api" - Namespace = "store" - Consumers = [ - {SamenessGroup="product-group"} - ] - } - ] -``` - - - - - -For more information about exporting services, including examples of configuration entries that export multiple services at the same time, refer to the [exported services configuration entry reference](/consul/docs/connect/config-entries/exported-services). - -After you create each exported services configuration entry, apply it to the Consul server with the following CLI command: - -```shell-session -$ consul config write product-sg-export.hcl -``` - -#### Export services for cluster peers and sameness groups as part of the same workflow - -Creating a cluster peering connection between two partitions and then adding the partitions to a sameness group requires that you write and apply two separate exported services configuration entries. One configuration entry exports services to the peer, and a second entry exports services to other members of the group. - -If your goal for peering clusters is to create a sameness group, you can write and apply a single exported services configuration entry by configuring the `Services[].Consumers` block with the `SamenessGroup` field instead of the `Peer` field. - -Be aware that this scenario requires you to write the `sameness-group` configuration entry to Consul before you apply the `exported-services` configuration entry that references the sameness group. - -### Create service intentions to authorize traffic between group members - -Exporting the service to other members of the sameness group makes the services visible to remote partitions, but you must also create service intentions so that local services are authorized to send and receive traffic from a member of the sameness group. 
- -For each partition that is member of the group, write and apply a [service intentions configuration entry](/consul/docs/connect/config-entries/service-intentions) that defines intentions for the services that are part of the group. In the `Sources` block of the configuration entry, include the service name, its namespace, the sameness group, and grant `allow` permissions. - -Because you are using the sameness group in the `Sources` block rather than listing out each partition and cluster peer, you do not have to make further edits to the service intentions configuration entries when members are added to or removed from the group. - -The following example demonstrates how to format three different `service-intentions` configuration entries to make a service named `api` available to all instances of `payments` deployed in all members of the sameness group including the local partition. In this example, `api` is deployed to the `store` namespace in all three partitions. - - - - - - -```hcl -Kind = "service-intentions" -Name = "api-intentions" -Namespace = "store" -Partition = "partition-1" -Sources = [ - { - Name = "api" - Action = "allow" - Namespace = "store" - SamenessGroup = "product-group" - } -] -``` - - - - - -```hcl -Kind = "service-intentions" -Name = "api-intentions" -Namespace = "store" -Partition = "partition-2" -Sources = [ - { - Name = "api" - Action = "allow" - Namespace = "store" - SamenessGroup = "product-group" - } -] -``` - - - - - -```hcl -Kind = "service-intentions" -Name = "api-intentions" -Namespace = "store" -Partition = "partition-1" -Sources = [ - { - Name = "api" - Action = "allow" - Namespace = "store" - SamenessGroup = "product-group" - } -] -``` - - - - - -Refer to [create and manage intentions](/consul/docs/connect/intentions/create-manage-intentions) for more information about how to create and apply service intentions in Consul. - -After you create each service intentions configuration entry, apply it to the Consul server with the following CLI command: - -```shell-session -$ consul config write api-intentions.hcl -``` - -#### Create service intentions for cluster peers and sameness groups as part of the same workflow - -Creating a cluster peering connection between two remote partitions and then adding the partitions to a sameness group requires that you write and apply two separate service intention configuration entries. One configuration entry authorizes services to the peer, and a second entry authorizes services to other members of the group. - -If you are peering clusters with the goal of creating a sameness group, it is possible to combine these workflows by using a single service intentions configuration entry. - -Configure the `Sources` block with the `SamenessGroup` field instead of the `Peer` field. Be aware that this scenario requires you to write the `sameness-group` configuration entry to Consul before you apply the `service-intentions` configuration entry that references the sameness group. - -## Next steps - -When `DefaultForFailover=true` in a sameness group configuration entry, additional upstream configuration is not required. - -After creating a sameness group, you can use them with static Consul DNS lookups and dynamic DNS lookups (prepared queries) for service discovery use cases. You can also set up failover between services in a sameness group. 
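For example, a static DNS lookup for a service that belongs to the group follows Consul's usual Enterprise lookup syntax. The following sketch assumes the default agent DNS port (8600) and reuses the `api` service, `store` namespace, and `partition-1` partition names from the examples above; adjust them to match your environment.

```shell-session
$ dig @127.0.0.1 -p 8600 api.service.store.ns.partition-1.ap.consul
```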
Refer to the following topics for more details: - -- [Static Consul DNS lookups](/consul/docs/services/discovery/dns-static-lookups) -- [Dynamic Consul DNS lookups](/consul/docs/services/discovery/dns-dynamic-lookups) -- [Failover overview](/consul/docs/connect/manage-traffic/failover) diff --git a/website/content/docs/connect/cluster-peering/usage/establish-cluster-peering.mdx b/website/content/docs/connect/cluster-peering/usage/establish-cluster-peering.mdx deleted file mode 100644 index b2ed30f46058..000000000000 --- a/website/content/docs/connect/cluster-peering/usage/establish-cluster-peering.mdx +++ /dev/null @@ -1,269 +0,0 @@ ---- -layout: docs -page_title: Establish Cluster Peering Connections -description: >- - Generate a peering token to establish communication, export services, and authorize requests for cluster peering connections. Learn how to establish peering connections with Consul's HTTP API, CLI or UI. ---- - -# Establish cluster peering connections - -This page details the process for establishing a cluster peering connection between services deployed to different datacenters. You can interact with Consul's cluster peering features using the CLI, the HTTP API, or the UI. The overall process for establishing a cluster peering connection consists of the following steps: - -1. Create a peering token in one cluster. -1. Use the peering token to establish peering with a second cluster. -1. Export services between clusters. -1. Create intentions to authorize services for peers. - -Cluster peering between services cannot be established until all four steps are complete. If you want to establish cluster peering connections and create sameness groups at the same time, refer to the guidance in [create sameness groups](/consul/docs/connect/cluster-peering/usage/create-sameness-groups). - -For Kubernetes guidance, refer to [Establish cluster peering connections on Kubernetes](/consul/docs/k8s/connect/cluster-peering/usage/establish-peering). - -## Requirements - -You must meet the following requirements to use cluster peering: - -- Consul v1.14.1 or higher -- Services hosted in admin partitions on separate datacenters - -If you need to make services available to an admin partition in the same datacenter, do not use cluster peering. Instead, use the [`exported-services` configuration entry](/consul/docs/connect/config-entries/exported-services) to make service upstreams available to other admin partitions in a single datacenter. - -### Mesh gateway requirements - -Consul's default configuration supports cluster peering connections directly between clusters. In production environments, we recommend using mesh gateways to securely route service mesh traffic between partitions with cluster peering connections. - -To enable cluster peering through mesh gateways and configure mesh gateways to support cluster peering, refer to [mesh gateway specifications](/consul/docs/connect/cluster-peering/tech-specs#mesh-gateway-specifications). - -## Create a peering token - -To begin the cluster peering process, generate a peering token in one of your clusters. The other cluster uses this token to establish the peering connection. - -Every time you generate a peering token, a single-use secret for establishing the secret is embedded in the token. Because regenerating a peering token invalidates the previously generated secret, you must use the most recently created token to establish peering connections. - - - - -1. 
In `cluster-01`, use the [`consul peering generate-token` command](/consul/commands/peering/generate-token) to issue a request for a peering token. - - ```shell-session - $ consul peering generate-token -name cluster-02 - ``` - - The CLI outputs the peering token, which is a base64-encoded string containing the token details. - -1. Save this value to a file or clipboard to use in the next step on `cluster-02`. - - - - -1. In `cluster-01`, use the [`/peering/token` endpoint](/consul/api-docs/peering#generate-a-peering-token) to issue a request for a peering token. - - ```shell-session - $ curl --request POST --data '{"Peer":"cluster-02"}' --url http://localhost:8500/v1/peering/token - ``` - - The CLI outputs the peering token, which is a base64-encoded string containing the token details. - -1. Create a JSON file that contains the first cluster's name and the peering token. - - - - ```json - { - "Peer": "cluster-01", - "PeeringToken": "eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJhZG1pbiIsImF1ZCI6IlNvbHIifQ.5T7L_L1MPfQ_5FjKGa1fTPqrzwK4bNSM812nW6oyjb8" - } - ``` - - - - - - -To begin the cluster peering process, generate a peering token in one of your clusters. The other cluster uses this token to establish the peering connection. - -Every time you generate a peering token, a single-use secret for establishing the secret is embedded in the token. Because regenerating a peering token invalidates the previously generated secret, you must use the most recently created token to establish peering connections. - -1. In the Consul UI for the datacenter associated with `cluster-01`, click **Peers**. -1. Click **Add peer connection**. -1. In the **Generate token** tab, enter `cluster-02` in the **Name of peer** field. -1. Click the **Generate token** button. -1. Copy the token before you proceed. You cannot view it again after leaving this screen. If you lose your token, you must generate a new one. - - - - -## Establish a connection between clusters - -Next, use the peering token to establish a secure connection between the clusters. - - - - -1. In one of the client agents deployed to "cluster-02," issue the [`consul peering establish` command](/consul/commands/peering/establish) and specify the token generated in the previous step. - - ```shell-session - $ consul peering establish -name cluster-01 -peering-token token-from-generate - "Successfully established peering connection with cluster-01" - ``` - -When you connect server agents through cluster peering, they peer their default partitions. To establish peering connections for other partitions through server agents, you must add the `-partition` flag to the `establish` command and specify the partitions you want to peer. For additional configuration information, refer to [`consul peering establish` command](/consul/commands/peering/establish). - -You can run the `peering establish` command once per peering token. Peering tokens cannot be reused after being used to establish a connection. If you need to re-establish a connection, you must generate a new peering token. - - - - -1. In one of the client agents in "cluster-02," use `peering_token.json` and the [`/peering/establish` endpoint](/consul/api-docs/peering#establish-a-peering-connection) to establish the peering connection. This endpoint does not generate an output unless there is an error. 
 - - ```shell-session - $ curl --request POST --data @peering_token.json http://127.0.0.1:8500/v1/peering/establish - ``` - -When you connect server agents through cluster peering, their default behavior is to peer to the `default` partition. To establish peering connections for other partitions through server agents, you must add the `Partition` field to `peering_token.json` and specify the partitions you want to peer. For additional configuration information, refer to [Cluster Peering - HTTP API](/consul/api-docs/peering). - -You can call the `peering/establish` endpoint once per peering token. Peering tokens cannot be reused after being used to establish a connection. If you need to re-establish a connection, you must generate a new peering token. - - - - - -1. In the Consul UI for the datacenter associated with `cluster-02`, click **Peers** and then **Add peer connection**. -1. Click **Establish peering**. -1. In the **Name of peer** field, enter `cluster-01`. Then paste the peering token in the **Token** field. -1. Click **Add peer**. - - - - -## Export services between clusters - -After you establish a connection between the clusters, you need to create an `exported-services` configuration entry that defines the services that are available for other clusters. Consul uses this configuration entry to advertise service information and support service mesh connections across clusters. - -An `exported-services` configuration entry makes services available to another admin partition. Although it can target admin partitions either locally or remotely, cluster peers always export services to remote partitions. Refer to [exported service consumers](/consul/docs/connect/config-entries/exported-services#consumers-1) for more information. - -You must use the Consul CLI to complete this step. The HTTP API and the Consul UI do not support `exported-services` configuration entries. - - - - -1. Create a configuration entry and specify the `Kind` as `"exported-services"`. - - - - ```hcl - Kind = "exported-services" - Name = "default" - Services = [ - { - ## The name and namespace of the service to export. - Name = "service-name" - Namespace = "default" - - ## The list of peer clusters to export the service to. - Consumers = [ - { - ## The peer name to reference in config is the one set - ## during the peering process. - Peer = "cluster-02" - } - ] - } - ] - ``` - - - -1. Add the configuration entry to your cluster. - - ```shell-session - $ consul config write peering-config.hcl - ``` - -Before you proceed, wait for the clusters to sync and make services available to their peers. To check the peered cluster status, [read the cluster peering connection](/consul/docs/connect/cluster-peering/usage/manage-connections#read-a-peering-connection). - - - - -## Authorize services for peers - -Before you can call services from peered clusters, you must set service intentions that authorize those clusters to use specific services. Consul prevents services from being exported to unauthorized clusters. - -You must use the HTTP API or the Consul CLI to complete this step. The Consul UI supports intentions for local clusters only. - - - - -1. Create a configuration entry and specify the `Kind` as `"service-intentions"`. Declare the service on "cluster-02" that can access the service in "cluster-01." 
In the following example, the service intentions configuration entry authorizes the `backend-service` to communicate with the `frontend-service` that is hosted on remote peer `cluster-02`: - - - - ```hcl - Kind = "service-intentions" - Name = "backend-service" - - Sources = [ - { - Name = "frontend-service" - Peer = "cluster-02" - Action = "allow" - } - ] - ``` - - - - If the peer's name is not specified in `Peer`, then Consul assumes that the service is in the local cluster. - -1. Add the configuration entry to your cluster. - - ```shell-session - $ consul config write peering-intentions.hcl - ``` - - - - -1. Create a configuration entry and specify the `Kind` as `"service-intentions"`. Declare the service on "cluster-02" that can access the service in "cluster-01." In the following example, the service intentions configuration entry authorizes the `backend-service` to communicate with the `frontend-service` that is hosted on remote peer `cluster-02`: - - - - ```hcl - Kind = "service-intentions" - Name = "backend-service" - - Sources = [ - { - Name = "frontend-service" - Peer = "cluster-02" - Action = "allow" - } - ] - ``` - - - - If the peer's name is not specified in `Peer`, then Consul assumes that the service is in the local cluster. - -1. Add the configuration entry to your cluster. - - ```shell-session - $ curl --request PUT --data @peering-intentions.hcl http://127.0.0.1:8500/v1/config - ``` - - - - -### Authorize service reads with ACLs - -If ACLs are enabled on a Consul cluster, sidecar proxies that access exported services as an upstream must have an ACL token that grants read access. - -Read access to all imported services is granted using either of the following rules associated with an ACL token: - -- `service:write` permissions for any service in the sidecar's partition. -- `service:read` and `node:read` for all services and nodes, respectively, in sidecar's namespace and partition. - -For Consul Enterprise, the permissions apply to all imported services in the service's partition. These permissions are satisfied when using a [service identity](/consul/docs/security/acl/acl-roles#service-identities). - -Refer to [Reading services](/consul/docs/connect/config-entries/exported-services#reading-services) in the `exported-services` configuration entry documentation for example rules. - -For additional information about how to configure and use ACLs, refer to [ACLs system overview](/consul/docs/security/acl). diff --git a/website/content/docs/connect/cluster-peering/usage/manage-connections.mdx b/website/content/docs/connect/cluster-peering/usage/manage-connections.mdx deleted file mode 100644 index a4e92373328a..000000000000 --- a/website/content/docs/connect/cluster-peering/usage/manage-connections.mdx +++ /dev/null @@ -1,137 +0,0 @@ ---- -layout: docs -page_title: Manage Cluster Peering Connections -description: >- - Learn how to list, read, and delete cluster peering connections using Consul. You can use the HTTP API, the CLI, or the Consul UI to manage cluster peering connections. ---- - -# Manage cluster peering connections - -This usage topic describes how to manage cluster peering connections using the CLI, the HTTP API, and the UI. - -After you establish a cluster peering connection, you can get a list of all active peering connections, read a specific peering connection's information, and delete peering connections. 
- -For Kubernetes-specific guidance for managing cluster peering connections, refer to [Manage cluster peering connections on Kubernetes](/consul/docs/k8s/connect/cluster-peering/usage/manage-peering). - -## List all peering connections - -You can list all active peering connections in a cluster. - - - - - ```shell-session - $ consul peering list - Name State Imported Svcs Exported Svcs Meta - cluster-02 ACTIVE 0 2 env=production - cluster-03 PENDING 0 0 - ``` - -For more information, including optional flags and parameters, refer to the [`consul peering list` CLI command reference](/consul/commands/peering/list). - - - - -The following example shows how to format an API request to list peering connections: - - ```shell-session - $ curl --header "X-Consul-Token: 0137db51-5895-4c25-b6cd-d9ed992f4a52" http://127.0.0.1:8500/v1/peerings - ``` - -For more information, including optional parameters and sample responses, refer to the [`/peering` endpoint reference](/consul/api-docs/peering#list-all-peerings). - - - - -In the Consul UI, click **Peers**. - -The UI lists peering connections you created for clusters in a datacenter. The name that appears in the list is the name of the cluster in a different datacenter with an established peering connection. - - - - -## Read a peering connection - -You can get information about individual peering connections between clusters. - - - - - -The following example outputs information about a peering connection locally referred to as "cluster-02": - - ```shell-session - $ consul peering read -name cluster-02 - Name: cluster-02 - ID: 3b001063-8079-b1a6-764c-738af5a39a97 - State: ACTIVE - Meta: - env=production - - Peer ID: e83a315c-027e-bcb1-7c0c-a46650904a05 - Peer Server Name: server.dc1.consul - Peer CA Pems: 0 - Peer Server Addresses: - 10.0.0.1:8300 - - Imported Services: 0 - Exported Services: 2 - - Create Index: 89 - Modify Index: 89 - ``` - -For more information, including optional flags and parameters, refer to the [`consul peering read` CLI command reference](/consul/commands/peering/read). - - - - - ```shell-session - $ curl --header "X-Consul-Token: b23b3cad-5ea1-4413-919e-c76884b9ad60" http://127.0.0.1:8500/v1/peering/cluster-02 - ``` - -For more information, including optional parameters and sample responses, refer to the [`/peering` endpoint reference](/consul/api-docs/peering#read-a-peering-connection). - - - - -1. In the Consul UI, click **Peers**. - -1. Click the name of a peered cluster to view additional details about the peering connection. - - - - -## Delete peering connections - -You can disconnect the peered clusters by deleting their connection. Deleting a peering connection stops data replication to the peer and deletes imported data, including services and CA certificates. - - - - - The following examples deletes a peering connection to a cluster locally referred to as "cluster-02": - - ```shell-session - $ consul peering delete -name cluster-02 - Successfully submitted peering connection, cluster-02, for deletion - ``` - -For more information, including optional flags and parameters, refer to the [`consul peering delete` CLI command reference](/consul/commands/peering/delete). - - - - - ```shell-session - $ curl --request DELETE --header "X-Consul-Token: b23b3cad-5ea1-4413-919e-c76884b9ad60" http://127.0.0.1:8500/v1/peering/cluster-02 - ``` - -This endpoint does not return a response. 
For more information, including optional parameters, refer to the [`/peering` endpoint reference](/consul/api-docs/peering#delete-a-peering-connection). - - - -1. In the Consul UI, click **Peers**. The UI lists peering connections you created for clusters in that datacenter. -1. Next to the name of the peer, click **More** (three horizontal dots) and then **Delete**. -1. Click **Delete** to confirm and remove the peering connection. - - - \ No newline at end of file diff --git a/website/content/docs/connect/cluster-peering/usage/peering-traffic-management.mdx b/website/content/docs/connect/cluster-peering/usage/peering-traffic-management.mdx deleted file mode 100644 index 63942e5bdeef..000000000000 --- a/website/content/docs/connect/cluster-peering/usage/peering-traffic-management.mdx +++ /dev/null @@ -1,168 +0,0 @@ ---- -layout: docs -page_title: Cluster Peering L7 Traffic Management -description: >- - Combine service resolver configurations with splitter and router configurations to manage L7 traffic in Consul deployments with cluster peering connections. Learn how to define dynamic traffic rules to target peers for redirects and failover. ---- - -# Manage L7 traffic with cluster peering - -This usage topic describes how to configure and apply the [`service-resolver` configuration entry](/consul/docs/connect/config-entries/service-resolver) to set up redirects and failovers between services that have an existing cluster peering connection. - -For Kubernetes-specific guidance for managing L7 traffic with cluster peering, refer to [Manage L7 traffic with cluster peering on Kubernetes](/consul/docs/k8s/connect/cluster-peering/usage/l7-traffic). - -## Service resolvers for redirects and failover - -When you use cluster peering to connect datacenters through their admin partitions, you can use [dynamic traffic management](/consul/docs/connect/manage-traffic) to configure your service mesh so that services automatically forward traffic to services hosted on peer clusters. - -However, the `service-splitter` and `service-router` configuration entry kinds do not natively support directly targeting a service instance hosted on a peer. Before you can split or route traffic to a service on a peer, you must define the service hosted on the peer as an upstream service by configuring a failover in the `service-resolver` configuration entry. Then, you can set up a redirect in a second service resolver to interact with the peer service by name. - -For more information about formatting, updating, and managing configuration entries in Consul, refer to [How to use configuration entries](/consul/docs/agent/config-entries). - -## Configure dynamic traffic between peers - -To configure L7 traffic management behavior in deployments with cluster peering connections, complete the following steps in order: - -1. Define the peer cluster as a failover target in the service resolver configuration. - - The following examples update the [`service-resolver` configuration entry](/consul/docs/connect/config-entries/service-resolver) in `cluster-01` so that Consul redirects traffic intended for the `frontend` service to a backup instance in peer `cluster-02` when it detects multiple connection failures. 
- - - - ```hcl - Kind = "service-resolver" - Name = "frontend" - ConnectTimeout = "15s" - Failover = { - "*" = { - Targets = [ - {Peer = "cluster-02"} - ] - } - } - ``` - - ```json - { - "ConnectTimeout": "15s", - "Kind": "service-resolver", - "Name": "frontend", - "Failover": { - "*": { - "Targets": [ - { - "Peer": "cluster-02" - } - ] - } - }, - "CreateIndex": 250, - "ModifyIndex": 250 - } - ``` - - ```yaml - apiVersion: consul.hashicorp.com/v1alpha1 - kind: ServiceResolver - metadata: - name: frontend - spec: - connectTimeout: 15s - failover: - '*': - targets: - - peer: 'cluster-02' - service: 'frontend' - namespace: 'default' - ``` - - - -1. Define the desired behavior in `service-splitter` or `service-router` configuration entries. - - The following example splits traffic evenly between `frontend` services hosted on peers by defining the desired behavior locally: - - - - ```hcl - Kind = "service-splitter" - Name = "frontend" - Splits = [ - { - Weight = 50 - ## defaults to service with same name as configuration entry ("frontend") - }, - { - Weight = 50 - Service = "frontend-peer" - }, - ] - ``` - - ```json - { - "Kind": "service-splitter", - "Name": "frontend", - "Splits": [ - { - "Weight": 50 - }, - { - "Weight": 50, - "Service": "frontend-peer" - } - ] - } - ``` - - ```yaml - apiVersion: consul.hashicorp.com/v1alpha1 - kind: ServiceSplitter - metadata: - name: frontend - spec: - splits: - - weight: 50 - ## defaults to service with same name as configuration entry ("frontend") - - weight: 50 - service: frontend-peer - ``` - - - -1. Create a local `service-resolver` configuration entry named `frontend-peer` and define a redirect targeting the peer and its service: - - - - ```hcl - Kind = "service-resolver" - Name = "frontend-peer" - Redirect { - Service = frontend - Peer = "cluster-02" - } - ``` - - ```json - { - "Kind": "service-resolver", - "Name": "frontend-peer", - "Redirect": { - "Service": "frontend", - "Peer": "cluster-02" - } - } - ``` - - ```yaml - apiVersion: consul.hashicorp.com/v1alpha1 - kind: ServiceResolver - metadata: - name: frontend-peer - spec: - redirect: - peer: 'cluster-02' - service: 'frontend' - ``` - - \ No newline at end of file diff --git a/website/content/docs/connect/config-entries/api-gateway.mdx b/website/content/docs/connect/config-entries/api-gateway.mdx deleted file mode 100644 index dc5d6f63e2a5..000000000000 --- a/website/content/docs/connect/config-entries/api-gateway.mdx +++ /dev/null @@ -1,562 +0,0 @@ ---- -layout: docs -page_title: API Gateway configuration reference -description: Learn how to configure a Consul API gateway on VMs. ---- - -# API gateway configuration reference - -This topic provides reference information for the API gateway configuration entry that you can deploy to networks in virtual machine (VM) environments. For reference information about configuring Consul API gateways on Kubernetes, refer to [Gateway Resource Configuration](/consul/docs/connect/gateways/api-gateway/configuration/gateway). - -## Introduction - -A gateway is a type of network infrastructure that determines how service traffic should be handled. Gateways contain one or more listeners that bind to a set of hosts and ports. An HTTP Route or TCP Route can then attach to a gateway listener to direct traffic from the gateway to a service. - -## Configuration model - -The following list outlines field hierarchy, language-specific data types, and requirements in an `api-gateway` configuration entry. 
Click on a property name to view additional details, including default values. - -- [`Kind`](#kind): string | must be `"api-gateway"` -- [`Name`](#name): string | no default -- [`Namespace`](#namespace): string | no default -- [`Partition`](#partition): string | no default -- [`Meta`](#meta): map | no default -- [`Listeners`](#listeners): list of objects | no default - - [`Name`](#listeners-name): string | no default - - [`Port`](#listeners-port): number | no default - - [`Hostname`](#listeners-hostname): string | `"*"` - - [`Protocol`](#listeners-protocol): string | `"tcp"` - - [`TLS`](#listeners-tls): map | none - - [`MinVersion`](#listeners-tls-minversion): string | no default - - [`MaxVersion`](#listeners-tls-maxversion): string | no default - - [`CipherSuites`](#listeners-tls-ciphersuites): list of strings | Envoy default cipher suites - - [`Certificates`](#listeners-tls-certificates): list of objects | no default - - [`Kind`](#listeners-tls-certificates-kind): string | no default - - [`Name`](#listeners-tls-certificates-name): string | no default - - [`Namespace`](#listeners-tls-certificates-namespace): string | no default - - [`Partition`](#listeners-tls-certificates-partition): string | no default - - [`default`](#listeners-default): map - - [`JWT`](#listeners-default-jwt): map - - [`Providers`](#listeners-default-jwt-providers): list - - [`Name`](#listeners-default-jwt-providers): string - - [`VerifyClaims`](#listeners-default-jwt-providers): map - - [`Path`](#listeners-default-jwt-providers): list - - [`Value`](#listeners-default-jwt-providers): string - - [`override`](#listeners-override): map - - [`JWT`](#listeners-override-jwt): map - - [`Providers`](#listeners-override-jwt-providers): list - - [`Name`](#listeners-override-jwt-providers): string - - [`VerifyClaims`](#listeners-override-jwt-providers): map - - [`Path`](#listeners-override-jwt-providers): list - - [`Value`](#listeners-override-jwt-providers): string - - - -## Complete configuration - -When every field is defined, an `api-gateway` configuration entry has the following form: - - - -```hcl -Kind = "api-gateway" -Name = "" -Namespace = "" -Partition = "" - -Meta = { - = "" -} - -Listeners = [ - { - Port = - Name = "" - Protocol = "" - TLS = { - MaxVersion = "" - MinVersion = "" - CipherSuites = [ - "" - ] - Certificates = [ - { - Kind = "file-system-certificate" - Name = "" - Namespace = "" - Partition = "" - } - ] - } - default = { - JWT = { - Providers = [ - Name = "" - VerifyClaims = { - Path = [""] - Value = "" - } - ] - } - } - override = { - JWT = { - Providers = [ - Name = "" - VerifyClaims = { - Path = [""] - Value = "" - } - ] - } - } - } -] -``` - -```json -{ - "Kind": "api-gateway", - "Name": "", - "Namespace": "", - "Partition": "", - "Meta": { - "": "" - }, - "Listeners": [ - { - "Name": "", - "Port": , - "Protocol": "", - "TLS": { - "MaxVersion": "", - "MinVersion": "", - "CipherSuites": [ - "" - ], - "Certificates": [ - { - "Kind": "file-system-certificate", - "Name": "", - "Namespace": "", - "Partition": "" - } - ] - } - }, - { - "default": { - "JWT": { - "Providers": [ - { - "Name": "", - "VerifyClaims": { - "Path": [""], - "Value": "" - } - } - ] - } - } - }, - { - "override": { - "JWT": { - "Providers": [ - { - "Name": "", - "VerifyClaims": { - "Path": [""], - "Value": "" - } - } - ] - } - } - } - ] -} -``` - - - -## Specification - -This section provides details about the fields you can configure in the -`api-gateway` configuration entry. 
- -### `Kind` - -Specifies the type of configuration entry to implement. This must be -`api-gateway`. - -#### Values - -- Default: none -- This field is required. -- Data type: string value that must be set to `"api-gateway"`. - -### `Name` - -Specifies a name for the configuration entry. The name is metadata that you can -use to reference the configuration entry when performing Consul operations, -such as applying a configuration entry to a specific cluster. - -#### Values - -- Default: none -- This field is required. -- Data type: string - -### `Namespace` - -Specifies the Enterprise [namespace](/consul/docs/enterprise/namespaces) to apply to the configuration entry. - -#### Values - -- Default: `"default"` in Enterprise -- Data type: string - -### `Partition` - -Specifies the Enterprise [admin partition](/consul/docs/enterprise/admin-partitions) to apply to the configuration entry. - -#### Values - -- Default: `"default"` in Enterprise -- Data type: string - -### `Meta` - -Specifies an arbitrary set of key-value pairs to associate with the gateway. - -#### Values - -- Default: none -- Data type: map containing one or more keys and string values. - -### `Listeners[]` - -Specifies a list of listeners that gateway should set up. Listeners are -uniquely identified by their port number. - -#### Values - -- Default: none -- This field is required. -- Data type: List of maps. Each member of the list contains the following fields: - - [`Name`](#listeners-name) - - [`Port`](#listeners-port) - - [`Hostname`](#listeners-hostname) - - [`Protocol`](#listeners-protocol) - - [`TLS`](#listeners-tls) - -### `Listeners[].Name` - -Specifies the unique name for the listener. This field accepts letters, numbers, and hyphens. - -#### Values - -- Default: none -- This field is required. -- Data type: string - -### `Listeners[].Port` - -Specifies the port number that the listener receives traffic on. - -#### Values - -- Default: `0` -- This field is required. -- Data type: integer - -### `Listeners[].Hostname` - -Specifies the hostname that the listener receives traffic on. - -#### Values - -- Default: `"*"` -- This field is optional. -- Data type: string - -### `Listeners[].Protocol` - -Specifies the protocol associated with the listener. - -#### Values - -- Default: none -- This field is required. -- The data type is one of the following string values: `"tcp"` or `"http"`. - -### `Listeners[].TLS` - -Specifies the TLS configurations for the listener. - -#### Values - -- Default: none -- Map that contains the following fields: - - [`MaxVersion`](#listeners-tls-maxversion) - - [`MinVersion`](#listeners-tls-minversion) - - [`CipherSuites`](#listeners-tls-ciphersuites) - - [`Certificates`](#listeners-tls-certificates) - -### `Listeners[].TLS.MaxVersion` - -Specifies the maximum TLS version supported for the listener. - -#### Values - -- Default depends on the version of Envoy: - - Envoy 1.22.0 and later default to `TLSv1_2` - - Older versions of Envoy default to `TLSv1_0` -- Data type is one of the following string values: - - `TLS_AUTO` - - `TLSv1_0` - - `TLSv1_1` - - `TLSv1_2` - - `TLSv1_3` - -### `Listeners[].TLS.MinVersion` - -Specifies the minimum TLS version supported for the listener. - -#### Values - -- Default: none -- Data type is one of the following string values: - - `TLS_AUTO` - - `TLSv1_0` - - `TLSv1_1` - - `TLSv1_2` - - `TLSv1_3` - -### `Listeners[].TLS.CipherSuites[]` - -Specifies a list of cipher suites that the listener supports when negotiating connections using TLS 1.2 or older. 
- -#### Values - -- Defaults to the ciphers supported by the version of Envoy in use. Refer to the - [Envoy documentation](https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/transport_sockets/tls/v3/common.proto#envoy-v3-api-field-extensions-transport-sockets-tls-v3-tlsparameters-cipher-suites) - for details. -- Data type: List of string values. Refer to the - [Consul repository](https://github.com/hashicorp/consul/blob/v1.11.2/types/tls.go#L154-L169) - for a list of supported ciphers. - -### `Listeners[].TLS.Certificates[]` - -The list of references to [file system](/consul/docs/connect/config-entries/file-system-certificate) or [inline certificates](/consul/docs/connect/config-entries/inline-certificate) that the listener uses for TLS termination. You should create the configuration entry for the certificate separately and then reference the configuration entry in the `Name` field. - -#### Values - -- Default: None -- Data type: List of maps. Each member of the list has the following fields: - - [`Kind`](#listeners-tls-certificates-kind) - - [`Name`](#listeners-tls-certificates-name) - - [`Namespace`](#listeners-tls-certificates-namespace) - - [`Partition`](#listeners-tls-certificates-partition) - -### `Listeners[].TLS.Certificates[].Kind` - -The list of references to certificates that the listener uses for TLS termination. - -#### Values - -- Default: None -- This field is required. -- The data type is one of the following string values: `"file-system-certificate"` or `"inline-certificate"`. - -### `Listeners[].TLS.Certificates[].Name` - -Specifies the name of the [file system certificate](/consul/docs/connect/config-entries/file-system-certificate) or [inline certificate](/consul/docs/connect/config-entries/inline-certificate) that the listener uses for TLS termination. - -#### Values - -- Default: None -- This field is required. -- Data type: string - -### `Listeners[].TLS.Certificates[].Namespace` - -Specifies the Enterprise [namespace](/consul/docs/enterprise/namespaces) where the certificate can be found. - -#### Values - -- Default: `"default"` in Enterprise -- Data type: string - -### `Listeners[].TLS.Certificates[].Partition` - -Specifies the Enterprise [admin partition](/consul/docs/enterprise/admin-partitions) where the certificate can be found. - -#### Values - -- Default: `"default"` in Enterprise -- Data type: string - -### `Listeners[].default` - -Specifies a block of default configurations to apply to the gateway listener. All routes attached to the listener inherit the default configurations. You can specify override configurations that have precedence over default configurations in the [`override` block](#listeners-override) as well as in the `JWT` block in the [HTTP route configuration entry](/consul/docs/connect/config-entries/http-route). - -#### Values - -- Default: None -- Data type: Map - -### `Listeners[].default{}.JWT` - -Specifies a block of default JWT verification configurations to apply to the gateway listener. Specify configurations that have precedence over the defaults in either the [`override.JWT` block](#listeners-override) or in the [`JWT` block](/consul/docs/connect/config-entries/http-route#rules-filters-jwt) in the HTTP route configuration. Refer to [Use JWTs to verify requests to API gateways](/consul/docs/connect/gateways/api-gateway/secure-traffic/verify-jwts-vms) for order of precedence and other details about using JWT verification in API gateways. 
- -#### Values - -- Default: None -- Data type: Map - -### `Listeners[].default{}.JWT{}.Providers` - -Specifies a list of default JWT provider configurations to apply to the gateway listener. A provider configuration contains the name of the provider and claims. Specify configurations that have precedence over the defaults in either the [`override.JWT.Providers` block](#listeners-override-providers) or in the [`JWT` block](/consul/docs/connect/config-entries/http-route#rules-filters-jwt-providers) of the HTTP route configuration. Refer to [Use JWTs to verify requests to API gateways](/consul/docs/connect/gateways/api-gateway/secure-traffic/verify-jwts-vms) for order of precedence and other details about using JWT verification in API gateways. - -#### Values - -- Default: None -- Data type: List of maps - -The following table describes the parameters you can specify in a member of the `Providers` list: - -| Parameter | Description | Data type | Default | -| --- | --- | --- | --- | -| `Name` | Specifies the name of the provider. | String | None | -| `VerifyClaims` | Specifies a list of paths and a value that define the claim that Consul verifies when it receives a request. The `VerifyClaims` map specifies the following settings:
  • `Path`: Specifies a list of one or more registered or custom claims.
  • `Value`: Specifies the expected value of the claim.
| Map | None | - -Refer to [Configure JWT verification settings](#configure-jwt-verification-settings) for an example configuration. - -### `Listeners[].override` - -Specifies a block of configurations to apply to the gateway listener. The override settings have precedence over the configurations in the [`Listeners[].default` block](#listeners-default). - -#### Values - -- Default: None -- Data type: Map - -### `Listeners[].override{}.JWT` - -Specifies a block of JWT verification configurations to apply to the gateway listener. The override settings have precedence over the [`Listeners[].default` configurations](#listeners-default) as well as any route-specific JWT configurations. - -#### Values - -- Default: None -- Data type: Map - -### `Listeners[].override{}.JWT{}.Providers` - -Specifies a list of JWT provider configurations to apply to the gateway listener. A provider configuration contains the name of the provider and claims. The override settings have precedence over `Listeners[].defaults{}.JWT{}.Providers` as well as any listener-specific configuration. - -#### Values - -- Default: None -- Data type: List of maps - -The following table describes the parameters you can specify in a member of the `Providers` list: - -| Parameter | Description | Data type | Default | -| --- | --- | --- | --- | -| `Name` | Specifies the name of the provider. | String | None | -| `VerifyClaims` | Specifies a list of paths and a value that define the claim that Consul verifies when it receives a request. The `VerifyClaims` map specifies the following settings:
  • `Path`: Specifies a list of one or more registered or custom claims.
  • `Value`: Specifies the expected value of the claim.
| Map | None | - -Refer to [Configure JWT verification settings](#configure-jwt-verification-settings) for an example configuration. - -## Examples - -The following examples demonstrate common API gateway configuration patterns for specific use cases. - -### Configure JWT verification settings - -The following example configures `listener-one` to verify that requests include a token with Okta user permissions by default. The listener also verifies that the token has an audience of `api.apps.organization.com`. - - - - -```hcl -Kind = "api-gateway" -Name = "api-gateway" -Listeners = [ - { - name = "listener-one" - port = 9001 - protocol = "http" - # override and default are backed by the same type of data structure, see the following section for more on how they interact - override = { - JWT = { - Providers = [ - { - Name = "okta", - VerifyClaims = { - Path = ["aud"], - Value = "api.apps.organization.com", - } - }, - ] - } - } - default = { - JWT = { - Providers = [ - { - Name = "okta", - VerifyClaims = { - Path = ["perms", "role"], - Value = "user", - } - } - ] - } - } - } -] -``` - - - -```json -{ - "Kind": "api-gateway", - "Name": "api-gateway", - "Listeners": [ - { - "name": "listener-one", - "port": 9001, - "protocol": "http", - "override": { - "JWT": { - "Providers": [{ - "Name": "okta", - "VerifyClaims": { - "Path": ["aud"], - "Value": "api.apps.organization.com" - } - }] - } - }, - "default": { - "JWT": { - "Providers": [{ - "Name": "okta", - "VerifyClaims": { - "Path": ["perms", "role"], - "Value": "user" - } - }] - } - } - } - ] -} -``` - - - diff --git a/website/content/docs/connect/config-entries/index.mdx b/website/content/docs/connect/config-entries/index.mdx deleted file mode 100644 index 309ea26b177a..000000000000 --- a/website/content/docs/connect/config-entries/index.mdx +++ /dev/null @@ -1,60 +0,0 @@ ---- -layout: docs -page_title: Configuration Entry Overview -description: >- - Configuration entries define service mesh behaviors in order to secure and manage traffic. Learn about Consul’s different config entry kinds and get links to configuration reference pages. ---- - -# Configuration Entry Overview - -Configuration entries can be used to configure the behavior of Consul service mesh. - -The following configuration entries are supported: - -- [API Gateway](/consul/docs/connect/config-entries/api-gateway) - defines the configuration for an API gateway - -- [Ingress Gateway](/consul/docs/connect/config-entries/ingress-gateway) - defines the - configuration for an ingress gateway - -- [Mesh](/consul/docs/connect/config-entries/mesh) - controls - mesh-wide configuration that applies across namespaces and federated datacenters. - -- [Exported Services](/consul/docs/connect/config-entries/exported-services) - enables - Consul to export service instances to other peers or to other admin partitions local or remote to the datacenter. 
- -- [Proxy Defaults](/consul/docs/connect/config-entries/proxy-defaults) - controls - proxy configuration - -- [Sameness Group](/consul/docs/connect/config-entries/sameness-group) - defines partitions and cluster peers with identical services - -- [Service Defaults](/consul/docs/connect/config-entries/service-defaults) - configures - defaults for all the instances of a given service - -- [Service Intentions](/consul/docs/connect/config-entries/service-intentions) - defines - the [intentions](/consul/docs/connect/intentions) for a destination service - -- [Service Resolver](/consul/docs/connect/config-entries/service-resolver) - matches - service instances with a specific Connect upstream discovery requests - -- [Service Router](/consul/docs/connect/config-entries/service-router) - defines - where to send layer 7 traffic based on the HTTP route - -- [Service Splitter](/consul/docs/connect/config-entries/service-splitter) - defines - how to divide requests for a single HTTP route based on percentages - -- [Terminating Gateway](/consul/docs/connect/config-entries/terminating-gateway) - defines the - services associated with terminating gateway - -## Managing Configuration Entries - -See [Agent - Config Entries](/consul/docs/agent/config-entries). - -## Using Configuration Entries For Service Defaults - -Outside of Kubernetes, when the agent is -[configured](/consul/docs/agent/config/config-files#enable_central_service_config) to enable -central service configurations, it will look for service configuration defaults -that match a registering service instance. If it finds any, the agent will merge -those defaults with the service instance configuration. This allows for things -like service protocol or proxy configuration to be defined globally and -inherited by any affected service registrations. diff --git a/website/content/docs/connect/config-entries/ingress-gateway.mdx b/website/content/docs/connect/config-entries/ingress-gateway.mdx deleted file mode 100644 index cd8eaf326f1e..000000000000 --- a/website/content/docs/connect/config-entries/ingress-gateway.mdx +++ /dev/null @@ -1,1876 +0,0 @@ ---- -layout: docs -page_title: Ingress gateway configuration reference -description: >- - The ingress gateway configuration entry kind defines behavior for securing incoming communication between the service mesh and external sources. Learn about `""ingress-gateway""` config entry parameters for exposing TCP and HTTP listeners. ---- - -# Ingress gateway configuration reference - - - -Ingress gateway is deprecated and will not be enhanced beyond its current capabilities. Ingress gateway is fully supported in this version but will be removed in a future release of Consul. - -Consul's API gateway is the recommended alternative to ingress gateway. - - - -This topic provides configuration reference information for the ingress gateway configuration entry. An ingress gateway is a type of proxy you register as a service in Consul to enable network connectivity from external services to services inside of the service mesh. Refer to [Ingress gateways overview](/consul/docs/connect/gateways/ingress-gateway) for additional information. - -## Configuration model - -The following list describes the configuration hierarchy, language-specific data types, default values if applicable, and requirements for the configuration entry. Click on a property name to view additional details. 
- - - - -- [`Kind`](#kind): string | must be `ingress-gateway` | required -- [`Name`](#name): string | required -- [`Namespace`](#namespace): string | `default` | -- [`Meta`](#meta): map of strings -- [`Partition`](#partition): string | `default` | -- [`TLS`](#tls): map - - [`Enabled`](#tls-enabled): boolean | `false` - - [`TLSMinVersion`](#tls-tlsminversion): string | `TLSv1_2` - - [`TLSMaxVersion`](#tls-tlsmaxversion): string - - [`CipherSuites`](#tls-ciphersuites): list of strings - - [`SDS`](#tls-sds): map of strings - - [`ClusterName`](#tls-sds): string - - [`CertResource`](#tls-sds): string -- [`Defaults`](#defaults): map - - [`MaxConnections`](#defaults-maxconnections): number - - [`MaxPendingRequests`](#defaults-maxpendingrequests): number - - [`MaxConcurrentRequests`](#defaults-maxconcurrentrequests): number - - [`PassiveHealthCheck`](#defaults-passivehealthcheck): map - - [`Interval`](#defaults-passivehealthcheck): number - - [`MaxFailures`](#defaults-passivehealthcheck): number - - [`EnforcingConsecutive5xx`](#defaults-passivehealthcheck): number - - [`MaxEjectionPercent`](#defaults-passivehealthcheck): number - - [`BaseEjectionTime`](#defaults-passivehealthcheck): string -- [`Listeners`](#listeners): list of maps - - [`Port`](#listeners-port): number | `0` - - [`Protocol`](#listeners-protocol): number | `tcp` - - [`Services`](#listeners-services): list of objects - - [`Name`](#listeners-services-name): string - - [`Namespace`](#listeners-services-namespace): string | - - [`Partition`](#listeners-services-partition): string | - - [`Hosts`](#listeners-services-hosts): List of strings | `.ingress.*` - - [`RequestHeaders`](#listeners-services-requestheaders): map - - [`Add`](#listeners-services-requestheaders): map of strings - - [`Set`](#listeners-services-requestheaders): map of strings - - [`Remove`](#listeners-services-requestheaders): list of strings - - [`ResponseHeaders`](#listeners-services-responseheaders): map - - [`Add`](#listeners-services-responseheaders): map of strings - - [`Set`](#listeners-services-responseheaders): map of strings - - [`Remove`](#listeners-services-responseheaders): list of strings - - [`TLS`](#listeners-services-tls): map - - [`SDS`](#listeners-services-tls-sds): map of strings - - [`ClusterName`](#listeners-services-tls-sds): string - - [`CertResource`](#listeners-services-tls-sds): string - - [`MaxConnections`](#listeners-services-maxconnections): number | `0` - - [`MaxPendingRequests`](#listeners-services-maxconnections): number | `0` - - [`MaxConcurrentRequests`](#listeners-services-maxconnections): number | `0` - - [`PassiveHealthCheck`](#listeners-services-passivehealthcheck): map - - [`Interval`](#listeners-services-passivehealthcheck): number - - [`MaxFailures`](#listeners-services-passivehealthcheck): number - - [`EnforcingConsecutive5xx`](#listeners-services-passivehealthcheck): number - - [`MaxEjectionPercent`](#listeners-services-passivehealthcheck): number - - [`BaseEjectionTime`](#listeners-services-passivehealthcheck): string - - [`TLS`](#listeners-tls): map - - [`Enabled`](#listeners-tls-enabled): boolean | `false` - - [`TLSMinVersion`](#listeners-tls-tlsminversion): string | `TLSv1_2` - - [`TLSMaxVersion`](#listeners-tls-tlsmaxversion): string - - [`CipherSuites`](#listeners-tls-ciphersuites): list of strings - - [`SDS`](#listeners-tls-sds): map of strings - - [`ClusterName`](#listeners-tls-sds): string - - [`CertResource`](#listeners-tls-sds): string - - - - - -- [ `apiVersion`](#apiversion): string | must be set to 
`consul.hashicorp.com/v1alpha1` | required -- [`kind`](#kind): string | must be `IngressGateway` | required -- [`metadata`](#metadata): map of strings - - [`name`](#metadata-name): string | required - - [`namespace`](#metadata-namespace): string | `default` | -- [`spec`](#spec): map - - [`tls`](#spec-tls): map - - [`enabled`](#spec-tls-enabled): boolean | `false` - - [`tlsMinVersion`](#spec-tls-tlsminversion): string | `TLSv1_2` - - [`tlsMaxVersion`](#spec-tls-tlsmaxversion): string - - [`cipherSuites`](#spec-tls-ciphersuites): list of strings - - [`sds`](#spec-tls-sds): map of strings - - [`clusterName`](#spec-tls-sds): string - - [`certResource`](#spec-tls-sds): string - - [`defaults`](#spec-defaults): map - - [`maxConnections`](#spec-defaults-maxconnections): number - - [`maxPendingRequests`](#spec-defaults-maxpendingrequests): number - - [`maxConcurrentRequests`](#spec-defaults-maxconcurrentrequests): number - - [`passiveHealthCheck`](#spec-defaults-passivehealthcheck): map - - [`interval`](#spec-defaults-passivehealthcheck): string - - [`maxFailures`](#spec-defaults-passivehealthcheck): integer - - [`enforcingConsecutive5xx`](#spec-defaults-passivehealthcheck): number - - [`maxEjectionPercent`](#spec-defaults-passivehealthcheck): number - - [`baseEjectionTime`](#spec-defaults-passivehealthcheck): string - - [`listeners`](#spec-listeners): list of maps - - [`port`](#spec-listeners-port): number | `0` - - [`protocol`](#spec-listeners-protocol): number | `tcp` - - [`services`](#spec-listeners-services): list of maps - - [`name`](#spec-listeners-services-name): string - - [`namespace`](#spec-listeners-services-namespace): string | current namespace | - - [`partition`](#spec-listeners-services-partition): string | current partition | - - [`hosts`](#spec-listeners-services-hosts): list of strings | `.ingress.*` - - [`requestHeaders`](#spec-listeners-services-requestheaders): map - - [`add`](#spec-listeners-services-requestheaders): map of strings - - [`set`](#spec-listeners-services-requestheaders): map of strings - - [`remove`](#spec-listeners-services-requestheaders): list of strings - - [`responseHeaders`](#spec-listeners-services-responseheaders): map - - [`add`](#spec-listeners-services-responseheaders): map of strings - - [`set`](#spec-listeners-services-responseheaders): map of strings - - [`remove`](#spec-listeners-services-responseheaders): list of strings - - [`tls`](#spec-listeners-services-tls): map - - [`sds`](#spec-listeners-services-tls-sds): map of strings - - [`clusterName`](#spec-listeners-services-tls-sds): string - - [`certResource`](#spec-listeners-services-tls-sds): string - - [`maxConnections`](#spec-listeners-services-maxconnections): number | `0` - - [`maxPendingRequests`](#spec-listeners-services-maxconnections): number | `0` - - [`maxConcurrentRequests`](#spec-listeners-services-maxconnections): number | `0` - - [`passiveHealthCheck`](#spec-listeners-services-passivehealthcheck): map - - [`interval`](#spec-listeners-services-passivehealthcheck): string - - [`maxFailures`](#spec-listeners-services-passivehealthcheck): number - - [`enforcingConsecutive5xx`](#spec-listeners-services-passivehealthcheck): number - - [`maxEjectionPercent`](#spec-listeners-services-passivehealthcheck): integer - - [`baseEjectionTime`](#spec-listeners-services-passivehealthcheck): string - - [`tls`](#spec-listeners-tls): map - - [`enabled`](#spec-listeners-tls-enabled): boolean | `false` - - [`tlsMinVersion`](#spec-listeners-tls-tlsminversion): string | `TLSv1_2` - - 
[`tlsMaxVersion`](#spec-listeners-tls-tlsmaxversion): string - - [`cipherSuites`](#spec-listeners-tls-ciphersuites): list of strings - - [`sds`](#spec-listeners-tls-sds): map of strings - - [`clusterName`](#spec-listeners-tls-sds): string - - [`certResource`](#spec-listeners-tls-sds): string - - - - - -## Complete configuration - -When every field is defined, an ingress gateway configuration entry has the following form: - - - - - -```hcl -Kind = "ingress-gateway" -Name = "" -Namespace = "" -Partition = "" -Meta = { - = "" -} -TLS = { - Enabled = false - TLSMinVersion = "TLSv1_2" - TLSMaxVersion = "" - CipherSuites = [ - "" - ] - SDS = { - ClusterName = "" - CertResource = "" - } -} -Defaults = { - MaxConnections = - MaxPendingRequests = - MaxConcurrentRequests = - PassiveHealthCheck = { - Interval = "" - MaxFailures = - EnforcingConsecutive5xx = - MaxEjectionPercent = - BaseEjectionTime = "" - } -} -Listeners = [ - { - Port = 0 - Protocol = "tcp" - Services = [ - { - Name = "" - Namespace = "" - Partition = "" - Hosts = [ - ".ingress.*" - ] - RequestHeaders = { - Add = { - RequestHeaderName = "" - } - Set = { - RequestHeaderName = "" - } - Remove = [ - "" - ] - } - ResponseHeaders = { - Add = { - ResponseHeaderName = "" - } - Set = { - ResponseHeaderName = "" - } - Remove = [ - "" - ] - } - TLS = { - SDS = { - ClusterName = "" - CertResource = "" - } - } - MaxConnections = - MaxPendingRequests = - MaxConcurrentRequests = - PassiveHealthCheck = { - Interval = "" - MaxFailures = - EnforcingConsecutive5xx = - MaxEjectionPercent = - BaseEjectionTime = "" - } - }] - TLS = { - Enabled = false - TLSMinVersion = "TLSv1_2" - TLSMaxVersion = "" - CipherSuites = [ - "" - ] - SDS = { - ClusterName = "" - CertResource = "" - } - } - } -] -``` - - - - - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: IngressGateway -metadata: - name: - namespace: "" -spec: - tls: - enabled: false - tlsSMinVersion: TLSv1_2 - tlsMaxVersion: "" - cipherSuites: - - - sds: - clusterName: - certResource: - defaults: - maxConnections: - maxPendingRequests: - maxConcurrentRequests: - passiveHealthCheck: - interval: "" - maxFailures: - enforcingConsecutive5xx: - maxEjectionPercent: - baseEjectionTime: "" - listeners: - - port: 0 - protocol: tcp - services: - - name: - namespace: - partition: - hosts: - - .ingress.* - requestHeaders: - add: - requestHeaderName: - set: - requestHeaderName: - remove: - - - responseHeaders: - add: - responseHeaderName: - set: - responseHeaderName: - remove: - - - tls: - sds: - clusterName: - certResource: - maxConnections: - maxPendingRequests: - maxConcurrentRequests: - passiveHealthCheck: - interval: "" - maxFailures: - enforcingConsecutive5xx: - maxEjectionPercent: - baseEjectionTime: "" - tls: - enabled: false - tlsMinVersion: TLSv1_2 - tlsMaxVersion: - cipherSuites: - - - sds: - clusterName: - certResource: -``` - - - - - -```json -{ - "Kind" : "ingress-gateway", - "Name" : "", - "Namespace" : "", - "Partition" : "", - "Meta": { - "" : "" - }, - "TLS" : { - "Enabled" : false, - "TLSMinVersion" : "TLSv1_2", - "TLSMaxVersion" : "", - "CipherSuites" : [ - "" - ], - "SDS": { - "ClusterName" : "", - "CertResource" : "" - } - }, - "Defaults" : { - "MaxConnections" : , - "MaxPendingRequests" : , - "MaxConcurrentRequests": , - "PassiveHealthCheck" : { - "interval": "", - "maxFailures": , - "enforcingConsecutive5xx": , - "maxEjectionPercent": , - "baseEjectionTime": "" - } - }, - "Listeners" : [ - { - "Port" : 0, - "Protocol" : "tcp", - "Services" : [ - { - "Name" : "", - "Namespace" : "", - 
"Partition" : "", - "Hosts" : [ - ".ingress.*" - ], - "RequestHeaders" : { - "Add" : { - "RequestHeaderName" : "" - }, - "Set" : { - "RequestHeaderName" : "" - }, - "Remove" : [ - "" - ] - }, - "ResponseHeaders" : { - "Add" : { - "ResponseHeaderName" : "" - }, - "Set" : { - "ResponseHeaderName" : "" - }, - "Remove" : [ - "" - ] - }, - "TLS" : { - "SDS" : { - "ClusterName" : "", - "CertResource" : "" - } - }, - "MaxConnections" : , - "MaxPendingRequests" : , - "MaxConcurrentRequests" : , - "PassiveHealthCheck" : { - "interval": "", - "maxFailures": , - "enforcingConsecutive5xx":, - "maxEjectionPercent": , - "baseEjectionTime": "" - } - } - ], - "TLS" : { - "Enabled" : false, - "TLSMinVersion" : "TLSv1_2", - "TLSMaxVersion" : "", - "CipherSuites" : [ - "" - ], - "SDS" : { - "ClusterName" : "", - "CertResource" : "" - } - } - } - ] -} -``` - - - - - -## Specification - -This section provides details about the fields you can configure in the ingress gateway configuration entry. - - - - - -### `Kind` - -Specifies the type of configuration entry. Must be set to `ingress-gateway`. - -#### Values - -- Default: None -- This field is required. -- Data type: String value that must be set to `ingress-gateway`. - -### `Name` - -Specifies a name for the gateway. The name is metadata that you can use to reference the configuration entry when performing Consul operations with the [`consul config` command](/consul/commands/config). - -#### Values - -- Default: None -- This field is required. -- Data type: String - -### `Namespace` - -Specifies the namespace to apply the configuration entry in. Refer to [Namespaces](/consul/docs/enterprise/namespaces) for additional information about Consul namespaces. - -If unspecified, the ingress gateway is applied to the `default` namespace. You can override the namespace when using the [`/config` API endpoint](/consul/api-docs/config) to register the configuration entry by specifying the `ns` query parameter. - -#### Values - -- Default: `default`, -- Data type: String - -### `Partition` - -Specifies the admin partition that the ingress gateway applies to. The value must match the partition in which the gateway is registered. Refer to [Admin partitions](/consul/docs/enterprise/admin-partitions) for additional information. - -If unspecified, the ingress gateway is applied to the `default` partition. You can override the partition when using the [`/config` API endpoint](/consul/api-docs/config) to register the configuration entry by specifying the `partition` query parameter. - -#### Values - -- Default: `default -- Data type: String - -### `Meta` - -Defines an arbitrary set of key-value pairs to store in the Consul KV. - -#### Values - -- Default: None -- Data type: Map of one or more key-value pairs. - - keys: String - - values: String, integer, or float - -### `TLS` - -Specifies the TLS configuration settings for the gateway. - -#### Values - -- Default: No default -- Data type: Object that can contain the following fields: - - [`Enabled`](#tls-enabled) - - [`TLSMinVersion`](#tls-tlsminversion) - - [`TLSMaxVersion`](#tls-tlsmaxversion) - - [`CipherSuites`](#tls-ciphersuites) - - [`SDS`](#tls-sds) - -### `TLS.Enabled` - -Enables and disables TLS for the configuration entry. Set to `true` to enable built-in TLS for every listener on the gateway. TLS is disabled by default. - -When enabled, Consul adds each host defined in every service's `Hosts` field to the gateway's x509 certificate as a DNS subject alternative name (SAN). 
- -#### Values - - - Default: `false` - - Data type: boolean - -### `TLS.TLSMinVersion` - -Specifies the minimum TLS version supported for gateway listeners. - -#### Values - -- Default: Depends on the version of Envoy: - - Envoy v1.22.0 and later: `TLSv1_2` - - Older versions: `TLSv1_0` -- Data type: String with one of the following values: - - `TLS_AUTO` - - `TLSv1_0` - - `TLSv1_1` - - `TLSv1_2` - - `TLSv1_3` - -### `TLS.TLSMaxVersion` - -Specifies the maximum TLS version supported for gateway listeners. - -#### Values - -- Default: Depends on the version of Envoy: - - Envoy v1.22.0 and later: `TLSv1_2` - - Older versions: `TLSv1_0` -- Data type: String with one of the following values: - - `TLS_AUTO` - - `TLSv1_0` - - `TLSv1_1` - - `TLSv1_2` - - `TLSv1_3` - -### `TLS.CipherSuites[]` - -Specifies a list of cipher suites that gateway listeners support when negotiating connections using TLS 1.2 or older. If unspecified, the Consul applies the default for the version of Envoy in use. Refer to the [Envoy documentation](https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/transport_sockets/tls/v3/common.proto#envoy-v3-api-field-extensions-transport-sockets-tls-v3-tls parameters-cipher-suites) for details. - -#### Values - -- Default: None -- Data type: List of string values. Refer to the [Consul repository](https://github.com/hashicorp/consul/blob/v1.11.2/types/tls.go#L154-L169) for a list of supported ciphers. - -### `TLS.SDS` - -Specifies parameters for loading the TLS certificates from an external SDS service. Refer to [Serve custom TLS certificates from an external service](/consul/docs/connect/gateways/ingress-gateway/tls-external-service) for additional information. - -Consul applies the SDS configuration specified in this field as defaults for all listeners defined in the gateway. You can override the SDS settings for per listener or per service defined in the listener. Refer to the following configurations for additional information: - -- [`Listeners.TLS.SDS`](#listeners-tls-sds): Configures SDS settings for all services in the listener. -- [`Listeners.Services.TLS.SDS`](#listeners-services-tls-sds): Configures SDS settings for a specific service defined in the listener. - -#### Values - -- Default: None -- Data type: Map containing the following fields: - - `ClusterName` - - `CertResource` - -The following table describes how to configure SDS parameters. - -| Parameter | Description | Data type | -| --- | --- | --- | -| `ClusterName` | Specifies the name of the SDS cluster where Consul should retrieve certificates. The cluster must be specified in the gateway's bootstrap configuration. | String | -| `CertResource` | Specifies an SDS resource name. Consul requests the SDS resource name when fetching the certificate from the SDS service. When set, Consul serves the certificate to all listeners over TLS unless a listener-specific TLS configuration overrides the SDS configuration. | String | - -### `Defaults` - -Specifies default configurations for connecting upstream services. - -#### Values - -- Default: None -- The data type is a map containing the following parameters: - - - [`MaxConnections`](#defaults-maxconnections) - - [`MaxPendingRequests`](#defaults-maxpendingrequests) - - [`MaxConcurrentRequests`](#defaults-maxconcurrentrequests) - -### `Defaults.MaxConnections` - -Specifies the maximum number of HTTP/1.1 connections a service instance is allowed to establish against the upstream. 
- -#### Values - -- Default value is `0`, which instructs Consul to use the proxy's configuration. For Envoy, the default is `1024`. -- Data type: Integer - -### `Defaults.MaxPendingRequests` - -Specifies the maximum number of requests that are allowed to queue while waiting to establish a connection. Listeners must use an L7 protocol for this configuration to take effect. Refer to [`Listeners.Protocol`](#listeners-protocol). - -#### Values - -- Default value is `0`, which instructs Consul to use the proxy's configuration. For Envoy, the default is `1024`. -- Data type: Integer - -### `Defaults.MaxConcurrentRequests` - -Specifies the maximum number of concurrent HTTP/2 traffic requests that are allowed at a single point in time. Listeners must use an L7 protocol for this configuration to take effect. Refer to [`Listeners.Protocol`](#listeners-protocol). - -#### Values - -- Default value is `0`, which instructs Consul to use the proxy's configuration. For Envoy, the default is `1024`. -- Data type: Integer - -### `Defaults.PassiveHealthCheck` - -Defines a passive health check configuration. Passive health checks remove hosts from the upstream cluster when they are unreachable or return errors. - -#### Values - -- Default: None -- Data type: Map - -The following table describes the configurations for passive health checks: - -| Parameter | Description | Data type | Default | -| --- | --- | --- | --- | - | `Interval` | Specifies the time between checks. | string | `0s` | - | `MaxFailures` | Specifies the number of consecutive failures allowed per check interval. If exceeded, Consul removes the host from the load balancer. | integer | `0` | - | `EnforcingConsecutive5xx` | Specifies a percentage that indicates how many times out of 100 that Consul ejects the host when it detects an outlier status. The outlier status is determined by consecutive errors in the 500-599 response range. | integer | `100` | - | `MaxEjectionPercent` | Specifies the maximum percentage of an upstream cluster that Consul ejects when the proxy reports an outlier. Consul ejects at least one host when an outlier is detected regardless of the value. | integer | `10` | - | `BaseEjectionTime` | Specifies the minimum amount of time that an ejected host must remain outside the cluster before rejoining. The real time is equal to the value of the `BaseEjectionTime` multiplied by the number of times the host has been ejected. | string | `30s` | - -### `Listeners[]` - -Specifies a list of listeners in the mesh for the gateway. Listeners are uniquely identified by their port number. - -#### Values - -- Default: None -- Data type: List of maps containing the following fields: - - [`Port`](#listeners-port) - - [`Protocol`](#listeners-protocol) - - [`Services[]`](#listeners-services) - - [`TLS`](#listeners-tls) - -### `Listeners[].Port` - -Specifies the port that the listener receives traffic on. The port is bound to the IP address specified in the [`-address`](/consul/commands/connect/envoy#address) flag when starting the gateway. The listener port must not conflict with the health check port. - -#### Values - -- Default: `0` -- Data type: Integer - -### `Listeners[].Protocol` - -Specifies the protocol associated with the listener. 
To enable L7 network management capabilities, specify one of the following values: - -- `http` -- `http2` -- `grpc` - -#### Values - -- Default: `tcp` -- Data type: String that contains one of the following values: - - - `tcp` - - `http` - - `http2` - - `grpc` - -### `Listeners[].Services[]` - -Specifies a list of services that the listener exposes to services outside the mesh. Each service must have a unique name. The `Namespace` field is required for Consul Enterprise datacenters. If the [`Listeners.Protocol`](#listeners-protocol) field is set to `tcp`, then Consul can only expose one service. You can expose multiple services if the listener uses any other supported protocol. - -#### Values - -- Default: None -- Data type: List of maps that can contain the following fields: - - [`Name`](#listeners-services-name) - - [`Namespace`](#listeners-services-namespace) - - [`Partition`](#listeners-services-partition) - - [`Hosts`](#listeners-services-hosts) - - [`RequestHeaders`](#listeners-services-requestheaders) - - [`ResponseHeaders`](#listeners-services-responseheaders) - - [`TLS`](#listeners-services-tls) - - [`MaxConnections`](#listeners-services-maxconnections) - - [`MaxPendingRequests`](#listeners-services-maxpendingrequests) - - [`MaxConcurrentRequests`](#listeners-services-maxconcurrentrequests) - - [`PassiveHealthCheck`](#listeners-services-passivehealthcheck) - -### `Listeners[].Services[].Name` - -Specifies the name of a service to expose to the listener. You can specify services in the following ways: - -- Provide the name of a service registered in the Consul catalog. -- Provide the name of a service defined in other configuration entries. Refer to [Service Mesh Traffic Management Overview](/consul/docs/connect/manage-traffic) for additional information. -- Provide a `*` wildcard to expose all services in the datacenter. Wildcards are not supported for listeners configured for TCP. Refer to [`Listeners[].Protocol`](#listeners-protocol) for additional information. - -#### Values - -- Default: None -- Data type: String - -### `Listeners[].Services[].Namespace` - -Specifies the namespace to use when resolving the location of the service. - -#### Values - -- Default: Current namespace -- Data type: String - -### `Listeners[].Services[].Partition` - -Specifies the admin partition to use when resolving the location of the service. - -#### Values - -- Default: Current partition -- Data type: String - -### `Listeners[].Services[].Hosts[]` - -Specifies one or more hosts that the listening services can receive requests on. The ingress gateway proxies external traffic to the specified services when external requests include `host` headers that match a host specified in this field. - -If unspecified, Consul matches requests to services using the `.ingress.*` domain. You cannot specify a host for listeners that communicate over TCP. You cannot specify a host when service names are specified with a `*` wildcard. Requests must include the correct host for Consul to proxy traffic to the service. - -When TLS is disabled, you can use the `*` wildcard to match all hosts. Disabling TLS may be suitable for testing and learning purposes, but we recommend enabling TLS in production environments. - -You can use the wildcard in the left-most DNS label to match a set of hosts. For example, `*.example.com` is valid, but `example.*` and `*-suffix.example.com` are invalid.
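The following minimal sketch illustrates these host-matching rules. The gateway name, service name, and hostnames are placeholders for illustration only, not values from this reference. The listener uses an L7 protocol, so `Hosts` is allowed, and the wildcard appears only in the left-most DNS label:

```hcl
Kind = "ingress-gateway"
Name = "us-east-ingress"

Listeners = [
  {
    Port     = 8080
    Protocol = "http"
    Services = [
      {
        Name = "api"
        # Requests whose `host` header is api.example.com or any single-label
        # subdomain, such as v1.api.example.com, are routed to the api service.
        Hosts = ["api.example.com", "*.api.example.com"]
      }
    ]
  }
]
```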
- -#### Values - -- Default: None -- Data type: List of strings or `*` - -### `Listeners[].Services[].RequestHeaders` - -Specifies a set of HTTP-specific header modification rules applied to requests routed through the gateway. You cannot configure request headers if the listener protocol is set to `tcp`. Refer to [HTTP listener with Path-based Routing](#http-listener-with-path-based-routing) for an example configuration. - -#### Values - -- Default: None -- Data type: Object containing one or more fields that define header modification rules: - - - `Add`: Map of one or more key-value pairs - - `Set`: Map of one or more key-value pairs - - `Remove`: Map of one or more key-value pairs - -The following table describes how to configure values for request headers: - -| Rule | Description | Data type | -| --- | --- | --- | -| `Add` | Defines a set of key-value pairs to add to the header. Use header names as the keys. Header names are not case-sensitive. If header values with the same name already exist, the value is appended and Consul applies both headers. You can [use variable placeholders](#use-variable-placeholders). | Map of strings | -| `Set` | Defines a set of key-value pairs to add to the request header or to replace existing header values with. Use header names as the keys. Header names are not case-sensitive. If header values with the same names already exist, Consul replaces the header values. You can [use variable placeholders](#use-variable-placeholders). | Map of strings | -| `Remove` | Defines a list of headers to remove. Consul removes only headers containing exact matches. Header names are not case-sensitive. | List of strings | - -##### Use variable placeholders - -For `Add` and `Set`, if the service is configured to use Envoy as the proxy, the value may contain variables to interpolate dynamic metadata into the value. For example, using the variable `%DOWNSTREAM_REMOTE_ADDRESS%` in your configuration entry allows you to pass a value that is generated at runtime. - -### `Listeners[].Services[].ResponseHeaders` - -Specifies a set of HTTP-specific header modification rules applied to responses routed through the gateway. You cannot configure response headers if the listener protocol is set to `tcp`. Refer to [HTTP listener with Path-based Routing](#http-listener-with-path-based-routing) for an example configuration. - -#### Values - -- Default: None -- Data type: Map containing one or more fields that define header modification rules: - - - `Add`: Map of one or more key-value pairs - - `Set`: Map of one or more key-value pairs - - `Remove`: Map of one or more key-value pairs - -The following table describes how to configure values for request headers: - -| Rule | Description | Data type | -| --- | --- | --- | -| `Add` | Defines a set of key-value pairs to add to the header. Use header names as the keys. Header names are not case-sensitive. If header values with the same name already exist, the value is appended and Consul applies both headers. You can [use variable placeholders](#use-variable-placeholders). | Map of strings | -| `Set` | Defines a set of key-value pairs to add to the response header or to replace existing header values with. Use header names as the keys. Header names are not case-sensitive. If header values with the same names already exist, Consul replaces the header values. You can [use variable placeholders](#use-variable-placeholders). | Map of strings | -| `Remove` | Defines a list of headers to remove. Consul removes only headers containing exact matches. 
Header names are not case-sensitive. | List of strings | - -##### Use variable placeholders - -For `Add` and `Set`, if the service is configured to use Envoy as the proxy, the value may contain variables to interpolate dynamic metadata into the value. For example, using the variable `%DOWNSTREAM_REMOTE_ADDRESS%` in your configuration entry allows you to pass a value that is generated at runtime. - -### `Listeners[].Services[].TLS` - -Specifies a TLS configuration for a specific service. The settings in this configuration override the main [`TLS`](#tls) settings for the configuration entry. - -#### Values - -- Default: None -- Data type: Map - -### `Listeners[].Services[].TLS.SDS` - -Specifies parameters that configure the listener to load TLS certificates from an external SDS. Refer to [Serve custom TLS certificates from an external service](/consul/docs/connect/gateways/ingress-gateway/tls-external-service) for additional information. - -This configuration overrides the main [`TLS.SDS`](#tls-sds) settings for the configuration entry. If unspecified, Consul applies the top-level [`TLS.SDS`](#tls-sds) settings. - -#### Values - -- Default: None -- Data type: Map containing the following fields: - - - `ClusterName` - - `CertResource` - -The following table describes how to configure SDS parameters. Refer to [Configure static SDS clusters](/consul/docs/connect/gateways/ingress-gateway/tls-external-service#configure-static-sds-clusters) for usage information: - -| Parameter | Description | Data type | -| --- | --- | --- | -| `ClusterName` | Specifies the name of the SDS cluster where Consul should retrieve certificates. The cluster must be specified in the gateway's bootstrap configuration. | String | -| `CertResource` | Specifies an SDS resource name. Consul requests the SDS resource name when fetching the certificate from the SDS service. When set, Consul serves the certificate to all listeners over TLS unless a listener-specific TLS configuration overrides the SDS configuration. | String | - -### `Listeners[].Services[].MaxConnections` - -Specifies the maximum number of HTTP/1.1 connections a service instance is allowed to establish against the upstream. - -When defined, this field overrides the [`Defaults.MaxConnections`](#defaults-maxconnections) configuration. - -#### Values - -- Default: None -- Data type: Integer - -### `Listeners[].Services[].MaxPendingRequests` - -Specifies the maximum number of requests that are allowed to queue while waiting to establish a connection. When defined, this field overrides the value specified in the [`Defaults.MaxPendingRequests`](#defaults-maxpendingrequests) field of the configuration entry. - -Listeners must use an L7 protocol for this configuration to take effect. Refer to [`Listeners.Protocol`](#listeners-protocol) for more information. - -#### Values - -- Default: None -- Data type: Integer - -### `Listeners[].Services[].MaxConcurrentRequests` - -Specifies the maximum number of concurrent HTTP/2 traffic requests that the service is allowed at a single point in time. This field overrides the value set in the [`Defaults.MaxConcurrentRequests`](#defaults-maxconcurrentrequests) field of the configuration entry. - -Listeners must use an L7 protocol for this configuration to take effect. Refer to [`Listeners.Protocol`](#listeners-protocol) for more information. - -#### Values - -- Default: None -- Data type: Integer - -### `Listeners[].Services[].PassiveHealthCheck` - -Defines a passive health check configuration for the service.
Passive health checks remove hosts from the upstream cluster when the service is unreachable or returns errors. This field overrides the value set in the [`Defaults.PassiveHealthCheck`](#defaults-passivehealthcheck) field of the configuration entry. - -#### Values - -- Default: None -- Data type: Map - -The following table describes the configurations for passive health checks: - -| Parameter | Description | Data type | Default | -| --- | --- | --- | --- | - | `Interval` | Specifies the time between checks. | string | `0s` | - | `MaxFailures` | Specifies the number of consecutive failures allowed per check interval. If exceeded, Consul removes the host from the load balancer. | integer | `0` | - | `EnforcingConsecutive5xx` | Specifies a percentage that indicates how many times out of 100 that Consul ejects the host when it detects an outlier status. The outlier status is determined by consecutive errors in the 500-599 response range. | integer | `100` | - | `MaxEjectionPercent` | Specifies the maximum percentage of an upstream cluster that Consul ejects when the proxy reports an outlier. Consul ejects at least one host when an outlier is detected regardless of the value. | integer | `10` | - | `BaseEjectionTime` | Specifies the minimum amount of time that an ejected host must remain outside the cluster before rejoining. The real time is equal to the value of the `BaseEjectionTime` multiplied by the number of times the host has been ejected. | string | `30s` | - -### `Listeners[].TLS` - -Specifies the TLS configuration for the listener. If unspecified, Consul applies any [service-specific TLS configurations](#listeners-services-tls). If neither the listener- nor service-specific TLS configurations are specified, Consul applies the main [`TLS`](#tls) settings for the configuration entry. - -#### Values - -- Default: None -- Data type: Map that can contain the following fields: - - [`Enabled`](#listeners-tls-enabled) - - [`TLSMinVersion`](#listeners-tls-tlsminversion) - - [`TLSMaxVersion`](#listeners-tls-tlsmaxversion) - - [`CipherSuites`](#listeners-tls-ciphersuites) - - [`SDS`](#listeners-tls-sds) - -### `Listeners[].TLS.Enabled` - -Set to `true` to enable built-in TLS for the listener. If enabled, Consul adds each host defined in every service's `Hosts` field to the gateway's x509 certificate as a DNS subject alternative name (SAN). - -#### Values - - - Default: `false` - - Data type: boolean - -### `Listeners[].TLS.TLSMinVersion` - -Specifies the minimum TLS version supported for the listener. - -#### Values - -- Default: Depends on the version of Envoy: - - Envoy v1.22.0 and later: `TLSv1_2` - - Older versions: `TLSv1_0` -- Data type: String with one of the following values: - - `TLS_AUTO` - - `TLSv1_0` - - `TLSv1_1` - - `TLSv1_2` - - `TLSv1_3` - -### `Listeners[].TLS.TLSMaxVersion` - -Specifies the maximum TLS version supported for the listener. - -#### Values - -- Default: Depends on the version of Envoy: - - Envoy v1.22.0 and later: `TLSv1_2` - - Older versions: `TLSv1_0` -- Data type: String with one of the following values: - - `TLS_AUTO` - - `TLSv1_0` - - `TLSv1_1` - - `TLSv1_2` - - `TLSv1_3` - -### `Listeners[].TLS.CipherSuites` - -Specifies a list of cipher suites that the listener supports when negotiating connections using TLS 1.2 or older. If unspecified, the Consul applies the default for the version of Envoy in use. 
Refer to the [Envoy documentation](https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/transport_sockets/tls/v3/common.proto#envoy-v3-api-field-extensions-transport-sockets-tls-v3-tls parameters-cipher-suites) for details. - -#### Values - -- Default: None -- Data type: List of string values. Refer to the [Consul repository](https://github.com/hashicorp/consul/blob/v1.11.2/types/tls.go#L154-L169) for a list of supported ciphers. - -### `Listeners[].TLS.SDS` - -Specifies parameters for loading the TLS certificates from an external SDS service. Refer to [Serve custom TLS certificates from an external service](/consul/docs/connect/gateways/ingress-gateway/tls-external-service) for additional information. - -Consul applies the SDS configuration specified in this field to all services in the listener. You can override the `Listeners.TLS.SDS` configuration per service by configuring the [`Listeners.Services.TLS.SDS`](#listeners-services-tls-sds) settings for each service. - -#### Values - -- Default: None -- The data type is a map containing `ClusterName` and `CertResource` fields. - -The following table describes how to configure SDS parameters. Refer to [Configure static SDS clusters](/consul/docs/connect/gateways/ingress-gateway/tls-external-service#configure-static-sds-clusters) for usage information: - -| Parameter | Description | Data type | -| --- | --- | --- | -| `ClusterName` | Specifies the name of the SDS cluster where Consul should retrieve certificates. The cluster must be specified in the gateway's bootstrap configuration. | String | -| `CertResource` | Specifies an SDS resource name. Consul requests the SDS resource name when fetching the certificate from the SDS service. When set, Consul serves the certificate to all listeners over TLS unless a listener-specific TLS configuration overrides the SDS configuration. | String | - - - - - -### `apiVersion` - -Kubernetes-only parameter that specifies the version of the Consul API that the configuration entry maps to Kubernetes configurations. The value must be `consul.hashicorp.com/v1alpha1`. - -### `kind` - -Specifies the type of configuration entry to implement. Must be set to `IngressGateway`. - -#### Values - -- Default: None -- This field is required. -- Data type: String value that must be set to `IngressGateway`. - -### `metadata` - -Specifies metadata for the gateway. - -#### Values - -- Default: None -- This field is required -- Data type: Map the contains the following fields: - - [`name`](#metadata-name) - - [`namespace`](#metadata-namespace) - -### `metadata.name` - -Specifies a name for the gateway. The name is metadata that you can use to reference the configuration entry when performing Consul operations with the [`consul config` command](/consul/commands/config). - -#### Values - -- Default: None -- This field is required. -- Data type: String - -### `metadata.namespace` - -Specifies the namespace to apply the configuration entry in. Refer to [Namespaces](/consul/docs/enterprise/namespaces) for additional information about Consul namespaces. - -If unspecified, the ingress gateway is applied to the `default` namespace. You can override the namespace when using the [`/config` API endpoint](/consul/api-docs/config) to register the configuration entry by specifying the `ns` query parameter. - -#### Values - -- Default: `default`, -- Data type: String - -### `spec` - -Kubernetes-only field that contains all of the configurations for ingress gateway pods. - -#### Values - -- Default: None -- This field is required. 
-- Data type: Map containing the following fields: - - [`tls`](#tls) - - [`defaults`](#defaults) - - [`listeners`](#listeners) - -### `spec.tls` - -Specifies the TLS configuration settings for the gateway. - -#### Values - -- Default: No default -- Data type: Object that can contain the following fields: - - [`enabled`](#tls-enabled) - - [`tlsMinVersion`](#spec-tls-tlsminversion) - - [`tlsMaxVersion`](#spec-tls-tlsmaxversion) - - [`cipherSuites`](#spec-tls-tlsciphersuites) - - [`sds`](#spec-tls-sds) - -### `spec.tls.enabled` - -Enables and disables TLS for the configuration entry. Set to `true` to enable built-in TLS for every listener on the gateway. TLS is disabled by default. - -When enabled, Consul adds each host defined in every service's `Hosts` field to the gateway's x509 certificate as a DNS subject alternative name (SAN). - -#### Values - - - Default: `false` - - Data type: boolean - -### `spec.tls.tlsMinVersion` - -Specifies the minimum TLS version supported for gateway listeners. - -#### Values - -- Default: Depends on the version of Envoy: - - Envoy v1.22.0 and later: `TLSv1_2` - - Older versions: `TLSv1_0` -- Data type: String with one of the following values: - - `TLS_AUTO` - - `TLSv1_0` - - `TLSv1_1` - - `TLSv1_2` - - `TLSv1_3` - -### `spec.tls.tlsMaxVersion` - -Specifies the maximum TLS version supported for gateway listeners. - -#### Values - -- Default: Depends on the version of Envoy: - - Envoy v1.22.0 and later: `TLSv1_2` - - Older versions: `TLSv1_0` -- Data type: String with one of the following values: - - `TLS_AUTO` - - `TLSv1_0` - - `TLSv1_1` - - `TLSv1_2` - - `TLSv1_3` - -### `spec.tls.cipherSuites[]` - -Specifies a list of cipher suites that gateway listeners support when negotiating connections using TLS 1.2 or older. If unspecified, the Consul applies the default for the version of Envoy in use. Refer to the [Envoy documentation](https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/transport_sockets/tls/v3/common.proto#envoy-v3-api-field-extensions-transport-sockets-tls-v3-tls parameters-cipher-suites) for details. - -#### Values - -- Default: None -- Data type: List of string values. Refer to the [Consul repository](https://github.com/hashicorp/consul/blob/v1.11.2/types/tls.go#L154-L169) for a list of supported ciphers. - -### `spec.tls.sds` - -Specifies parameters for loading the TLS certificates from an external SDS service. Refer to [Serve custom TLS certificates from an external service](/consul/docs/connect/gateways/ingress-gateway/tls-external-service) for additional information. - -Consul applies the SDS configuration specified in this field as defaults for all listeners defined in the gateway. You can override the SDS settings for per listener or per service defined in the listener. Refer to the following configurations for additional information: - -- [`spec.listeners.tls.sds`](#spec-listeners-tls-sds): Configures SDS settings for all services in the listener. -- [`spec.listeners.services.tls.sds`](#spec-listeners-services-tls-sds): Configures SDS settings for a specific service defined in the listener. - -#### Values - -- Default: None -- Data type: Map containing the following fields: - - [`clusterName`] - - [`certResource`] - -The following table describes how to configure SDS parameters. - -| Parameter | Description | Data type | -| --- | --- | --- | -| `clusterName` | Specifies the name of the SDS cluster where Consul should retrieve certificates. The cluster must be specified in the gateway's bootstrap configuration. 
| String | -| `certResource` | Specifies an SDS resource name. Consul requests the SDS resource name when fetching the certificate from the SDS service. When set, Consul serves the certificate to all listeners over TLS unless a listener-specific TLS configuration overrides the SDS configuration. | String | - -### `spec.defaults` - -Specifies default configurations for upstream connections. - -#### Values - -- Default: None -- The data type is a map containing the following parameters: - - - [`maxConnections`](#spec-defaults-maxconnections) - - [`maxPendingRequests`](#spec-defaults-maxpendingrequests) - - [`maxConcurrentRequests`](#spec-defaults-maxconcurrentrequests) - -### `spec.defaults.maxConnections` - -Specifies the maximum number of HTTP/1.1 connections a service instance is allowed to establish against the upstream. If unspecified, Consul uses Envoy's configuration. The default configuration for Envoy is `1024`. - -#### Values - -- Default: `0` -- Data type: Integer - -### `spec.defaults.maxPendingRequests` - -Specifies the maximum number of requests that are allowed to queue while waiting to establish a connection. Listeners must use an L7 protocol for this configuration to take effect. Refer to [`spec.listeners.Protocol`](#spec-listeners-protocol). - -If unspecified, Consul uses Envoy's configuration. The default for Envoy is `1024`. - -#### Values - -- Default: `0` -- Data type: Integer - -### `spec.defaults.maxConcurrentRequests` - -Specifies the maximum number of concurrent HTTP/2 traffic requests that are allowed at a single point in time. Listeners must use an L7 protocol for this configuration to take effect. Refer to [`spec.listeners.protocol`](#spec-listeners-protocol). - -If unspecified, Consul uses Envoy's configuration. The default for Envoy is `1024`. - -#### Values - -- Default: `0` -- Data type: Integer - -### `spec.defaults.passiveHealthCheck` - -Defines a passive health check configuration. Passive health checks remove hosts from the upstream cluster when they are unreachable or return errors. - -#### Values - -- Default: None -- Data type: Map - -The following table describes the configurations for passive health checks: - -| Parameter | Description | Data type | Default | -| --- | --- | --- | --- | - | `Interval` | Specifies the time between checks. | string | `0s` | - | `MaxFailures` | Specifies the number of consecutive failures allowed per check interval. If exceeded, Consul removes the host from the load balancer. | integer | `0` | - | `EnforcingConsecutive5xx` | Specifies a percentage that indicates how many times out of 100 that Consul ejects the host when it detects an outlier status. The outlier status is determined by consecutive errors in the 500-599 response range. | integer | `100` | - | `MaxEjectionPercent` | Specifies the maximum percentage of an upstream cluster that Consul ejects when the proxy reports an outlier. Consul ejects at least one host when an outlier is detected regardless of the value. | integer | `10` | - | `BaseEjectionTime` | Specifies the minimum amount of time that an ejected host must remain outside the cluster before rejoining. The real time is equal to the value of the `BaseEjectionTime` multiplied by the number of times the host has been ejected. | string | `30s` | - -### `spec.listeners[]` - -Specifies a list of listeners in the mesh for the gateway. Listeners are uniquely identified by their port number. 
- -#### Values - -- Default: None -- Data type: List of maps containing the following fields: - - [`port`](#spec-listeners-port) - - [`protocol`](#spec-listeners-protocol) - - [`services[]`](#spec-listeners-services) - - [`tls`](#spec-listeners-tls) - -### `spec.listeners[].port` - -Specifies the port that the listener receives traffic on. The port is bound to the IP address specified in the [`-address`](/consul/commands/connect/envoy#address) flag when starting the gateway. The listener port must not conflict with the health check port. - -#### Values - -- Default: `0` -- Data type: Integer - -### `spec.listeners[].protocol` - -Specifies the protocol associated with the listener. To enable L7 network management capabilities, specify one of the following values: - -- `http` -- `http2` -- `grpc` - -#### Values - -- Default: `tcp` -- Data type: String that contains one of the following values: - - - `tcp` - - `http` - - `http2` - - `grpc` - -### `spec.listeners[].services[]` - -Specifies a list of services that the listener exposes to services outside the mesh. Each service must have a unique name. The `namespace` field is required for Consul Enterprise datacenters. If the listener's [`protocol`](#spec-listeners-protocol) field is set to `tcp`, then Consul can only expose one service. You can expose multiple services if the listener uses any other supported protocol. - -#### Values - -- Default: None -- Data type: List of maps that can contain the following fields: - - [`name`](#spec-listeners-services-name) - - [`namespace`](#spec-listeners-services-namespace) - - [`partition`](#spec-listeners-services-partition) - - [`hosts`](#spec-listeners-services-hosts) - - [`requestHeaders`](#spec-listeners-services-requestheaders) - - [`responseHeaders`](#spec-listeners-services-responseheaders) - - [`tls`](#spec-listeners-services-tls) - - [`maxConnections`](#spec-listeners-services-maxconnections) - - [`maxPendingRequests`](#spec-listeners-services-maxpendingrequests) - - [`maxConcurrentRequests`](#spec-listeners-services-maxconcurrentrequests) - - [`passiveHealthCheck`](#spec-listeners-services-passivehealthcheck) - -### `spec.listeners[].services[].name` - -Specifies the name of a service to expose to the listener. You can specify services in the following ways: - -- Provide the name of a service registered in the Consul catalog. -- Provide the name of a service defined in other configuration entries. Refer to [Service Mesh Traffic Management Overview](/consul/docs/connect/manage-traffic) for additional information. Refer to [HTTP listener with path-based routes](#http-listener-with-path-based-routes) for an example. -- Provide a `*` wildcard to expose all services in the datacenter. Wildcards are not supported for listeners configured for TCP. Refer to [`spec.listeners.protocol`](#spec-listeners-protocol) for additional information. - -#### Values - -- Default: None -- Data type: String - -### `spec.listeners[].services[].namespace` - -Specifies the namespace to use when resolving the location of the service. - -#### Values - -- Default: Current namespace -- Data type: String - -### `spec.listeners[].services[].partition` - -Specifies the admin partition to use when resolving the location of the service. - -#### Values - -- Default: Current partition -- Data type: String - -### `spec.listeners[].services[].hosts[]` - -Specifies one or more hosts that the listening services can receive requests on.
The ingress gateway proxies external traffic to the specified services when external requests include `host` headers that match a host specified in this field. - -If unspecified, Consul matches requests to services using the `.ingress.*` domain. You cannot specify a host for listeners that communicate over TCP. You cannot specify a host when service names are specified with a `*` wildcard. Requests must include the correct host for Consul to proxy traffic to the service. - -When TLS is disabled, you can use the `*` wildcard to match all hosts. Disabling TLS may be suitable for testing and learning purposes, but we recommend enabling TLS in production environments. - -You can use the wildcard in the left-most DNS label to match a set of hosts. For example, `*.example.com` is valid, but `example.*` and `*-suffix.example.com` are invalid. - -#### Values - -- Default: None -- Data type: List of strings or `*` - -### `spec.listeners[].services[].requestHeaders` - -Specifies a set of HTTP-specific header modification rules applied to requests routed through the gateway. You cannot configure request headers if the listener protocol is set to `tcp`. Refer to [HTTP listener with path-based routing](#http-listener-with-path-based-routing) for an example configuration. - -#### Values - -- Default: None -- Data type: Object containing one or more fields that define header modification rules: - - - `add`: Map of one or more key-value pairs - - `set`: Map of one or more key-value pairs - - `remove`: Map of one or more key-value pairs - -The following table describes how to configure values for request headers: - -| Rule | Description | Data type | -| --- | --- | --- | -| `add` | Defines a set of key-value pairs to add to the header. Use header names as the keys. Header names are not case-sensitive. If header values with the same name already exist, the value is appended and Consul applies both headers. You can [use variable placeholders](#use-variable-placeholders). | Map of strings | -| `set` | Defines a set of key-value pairs to add to the request header or to replace existing header values with. Use header names as the keys. Header names are not case-sensitive. If header values with the same names already exist, Consul replaces the header values. You can [use variable placeholders](#use-variable-placeholders). | Map of strings | -| `remove` | Defines a list of headers to remove. Consul removes only headers containing exact matches. Header names are not case-sensitive. | List of strings | - -##### Use variable placeholders - -For `add` and `set`, if the service is configured to use Envoy as the proxy, the value may contain variables to interpolate dynamic metadata into the value. For example, using the variable `%DOWNSTREAM_REMOTE_ADDRESS%` in your configuration entry allows you to pass a value that is generated at runtime. - -### `spec.listeners[].services[].responseHeaders` - -Specifies a set of HTTP-specific header modification rules applied to responses routed through the gateway. You cannot configure response headers if the listener protocol is set to `tcp`. Refer to [HTTP listener with path-based routing](#http-listener-with-path-based-routing) for an example configuration. 
- -#### Values - -- Default: None -- Data type: Map containing one or more fields that define header modification rules: - - - `add`: Map of one or more key-value pairs - - `set`: Map of one or more key-value pairs - - `remove`: List of one or more header names - -The following table describes how to configure values for response headers: - -| Rule | Description | Data type | -| --- | --- | --- | -| `add` | Defines a set of key-value pairs to add to the header. Use header names as the keys. Header names are not case-sensitive. If header values with the same name already exist, the value is appended and Consul applies both headers. You can [use variable placeholders](#use-variable-placeholders). | Map of strings | -| `set` | Defines a set of key-value pairs to add to the response header or to replace existing header values with. Use header names as the keys. Header names are not case-sensitive. If header values with the same names already exist, Consul replaces the header values. You can [use variable placeholders](#use-variable-placeholders). | Map of strings | -| `remove` | Defines a list of headers to remove. Consul removes only headers containing exact matches. Header names are not case-sensitive. | List of strings | - -##### Use variable placeholders - -For `add` and `set`, if the service is configured to use Envoy as the proxy, the value may contain variables to interpolate dynamic metadata into the value. For example, using the variable `%DOWNSTREAM_REMOTE_ADDRESS%` in your configuration entry allows you to pass a value that is generated at runtime. - -### `spec.listeners[].services[].tls` - -Specifies a TLS configuration for a specific service. The settings in this configuration override the main [`tls`](#spec-tls) settings for the configuration entry. - -#### Values - -- Default: None -- Data type: Map - -### `spec.listeners[].services[].tls.sds` - -Specifies parameters that configure the listener to load TLS certificates from an external SDS. Refer to [Serve custom TLS certificates from an external service](/consul/docs/connect/gateways/ingress-gateway/tls-external-service) for additional information. - -If unspecified, Consul applies the [`sds`](#spec-tls-sds) settings configured for the ingress gateway. If both are specified, this configuration overrides the settings for the configuration entry. - -#### Values - -- Default: None -- Data type: Map containing the following fields: - - - `clusterName` - - `certResource` - -The following table describes how to configure SDS parameters. Refer to [Serve custom TLS certificates from an external service](/consul/docs/connect/gateways/ingress-gateway/tls-external-service) for usage information: - -| Parameter | Description | Data type | -| --- | --- | --- | -| `clusterName` | Specifies the name of the SDS cluster where Consul should retrieve certificates. The cluster must be specified in the gateway's bootstrap configuration. | String | -| `certResource` | Specifies an SDS resource name. Consul requests the SDS resource name when fetching the certificate from the SDS service. When set, Consul serves the certificate to all listeners over TLS unless a listener-specific TLS configuration overrides the SDS configuration. | String | - -### `spec.listeners[].services[].maxConnections` - -Specifies the maximum number of HTTP/1.1 connections a service instance is allowed to establish against the upstream.
- -A value specified in this field overrides the [`maxConnections`](#spec-defaults-maxconnections) field specified in the `defaults` configuration. - -#### Values - -- Default: None -- Data type: Integer - -### `spec.listeners[].services[].maxPendingRequests` - -Specifies the maximum number of requests that are allowed to queue while waiting to establish a connection. A value specified in this field overrides the [`maxPendingRequests`](#spec-defaults-maxpendingrequests) field specified in the `defaults` configuration. - -Listeners must use an L7 protocol for this configuration to take effect. Refer to [`spec.listeners.protocol`](#spec-listeners-protocol) for more information. - -#### Values - -- Default: None -- Data type: Integer - -### `spec.listeners[].services[].maxConcurrentRequests` - -Specifies the maximum number of concurrent HTTP/2 traffic requests that the service is allowed at a single point in time. A value specified in this field overrides the [`maxConcurrentRequests`](#spec-defaults-maxconcurrentrequests) field specified in the `defaults` configuration entry. - -Listeners must use an L7 protocol for this configuration to take effect. Refer to [`spec.listeners.protocol`](#spec-listeners-protocol) for more information. - -#### Values - -- Default: None -- Data type: Integer - -### `spec.listeners[].services[].passiveHealthCheck` - -Defines a passive health check configuration for the service. Passive health checks remove hosts from the upstream cluster when the service is unreachable or returns errors. Health checks specified for services override the health checks defined in the [`spec.defaults.passiveHealthCheck`](#spec-defaults-passivehealthcheck) configuration. - -#### Values - -- Default: None -- Data type: Map - -The following table describes the configurations for passive health checks: - -| Parameter | Description | Data type | Default | -| --- | --- | --- | --- | - | `Interval` | Specifies the time between checks. | string | `0s` | - | `MaxFailures` | Specifies the number of consecutive failures allowed per check interval. If exceeded, Consul removes the host from the load balancer. | integer | `0` | - | `EnforcingConsecutive5xx` | Specifies a percentage that indicates how many times out of 100 that Consul ejects the host when it detects an outlier status. The outlier status is determined by consecutive errors in the 500-599 response range. | integer | `100` | - | `MaxEjectionPercent` | Specifies the maximum percentage of an upstream cluster that Consul ejects when the proxy reports an outlier. Consul ejects at least one host when an outlier is detected regardless of the value. | integer | `10` | - | `BaseEjectionTime` | Specifies the minimum amount of time that an ejected host must remain outside the cluster before rejoining. The real time is equal to the value of the `BaseEjectionTime` multiplied by the number of times the host has been ejected. | string | `30s` | - -### `spec.listeners[].tls` - -Specifies the TLS configuration for the listener. If unspecified, Consul applies any [service-specific TLS configurations](#spec-listeners-services-tls). If neither the listener- nor service-specific TLS configurations are specified, Consul applies the main [`tls`](#spec-tls) settings for the configuration entry.
- -#### Values - -- Default: None -- Data type: Map that can contain the following fields: - - [`enabled`](#spec-listeners-tls-enabled) - - [`tlsMinVersion`](#spec-listeners-tls-tlsminversion) - - [`tlsMaxVersion`](#spec-listeners-tls-tlsmaxversion) - - [`cipherSuites`](#spec-listeners-tls-ciphersuites) - - [`sds`](#spec-listeners-tls-sds) - -### `spec.listeners[].tls.enabled` - -Set to `true` to enable built-in TLS for the listener. If enabled, Consul adds each host defined in every service's `Hosts` field to the gateway's x509 certificate as a DNS subject alternative name (SAN). - -#### Values - - - Default: `false` - - Data type: boolean - -### `spec.listeners[].tls.tlsMinVersion` - -Specifies the minimum TLS version supported for the listener. - -#### Values - -- Default: Depends on the version of Envoy: - - Envoy v1.22.0 and later: `TLSv1_2` - - Older versions: `TLSv1_0` -- Data type: String with one of the following values: - - `TLS_AUTO` - - `TLSv1_0` - - `TLSv1_1` - - `TLSv1_2` - - `TLSv1_3` - -### `spec.listeners[].tls.tlsMaxVersion` - -Specifies the maximum TLS version supported for the listener. - -#### Values - -- Default: Depends on the version of Envoy: - - Envoy v1.22.0 and later: `TLSv1_2` - - Older versions: `TLSv1_0` -- Data type: String with one of the following values: - - `TLS_AUTO` - - `TLSv1_0` - - `TLSv1_1` - - `TLSv1_2` - - `TLSv1_3` - -### `spec.listeners[].tls.cipherSuites` - -Specifies a list of cipher suites that the listener supports when negotiating connections using TLS 1.2 or older. If unspecified, the Consul applies the default for the version of Envoy in use. Refer to the [Envoy documentation](https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/transport_sockets/tls/v3/common.proto#envoy-v3-api-field-extensions-transport-sockets-tls-v3-tls parameters-cipher-suites) for details. - -#### Values - -- Default: None -- Data type: List of string values. Refer to the [Consul repository](https://github.com/hashicorp/consul/blob/v1.11.2/types/tls.go#L154-L169) for a list of supported ciphers. - -### `spec.listeners[].tls.sds` - -Specifies parameters for loading the TLS certificates from an external SDS service. Refer to [Serve custom TLS certificates from an external service](/consul/docs/connect/gateways/ingress-gateway/tls-external-service) for additional information. - -Consul applies the SDS configuration specified in this field to all services in the listener. You can override the `spec.listeners[].tls.sds` configuration per service by configuring the [`spec.listeners.services.tls.sds`](#spec-listeners-services-tls-sds) settings for each service. - -#### Values - -- Default: None -- Data type: Map containing the following fields - - `clusterName` - - `certResource` - -The following table describes how to configure SDS parameters. Refer to [Configure static SDS clusters](/consul/docs/connect/gateways/ingress-gateway/tls-external-service#configure-static-sds-clusters) for usage information: - -| Parameter | Description | Data type | -| --- | --- | --- | -| `clusterName` | Specifies the name of the SDS cluster where Consul should retrieve certificates. The cluster must be specified in the gateway's bootstrap configuration. | String | -| `certResource` | Specifies an SDS resource name. Consul requests the SDS resource name when fetching the certificate from the SDS service. When set, Consul serves the certificate to all listeners over TLS unless a listener-specific TLS configuration overrides the SDS configuration. 
| String | - - - - - -## Examples - -Refer to the following examples for common ingress gateway configuration patterns: -- [Define a TCP listener](#define-a-tcp-listener) -- [Use wildcards to define listeners](#use-wildcards-to-define-an-http-listener) -- [HTTP listener with path-based routes](#http-listener-with-path-based-routes) - -### Define a TCP listener - -The following example sets up a TCP listener on an ingress gateway named `us-east-ingress` that proxies traffic to the `db` service. For Consul Enterprise, the `db` service can only listen for traffic in the `default` namespace inside the `team-frontend` admin partition: - -#### Consul CE - - - -```hcl -Kind = "ingress-gateway" -Name = "us-east-ingress" - -Listeners = [ - { - Port = 3456 - Protocol = "tcp" - Services = [ - { - Name = "db" - } - ] - } -] -``` - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: IngressGateway -metadata: - name: us-east-ingress -spec: - listeners: - - port: 3456 - protocol: tcp - services: - - name: db -``` - -```json -{ - "Kind": "ingress-gateway", - "Name": "us-east-ingress", - "Listeners": [ - { - "Port": 3456, - "Protocol": "tcp", - "Services": [ - { - "Name": "db" - } - ] - } - ] -} -``` - - - -#### Consul Enterprise - - - -```hcl -Kind = "ingress-gateway" -Name = "us-east-ingress" -Namespace = "default" -Partition = "team-frontend" - -Listeners = [ - { - Port = 3456 - Protocol = "tcp" - Services = [ - { - Namespace = "ops" - Name = "db" - } - ] - } -] -``` - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: IngressGateway -metadata: - name: us-east-ingress - namespace: default -spec: - listeners: - - port: 3456 - protocol: tcp - services: - - name: db - namespace: ops -``` - -```json -{ - "Kind": "ingress-gateway", - "Name": "us-east-ingress", - "Namespace": "default", - "Partition": "team-frontend", - "Listeners": [ - { - "Port": 3456, - "Protocol": "tcp", - "Services": [ - { - "Namespace": "ops", - "Name": "db" - } - ] - } - ] -} -``` - - - -### Use wildcards to define an HTTP listener - -The following example gateway is named `us-east-ingress` and defines two listeners. The first listener is configured to listen on port `8080` and uses a wildcard (`*`) to proxy traffic to all services in the datacenter. The second listener exposes the `api` and `web` services on port `4567` at user-provided hosts. - -TLS is enabled on every listener. The `max_connections` of the ingress gateway proxy to each upstream cluster is set to `4096`. - -The Consul Enterprise version implements the following additional configurations: - -- The ingress gateway is set up in the `default` [namespace](/consul/docs/enterprise/namespaces) and proxies traffic to all services in the `frontend` namespace. 
-- The `api` and `web` services are proxied to team-specific [admin partitions](/consul/docs/enterprise/admin-partitions): - -#### Consul CE - - - -```hcl -Kind = "ingress-gateway" -Name = "us-east-ingress" - -TLS { - Enabled = true -} - -Defaults { - MaxConnections = 4096 -} - -Listeners = [ - { - Port = 8080 - Protocol = "http" - Services = [ - { - Name = "*" - } - ] - }, - { - Port = 4567 - Protocol = "http" - Services = [ - { - Name = "api" - Hosts = ["foo.example.com"] - }, - { - Name = "web" - Hosts = ["website.example.com"] - } - ] - } -] -``` - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: IngressGateway -metadata: - name: us-east-ingress -spec: - tls: - enabled: true - listeners: - - port: 8080 - protocol: http - services: - - name: '*' - - port: 4567 - protocol: http - services: - - name: api - hosts: ['foo.example.com'] - - name: web - hosts: ['website.example.com'] -``` - -```json -{ - "Kind": "ingress-gateway", - "Name": "us-east-ingress", - "TLS": { - "Enabled": true - }, - "Listeners": [ - { - "Port": 8080, - "Protocol": "http", - "Services": [ - { - "Name": "*" - } - ] - }, - { - "Port": 4567, - "Protocol": "http", - "Services": [ - { - "Name": "api", - "Hosts": ["foo.example.com"] - }, - { - "Name": "web", - "Hosts": ["website.example.com"] - } - ] - } - ] -} -``` - - - -#### Consul Enterprise - - - -```hcl -Kind = "ingress-gateway" -Name = "us-east-ingress" -Namespace = "default" - -TLS { - Enabled = true -} - -Listeners = [ - { - Port = 8080 - Protocol = "http" - Services = [ - { - Namespace = "frontend" - Name = "*" - } - ] - }, - { - Port = 4567 - Protocol = "http" - Services = [ - { - Namespace = "frontend" - Name = "api" - Hosts = ["foo.example.com"] - Partition = "api-team" - }, - { - Namespace = "frontend" - Name = "web" - Hosts = ["website.example.com"] - Partition = "web-team" - } - ] - } -] -``` - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: IngressGateway -metadata: - name: us-east-ingress - namespace: default -spec: - tls: - enabled: true - listeners: - - port: 8080 - protocol: http - services: - - name: '*' - namespace: frontend - - port: 4567 - protocol: http - services: - - name: api - namespace: frontend - hosts: ['foo.example.com'] - partition: api-team - - name: web - namespace: frontend - hosts: ['website.example.com'] - partition: web-team -``` - -```json -{ - "Kind": "ingress-gateway", - "Name": "us-east-ingress", - "Namespace": "default", - "TLS": { - "Enabled": true - }, - "Listeners": [ - { - "Port": 8080, - "Protocol": "http", - "Services": [ - { - "Namespace": "frontend", - "Name": "*" - } - ] - }, - { - "Port": 4567, - "Protocol": "http", - "Services": [ - { - "Namespace": "frontend", - "Name": "api", - "Hosts": ["foo.example.com"], - "Partition": "api-team" - }, - { - "Namespace": "frontend", - "Name": "web", - "Hosts": ["website.example.com"], - "Partition": "web-team" - } - ] - } - ] -} -``` - - diff --git a/website/content/docs/connect/config-entries/mesh.mdx b/website/content/docs/connect/config-entries/mesh.mdx deleted file mode 100644 index cf75c4bfa4ab..000000000000 --- a/website/content/docs/connect/config-entries/mesh.mdx +++ /dev/null @@ -1,587 +0,0 @@ ---- -layout: docs -page_title: Mesh - Configuration Entry Reference -description: >- - The mesh configuration entry kind defines global default settings like TLS version requirements for proxies inside the service mesh. Use the reference guide to learn about `""mesh""` config entry parameters and how to control communication with services outside of the mesh. 
---- - -# Mesh Configuration Entry - -The `mesh` configuration entry allows you to define a global default configuration that applies to all service mesh proxies. -Settings in this config entry apply across all namespaces and federated datacenters. - -## Sample Configuration Entries - -### Mesh-wide TLS Min Version - -Enforce that service mesh mTLS traffic uses TLS v1.2 or newer. - - - - - - -```hcl -Kind = "mesh" -TLS { - Incoming { - TLSMinVersion = "TLSv1_2" - } -} -``` - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: Mesh -metadata: - name: mesh -spec: - tls: - incoming: - tlsMinVersion: TLSv1_2 -``` - -```json -{ - "Kind": "mesh", - "TLS": { - "Incoming": { - "TLSMinVersion": "TLSv1_2" - } - } -} -``` - - - - - - -The `mesh` configuration entry can only be created in the `default` namespace and will apply to proxies across **all** namespaces. - - - -```hcl -Kind = "mesh" - -TLS { - Incoming { - TLSMinVersion = "TLSv1_2" - } -} -``` - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: Mesh -metadata: - name: mesh - namespace: default -spec: - tls: - incoming: - tlsMinVersion: TLSv1_2 -``` - -```json -{ - "Kind": "mesh", - "Namespace": "default", - "Partition": "default", - "TLS": { - "Incoming": { - "TLSMinVersion": "TLSv1_2" - } - } -} -``` - - - - - - -Note that the Kubernetes example does not include a `partition` field. Configuration entries are applied on Kubernetes using [custom resource definitions (CRD)](/consul/docs/k8s/crds), which can only be scoped to their own partition. - -### Mesh Destinations Only - -Only allow transparent proxies to dial addresses in the mesh. - - - - - - -```hcl -Kind = "mesh" -TransparentProxy { - MeshDestinationsOnly = true -} -``` - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: Mesh -metadata: - name: mesh -spec: - transparentProxy: - meshDestinationsOnly: true -``` - -```json -{ - "Kind": "mesh", - "TransparentProxy": { - "MeshDestinationsOnly": true - } -} -``` - - - - - - -The `mesh` configuration entry can only be created in the `default` namespace and will apply to proxies across **all** namespaces. - - - -```hcl -Kind = "mesh" - -TransparentProxy { - MeshDestinationsOnly = true -} -``` - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: Mesh -metadata: - name: mesh - namespace: default -spec: - transparentProxy: - meshDestinationsOnly: true -``` - -```json -{ - "Kind": "mesh", - "Namespace": "default", - "Partition": "default", - "TransparentProxy": { - "MeshDestinationsOnly": true - } -} -``` - - - - - - -Note that the Kubernetes example does not include a `partition` field. Configuration entries are applied on Kubernetes using [custom resource definitions (CRD)](/consul/docs/k8s/crds), which can only be scoped to their own partition. - -### Peer Through Mesh Gateways - -Set the `PeerThroughMeshGateways` parameter to `true` to route peering control plane traffic through mesh gateways. - - - - - - -```hcl -Kind = "mesh" -Peering { - PeerThroughMeshGateways = true -} -``` - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: Mesh -metadata: - name: mesh -spec: - peering: - peerThroughMeshGateways: true -``` - -```json -{ - "Kind": "mesh", - "Peering": { - "PeerThroughMeshGateways": true - } -} -``` - - - - - - -You can only set the `PeerThroughMeshGateways` attribute on `mesh` configuration entries in the `default` partition. -The `default` partition owns the traffic routed through the mesh gateway control plane to Consul servers. 
- - - -```hcl -Kind = "mesh" - -Peering { - PeerThroughMeshGateways = true -} -``` - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: Mesh -metadata: - name: mesh - namespace: default -spec: - peering: - peerThroughMeshGateways: true -``` - -```json -{ - "Kind": "mesh", - "Peering": { - "PeerThroughMeshGateways": true - } -} -``` - - - - - - -Note that the Kubernetes example does not include a `partition` field. Configuration entries are applied on Kubernetes using [custom resource definitions (CRD)](/consul/docs/k8s/crds), which can only be scoped to their own partition. - -### Request Normalization - -Enable options under `HTTP.Incoming.RequestNormalization` to apply normalization to all inbound traffic to mesh proxies. - -~> **Compatibility warning**: This feature is available as of Consul CE 1.20.1 and Consul Enterprise 1.20.1, 1.19.2, 1.18.3, and 1.15.15. We recommend upgrading to the latest version of Consul to take advantage of the latest features and improvements. - - - -```hcl -Kind = "mesh" -HTTP { - Incoming { - RequestNormalization { - InsecureDisablePathNormalization = false // default false, shown for completeness - MergeSlashes = true - PathWithEscapedSlashesAction = "UNESCAPE_AND_FORWARD" - HeadersWithUnderscoresAction = "REJECT_REQUEST" - } - } -} -``` - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: Mesh -metadata: - name: mesh -spec: - http: - incoming: - requestNormalization: - insecureDisablePathNormalization: false # default false, shown for completeness - mergeSlashes: true - pathWithEscapedSlashesAction: UNESCAPE_AND_FORWARD - headersWithUnderscoresAction: REJECT_REQUEST -``` - -```json -{ - "Kind": "mesh", - "HTTP": { - "Incoming": { - "RequestNormalization": { - "InsecureDisablePathNormalization": false, - "MergeSlashes": true, - "PathWithEscapedSlashesAction": "UNESCAPE_AND_FORWARD", - "HeadersWithUnderscoresAction": "REJECT_REQUEST" - } - } - } -} -``` - - - -## Available Fields - -: nil', - description: - 'Specifies arbitrary KV metadata pairs. Added in Consul 1.8.4.', - yaml: false, - }, - { - name: 'metadata', - children: [ - { - name: 'name', - description: 'Must be set to `mesh`', - }, - { - name: 'namespace', - enterprise: true, - description: - 'Must be set to `default`. If running Consul Community Edition, the namespace is ignored (see [Kubernetes Namespaces in Consul CE](/consul/docs/k8s/crds#consul-ce)). If running Consul Enterprise see [Kubernetes Namespaces in Consul Enterprise](/consul/docs/k8s/crds#consul-enterprise) for additional information.', - }, - ], - hcl: false, - }, - { - name: 'TransparentProxy', - type: 'TransparentProxyConfig: ', - description: - 'Controls configuration specific to proxies in `transparent` [mode](/consul/docs/connect/config-entries/service-defaults#mode). Added in v1.10.0.', - children: [ - { - name: 'MeshDestinationsOnly', - type: 'bool: false', - description: `Determines whether sidecar proxies operating in transparent mode can - proxy traffic to IP addresses not registered in Consul's mesh. If enabled, traffic will only be proxied - to upstream proxies or mesh-native services. If disabled, requests will be proxied as-is to the - original destination IP address. Consul will not encrypt the connection.`, - }, - ], - }, - { - name: 'AllowEnablingPermissiveMutualTLS', - type: 'bool: false', - description: - 'Controls whether `MutualTLSMode=permissive` can be set in the `proxy-defaults` and `service-defaults` configuration entries. 
' - }, - { - name: 'ValidateClusters', - type: 'bool: false', - description: - `Controls whether the clusters the route table refers to are validated. The default value is false. When set to - false and a route refers to a cluster that does not exist, the route table loads and routing to a non-existent - cluster results in a 404. When set to true and the route is set to a cluster that do not exist, the route table - will not load. For more information, refer to - [HTTP route configuration in the Envoy docs](https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/route/v3/route.proto#envoy-v3-api-field-config-route-v3-routeconfiguration-validate-clusters) - for more details. `, - }, - { - name: 'TLS', - type: 'TLSConfig: ', - description: 'TLS configuration for the service mesh.', - children: [ - { - name: 'Incoming', - type: 'TLSDirectionConfig: ', - description: `TLS configuration for inbound mTLS connections targeting - the public listener on \`connect-proxy\` and \`terminating-gateway\` - proxy kinds.`, - children: [ - { - name: 'TLSMinVersion', - type: 'string: ""', - description: - "Set the default minimum TLS version supported. One of `TLS_AUTO`, `TLSv1_0`, `TLSv1_1`, `TLSv1_2`, or `TLSv1_3`. If unspecified, Envoy v1.22.0 and newer [will default to TLS 1.2 as a min version](https://github.com/envoyproxy/envoy/pull/19330), while older releases of Envoy default to TLS 1.0.", - }, - { - name: 'TLSMaxVersion', - type: 'string: ""', - description: { - hcl: - "Set the default maximum TLS version supported. Must be greater than or equal to `TLSMinVersion`. One of `TLS_AUTO`, `TLSv1_0`, `TLSv1_1`, `TLSv1_2`, or `TLSv1_3`. If unspecified, Envoy will default to TLS 1.3 as a max version for incoming connections.", - yaml: - "Set the default maximum TLS version supported. Must be greater than or equal to `tls_min_version`. One of `TLS_AUTO`, `TLSv1_0`, `TLSv1_1`, `TLSv1_2`, or `TLSv1_3`. If unspecified, Envoy will default to TLS 1.3 as a max version for incoming connections.", - }, - }, - { - name: 'CipherSuites', - type: 'array: ', - description: `Set the default list of TLS cipher suites - to support when negotiating connections using - TLS 1.2 or earlier. If unspecified, Envoy will use a - [default server cipher list](https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/transport_sockets/tls/v3/common.proto#envoy-v3-api-field-extensions-transport-sockets-tls-v3-tlsparameters-cipher-suites). - The list of supported cipher suites can seen in - [\`consul/types/tls.go\`](https://github.com/hashicorp/consul/blob/v1.11.2/types/tls.go#L154-L169) - and is dependent on underlying support in Envoy. Future - releases of Envoy may remove currently-supported but - insecure cipher suites, and future releases of Consul - may add new supported cipher suites if any are added to - Envoy.`, - }, - ], - }, - { - name: 'Outgoing', - type: 'TLSDirectionConfig: ', - description: `TLS configuration for outbound mTLS connections dialing upstreams - from \`connect-proxy\` and \`ingress-gateway\` - proxy kinds.`, - children: [ - { - name: 'TLSMinVersion', - type: 'string: ""', - description: - "Set the default minimum TLS version supported. One of `TLS_AUTO`, `TLSv1_0`, `TLSv1_1`, `TLSv1_2`, or `TLSv1_3`. 
If unspecified, Envoy v1.22.0 and newer [will default to TLS 1.2 as a min version](https://github.com/envoyproxy/envoy/pull/19330), while older releases of Envoy default to TLS 1.0.", - }, - { - name: 'TLSMaxVersion', - type: 'string: ""', - description: { - hcl: - "Set the default maximum TLS version supported. Must be greater than or equal to `TLSMinVersion`. One of `TLS_AUTO`, `TLSv1_0`, `TLSv1_1`, `TLSv1_2`, or `TLSv1_3`. If unspecified, Envoy will default to TLS 1.2 as a max version for outgoing connections, but future Envoy releases [may change this to TLS 1.3](https://github.com/envoyproxy/envoy/issues/9300).", - yaml: - "Set the default maximum TLS version supported. Must be greater than or equal to `tls_min_version`. One of `TLS_AUTO`, `TLSv1_0`, `TLSv1_1`, `TLSv1_2`, or `TLSv1_3`. If unspecified, Envoy will default to TLS 1.2 as a max version for outgoing connections, but future Envoy releases [may change this to TLS 1.3](https://github.com/envoyproxy/envoy/issues/9300).", - }, - }, - { - name: 'CipherSuites', - type: 'array: ', - description: `Set the default list of TLS cipher suites - to support when negotiating connections using - TLS 1.2 or earlier. If unspecified, Envoy will use a - [default server cipher list](https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/transport_sockets/tls/v3/common.proto#envoy-v3-api-field-extensions-transport-sockets-tls-v3-tlsparameters-cipher-suites). - The list of supported cipher suites can seen in - [\`consul/types/tls.go\`](https://github.com/hashicorp/consul/blob/v1.11.2/types/tls.go#L154-L169) - and is dependent on underlying support in Envoy. Future - releases of Envoy may remove currently-supported but - insecure cipher suites, and future releases of Consul - may add new supported cipher suites if any are added to - Envoy.`, - }, - ], - }, - ], - }, - { - name: 'HTTP', - type: 'HTTPConfig: ', - description: 'HTTP configuration for the service mesh.', - children: [ - { - name: 'SanitizeXForwardedClientCert', - type: 'bool: ', - description: `If configured to \`true\`, the \`forward_client_cert_details\` option will be set to \`SANITIZE\` - for all Envoy proxies. As a result, Consul will not include the \`x-forwarded-client-cert\` header in the next hop. - If set to \`false\` (default), the XFCC header is propagated to upstream applications.`, - }, - { - name: 'Incoming', - type: 'DirectionalHTTPConfig: ', - description: `HTTP configuration for inbound traffic to mesh proxies.`, - children: [ - { - name: 'RequestNormalization', - type: 'RequestNormalizationConfig: ', - description: `Request normalization configuration for inbound traffic to mesh proxies.`, - children: [ - { - name: 'InsecureDisablePathNormalization', - type: 'bool: false', - description: `Sets the value of the \`normalize_path\` option in the Envoy listener's \`HttpConnectionManager\`. The default value is \`false\`. - When set to \`true\` in Consul, \`normalize_path\` is set to \`false\` for the Envoy proxy. - This parameter disables the normalization of request URL paths according to RFC 3986, - conversion of \`\\\` to \`/\`, and decoding non-reserved %-encoded characters. When using L7 - intentions with path match rules, we recommend enabling path normalization in order - to avoid match rule circumvention with non-normalized path values.`, - }, - { - name: 'MergeSlashes', - type: 'bool: false', - description: `Sets the value of the \`merge_slashes\` option in the Envoy listener's \`HttpConnectionManager\`. The default value is \`false\`. 
- This option controls the normalization of request URL paths by merging consecutive \`/\` characters. This normalization is not part - of RFC 3986. When using L7 intentions with path match rules, we recommend enabling this setting to avoid match rule circumvention through non-normalized path values, unless legitimate service - traffic depends on allowing for repeat \`/\` characters, or upstream services are configured to - differentiate between single and multiple slashes.`, - }, - { - name: 'PathWithEscapedSlashesAction', - type: 'string: ""', - description: `Sets the value of the \`path_with_escaped_slashes_action\` option in the Envoy listener's - \`HttpConnectionManager\`. The default value of this option is empty, which is - equivalent to \`IMPLEMENTATION_SPECIFIC_DEFAULT\`. This parameter controls the action taken in response to request URL paths with escaped - slashes in the path. When using L7 intentions with path match rules, we recommend enabling this setting to avoid match rule circumvention through non-normalized path values, unless legitimate service - traffic depends on allowing for escaped \`/\` or \`\\\` characters, or upstream services are configured to - differentiate between escaped and unescaped slashes. Refer to the Envoy documentation for more information on available - options.`, - }, - { - name: 'HeadersWithUnderscoresAction', - type: 'string: ""', - description: `Sets the value of the \`headers_with_underscores_action\` option in the Envoy listener's - \`HttpConnectionManager\` under \`common_http_protocol_options\`. The default value of this option is - empty, which is equivalent to \`ALLOW\`. Refer to the Envoy documentation for more information on available options.`, - }, - ], - }, - ], - } - ], - }, - { - name: 'Peering', - type: 'PeeringMeshConfig: ', - description: - 'Controls configuration specific to [peering connections](/consul/docs/connect/cluster-peering).', - children: [ - { - name: 'PeerThroughMeshGateways', - type: 'bool: ', - description: `Determines if peering control-plane traffic should be routed through mesh gateways. - When enabled, dialing cluster attempt to contact peers through their mesh gateway. - Clusters that accept calls advertise the address of their mesh gateways, rather than the address of their Consul servers.`, - }, - ], - }, - ]} -/> - -## ACLs - -Configuration entries may be protected by [ACLs](/consul/docs/security/acl). - -Reading a `mesh` config entry requires no specific privileges. - -Creating, updating, or deleting a `mesh` config entry requires -`operator:write`. diff --git a/website/content/docs/connect/config-entries/sameness-group.mdx b/website/content/docs/connect/config-entries/sameness-group.mdx deleted file mode 100644 index f9fdffcd7a64..000000000000 --- a/website/content/docs/connect/config-entries/sameness-group.mdx +++ /dev/null @@ -1,397 +0,0 @@ ---- -page_title: Sameness group configuration reference -description: |- - Sameness groups enable Consul to associate service instances with the same name deployed to the same namespace as identical services. Learn how to configure a `sameness-group` configuration entry to enable failover between partitions and cluster peers in non-federated networks. ---- - -# Sameness groups configuration reference - -This page provides reference information for sameness group configuration entries. Sameness groups associate identical admin partitions to facilitate traffic between identical services. 
When partitions are part of the same Consul datacenter, you can create a sameness group by listing them in the `Members[].Partition` field. When partitions are located on remote clusters, you must establish cluster peering connections between remote partitions in order to add them to a sameness group in the `Members[].Peer` field. - -To learn more about creating a sameness group, refer to [Create sameness groups](/consul/docs/connect/cluster-peering/usage/create-sameness-groups) or [Create sameness groups on Kubernetes](/consul/docs/k8s/connect/cluster-peering/usage/create-sameness-groups). - -## Configuration model - -The following list outlines field hierarchy, language-specific data types, and requirements in the sameness group configuration entry. Click on a property name to view additional details, including default values. - - - - - -- [`Kind`](#kind): string | required | must be set to `sameness-group` -- [`Name`](#name): string | required -- [`Partition`](#partition): string | `default` -- [`DefaultForFailover`](#defaultforfailover): boolean | `false` -- [`IncludeLocal`](#includelocal): boolean | `false` -- [`Members`](#members): list of maps | required - - [`Partition`](#members-partition): string - - [`Peer`](#members-peer): string - - - - - -- [`apiVersion`](#apiversion): string | required | must be set to `consul.hashicorp.com/v1alpha1` -- [`kind`](#kind): string | required | must be set to `SamenessGroup` -- [`metadata`](#metadata): map | required - - [`name`](#metadata-name): string | required -- [`spec`](#spec): map | required - - [`defaultForFailover`](#spec-defaultforfailover): boolean | `false` - - [`includeLocal`](#spec-includelocal): boolean | `false` - - [`members`](#spec-members): list of maps | required - - [`partition`](#spec-members-partition): string - - [`peer`](#spec-members-peer): string - - - - -## Complete configuration - -When every field is defined, a sameness group configuration entry has the following form: - - - - - -```hcl -Kind = "sameness-group" # required -Name = "" # required -Partition = "" -DefaultForFailover = false -IncludeLocal = true -Members = [ # required - { Partition = "" }, - { Peer = "" } -] -``` - - - - - -```json -{ - "Kind": "sameness-group", // required - "Name": "", // required - "Partition": "", - "DefaultForFailover": false, - "IncludeLocal": true, - "Members": [ // required - { - "Partition": "" - }, - { - "Peer": "" - } - ] -} -``` - - - - - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 # required -kind: SamenessGroup # required -metadata: - name: -spec: - defaultForFailover: false - includeLocal: true - members: # required - - partition: - - peer: -``` - - - - -## Specifications - -This section provides details about the fields you can configure in the sameness group configuration entry. - - - - - -### `Kind` - -Specifies the type of configuration entry to implement. Must be set to `sameness-group`. - -#### Values - -- Default: None -- This field is required. -- Data type: String value that must be set to `sameness-group`. - -### `Name` - -Specifies a name for the configuration entry that is used to identify the sameness group. To ensure consistency, use descriptive names and make sure that the same name is used when creating configuration entries to add each member to the sameness group. - -#### Values - -- Default: None -- This field is required. -- Data type: String - -### `Partition` - -Specifies the local admin partition that the sameness group applies to. 
Refer to [admin partitions](/consul/docs/enterprise/admin-partitions) for more information. - -#### Values - -- Default: `default` -- Data type: String - -### `DefaultForFailover` - -Determines whether the sameness group should be used to establish connections to services with the same name during failover scenarios. - -When this field is set to `true`, upstream requests automatically fail over to services in the sameness group according to the order of the members in the `Members` list. It impacts all services on the partition. - -When this field is set to `false`, you can use a sameness group for failover by configuring the `Failover` block of a [service resolver configuration entry](/consul/docs/connect/config-entries/service-resolver). - -When you [query Consul DNS](/consul/docs/services/discovery/dns-static-lookups) using sameness groups, `DefaultForFailover` must be set to `true`. Otherwise, Consul DNS returns an error. - -#### Values - -- Default: `false` -- Data type: Boolean - -### `IncludeLocal` - -Determines whether the local partition should be considered the first member of the sameness group. When this field is set to `true`, DNS queries, upstream requests, and failover traffic returns a health instance from the local partition unless one does not exist. - -If you enable this parameter, you do not need to list the local partition as the first member in the group. - -#### Values - -- Default: `false` -- Data type: Boolean - -### `Members` - -Specifies the partitions and cluster peers that are members of the sameness group from the perspective of the local partition. - -The local partition should be the first member listed unless `IncludeLocal=true`. The order of the members determines their precedence during failover scenarios. If a member is listed but Consul cannot connect to it, failover proceeds with the next healthy member in the list. For an example demonstrating how to configure this parameter, refer to [Failover between sameness groups](#failover-between-members-of-a-sameness-group). - -Each partition can belong to a single sameness group. You cannot associate a partition or cluster peer with multiple sameness groups. - -#### Values - -- Default: None -- This field is required. -- Data type: List that can contain maps of the following parameters: - - [`Partition`](#members-partition) - - [`Peer`](#members-peer) - -### `Members[].Partition` - -Specifies a partition in the local datacenter that is a member of the sameness group. Local partitions do not require cluster peering connections before they are added to a sameness group. - -#### Values - -- Default: None -- Data type: String - -### `Members[].Peer` - -Specifies the name of a cluster peer that is a member of the sameness group. - -Cluster peering connections must be established before adding a remote partition to the list of members. Refer to [establish cluster peering connections](/consul/docs/connect/cluster-peering/usage/establish-cluster-peering) for more information. - -#### Values - -- Default: None -- Data type: String - - - - - -### `apiVersion` - -Specifies the version of the Consul API for integrating with Kubernetes. The value must be `consul.hashicorp.com/v1alpha1`. - -#### Values - -- Default: None -- This field is required. -- String value that must be set to `consul.hashicorp.com/v1alpha1`. - -### `kind` - -Specifies the type of configuration entry to implement. Must be set to `SamenessGroup`. - -#### Values - -- Default: None -- This field is required. 
-- Data type: String value that must be set to `SamenessGroup`. - -### `metadata` - -Map that contains an arbitrary name for the configuration entry and the namespace it applies to. - -#### Values - -- Default: None -- Data type: Map - -### `metadata.name` - -Specifies a name for the configuration entry that is used to identify the sameness group. To ensure consistency, use descriptive names and make sure that the same name is used when creating configuration entries to add each member to the sameness group. - -#### Values - -- Default: None -- This field is required. -- Data type: String - -### `spec` - -Map that contains the details about the `SamenessGroup` configuration entry. The `apiVersion`, `kind`, and `metadata` fields are siblings of the spec field. All other configurations are children. - -#### Values - -- Default: None -- This field is required. -- Data type: Map - -### `spec.defaultForFailover` - -Determines whether the sameness group should be used to establish connections to services with the same name during failover scenarios. When this field is set to `true`, upstream requests automatically failover to services in the sameness group according to the order of the members in the `spec.members` list. This setting affects all services on the partition. - -When this field is set to `false`, you can use a sameness group for failover by configuring the `spec.failover` block of a [service resolver CRD](/consul/docs/connect/config-entries/service-resolver). - -#### Values - -- Default: `false` -- Data type: Boolean - -### `spec.includeLocal` - -Determines whether the local partition should be considered the first member of the sameness group. When this field is set to `true`, DNS queries, upstream requests, and failover traffic target return a healthy instance from the local partition unless a healthy instance does not exist. - -If you enable this parameter, you do not need to list the local partition as the first member in the group. - -#### Values - -- Default: `false` -- Data type: Boolean - -### `spec.members` - -Specifies the local partitions and cluster peers that are members of the sameness group from the perspective of the local partition. - -The local partition should be the first member listed unless `spec.includeLocal: true`. The order of the members determines their precedence during failover scenarios. If a member is listed but Consul cannot connect to it, failover proceeds with the next healthy member in the list. For an example demonstrating how to configure this parameter, refer to [Failover between sameness groups](#failover-between-sameness-groups). - -Each partition can belong to a single sameness group. You cannot associate a partition or cluster peer with multiple sameness groups. - -#### Values - -- Default: None -- This field is required. -- Data type: List that can contain maps of the following parameters: - - - [`partition`](#spec-members-partition) - - [`peer`](#spec-members-peer) - -### `spec.members[].partition` - -Specifies a partition in the local datacenter that is a member of the sameness group. Local partitions do not require cluster peering connections before they are added to a sameness group. - -#### Values - -- Default: None -- Data type: String - -### `spec.members[].peer` - -Specifies the name of a cluster peer that is a member of the sameness group. - -Cluster peering connections must be established before adding a peer to the list of members. 
Refer to [establish cluster peering connections](/consul/docs/connect/cluster-peering/usage/establish-cluster-peering) for more information. - -#### Values - -- Default: None -- Data type: String - - - - -## Examples - -The following examples demonstrate common sameness group configuration patterns for specific use cases. - -### Failover between members of a sameness group - -In the following example, the configuration entry defines a sameness group named `products-api` that applies to the `store-east` partition in the local datacenter. The sameness group is configured so that when a service instance in `store-east` fails, Consul attempts to establish a failover connection in the following order: - -- Services with the same name in the `store-east` partition -- Services with the same name in the `inventory-east` partition in the same datacenter -- Services with the same name in the `store-west` partition of datacenter `dc2`, which has an established cluster peering connection. -- Services with the same name in the `inventory-west` partition of `dc2`, which has an established cluster peering connection. - - - - - -```hcl -Kind = "sameness-group" -Name = "products-api" -Partition = "store-east" -Members = [ - { Partition = "store-east" }, - { Partition = "inventory-east" }, - { Peer = "dc2-store-west" }, - { Peer = "dc2-inventory-west" } -] -``` - - - - - -```json -{ - "Kind": "sameness-group", - "Name": "products-api", - "Partition": "store-east", - "Members": [ - { - "Partition": "store-east" - }, - { - "Partition": "inventory-east" - }, - { - "Peer": "dc2-store-west" - }, - { - "Peer": "dc2-inventory-west" - } - ] -} -``` - - - - - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: SamenessGroup -metadata: - name: products-api -spec: - members: - - partition: store-east - - partition: inventory-east - - peer: dc2-store-west - - peer: dc2-inventory-west -``` - - - diff --git a/website/content/docs/connect/config-entries/terminating-gateway.mdx b/website/content/docs/connect/config-entries/terminating-gateway.mdx deleted file mode 100644 index 4512cf1a8148..000000000000 --- a/website/content/docs/connect/config-entries/terminating-gateway.mdx +++ /dev/null @@ -1,701 +0,0 @@ ---- -layout: docs -page_title: Terminating Gateway - Configuration Entry Reference -description: >- - The terminating gateway configuration entry kind defines behavior to secure outgoing communication between the service mesh and non-mesh services. Use the reference guide to learn about `""terminating-gateway""` config entry parameters and connecting from your service mesh to external or non-mesh services registered with Consul. ---- - -# Terminating Gateway Configuration Entry - -The `terminating-gateway` config entry kind (`TerminatingGateway` on Kubernetes) allows you to configure terminating gateways -to proxy traffic from services in the Consul service mesh to services registered with Consul that do not have a -[service mesh sidecar proxy](/consul/docs/connect/proxies). The configuration is associated with the name of a gateway service -and will apply to all instances of the gateway with that name. - -~> [Configuration entries](/consul/docs/agent/config-entries) are global in scope. A configuration entry for a gateway name applies -across all federated Consul datacenters. If terminating gateways in different Consul datacenters need to route to different -sets of services within their datacenter then the terminating gateways **must** be registered with different names. 
- -See [Terminating Gateway](/consul/docs/connect/gateways/terminating-gateway) for more information. - -## TLS Origination - -By specifying a path to a [CA file](/consul/docs/connect/config-entries/terminating-gateway#cafile) connections -from the terminating gateway will be encrypted using one-way TLS authentication. If a path to a -[client certificate](/consul/docs/connect/config-entries/terminating-gateway#certfile) -and [private key](/consul/docs/connect/config-entries/terminating-gateway#keyfile) are also specified connections -from the terminating gateway will be encrypted using mutual TLS authentication. - -~> Setting the `SNI` field is strongly recommended when enabling TLS to a service. If this field is not set, -Consul will not attempt to verify the Subject Alternative Name fields in the service's certificate. - -If none of these are provided, Consul will **only** encrypt connections to the gateway and not -from the gateway to the destination service. - -## Wildcard service specification - -Terminating gateways can optionally target all services within a Consul namespace by specifying a wildcard "\*" -as the service name. Configuration options set on the wildcard act as defaults that can be overridden -by options set on a specific service name. - -Note that if the wildcard specifier is used, and some services in that namespace have a service mesh sidecar proxy, -traffic from the mesh to those services will be evenly load-balanced between the gateway and their sidecars. - -## Sample Config Entries - -### Access an external service - - - - -Link gateway named "us-west-gateway" with the billing service. - -Connections to the external service will be unencrypted. - - - -```hcl -Kind = "terminating-gateway" -Name = "us-west-gateway" - -Services = [ - { - Name = "billing" - } -] -``` - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: TerminatingGateway -metadata: - name: us-west-gateway -spec: - services: - - name: billing -``` - -```json -{ - "Kind": "terminating-gateway", - "Name": "us-west-gateway", - "Services": [ - { - "Name": "billing" - } - ] -} -``` - - - - - - -Link gateway named "us-west-gateway" in the default namespace with the billing service in the finance namespace. - -Connections to the external service will be unencrypted. - - - -```hcl -Kind = "terminating-gateway" -Name = "us-west-gateway" -Namespace = "default" - -Services = [ - { - Namespace = "finance" - Name = "billing" - } -] -``` - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: TerminatingGateway -metadata: - name: us-west-gateway -spec: - services: - - name: billing - namespace: finance -``` - -```json -{ - "Kind": "terminating-gateway", - "Name": "us-west-gateway", - "Namespace": "default", - "Services": [ - { - "Namespace": "finance", - "Name": "billing" - } - ] -} -``` - - - - - - -### Access an external service over TLS - - - - -Link gateway named "us-west-gateway" with the billing service, and specify a CA -file to be used for one-way TLS authentication. - --> **Note**: When not using destinations in transparent proxy mode, you must specify the `CAFile` parameter -and point to a valid CA bundle in order to properly initiate a TLS -connection to the destination service. For more information about configuring a gateway for destinations, refer to [Register an External Service as a Destination](/consul/docs/k8s/connect/terminating-gateways#register-an-external-service-as-a-destination). 
- - - -```hcl -Kind = "terminating-gateway" -Name = "us-west-gateway" - -Services = [ - { - Name = "billing" - CAFile = "/etc/certs/ca-chain.cert.pem" - SNI = "billing.service.com" - } -] -``` - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: TerminatingGateway -metadata: - name: us-west-gateway -spec: - services: - - name: billing - caFile: /etc/certs/ca-chain.cert.pem - sni: billing.service.com -``` - -```json -{ - "Kind": "terminating-gateway", - "Name": "us-west-gateway", - "Services": [ - { - "Name": "billing", - "CAFile": "/etc/certs/ca-chain.cert.pem", - "SNI": "billing.service.com" - } - ] -} -``` - - - - - - -Link gateway named "us-west-gateway" in the default namespace with the billing service in the finance namespace, -and specify a CA file to be used for one-way TLS authentication. - --> **Note**: The `CAFile` parameter must be specified _and_ point to a valid CA -bundle in order to properly initiate a TLS connection to the destination service. - - - -```hcl -Kind = "terminating-gateway" -Name = "us-west-gateway" -Namespace = "default" - -Services = [ - { - Namespace = "finance" - Name = "billing" - CAFile = "/etc/certs/ca-chain.cert.pem" - SNI = "billing.service.com" - } -] -``` - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: TerminatingGateway -metadata: - name: us-west-gateway -spec: - services: - - name: billing - namespace: finance - caFile: /etc/certs/ca-chain.cert.pem - sni: billing.service.com -``` - -```json -{ - "Kind": "terminating-gateway", - "Name": "us-west-gateway", - "Namespace": "default", - "Services": [ - { - "Namespace": "finance", - "Name": "billing", - "CAFile": "/etc/certs/ca-chain.cert.pem", - "SNI": "billing.service.com" - } - ] -} -``` - - - - - - -### Access an external service over mutual TLS - - - - -Link gateway named "us-west-gateway" with the billing service, and specify a CA -file, key file, and cert file to be used for mutual TLS authentication. - --> **Note**: The `CAFile` parameter must be specified _and_ point to a valid CA -bundle in order to properly initiate a TLS connection to the destination service. - - - -```hcl -Kind = "terminating-gateway" -Name = "us-west-gateway" - -Services = [ - { - Name = "billing" - CAFile = "/etc/certs/ca-chain.cert.pem" - KeyFile = "/etc/certs/gateway.key.pem" - CertFile = "/etc/certs/gateway.cert.pem" - SNI = "billing.service.com" - } -] -``` - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: TerminatingGateway -metadata: - name: us-west-gateway -spec: - services: - - name: billing - caFile: /etc/certs/ca-chain.cert.pem - keyFile: /etc/certs/gateway.key.pem - certFile: /etc/certs/gateway.cert.pem - sni: billing.service.com -``` - -```json -{ - "Kind": "terminating-gateway", - "Name": "us-west-gateway", - "Services": [ - { - "Name": "billing", - "CAFile": "/etc/certs/ca-chain.cert.pem", - "KeyFile": "/etc/certs/gateway.key.pem", - "CertFile": "/etc/certs/gateway.cert.pem", - "SNI": "billing.service.com" - } - ] -} -``` - - - - - - -Link gateway named "us-west-gateway" in the default namespace with the billing service in the finance namespace. -Also specify a CA file, key file, and cert file to be used for mutual TLS authentication. - --> **Note**: The `CAFile` parameter must be specified _and_ point to a valid CA -bundle in order to properly initiate a TLS connection to the destination service. 
- - - -```hcl -Kind = "terminating-gateway" -Name = "us-west-gateway" -Namespace = "default" - -Services = [ - { - Namespace = "finance" - Name = "billing" - CAFile = "/etc/certs/ca-chain.cert.pem" - KeyFile = "/etc/certs/gateway.key.pem" - CertFile = "/etc/certs/gateway.cert.pem" - SNI = "billing.service.com" - } -] -``` - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: TerminatingGateway -metadata: - name: us-west-gateway -spec: - services: - - name: billing - namespace: finance - caFile: /etc/certs/ca-chain.cert.pem - keyFile: /etc/certs/gateway.key.pem - certFile: /etc/certs/gateway.cert.pem - sni: billing.service.com -``` - -```json -{ - "Kind": "terminating-gateway", - "Name": "us-west-gateway", - "Namespace": "default", - "Services": [ - { - "Namespace": "finance", - "Name": "billing", - "CAFile": "/etc/certs/ca-chain.cert.pem", - "KeyFile": "/etc/certs/gateway.key.pem", - "CertFile": "/etc/certs/gateway.cert.pem", - "SNI": "billing.service.com" - } - ] -} -``` - - - - - - -### Override connection parameters for a specific service - - - - -Link gateway named "us-west-gateway" with all services in the datacenter, and configure default certificates for mutual TLS. - -Override the SNI and CA file used for connections to the billing service. - - - - - -```hcl -Kind = "terminating-gateway" -Name = "us-west-gateway" - -Services = [ - { - Name = "*" - CAFile = "/etc/common-certs/ca-chain.cert.pem" - KeyFile = "/etc/common-certs/gateway.key.pem" - CertFile = "/etc/common-certs/gateway.cert.pem" - }, - { - Name = "billing" - CAFile = "/etc/billing-ca/ca-chain.cert.pem" - SNI = "billing.service.com" - } -] -``` - - - - - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: TerminatingGateway -metadata: - name: us-west-gateway -spec: - services: - - name: '*' - caFile: /etc/common-certs/ca-chain.cert.pem - keyFile: /etc/common-certs/gateway.key.pem - certFile: /etc/common-certs/gateway.cert.pem - - name: billing - caFile: /etc/billing-ca/ca-chain.cert.pem - sni: billing.service.com -``` - - - - - -```json -{ - "Kind": "terminating-gateway", - "Name": "us-west-gateway", - "Services": [ - { - "Name": "*", - "CAFile": "/etc/common-certs/ca-chain.cert.pem", - "KeyFile": "/etc/common-certs/gateway.key.pem", - "CertFile": "/etc/common-certs/gateway.cert.pem" - }, - { - "Name": "billing", - "CAFile": "/etc/billing-ca/ca-chain.cert.pem", - "SNI": "billing.service.com" - } - ] -} -``` - - - - - - - - -Link gateway named "us-west-gateway" in the default namespace with all services in the finance namespace, -and configure default certificates for mutual TLS. 
- -Override the SNI and CA file used for connections to the billing service: - - - - - -```hcl -Kind = "terminating-gateway" -Name = "us-west-gateway" -Namespace = "default" - -Services = [ - { - Namespace = "finance" - Name = "*" - CAFile = "/etc/common-certs/ca-chain.cert.pem" - KeyFile = "/etc/common-certs/gateway.key.pem" - CertFile = "/etc/common-certs/gateway.cert.pem" - }, - { - Namespace = "finance" - Name = "billing" - CAFile = "/etc/billing-ca/ca-chain.cert.pem" - SNI = "billing.service.com" - } -] -``` - - - - - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: TerminatingGateway -metadata: - name: us-west-gateway -spec: - services: - - name: '*' - namespace: finance - caFile: /etc/common-certs/ca-chain.cert.pem - keyFile: /etc/common-certs/gateway.key.pem - certFile: /etc/common-certs/gateway.cert.pem - - name: billing - namespace: finance - caFile: /etc/billing-ca/ca-chain.cert.pem - sni: billing.service.com -``` - - - - - -```json -{ - "Kind": "terminating-gateway", - "Name": "us-west-gateway", - "Namespace": "default", - "Services": [ - { - "Namespace": "finance", - "Name": "*", - "CAFile": "/etc/common-certs/ca-chain.cert.pem", - "KeyFile": "/etc/common-certs/gateway.key.pem", - "CertFile": "/etc/common-certs/gateway.cert.pem" - }, - { - "Namespace": "finance", - "Name": "billing", - "CAFile": "/etc/billing-ca/ca-chain.cert.pem", - "SNI": "billing.service.com" - } - ] -} -``` - - - - - - - - -## Available Fields - -', - yaml: false, - }, - { - name: 'Namespace', - type: `string: "default"`, - enterprise: true, - description: - 'Specifies the namespace to which the configuration entry will apply. This must match the namespace in which the gateway is registered.' + - ' If omitted, the namespace will be inherited from [the request](/consul/api-docs/config#ns)' + - ' or will default to the `default` namespace.', - yaml: false, - }, - { - name: 'Partition', - type: `string: "default"`, - enterprise: true, - description: - 'Specifies the admin partition to which the configuration entry will apply. This must match the partition in which the gateway is registered.' + - ' If omitted, the partition will be inherited from [the request](/consul/api-docs/config)' + - ' or will default to the `default` partition.', - yaml: false, - }, - { - name: 'Meta', - type: 'map: nil', - description: - 'Specifies arbitrary KV metadata pairs. Added in Consul 1.8.4.', - yaml: false, - }, - { - name: 'metadata', - children: [ - { - name: 'name', - description: 'Set to the name of the gateway being configured.', - }, - { - name: 'namespace', - description: - 'If running Consul Community Edition, the namespace is ignored (see [Kubernetes Namespaces in Consul CE](/consul/docs/k8s/crds#consul-ce)). If running Consul Enterprise see [Kubernetes Namespaces in Consul Enterprise](/consul/docs/k8s/crds#consul-enterprise) for more details.', - }, - ], - hcl: false, - }, - { - name: 'Services', - type: 'array: ', - description: `A list of services or destinations to link - with the gateway. The gateway will proxy traffic to these services. These linked services - must be registered with Consul for the gateway to discover their addresses. They must also - be registered in the same Consul datacenter as the terminating gateway. - Destinations are an exception to this requirement, and only need to be defined as a service-defaults configuration entry in the same datacenter. 
- If Consul ACLs are enabled, the Terminating Gateway's ACL token must grant service:write for all linked services.`, - children: [ - { - name: 'Name', - type: 'string: ""', - description: - 'The name of the service to link with the gateway. If the wildcard specifier, `*`, is provided, then ALL services within the namespace will be linked with the gateway.', - }, - { - name: 'Namespace', - enterprise: true, - type: 'string: ""', - description: - 'The namespace of the service. If omitted, the namespace will be inherited from the config entry.', - }, - { - name: 'CAFile', - type: 'string: ""', - description: `A file path to a PEM-encoded certificate authority. - The file must be present on the proxy's filesystem. - The certificate authority is used to verify the authenticity of the service linked with the gateway. - It can be provided along with a CertFile and KeyFile for mutual TLS authentication, or on its own - for one-way TLS authentication. If none is provided the gateway will not encrypt the traffic to the destination.`, - }, - { - name: 'CertFile', - type: 'string: ""', - description: { - hcl: `A file path to a PEM-encoded certificate. - The file must be present on the proxy's filesystem. - The certificate is provided servers to verify the gateway's authenticity. It must be provided if a \`KeyFile\` was specified.`, - yaml: `A file path to a PEM-encoded certificate. - The file must be present on the proxy's filesystem. - The certificate is provided servers to verify the gateway's authenticity. It must be provided if a \`keyFile\` was specified.`, - }, - }, - { - name: 'KeyFile', - type: 'string: ""', - description: { - hcl: `A file path to a PEM-encoded private key. - The file must be present on the proxy's filesystem. - The key is used with the certificate to verify the gateway's authenticity. It must be provided along if a \`CertFile\` was specified.`, - yaml: `A file path to a PEM-encoded private key. - The file must be present on the proxy's filesystem. - The key is used with the certificate to verify the gateway's authenticity. It must be provided along if a \`certFile\` was specified.`, - }, - }, - { - name: 'SNI', - type: 'string: ""', - description: - `An optional hostname or domain name to specify during the TLS handshake. This option will also configure [strict SAN matching](https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/transport_sockets/tls/v3/common.proto#envoy-v3-api-field-extensions-transport-sockets-tls-v3-certificatevalidationcontext-match-typed-subject-alt-names), which requires - the external services to have certificates with SANs, not having which will result in \`CERTIFICATE_VERIFY_FAILED\` error.`, - }, - { - name: 'DisableAutoHostRewrite', - type: 'bool: ""', - description: - 'When set to true, Terminating Gateway will not modify the incoming requests host header for this service.', - }, - ], - }, - ]} -/> - -## ACLs - -Configuration entries may be protected by [ACLs](/consul/docs/security/acl). - -Reading a `terminating-gateway` config entry requires `service:read` on the `Name` -field of the config entry. - -Creating, updating, or deleting a `terminating-gateway` config entry requires -`operator:write`. 
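The following policy is a minimal sketch of the ACL requirements above, assuming a gateway named `us-west-gateway` linked to a `billing` service (both names are illustrative). `operator:write` covers creating, updating, or deleting the config entry, `service:read` on the gateway's name covers reading it, and the gateway's own token additionally needs `service:write` on every linked service.

```hcl
# Illustrative ACL policy; the gateway and service names are placeholders.

# Create, update, or delete the terminating-gateway config entry.
operator = "write"

# Read the config entry for the gateway named "us-west-gateway".
service "us-west-gateway" {
  policy = "read"
}

# The gateway's own token must also grant service:write
# on each service linked through the config entry.
service "billing" {
  policy = "write"
}
```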
diff --git a/website/content/docs/connect/configuration.mdx b/website/content/docs/connect/configuration.mdx deleted file mode 100644 index dd1e8e156d8c..000000000000 --- a/website/content/docs/connect/configuration.mdx +++ /dev/null @@ -1,109 +0,0 @@ ---- -layout: docs -page_title: Service Mesh Configuration - Overview -description: >- - Learn how to enable and configure Consul's service mesh capabilities in agent configurations, and how to integrate with schedulers like Kubernetes and Nomad. Consul's service mesh capabilities are provided by the ""connect"" subsystem. ---- - -# Service Mesh Configuration Overview - -There are many configuration options exposed for Consul service mesh. The only option -that must be set is the `connect.enabled` option on Consul servers to enable Consul service mesh. -All other configurations are optional and have defaults suitable for many environments. - -The noun _connect_ is used throughout this documentation to refer to the connect -subsystem that provides Consul's service mesh capabilities. -Where you encounter the _noun_ connect, it is usually functionality specific to -service mesh. - -## Agent configuration - -Begin by enabling service mesh for your Consul -cluster. By default, service is disabled. Enabling service mesh requires changing -the configuration of only your Consul _servers_ (not client agents). To enable -service mesh, add the following to a new or existing -[server configuration file](/consul/docs/agent/config/config-files). In an existing cluster, this configuration change requires a Consul server restart, which you can perform one server at a time to maintain availability. In HCL: - - - - -```hcl -connect { - enabled = true -} -``` - -```json -{ - "connect": { - "enabled": true - } -} -``` - - -This will enable service mesh and configure your Consul cluster to use the -built-in certificate authority for creating and managing certificates. -You may also configure Consul to use an external -[certificate management system](/consul/docs/connect/ca), such as -[Vault](https://www.vaultproject.io/). - -Services and proxies may always register with service mesh settings, but unless -service mesh is enabled on the server agents, their attempts to communicate will fail -because they have no means to obtain or verify service mesh TLS certificates. - -Other optional service mesh configurations that you can set in the server -configuration file include: - -- [certificate authority settings](/consul/docs/agent/config/config-files#connect) -- [token replication](/consul/docs/agent/config/config-files#acl_tokens_replication) -- [dev mode](/consul/docs/agent/config/cli-flags#_dev) -- [server host name verification](/consul/docs/agent/config/config-files#tls_internal_rpc_verify_server_hostname) - -If you would like to use Envoy as your service mesh proxy you will need to [enable -gRPC](/consul/docs/agent/config/config-files#grpc_port). - -Additionally if you plan on using the observability features of Consul service mesh, it can -be convenient to configure your proxies and services using [configuration -entries](/consul/docs/agent/config-entries) which you can interact with using the -CLI or API, or by creating configuration entry files. You will want to enable -[centralized service -configuration](/consul/docs/agent/config/config-files#enable_central_service_config) on -clients, which allows each service's proxy configuration to be managed centrally -via API. 
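Taken together, the options above translate into an agent configuration along these lines. This is a sketch rather than a complete configuration: `connect.enabled` belongs on server agents, `enable_central_service_config` applies to client agents, and the gRPC port shown is the conventional default rather than a required value.

```hcl
# Illustrative agent configuration fragment.
connect {
  enabled = true
}

ports {
  # Envoy-based proxies receive their configuration over gRPC.
  grpc = 8502
}

# Allow proxy and service defaults to be managed centrally
# through configuration entries.
enable_central_service_config = true
```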
- -!> **Security note:** Enabling service mesh is enough to try the feature but doesn't -automatically ensure complete security. Please read the [service mesh production -tutorial](/consul/tutorials/developer-mesh/service-mesh-production-checklist) to understand the additional steps -needed for a secure deployment. - -## Centralized proxy and service configuration - -If your network contains many instances of the same service and many colocated sidecar proxies, you can specify global settings for proxies or services in [Configuration Entries](/consul/docs/agent/config-entries). You can override the centralized configurations for individual proxy instances in their -[sidecar service definitions](/consul/docs/connect/proxies/deploy-sidecar-services), -and the default protocols for service instances in their [service -definitions](/consul/docs/services/usage/define-services). - -## Schedulers - -Consul service mesh is especially useful if you are using an orchestrator like Nomad -or Kubernetes, because these orchestrators can deploy thousands of service instances -which frequently move hosts. Sidecars for each service can be configured through -these schedulers, and in some cases they can automate Consul configuration, -sidecar deployment, and service registration. - -### Nomad - -Consul service mesh can be used with Nomad to provide secure service-to-service -communication between Nomad jobs and task groups. The ability to use the dynamic -port feature of Nomad makes Consul service mesh particularly easy to use. Learn about how to -configure Consul service mesh on Nomad by reading the -[integration documentation](/consul/docs/connect/nomad). - -### Kubernetes - -The Consul Helm chart can automate much of Consul's service mesh configuration, and -makes it easy to automatically inject Envoy sidecars into new pods when they are -deployed. Learn about the [Helm chart](/consul/docs/k8s/helm) in general, -or if you are already familiar with it, check out its -[service mesh specific configurations](/consul/docs/k8s/connect). diff --git a/website/content/docs/connect/connect-internals.mdx b/website/content/docs/connect/connect-internals.mdx deleted file mode 100644 index 99541f012c3b..000000000000 --- a/website/content/docs/connect/connect-internals.mdx +++ /dev/null @@ -1,138 +0,0 @@ ---- -layout: docs -page_title: Service Mesh - How it Works -description: >- - Consul's service mesh enforces secure service communication using mutual TLS (mTLS) encryption and explicit authorization. Learn how the service mesh certificate authorities, intentions, and agents work together to provide Consul’s service mesh capabilities. ---- - -# How Service Mesh Works - -This topic describes how many of the core features of Consul's service mesh functionality works. -It is not a prerequisite, -but this information will help you understand how Consul service mesh behaves in more complex scenarios. - -The noun _connect_ is used throughout this documentation to refer to the connect -subsystem that provides Consul's service mesh capabilities. -Where you encounter the _noun_ connect, it is usually functionality specific to -service mesh. - -To try service mesh locally, complete the [Getting Started with Consul service -mesh](/consul/tutorials/kubernetes-deploy/service-mesh?utm_source=docs) -tutorial. - -## Mutual Transport Layer Security (mTLS) - -The core of Consul service mesh is based on [mutual TLS](https://en.wikipedia.org/wiki/Mutual_authentication). 
- -Consul service mesh provides each service with an identity encoded as a TLS certificate. -This certificate is used to establish and accept connections to and from other -services. The identity is encoded in the TLS certificate in compliance with -the [SPIFFE X.509 Identity Document](https://github.com/spiffe/spiffe/blob/master/standards/X509-SVID.md). -This enables Consul service mesh services to establish and accept connections with -other SPIFFE-compliant systems. - -The client service verifies the destination service certificate -against the [public CA bundle](/consul/api-docs/connect/ca#list-ca-root-certificates). -This is very similar to a typical HTTPS web browser connection. In addition -to this, the client provides its own client certificate to show its -identity to the destination service. If the connection handshake succeeds, -the connection is encrypted and authorized. - -The destination service verifies the client certificate against the [public CA -bundle](/consul/api-docs/connect/ca#list-ca-root-certificates). After verifying the -certificate, the next step depends upon the configured application protocol of -the destination service. TCP (L4) services must authorize incoming _connections_ -against the configured set of Consul [intentions](/consul/docs/connect/intentions), -whereas HTTP (L7) services must authorize incoming _requests_ against those same -intentions. If the intention check responds successfully, the -connection/request is established. Otherwise the connection/request is -rejected. - -To generate and distribute certificates, Consul has a built-in CA that -requires no other dependencies, and -also ships with built-in support for [Vault](/consul/docs/connect/ca/vault). The PKI system is designed to be pluggable -and can be extended to support any system by adding additional CA providers. - -All APIs required for Consul service mesh typically respond in microseconds and impose -minimal overhead to existing services. To ensure this, Consul service mesh-related API calls -are all made to the local Consul agent over a loopback interface, and all [agent -Connect endpoints](/consul/api-docs/agent/connect) implement local caching, background -updating, and support blocking queries. Most API calls operate on purely local -in-memory data. - -## Agent Caching and Performance - -To enable fast responses on endpoints such as the [agent connect -API](/consul/api-docs/agent/connect), the Consul agent locally caches most Consul service mesh-related -data and sets up background [blocking queries](/consul/api-docs/features/blocking) against -the server to update the cache in the background. This allows most API calls -such as retrieving certificates or authorizing connections to use in-memory -data and respond very quickly. - -All data cached locally by the agent is populated on demand. Therefore, if -Consul service mesh is not used at all, the cache does not store any data. On first request, -the data is loaded from the server and cached. The set of data cached is: public -CA root certificates, leaf certificates, intentions, and service discovery -results for upstreams. For leaf certificates and intentions, only data related -to the service requested is cached, not the full set of data. - -Further, the cache is partitioned by ACL token and datacenters. This is done -to minimize the complexity of the cache and prevent bugs where an ACL token -may see data it shouldn't from the cache. 
This results in higher memory usage -for cached data since it is duplicated per ACL token, but with the benefit -of simplicity and security. - -With Consul service mesh enabled, you'll likely see increased memory usage by the -local Consul agent. The total memory is dependent on the number of intentions -related to the services registered with the agent accepting Consul service mesh-based -connections. The other data (leaf certificates and public CA certificates) -is a relatively fixed size per service. In most cases, the overhead per -service should be relatively small: single digit kilobytes at most. - -The cache does not evict entries due to memory pressure. If memory capacity -is reached, the process will attempt to swap. If swap is disabled, the Consul -agent may begin failing and eventually crash. Cache entries do have TTLs -associated with them and will evict their entries if they're not used. Given -a long period of inactivity (3 days by default), the cache will empty itself. - -## Connections Across Datacenters - -A sidecar proxy's [upstream configuration](/consul/docs/connect/proxies/proxy-config-reference#upstream-configuration-reference) -may specify an alternative datacenter or a prepared query that can address services -in multiple datacenters (such as the [geo failover](/consul/tutorials/developer-discovery/automate-geo-failover) pattern). - -[Intentions](/consul/docs/connect/intentions) verify connections between services by -source and destination name seamlessly across datacenters. - -Connections can be made via gateways to enable communicating across network -topologies, allowing connections between services in each datacenter without -externally routable IPs at the service level. - -## Intention Replication - -Intention replication happens automatically but requires the -[`primary_datacenter`](/consul/docs/agent/config/config-files#primary_datacenter) -configuration to be set to specify a datacenter that is authoritative -for intentions. In production setups with ACLs enabled, the -[replication token](/consul/docs/agent/config/config-files#acl_tokens_replication) must also -be set in the secondary datacenter server's configuration. - -## Certificate Authority Federation - -The primary datacenter also acts as the root Certificate Authority (CA) for Consul service mesh. -The primary datacenter generates a trust-domain UUID and obtains a root certificate -from the configured CA provider which defaults to the built-in one. - -Secondary datacenters fetch the root CA public key and trust-domain ID from the -primary and generate their own key and Certificate Signing Request (CSR) for an -intermediate CA certificate. This CSR is signed by the root in the primary -datacenter and the certificate is returned. The secondary datacenter can now use -this intermediate to sign new Consul service mesh certificates in the secondary datacenter -without WAN communication. CA keys are never replicated between datacenters. - -The secondary maintains watches on the root CA certificate in the primary. If the -CA root changes for any reason such as rotation or migration to a new CA, the -secondary automatically generates new keys and has them signed by the primary -datacenter's new root before initiating an automatic rotation of all issued -certificates in use throughout the secondary datacenter. This makes CA root key -rotation fully automatic and with zero downtime across multiple datacenters. 
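To make the intention replication settings described above concrete, the following is a minimal sketch of a secondary datacenter's server agent configuration. It only illustrates the two options referenced in this section; the datacenter names and the replication token value are placeholders, not values taken from this documentation.

```hcl
# Server agent configuration for a secondary datacenter (illustrative values).
datacenter         = "dc2"

# Marks dc1 as authoritative for intentions and as the root CA for the mesh.
primary_datacenter = "dc1"

acl {
  enabled = true
  tokens {
    # Replication token required on secondary servers when ACLs are enabled.
    replication = "<replication-token>"
  }
}
```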
diff --git a/website/content/docs/connect/connectivity-tasks.mdx b/website/content/docs/connect/connectivity-tasks.mdx deleted file mode 100644 index bd5a4bc66f66..000000000000 --- a/website/content/docs/connect/connectivity-tasks.mdx +++ /dev/null @@ -1,72 +0,0 @@ ---- -layout: docs -page_title: Gateway Types -description: >- - Ingress, terminating, and mesh gateways are proxies that direct traffic into, out of, and inside of Consul's service mesh. Learn how these gateways enable different kinds of service-to-service communication. ---- - -# Types of Gateway Connections in a Service Mesh - -~> **Note**: The features shown below are extensions of Consul's service mesh capabilities. If you are not utilizing -Consul service mesh then these features will not be relevant to your task. - -## Service-to-service traffic between Consul datacenters - --> **1.6.0+:** This feature is available in Consul versions 1.6.0 and newer. - -Mesh gateways enable routing of service mesh traffic between different Consul datacenters. Those datacenters can reside -in different clouds or runtime environments where general interconnectivity between all services in all datacenters -isn't feasible. One scenario where this is useful is when connecting networks with overlapping IP address space. - -These gateways operate by sniffing the SNI header out of the mTLS connection and then routing the connection to the -appropriate destination based on the server name requested. - -As of Consul 1.8.0, mesh gateways can also forward gossip and RPC traffic between Consul servers. -This is enabled by [WAN federation via mesh gateways](/consul/docs/connect/gateways/mesh-gateway/wan-federation-via-mesh-gateways). - -As of Consul 1.14.0, mesh gateways can route both data-plane (service-to-service) and control-plane (consul-to-consul) traffic for peered clusters. -See [Mesh Gateways for Peering Control Plane Traffic](/consul/docs/connect/gateways/mesh-gateway/peering-via-mesh-gateways) - -For more information about mesh gateways, review the [complete documentation](/consul/docs/connect/gateways/mesh-gateway) -and the [mesh gateway tutorial](/consul/tutorials/developer-mesh/service-mesh-gateways). - -![Mesh Gateway Architecture](/img/mesh-gateways.png) - -## Traffic from outside the Consul service mesh to services in the mesh - --> **1.8.0+:** This feature is available in Consul versions 1.8.0 and newer. - -Ingress gateways are an entrypoint for outside traffic. They enable potentially unauthenticated ingress traffic from -services outside the Consul service mesh to services inside the service mesh. - -These gateways allow you to define what services should be exposed, on what port, and by what hostname. You configure -an ingress gateway by defining a set of listeners that can map to different sets of backing services. - -Ingress gateways are tightly integrated with Consul's L7 configuration and enable dynamic routing of HTTP requests by -attributes like the request path. - -For more information about ingress gateways, review the [complete documentation](/consul/docs/connect/gateways/ingress-gateway) -and the [ingress gateway tutorial](/consul/tutorials/developer-mesh/service-mesh-gateways). - -![Ingress Gateway Architecture](/img/ingress-gateways.png) - -## Traffic from services in the Consul service mesh to external services - --> **1.8.0+:** This feature is available in Consul versions 1.8.0 and newer. - -Terminating gateways enable connectivity from services in the Consul service mesh to services outside the mesh. 
-Services outside the mesh do not have sidecar proxies or are not [integrated natively](/consul/docs/connect/native). -These may be services running on legacy infrastructure or managed cloud services running on -infrastructure you do not control. - -Terminating gateways effectively act as egress proxies that can represent one or more services. They terminate service mesh -mTLS connections, enforce Consul intentions, and forward requests to the appropriate destination. - -These gateways also simplify authorization from dynamic service addresses. Consul's intentions determine whether -connections through the gateway are authorized. Then traditional tools like firewalls or IAM roles can authorize the -connections from the known gateway nodes to the destination services. - -For more information about terminating gateways, review the [complete documentation](/consul/docs/connect/gateways/terminating-gateway) -and the [terminating gateway tutorial](/consul/tutorials/developer-mesh/terminating-gateways-connect-external-services). - -![Terminating Gateway Architecture](/img/terminating-gateways.png) diff --git a/website/content/docs/connect/dataplane/consul-dataplane.mdx b/website/content/docs/connect/dataplane/consul-dataplane.mdx deleted file mode 100644 index 4bbc9602a74f..000000000000 --- a/website/content/docs/connect/dataplane/consul-dataplane.mdx +++ /dev/null @@ -1,181 +0,0 @@ ---- -layout: docs -page_title: Consul Dataplane CLI Reference -description: >- - Consul Dataplane runs as a separate binary controlled with the `consul-dataplane` CLI command. Learn how to use this command to configure your dataplane on Kubernetes with this reference guide and example code. ---- - -# Consul Dataplane CLI Reference - -The `consul-dataplane` command interacts with the binary for [simplified service mesh with Consul Dataplane](/consul/docs/connect/dataplane). Use this command to install Consul Dataplane, configure its Envoy proxies, and secure Dataplane deployments. - -## Usage - -Usage: `consul-dataplane [options]` - -### Requirements - -Consul Dataplane requires servers running Consul version `v1.14+`. To find a specific version of Consul, refer to [HashiCorp's Official Release Channels](https://www.hashicorp.com/official-release-channels). - -### Startup - -The following options are required when starting `consul-dataplane` with the CLI: - - - - -- `-addresses` -- `-service-node-name` -- `-proxy-service-id` - - - - - -- `-addresses` -- `-service-node-name` -- `-service-namespace` -- `-service-partition` -- `-proxy-service-id` - - - - - -### Command Options - -- `-addresses` - Consul server gRPC addresses. Can be a DNS name or an executable command. Accepted environment variable is `DP_CONSUL_ADDRESSES`. Refer to [go-netaddrs](https://github.com/hashicorp/go-netaddrs#summary) for details and examples. -- `-ca-certs` - The path to a file or directory containing CA certificates used to verify the server's certificate. Accepted environment variable is `DP_CA_CERTS`. -- `-consul-dns-bind-addr` - The address bound to the Consul DNS proxy. Default is `"127.0.0.1"`. Accepted environment variable is `DP_CONSUL_DNS_BIND_ADDR`. -- `-consul-dns-bind-port` - The port that the Consul DNS proxy listens on. Default is `-1`, which disables the DNS proxy. Accepted environment variable is `DP_CONSUL_DNS_BIND_PORT`. -- `-credential-type` - The type of credentials used to authenticate with Consul servers, either `"static"` or `"login"`. Accepted environment variable is `DP_CREDENTIAL_TYPE`. 
-- `-envoy-admin-bind-address` - The address the Envoy admin server is available on. Default is `"127.0.0.1"`. Accepted environment variable is `DP_ENVOY_ADMIN_BIND_ADDRESS`. -- `-envoy-admin-bind-port` - The port the Envoy admin server is available on. Default is `19000`. Accepted environment variable is `DP_ENVOY_ADMIN_BIND_PORT`. -- `-envoy-concurrency` - The number of worker threads that Envoy uses. Default is `2`. Accepted environment variable is `DP_ENVOY_CONCURRENCY`. -- `-envoy-ready-bind-address` - The address Envoy's readiness probe is available on. Accepted environment variable is `DP_ENVOY_READY_BIND_ADDRESS`. -- `-envoy-ready-bind-port` - The port Envoy's readiness probe is available on. Accepted environment variable is `DP_ENVOY_READY_BIND_PORT`. -- `-graceful-port` - The port to serve HTTP endpoints for graceful operations. Accepted environment variable is `DP_GRACEFUL_PORT`. -- `-graceful-shutdown-path` - The HTTP path to serve the graceful shutdown endpoint. Accepted environment variable is `DP_GRACEFUL_SHUTDOWN_PATH`. -- `-grpc-port` - The Consul server gRPC port to which `consul-dataplane` connects. Default is `8502`. Accepted environment variable is `DP_CONSUL_GRPC_PORT`. -- `-log-json` - Enables log messages in JSON format. Default is `false`. Accepted environment variable is `DP_LOG_JSON`. -- `-log-level` - Log level of the messages to print. Available log levels are `"trace"`, `"debug"`, `"info"`, `"warn"`, and `"error"`. Default is `"info"`. Accepted environment variable is `DP_LOG_LEVEL`. -- `-login-auth-method` - The auth method used to log in. Accepted environment variable is `DP_CREDENTIAL_LOGIN_AUTH_METHOD`. -- `-login-bearer-token` - The bearer token presented to the auth method. Accepted environment variable is `DP_CREDENTIAL_LOGIN_BEARER_TOKEN`. -- `-login-bearer-token-path` - The path to a file containing the bearer token presented to the auth method. Accepted environment variable is `DP_CREDENTIAL_LOGIN_BEARER_TOKEN_PATH`. -- `-login-datacenter` - The datacenter containing the auth method. Accepted environment variable is `DP_CREDENTIAL_LOGIN_DATACENTER`. -- `-login-meta` - A set of key/value pairs to attach to the ACL token. Each pair is formatted as `=`. This flag may be passed multiple times. Accepted environment variables are `DP_CREDENTIAL_LOGIN_META{1..9}`. -- `-login-namespace` - The Consul Enterprise namespace containing the auth method. Accepted environment variable is `DP_CREDENTIAL_LOGIN_NAMESPACE`. -- `-login-partition` - The Consul Enterprise partition containing the auth method. Accepted environment variable is `DP_CREDENTIAL_LOGIN_PARTITION`. -- `-proxy-service-id` - The proxy service instance's ID. Accepted environment variable is `DP_PROXY_SERVICE_ID`. -- `-proxy-service-id-path` - The path to a file containing the proxy service instance's ID. Accepted environment variable is `DP_PROXY_SERVICE_ID_PATH`. -- `-server-watch-disabled` - Prevent `consul-dataplane` from consuming the server update stream. Use this flag when Consul servers are behind a load balancer. Default is `false`. Accepted environment variable is `DP_SERVER_WATCH_DISABLED`. -- `-service-namespace` - The Consul Enterprise namespace in which the proxy service instance is registered. Accepted environment variable is `DP_SERVICE_NAMESPACE`. -- `-service-node-id` - The ID of the Consul node to which the proxy service instance is registered. Accepted environment variable is `DP_SERVICE_NODE_ID`. 
-- `-service-node-name` - The name of the Consul node to which the proxy service instance is registered. Accepted environment variable is `DP_SERVICE_NODE_NAME`. -- `-service-partition` - The Consul Enterprise partition in which the proxy service instance is registered. Accepted environment variable is `DP_SERVICE_PARTITION`. -- `-shutdown-drain-listeners` - Wait for proxy listeners to drain before terminating the proxy container. Accepted environment variable is `DP_SHUTDOWN_DRAIN_LISTENERS`. -- `-shutdown-grace-period-seconds` - Amount of time to wait after receiving a SIGTERM signal before terminating the proxy. Accepted environment variable is `DP_SHUTDOWN_GRACE_PERIOD_SECONDS`. -- `-static-token` - The ACL token used to authenticate requests to Consul servers when `-credential-type` is set to `"static"`. Accepted environment variable is `DP_CREDENTIAL_STATIC_TOKEN`. -- `-telemetry-prom-ca-certs-path` - The path to a file or directory containing CA certificates used to verify the Prometheus server's certificate. Accepted environment variable is `DP_TELEMETRY_PROM_CA_CERTS_PATH`. -- `-telemetry-prom-cert-file` - The path to the client certificate used to serve Prometheus metrics. Accepted environment variable is `DP_TELEMETRY_PROM_CERT_FILE`. -- `-telemetry-prom-key-file` - The path to the client private key used to serve Prometheus metrics. Accepted environment variable is `DP_TELEMETRY_PROM_KEY_FILE`. -- `-telemetry-prom-merge-port` - The local port used to serve merged Prometheus metrics. Default is `20100`. If your service instance uses the same default port, this flag must be set to a different port in order to avoid a port conflict. Accepted environment variable is `DP_TELEMETRY_PROM_MERGE_PORT`. -- `-telemetry-prom-retention-time` - The duration for Prometheus metrics aggregation. Default is `1m0s`. Accepted environment variable is `DP_TELEMETRY_PROM_RETENTION_TIME`. Refer to [`prometheus_retention_time`](/consul/docs/agent/config/config-files#telemetry-prometheus_retention_time) for details on setting this value. -- `-telemetry-prom-scrape-path` - The URL path where Envoy serves Prometheus metrics. Default is `"/metrics"`. Accepted environment variable is `DP_TELEMETRY_PROM_SCRAPE_PATH`. -- `-telemetry-prom-service-metrics-url` - The URL where your service instance serves Prometheus metrics. If this is set, the metrics at this URL are included in Consul Dataplane's merged Prometheus metrics. Accepted environment variable is `DP_TELEMETRY_PROM_SERVICE_METRICS_URL`. -- `-telemetry-use-central-config` - Controls whether the proxy applies the central telemetry configuration. Default is `true`. Accepted environment variable is `DP_TELEMETRY_USE_CENTRAL_CONFIG`. -- `-tls-cert` - The path to a client certificate file. This flag is required if `tls.grpc.verify_incoming` is enabled on the server. Accepted environment variable is `DP_TLS_CERT`. -- `-tls-disabled` - Communicate with Consul servers over a plaintext connection. Useful for testing, but not recommended for production. Default is `false`. Accepted environment variable is `DP_TLS_DISABLED`. -- `-tls-insecure-skip-verify` - Do not verify the server's certificate. Useful for testing, but not recommended for production. Default is `false`. `DP_TLS_INSECURE_SKIP_VERIFY`. -- `-tls-key` - The path to a client private key file. This flag is required if `tls.grpc.verify_incoming` is enabled on the server. Accepted environment variable is `DP_TLS_KEY`. -- `-tls-server-name` - The hostname to expect in the server certificate's subject. 
This flag is required if `-addresses` is not a DNS name. Accepted environment variable is `DP_TLS_SERVER_NAME`. -- `-version` - Print the current version of `consul-dataplane`. -- `-xds-bind-addr` - The address the Envoy xDS server is available on. Default is `"127.0.0.1"`. Accepted environment variable is `DP_XDS_BIND_ADDR`. -- `-xds-bind-port` - The port on which the Envoy xDS server is available. Default is `0`. When set to `0`, an available port is selected at random. Accepted environment variable is `DP_XDS_BIND_PORT`. - -## Examples - -### DNS - -Consul Dataplane resolves a domain name to discover Consul server IP addresses. - - ```shell-session - $ consul-dataplane -addresses my.consul.example.com - ``` - -### Executable Command - -Consul Dataplane runs a script that, on success, returns one or more IP addresses separated by whitespace. - - ```shell-session - $ ./my-script.sh - 172.20.0.1 - 172.20.0.2 - 172.20.0.3 - - $ consul-dataplane -addresses "exec=./my-script.sh" - ``` - -### Go Discover Nodes for Cloud Providers - -The [`go-discover`](https://github.com/hashicorp/go-discover) binary is included in the `hashicorp/consul-dataplane` image for use with this mode of server discovery, which functions in - a way similar to [Cloud Auto-join](/consul/docs/install/cloud-auto-join). The - following example demonstrates how to use the `go-discover` binary with Consul Dataplane. - - ```shell-session - $ consul-dataplane -addresses "exec=discover -q addrs provider=aws region=us-west-2 tag_key=consul-server tag_value=true" - ``` - -### Static token - -A static ACL token is passed to Consul Dataplane. - - ```shell-session - $ consul-dataplane -credential-type "static"` -static-token "12345678-90ab-cdef-0000-12345678abcd" - ``` - -### Auth method login - -Consul Dataplane logs in to one of Consul's supported [auth methods](/consul/docs/security/acl/auth-methods). - - - - - ```shell-session - $ consul-dataplane -credential-type "login" - -login-auth-method \ - -login-bearer-token \ ## Or -login-bearer-token-path - -login-datacenter \ - -login-meta key1=val1 -login-meta key2=val2 \ - ``` - - - - - - ```shell-session - $ consul-dataplane -credential-type "login" - -login-auth-method \ - -login-bearer-token \ ## Or -login-bearer-token-path - -login-datacenter \ - -login-meta key1=val1 -login-meta key2=val2 \ - -login-namespace \ - -login-partition - ``` - - - - -### Consul Servers Behind a Load Balancer - -When Consul servers are behind a load balancer, you must pass `-server-watch-disabled` to Consul -Dataplane. - -```shell-session -$ consul-dataplane -server-watch-disabled -``` - -By default, Consul Dataplane opens a server watch stream to a Consul server, which enables the server -to inform Consul Dataplane of new or different Consul server addresses. However, if Consul Dataplane -is connecting through a load balancer, then it must ignore the Consul server addresses that are -returned from the server watch stream. diff --git a/website/content/docs/connect/dataplane/index.mdx b/website/content/docs/connect/dataplane/index.mdx deleted file mode 100644 index e7a386324759..000000000000 --- a/website/content/docs/connect/dataplane/index.mdx +++ /dev/null @@ -1,163 +0,0 @@ ---- -layout: docs -page_title: Simplified Service Mesh with Consul Dataplane -description: >- - Consul Dataplane removes the need to run a client agent for service discovery and service mesh by leveraging orchestrator functions. 
Learn about Consul Dataplane, how it can lower latency for Consul on Kubernetes and AWS ECS, and how it enables Consul support for AWS Fargate and GKE Autopilot. ---- - -# Simplified Service Mesh with Consul Dataplane - -This topic provides an overview of Consul Dataplane, a lightweight process for managing Envoy proxies. Consul Dataplane removes the need to run client agents on every node in a cluster for service discovery and service mesh. Instead, Consul deploys sidecar proxies that provide lower latency, support additional runtimes, and integrate with cloud infrastructure providers. - -## Supported environments - -- Dataplanes can connect to Consul servers v1.14.0 and newer. -- Dataplanes on Kubernetes requires Consul K8s v1.0.0 and newer. -- Dataplanes on AWS Elastic Container Services (ECS) requires Consul ECS v0.7.0 and newer. - -## What is Consul Dataplane? - -When deployed to virtual machines or bare metal environments, the Consul control plane requires _server agents_ and _client agents_. Server agents maintain the service catalog and service mesh, including its security and consistency, while client agents manage communications between service instances, their sidecar proxies, and the servers. While this model is optimal for applications deployed on virtual machines or bare metal servers, orchestrators such as Kubernetes and ECS have native components that support health checking and service location functions typically provided by the client agent. - -Consul Dataplane manages Envoy proxies and leaves responsibility for other functions to the orchestrator. As a result, it removes the need to run client agents on every node. In addition, services no longer need to be reregistered to a local client agent after restarting a service instance, as a client agent’s lack of access to persistent data storage in container-orchestrated deployments is no longer an issue. - -The following diagram shows how Consul Dataplanes facilitate service mesh in a Kubernetes-orchestrated environment. - -![Diagram of Consul Dataplanes in Kubernetes deployment](/img/k8s-dataplanes-architecture.png) - -### Impact on performance - -Consul Dataplanes replace node-level client agents and function as sidecars attached to each service instance. Dataplanes handle communication between Consul servers and Envoy proxies, using fewer resources than client agents. Consul servers need to consume additional resources in order to generate xDS resources for Envoy proxies. - -As a result, small deployments require fewer overall resources. For especially large deployments or deployments that expect to experience high levels of churn, consider the following impacts to your network's performance: - -1. In our internal tests, which used 5000 proxies and services flapping every 2 seconds, additional CPU utilization remained under 10% on the control plane. -1. As you deploy more services, the resource usage for dataplanes grows on a linear scale. -1. Envoy reconfigurations are rate limited to prevent excessive configuration changes from generating significant load on the servers. -1. To avoid generating significant load on an individual server, proxy configuration is load balanced proactively. -1. The frequency of the orchestrator's liveness and readiness probes determine how quickly Consul's control plane can become aware of failures. There is no impact on service mesh applications, however, as Envoy proxies have a passive ability to detect endpoint failure and steer traffic to healthy instances. 
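As an illustration of the last point, on Kubernetes the probe interval in the pod spec bounds how quickly failures surface to the control plane. The following readiness probe is only a sketch; the path, port, and timing values are assumptions chosen for illustration:

```yaml
# Illustrative readiness probe; shorter periods surface failed instances
# to Consul's control plane sooner. All values below are example values.
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  periodSeconds: 5
  failureThreshold: 2
```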
- -## Benefits - -**Fewer networking requirements**: Without client agents, Consul does not require bidirectional network connectivity across multiple protocols to enable gossip communication. Instead, it requires a single gRPC connection to the Consul servers, which significantly simplifies requirements for the operator. - -**Simplified set up**: Because there are no client agents to engage in gossip, you do not have to generate and distribute a gossip encryption key to agents during the initial bootstrapping process. Securing agent communication also becomes simpler, with fewer tokens to track, distribute, and rotate. - -**Additional environment and runtime support**: Consul on Kubernetes versions _prior_ to v1.0 (Consul v1.14) require the use of hostPorts and DaemonSets for client agents, which limits Consul’s ability to be deployed in environments where those features are not supported. -As of Consul on Kubernetes version 1.0 (Consul 1.14) with the new Consul Dataplane, `hostPorts` are no longer required and Consul now supports AWS Fargate and GKE Autopilot. - -**Easier upgrades**: With Consul Dataplane, updating Consul to a new version no longer requires upgrading client agents. Consul Dataplane also has better compatibility across Consul server versions, so the process to upgrade Consul servers becomes easier. - -## Get started - -To get started with Consul Dataplane, use the following reference resources: - -- For `consul-dataplane` commands and usage examples, including required flags for startup, refer to the [`consul-dataplane` CLI reference](/consul/docs/connect/dataplane/consul-dataplane). -- For Helm chart information, refer to the [Helm Chart reference](/consul/docs/k8s/helm). -- For Envoy, Consul, and Consul Dataplane version compatibility, refer to the [Envoy compatibility matrix](/consul/docs/connect/proxies/envoy). -- For Consul on ECS workloads, refer to [Consul on AWS Elastic Container Service (ECS) Overview](/consul/docs/ecs). - -### Installation - - - - - -To install Consul Dataplane, set `VERSION` to `1.0.0` and then follow the instructions to install a specific version of Consul [with the Helm Chart](/consul/docs/k8s/installation/install#install-consul) or [with the Consul-k8s CLI](/consul/docs/k8s/installation/install-cli#install-a-previous-version). - -#### Helm - -```shell-session -$ export VERSION=1.0.0 -$ helm install consul hashicorp/consul --set global.name=consul --version ${VERSION} --create-namespace --namespace consul -``` - -#### Consul-k8s CLI - -```shell-session -$ export VERSION=1.0.0 && \ - curl --location "https://releases.hashicorp.com/consul-k8s/${VERSION}/consul-k8s_${VERSION}_darwin_amd64.zip" --output consul-k8s-cli.zip -``` - - - - -Refer to the following documentation for Consul on ECS workloads: - -- [Deploy Consul with the Terraform module](/consul/docs/ecs/deploy/terraform) -- [Deploy Consul manually](/consul/docs/ecs/deploy/manual) - - - - - -### Namespace ACL permissions - -If ACLs are enabled, exported services between partitions that use dataplanes may experience errors when you define namespace partitions with the `*` wildcard. Consul dataplanes use a token with the `builtin/service` policy attached, but this policy does not include access to all namespaces. 
- -Add the following policies to the service token attached to Consul dataplanes to grant Consul access to exported services across all namespaces: - -```hcl -partition "default" { - namespace "default" { - query_prefix "" { - policy = "read" - } - } -} - -partition_prefix "" { - namespace_prefix "" { - node_prefix "" { - policy = "read" - } - service_prefix "" { - policy = "read" - } - } -} -``` - -### Upgrading - - - - - -Before you upgrade Consul to a version that uses Consul Dataplane, you must edit your Helm chart so that client agents are removed from your deployments. Refer to [upgrading to Consul Dataplane](/consul/docs/k8s/upgrade#upgrading-to-consul-dataplanes) for more information. - - - - - -Refer to [Upgrade to dataplane architecture](/consul/docs/ecs/upgrade-to-dataplanes) for instructions. - - - - - -## Feature support - -Consul Dataplane on Kubernetes supports the following features: - -- Single and multi-cluster installations, including those with WAN federation, cluster peering, and admin partitions are supported. -- Ingress, terminating, and mesh gateways are supported. -- Running Consul service mesh in AWS Fargate and GKE Autopilot is supported. -- xDS load balancing is supported. -- Servers running in Kubernetes and servers external to Kubernetes are both supported. -- HCP Consul Dedicated is supported. -- Consul API Gateway - -Consul Dataplane on ECS support the following features: - -- Single and multi-cluster installations, including those with WAN federation, cluster peering, and admin partitions -- Mesh gateways -- Running Consul service mesh in AWS Fargate and EC2 -- xDS load balancing -- Self-managed Enterprise and HCP Consul Dedicated servers - -### Technical Constraints - -- Consul Dataplane is not supported on Windows. -- Consul Dataplane requires the `NET_BIND_SERVICE` capability. Refer to [Set capabilities for a Container](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-capabilities-for-a-container) in the Kubernetes Documentation for more information. -- When ACLs are enabled, dataplanes use the [service token](/consul/docs/security/acl/tokens/create/create-a-service-token) and the `builtin/service` policy for their default permissions. diff --git a/website/content/docs/connect/dataplane/telemetry.mdx b/website/content/docs/connect/dataplane/telemetry.mdx deleted file mode 100644 index ce111e4872ba..000000000000 --- a/website/content/docs/connect/dataplane/telemetry.mdx +++ /dev/null @@ -1,43 +0,0 @@ ---- -layout: docs -page_title: Consul Dataplane - Enable Telemetry Metrics -description: >- - Configure telemetry to collect metrics you can use to debug and observe Consul Dataplane behavior and performance. ---- - -# Consul Dataplane Telemetry - -Consul Dataplane collects metrics about its own status and performance. -The following external metrics stores are supported: - -- [DogstatsD](https://docs.datadoghq.com/developers/dogstatsd/) -- [Prometheus](https://prometheus.io/docs/prometheus/latest/) -- [StatsD](https://github.com/statsd/statsd) - -Consul Dataplane uses the same external metrics store that is configured for Envoy. To enable -telemetry for Consul Dataplane, enable telemetry for Envoy by specifying an external metrics store -in the proxy-defaults configuration entry or directly in the proxy.config field of the proxy service -definition. Refer to the [Envoy bootstrap -configuration](/consul/docs/connect/proxies/envoy#bootstrap-configuration) for details. 
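As an illustrative sketch, a proxy-defaults configuration entry that points Envoy (and therefore Consul Dataplane) at a DogstatsD sink might look like the following; the sink address is an assumed example value:

```hcl
Kind = "proxy-defaults"
Name = "global"
Config {
  # Envoy and Consul Dataplane emit their metrics to this DogstatsD sink.
  # The address is an example value only.
  envoy_dogstatsd_url = "udp://127.0.0.1:8125"
}
```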
- -## Prometheus Metrics Merging - -When Prometheus metrics are used, Consul Dataplane configures Envoy to serve merged metrics through -a single endpoint. Metrics from the following sources are collected and merged: - -- Consul Dataplane -- The Envoy process managed by Consul Dataplane -- (optionally) Your service instance running alongside Consul Dataplane - -## Metrics Reference - -Consul Dataplane supports the following metrics: - -| Metric Name | Description | Unit | Type | -| :------------------------------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :--------------- | :------ | -| `consul_dataplane.connect_duration` | Measures the time `consul-dataplane` spends connecting to a Consul server, including the time to discover Consul server addresses and to complete other setup prior to Envoy opening the xDS stream. | ms | timer | -| `consul_dataplane.connected` | Indicates whether `consul-dataplane` is currently connected to a Consul server. | 1 or 0 | gauge | -| `consul_dataplane.connection_errors` | Measures the number of errors encountered on gRPC streams. This is labeled with the gRPC error status code. | number of errors | gauge | -| `consul_dataplane.discover_servers_duration` | Measures the time `consul-dataplane` spends discovering Consul server IP addresses. | ms | timer | -| `consul_dataplane.envoy_connected` | Indicates whether Envoy is currently connected to `consul-dataplane` and able to receive xDS updates. | 1 or 0 | gauge | -| `consul_dataplane.login_duration` | Measures the time `consul-dataplane` spends logging in to an ACL auth method. | ms | timer | diff --git a/website/content/docs/connect/dev.mdx b/website/content/docs/connect/dev.mdx deleted file mode 100644 index e31a2f423a61..000000000000 --- a/website/content/docs/connect/dev.mdx +++ /dev/null @@ -1,65 +0,0 @@ ---- -layout: docs -page_title: Service Mesh Debugging -description: >- - Use the `consul connect proxy` command to connect to services or masquerade as other services for development and debugging purposes. Example code demonstrates connecting to services that are part of the service mesh as listeners only. ---- - -# Service Mesh Debugging - -It is often necessary to connect to a service for development or debugging. -If a service only exposes a service mesh listener, then we need a way to establish -a mutual TLS connection to the service. The -[`consul connect proxy` command](/consul/commands/connect/proxy) can be used -for this task on any machine with access to a Consul agent (local or remote). - -Restricting access to services only via service mesh ensures that the only way to -connect to a service is through valid authorization of the -[intentions](/consul/docs/connect/intentions). This can extend to developers -and operators, too. - -## Connecting to Mesh-only Services - -As an example, let's assume that we have a PostgreSQL database running that -we want to connect to via `psql`, but the only non-loopback listener is -via Connect. Let's also assume that we have an ACL token to identify as -`operator-mitchellh`. We can start a local proxy: - -```shell-session -$ consul connect proxy \ - -service operator-mitchellh \ - -upstream postgresql:8181 -``` - -This works because the source `-service` does not need to be registered -in the local Consul catalog. 
However, to retrieve a valid identifying -certificate, the ACL token must have `service:write` permissions. This -can be used as a sort of "debug service" to represent people, too. In -the example above, the proxy is identifying as `operator-mitchellh`. - -With the proxy running, we can now use `psql` like normal: - -```shell-session -$ psql --host=127.0.0.1 --port=8181 --username=mitchellh mydb -> -``` - -This `psql` session is now happening through our local proxy via an -authorized mutual TLS connection to the PostgreSQL service in our Consul -catalog. - -### Masquerading as a Service - -You can also easily masquerade as any source service by setting the -`-service` value to any service. Note that the proper ACL permissions are -required to perform this task. - -For example, if you have an ACL token that allows `service:write` for -`web` and you want to connect to the `postgresql` service as "web", you -can start a proxy like so: - -```shell-session -$ consul connect proxy \ - -service web \ - -upstream postgresql:8181 -``` diff --git a/website/content/docs/connect/distributed-tracing.mdx b/website/content/docs/connect/distributed-tracing.mdx deleted file mode 100644 index ffe5ef033bb6..000000000000 --- a/website/content/docs/connect/distributed-tracing.mdx +++ /dev/null @@ -1,265 +0,0 @@ ---- -layout: docs -page_title: Service Mesh Distributed Tracing -description: >- - Distributed tracing tracks the path of a request as it traverses the service mesh. Consul supports distributed tracing for applications that have it implemented. Learn how to integrate tracing libraries in your application and configure Consul to participate in that tracing. ---- - -# Distributed Tracing - -Distributed tracing is a way to track and correlate requests across microservices. Distributed tracing must first -be implemented in each application, it cannot be added by Consul. Once implemented in your applications, adding -distributed tracing to Consul will add the sidecar proxies as spans in the request path. - -## Application Changes - -Consul alone cannot implement distributed tracing for your applications. Each application must propagate the required -headers. Typically this is done using a tracing library such as: - -- https://github.com/opentracing/opentracing-go -- https://github.com/DataDog/dd-trace-go -- https://github.com/openzipkin/zipkin-go - -## Configuration - -Once your applications have been instrumented with a tracing library, you are ready to configure Consul to add sidecar -proxy spans to the trace. Your eventual config will look something like: - - - -```hcl -Kind = "proxy-defaults" -Name = "global" -Config { - protocol = "http" - envoy_tracing_json = < - --> **NOTE:** This example uses a [proxy defaults](/consul/docs/connect/config-entries/proxy-defaults) configuration entry, which applies to all proxies, -but you can also apply the configuration in the -[`proxy` block of your service configuration](/consul/docs/connect/proxies/proxy-config-reference#proxy-parameters). The proxy service registration is not supported on Kubernetes. - -Within the config there are two keys you need to customize: - -1. [`envoy_tracing_json`](/consul/docs/connect/proxies/envoy#envoy_tracing_json): Sets the tracing configuration for your specific tracing type. - See the [Envoy tracers documentation](https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/trace/trace) for your - specific collector's configuration. This configuration will reference the cluster name defined in `envoy_extra_static_clusters_json`. 
-1. [`envoy_extra_static_clusters_json`](/consul/docs/connect/proxies/envoy#envoy_extra_static_clusters_json): Defines the address - of your tracing collector where Envoy will send its spans. In this example the URL was `collector-url:9411`. - -## Applying the configuration - -This configuration only applies when proxies are _restarted_ since it changes the _bootstrap_ config for Envoy -which can only be applied on startup. This means you must restart all your proxies for changes to this -config to take effect. - --> **Note:** On Kubernetes this is a matter of restarting your deployments, e.g. `kubectl rollout restart deploy/deploy-name`. - -## Considerations - -1. Distributed tracing is only supported for HTTP and gRPC services. You must specify the protocol either globally - via a proxy defaults config entry: - - - - ```hcl - Kind = "proxy-defaults" - Name = "global" - Config { - protocol = "http" - } - ``` - - ```yaml - apiVersion: consul.hashicorp.com/v1alpha1 - kind: ProxyDefaults - metadata: - name: global - spec: - config: - protocol: http - ``` - - ```json - { - "Kind": "proxy-defaults", - "Name": "global", - "Config": { - "protocol": "http" - } - } - ``` - - - - Or via a service defaults config entry for each service: - - - - ```hcl - Kind = "service-defaults" - Name = "service-name" - Protocol = "http" - ``` - - ```yaml - apiVersion: consul.hashicorp.com/v1alpha1 - kind: ServiceDefaults - metadata: - name: service-name - spec: - protocol: http - ``` - - ```json - { - "Kind": "service-defaults", - "Name": "service-name", - "Protocol": "http" - } - ``` - - - -1. Requests through [Ingress Gateways](/consul/docs/connect/gateways/ingress-gateway) will not be traced unless the header - `x-client-trace-id: 1` is set (see [hashicorp/consul#6645](https://github.com/hashicorp/consul/issues/6645)). - -1. Consul's proxies do not currently support [OpenTelemetry](https://opentelemetry.io/) spans, as Envoy has not - [fully implemented](https://github.com/envoyproxy/envoy/issues/9958) it. Instead, you can add - OpenTelemetry libraries to your application to emit spans for other - [tracing protocols](https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/observability/tracing) - supported by Envoy, such as Zipkin or Jaeger. - -1. Tracing is only supported with Envoy proxies, not the built-in proxy. - -1. When configuring the Zipkin tracer in `envoy_tracing_json`, set [`trace_id_128bit`](https://www.envoyproxy.io/docs/envoy/v1.21.0/api-v3/config/trace/v3/zipkin.proto#envoy-v3-api-field-config-trace-v3-zipkinconfig-trace-id-128bit) to `true` if your application is configured to generate 128-bit trace IDs. For example: - - - - ```json - { - "http": { - "name": "envoy.tracers.zipkin", - "typedConfig": { - "@type": "type.googleapis.com/envoy.config.trace.v3.ZipkinConfig", - "collector_cluster": "zipkin", - "collector_endpoint_version": "HTTP_JSON", - "collector_endpoint": "/api/v2/spans", - "shared_span_context": false, - "trace_id_128bit": true - } - } - } - ``` - - diff --git a/website/content/docs/connect/ecs.mdx b/website/content/docs/connect/ecs.mdx new file mode 100644 index 000000000000..ac58fb713b0a --- /dev/null +++ b/website/content/docs/connect/ecs.mdx @@ -0,0 +1,79 @@ +--- +layout: docs +page_title: Connect ECS services with Consul +description: >- + Consul documentation provides reference material for all features and options available in Consul. 
+--- + +# Connect ECS services with Consul + +This topic describes how to configure routes between tasks after registering the tasks to Consul service mesh. + +## Overview + +To enable tasks to call through the service mesh, complete the following steps: + +1. Configure the sidecar proxy to listen on a different port for each upstream service your application needs to call. +1. Modify your application to make requests to the sidecar proxy on the specified port. + +## Requirements + +Consul service mesh must be deployed to ECS before you can bind a network address. For more information, refer to the following topics: + +- [Deploy Consul to ECS using the Terraform module](/consul/docs/deploy/server/ecs) +- [Deploy Consul to ECS manually](/consul/docs/deploy/server/ecs/manual) + +## Configure the sidecar proxy + +Add the `upstreams` block to your application configuration and specify the following fields: + +- `destinationName`: Specifies the name of the upstream service as it is registered in the Consul service catalog. +- `localBindPort`: Specifies the port that the proxy forwards requests to. You must specify an unused port but it does not need to match the upstream service port. + +In the following example, the route from an application named `web` to an application named `backend` goes through port `8080`: + +```hcl +module "web" { + family = "web" + upstreams = [ + { + destinationName = "backend" + localBindPort = 8080 + } + ] +} +``` + +You must include all upstream services in the `upstream` configuration. + +## Configure your application + +Use an appropriate environment variable in your container definition to configure your application to call the upstream service at the loopback address. + +In the following example, the `web` application calls the `backend` service by sending requests to the +`BACKEND_URL` environment variable: + +```hcl +module "web" { + family = "web" + upstreams = [ + { + destinationName = "backend" + localBindPort = 8080 + } + ] + container_definitions = [ + { + name = "web" + environment = [ + { + name = "BACKEND_URL" + value = "http://localhost:8080" + } + ] + ... + } + ] + ... +} +``` \ No newline at end of file diff --git a/website/content/docs/connect/enable.mdx b/website/content/docs/connect/enable.mdx new file mode 100644 index 000000000000..41fc41be5f17 --- /dev/null +++ b/website/content/docs/connect/enable.mdx @@ -0,0 +1,86 @@ +--- +layout: docs +page_title: Enable service mesh +description: >- + Learn how to enable and configure Consul's service mesh capabilities in agent configurations. +--- + +# Enable service mesh + +This page describes the process to enable Consul's service mesh features. + +For more information about configurable options in the service mesh and the process to full bootstrap Consul's service mesh, refer to [Connect services](/consul/docs/connect). + +## Enable mesh in server agent configuration + +Consul's service mesh features are not enabled by default when running Consul on virtual machines. To enable the service mesh, you must change the configuration of your Consul servers. You do not need to change client agent configurations in order to use the service mesh. + +To enable Consul's service mesh, set `connect.enabled` to `true` in a new or existing [agent configuration file](/consul/docs/reference/agent). + +Service mesh is enabled by default on Kubernetes deployments. 
+ + + + + +```hcl +connect { + enabled = true +} +``` + + + + + +```json +{ + "connect": { + "enabled": true + } +} +``` + + + + + +```yaml +server: + connect: + enabled: true +``` + + + + + +## Apply configuration to Consul + +After you update your cluster's configuration, the Consul agent must restart before the service mesh is enabled. + +On VM deployments, restart each server in the cluster one at a time in order to maintain the cluster's availability. + +On Kubernetes deployments, you can run the following command to apply the configuration to your deployment: + +```shell-session +$ kubectl apply -f values.yaml +``` + +If you use the Consul on Kubernetes CLI, you can run the following command instead: + +```shell-session +$ consul-k8s upgrade -config-file values.yaml +``` + +For information about the `consul-k8s` CLI and how to install it, refer to [Install Consul on Kubernetes from Consul K8s CLI](/consul/docs/reference/cli/consul-k8s) + +## Next steps + +After you enable Consul's service mesh, enable the built-in certificate authority to ensure secure service-to-service communication and configure defaults settings for the Envoy proxies in the service mesh. You can also enable Consul's Access Control List (ACL) system to provide additional security. + +Refer to the following topics for more information: + +- [Bootstrap Consul's built-in CA](/consul/docs/secure-mesh/certificate/bootstrap) +- [Configure proxy defaults](/consul/docs/connect/proxy) +- [Enable Consul's ACL system](/consul/docs/secure/acl) \ No newline at end of file diff --git a/website/content/docs/connect/gateways/api-gateway/configuration/gateway.mdx b/website/content/docs/connect/gateways/api-gateway/configuration/gateway.mdx deleted file mode 100644 index 2ea1d1e64eeb..000000000000 --- a/website/content/docs/connect/gateways/api-gateway/configuration/gateway.mdx +++ /dev/null @@ -1,230 +0,0 @@ ---- -layout: docs -page_title: Gateway Resource Configuration -description: >- - Learn how to configure the `Gateway` resource to define how the Consul API Gateway handles incoming service mesh traffic with this configuration model and reference specifications. ---- - -# Gateway Resource Configuration - -This topic provides full details about the `Gateway` resource. - -## Introduction - -A `Gateway` is an instance of network infrastructure that determines how service traffic should be handled. A `Gateway` contains one or more [`listeners`](#listeners) that bind to a set of IP addresses. An `HTTPRoute` or `TCPRoute` can then attach to a gateway listener to direct traffic from the gateway to a service. - -Gateway instances derive their configurations from the [`GatewayClass`](/consul/docs/connect/gateways/api-gateway/configuration/gatewayclass) resource, which acts as a template for individual `Gateway` deployments. Refer to [GatewayClass](/consul/docs/connect/gateways/api-gateway/configuration/gatewayclass) for additional information. - -Specify the following parameters to declare a `Gateway`: - -| Parameter | Description | Required | -| :----------- |:---------------------------------------------------------------------------------------------------------------------------------------------------------- |:-------- | -| `kind` | Specifies the type of configuration object. The value should always be `Gateway`. | Required | -| `description` | Human-readable string that describes the purpose of the `Gateway`. | Optional | -| `version ` | Specifies the Kubernetes API version. 
The value should always be `gateway.networking.k8s.io/v1alpha2` | Required | -| `scope` | Specifies the effective scope of the Gateway. The value should always be `namespaced`. | Required | -| `fields` | Specifies the configurations for the Gateway. The fields are listed in the [configuration model](#configuration-model). Details for each field are described in the [specification](#specification). | Required | - - -## Configuration model - -The following outline shows how to format the configurations in the `Gateway` object. Click on a property name to view details about the configuration. - -* [`gatewayClassName`](#gatewayclassname): string | required -* [`listeners`](#listeners): array of objects | required - * [`allowedRoutes`](#listeners-allowedroutes): object | required - * [`namespaces`](#listeners-allowedroutes-namespaces): object | required - * [`from`](#listeners-namespaces-from): string | required - * [`selector`](#listeners-allowedroutes-namespaces-selector): object | required if `from` is configured to `selector` - * [`matchExpressions`](#listeners-allowedroutes-namespaces-selector-matchexpressions): array of objects | required if `matchLabels` is not configured - * [`key`](#listeners-allowedroutes-namespaces-selector-matchexpressions): string | required if `matchExpressions` is declared - * [`operator`](#listeners-allowedroutes-namespaces-selector-matchexpressions): string | required if `matchExpressions` is declared - * [`values`](#listeners-allowedroutes-namespaces-selector-matchexpressions): array of strings | required if `matchExpressions` is declared - * [`matchLabels`](#listeners-allowedroutes-namespaces-selector-matchlabels): map of strings | required if `matchExpressions` is not configured - * [`hostname`](#listeners-hostname): string | required - * [`name`](#listeners-name): string | required - * [`port`](#listeners-port): integer | required - * [`protocol`](#listeners-protocol): string | required - * [`tls`](#listeners-tls): object | required if `protocol` is set to `HTTPS` - * [`certificateRefs`](#listeners-tls): array or objects | required if `tls` is declared - * [`name`](#listeners-tls): string | required if `certificateRefs` is declared - * [`namespace`](#listeners-tls): string | required if `certificateRefs` is declared - * [`mode`](#listeners-tls): string | required if `certificateRefs` is declared - * [`options`](#listeners-tls): map of strings | optional - -## Specification - -This topic provides details about the configuration parameters. - -### gatewayClassName -Specifies the name of the [`GatewayClass`](/consul/docs/connect/gateways/api-gateway/configuration/gatewayclass) resource used for the `Gateway` instance. Unless you are using a custom [GatewayClass](/consul/docs/connect/gateways/api-gateway/configuration/gatewayclass), this value should be set to `consul`. -* Type: string -* Required: required - -### listeners -Specifies the `listeners` associated with the `Gateway`. At least one `listener` must be specified. Each `listener` within a `Gateway` must have a unique combination of `hostname`, `port`, and `protocol`. -* Type: array of objects -* Required: required - -### listeners.allowedRoutes -Specifies a `namespace` object that defines the types of routes that may be attached to a listener. -* Type: object -* Required: required - -### listeners.allowedRoutes.namespaces -Determines which routes are allowed to attach to the `listener`. Only routes in the same namespace as the `Gateway` may be attached by default. 
-* Type: string -* Required: optional -* Default: Same namespace as the parent Gateway - -### listeners.allowedRoutes.namespaces.from -Determines which namespaces are allowed to attach a route to the `Gateway`. You can specify one of the following strings: - -* `All`: Routes in all namespaces may be attached to the `Gateway`. -* `Same` (default): Only routes in the same namespace as the `Gateway` may be attached. -* `Selector`: Only routes in namespaces that match the [`selector`](#listeners-allowedroutes-namespaces-selector) may be attached. - -This parameter is required. - -### listeners.allowedRoutes.namespaces.selector -Specifies a method for selecting routes that are allowed to attach to the listener. The `Gateway` checks for namespaces in the network that match either a regular expression or a label. Routes from the matching namespace are allowed to attach to the listener. - -You can configure one of the following objects: - -* [`matchExpressions`](#listeners-allowedroutes-namespaces-selector-matchexpressions) -* [`matchLabels`](#listeners-allowedroutes-namespaces-selector-matchlabels) - -This field is required when [`from`](#listeners-allowedroutes-namespaces-from) is configured to `Selector`. - -### listeners.allowedRoutes.namespaces.selector.matchExpressions -Specifies an array of requirements for matching namespaces. If a match is found, then routes from the matching namespace(s) are allowed to attach to the `Gateway`. The following table describes members of the `matchExpressions` array: - -| Requirement | Description | Type | Required | -|--- |--- |--- |--- | -|`key` | Specifies the label that the `key` applies to. | string | required when `matchExpressions` is declared | -|`operator` | Specifies the key's relation to a set of values. You can use the following keywords:
  • `In`: Only routes in namespaces that contain the strings in the `values` field can attach to the `Gateway`.
  • `NotIn`: Routes in namespaces that do not contain the strings in the `values` field can attach to the `Gateway`.
  • `Exists`: Routes in namespaces that contain the `key` value are allowed to attach to the `Gateway`.
  • `DoesNotExist`: Routes in namespaces that do not contain the `key` value are allowed to attach to the `Gateway`.
| string | required when `matchExpressions` is declared | -|`values` | Specifies an array of string values. If `operator` is configured to `In` or `NotIn`, then the `values` array must contain values. If `operator` is configured to `Exists` or `DoesNotExist`, then the `values` array must be empty. | array of strings | required when `matchExpressions` is declared | - -In the following example, routes in namespaces that contain `foo` and `bar` are allowed to attach routes to the `Gateway`. -```yaml -namespaceSelector: - matchExpressions: - - key: kubernetes.io/metadata.name - operator: In - values: - - foo - - bar -``` - -Refer to [Labels and Selectors](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#resources-that-support-set-based-requirements) in the Kubernetes documentation for additional information about `matchExpressions`. - -### listeners.allowedRoutes.namespaces.selector.matchLabels -Specifies an array of labels and label values. If a match is found, then routes with the matching label(s) are allowed to attach to the `Gateway`. This selector can contain any arbitrary key/value pair. - -In the following example, routes in namespaces that have a `bar` label are allowed to attach to the `Gateway`. - -```yaml -namespaceSelector: - matchLabels: - foo: bar -``` - -Refer to [Labels and Selectors](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/) in the Kubernetes documentation for additional information about labels. - -### listeners.hostname -Specifies the `listener`'s hostname. -* Type: string -* Required: required - -### listeners.name -Specifies the `listener`'s name. -* Type: string -* Required: required - -### listeners.port -Specifies the port number that the `listener` attaches to. -* Type: integer -* Required: required - -### listeners.protocol -Specifies the protocol the `listener` communicates on. -* Type: string -* Required: required - -Allowed values are `TCP`, `HTTP`, or `HTTPS` - -### listeners.tls -Specifies the `tls` configurations for the `Gateway`. The `tls` object is required if `protocol` is set to `HTTPS`. The object contains the following fields: - -| Parameter | Description | Type | Required | -| --- | --- | --- | --- | -| `certificateRefs` |
Specifies Kubernetes `name` and `namespace` objects that contain TLS certificates and private keys.
The certificates are used to establish a TLS handshake for requests that match the `hostname` of the associated `listener`. Each reference must be a Kubernetes Secret. If you are using a Secret in a namespace other than the `Gateway`'s, each reference must also have a corresponding [`ReferenceGrant`](https://gateway-api.sigs.k8s.io/v1alpha2/references/spec/#gateway.networking.k8s.io/v1alpha2.ReferenceGrant).
| Object or array | Required if `tls` is set | -| `mode` | Specifies the TLS Mode. Should always be set to `Terminate` for `HTTPRoutes` | string | Required if `certificateRefs` is set | -| `options` | Specifies additional Consul API Gateway options. | Map of strings | optional | - -The following keys for `options` are available -* `api-gateway.consul.hashicorp.com/tls_min_version` -* `api-gateway.consul.hashicorp.com/tls_max_version` -* `api-gateway.consul.hashicorp.com/tls_cipher_suites` - -In the following example, `tls` settings are configured to use a secret named `consul-server-cert` in the same namespace as the `Gateway` and the minimum tls version is set to `TLSv1_2`. - -```yaml - -tls: - certificateRefs: - - name: consul-server-cert - group: "" - kind: Secret - mode: Terminate - options: - api-gateway.consul.hashicorp.com/tls_min_version: "TLSv1_2" - -``` - -#### Example cross-namespace certificateRef - -The following example creates a `Gateway` named `example-gateway` in namespace `gateway-namespace` (lines 2-4). The gateway has a `certificateRef` in namespace `secret-namespace` (lines 16-18). The reference is allowed because the `ReferenceGrant` configuration, named `reference-grant` in namespace `secret-namespace` (lines 24-27), allows `Gateways` in `gateway-namespace` to reference `Secrets` in `secret-namespace` (lines 31-35). - - - - ```yaml - apiVersion: gateway.networking.k8s.io/v1beta1 - kind: Gateway - metadata: - name: example-gateway - namespace: gateway-namespace - spec: - gatewayClassName: consul - listeners: - - protocol: HTTPS - port: 443 - name: https - allowedRoutes: - namespaces: - from: Same - tls: - certificateRefs: - - name: cert - namespace: secret-namespace - group: "" - kind: Secret - --- - - apiVersion: gateway.networking.k8s.io/v1alpha2 - kind: ReferenceGrant - metadata: - name: reference-grant - namespace: secret-namespace - spec: - from: - - group: gateway.networking.k8s.io - kind: Gateway - namespace: gateway-namespace - to: - - group: "" - kind: Secret - name: cert - ``` - - diff --git a/website/content/docs/connect/gateways/api-gateway/configuration/index.mdx b/website/content/docs/connect/gateways/api-gateway/configuration/index.mdx deleted file mode 100644 index f6fc99d05de8..000000000000 --- a/website/content/docs/connect/gateways/api-gateway/configuration/index.mdx +++ /dev/null @@ -1,45 +0,0 @@ ---- -layout: docs -page_title: Consul API gateway configuration overview -description: >- - Configure your Consul API Gateway to manage traffic into your service mesh. Learn about the Kubernetes Gateway Specification items you can configure and how to configure custom API Gateways. ---- - -# Consul API gateway configuration overview - -This topic provides an overview of the configuration items you can use to create API gateways, configure listeners, define routes, and apply additional resources that may be necessary to operate Consul API gateways in your environment. - -## Configurations for virtual machines - -Apply the following configuration items if your network runs on virtual machines nodes: - -| Configuration | Description | Usage | -| --- | --- | --- | -| [`api-gateway`](/consul/docs/connect/config-entries/api-gateway) | Defines the main infrastructure resource for declaring an API gateway and listeners on the gateway. 
| [Deploy API gateway listeners on virtual machines](/consul/docs/connect/gateways/api-gateway/deploy/listeners-vms) | -| [`http-route`](/consul/docs/connect/config-entries/http-route) | Enables HTTP traffic to reach services in the mesh from a listener on the gateway.| [Define routes on virtual machines](/consul/docs/connect/gateways/api-gateway/define-routes/routes-vms) | -| [`tcp-route`](/consul/docs/connect/config-entries/tcp-route) | Enables TCP traffic to reach services in the mesh from a listener on the gateway.| [Define routes on virtual machines](/consul/docs/connect/gateways/api-gateway/define-routes/routes-vms) | -| [`file-system-certificate`](/consul/docs/connect/config-entries/file-system-certificate) | Provides gateway with a CA certificate so that requests between the user and the gateway endpoint are encrypted. | [Encrypt API gateway traffic on virtual machines](/consul/docs/connect/gateways/api-gateway/secure-traffic/encrypt-vms) | -| [`inline-certificate`](/consul/docs/connect/config-entries/inline-certificate) | Provides gateway with a CA certificate so that requests between the user and the gateway endpoint are encrypted. | [Encrypt API gateway traffic on virtual machines](/consul/docs/connect/gateways/api-gateway/secure-traffic/encrypt-vms) | -| [`service-intentions`](/consul/docs/connect/config-entries/service-intentions) | Specifies traffic communication rules between services in the mesh. Intentions also enforce rules for service-to-service traffic routed through a Consul API gateway. | General configuration for securing a service mesh | - -## Configurations for Kubernetes - -Apply the following configuration items if your network runs on Kubernetes: - -| Configuration | Description | Usage | -| --- | --- | --- | -| [`Gateway`](/consul/docs/connect/gateways/api-gateway/configuration/gateway) | Defines the main infrastructure resource for declaring an API gateway and listeners on the gateway. It also specifies the name of the `GatewayClass`. | [Deploy listeners on Kubernetes](/consul/docs/connect/gateways/api-gateway/deploy/listeners-k8s) | -| [`GatewayClass`](/consul/docs/connect/gateways/api-gateway/configuration/gatewayclass) | Defines a class of gateway resources used as a template for creating gateways. The default gateway class is `consul` and is suitable for most API gateway implementations. | [Deploy listeners on Kubernetes](/consul/docs/connect/gateways/api-gateway/deploy/listeners-k8s) | -| [`GatewayClassConfig`](/consul/docs/connect/gateways/api-gateway/configuration/gatewayclassconfig) | Describes additional gateway-related configuration parameters for the `GatewayClass` resource. | [Deploy listeners on Kubernetes](/consul/docs/connect/gateways/api-gateway/deploy/listeners-k8s) | -| [`Routes`](/consul/docs/connect/gateways/api-gateway/configuration/routes) | Specifies paths from the gateway listener to backend services. | [Define routes on Kubernetes](/consul/docs/connect/gateways/api-gateway/define-routes/routes-k8s)

[Reroute traffic in Kubernetes](/consul/docs/connect/gateways/api-gateway/define-routes/reroute-http-requests)

[Route traffic to peered services in Kubernetes](/consul/docs/connect/gateways/api-gateway/define-routes/route-to-peered-services)

| -| [`MeshServices`](/consul/docs/connect/gateways/api-gateway/configuration/meshservices) | Enables routes to reference services in Consul. | [Route traffic to peered services in Kubernetes](/consul/docs/connect/gateways/api-gateway/define-routes/route-to-peered-services) | -| [`ServiceIntentions`](/consul/docs/connect/config-entries/service-intentions) | Specifies traffic communication rules between services in the mesh. Intentions also enforce rules for service-to-service traffic routed through a Consul API gateway. | General configuration for securing a service mesh | - - diff --git a/website/content/docs/connect/gateways/api-gateway/define-routes/reroute-http-requests.mdx b/website/content/docs/connect/gateways/api-gateway/define-routes/reroute-http-requests.mdx deleted file mode 100644 index 5a56aeee792a..000000000000 --- a/website/content/docs/connect/gateways/api-gateway/define-routes/reroute-http-requests.mdx +++ /dev/null @@ -1,58 +0,0 @@ ---- -layout: docs -page_title: Reroute HTTP Requests -description: >- - Learn how to configure Consul API Gateway to reroute HTTP requests to a specific path. ---- - -# Reroute HTTP Requests - -This topic describes how to configure Consul API Gateway to reroute HTTP requests. - -## Requirements - -1. Verify that the [requirements](/consul/docs/api-gateway/tech-specs) have been met. -1. Verify that the Consul API Gateway CRDs and controller have been installed and applied. Refer to [Installation](/consul/docs/connect/gateways/api-gateway/deploy/install-k8s) for details. - -## Configuration - -Specify the following fields in your `Route` configuration. Refer to the [Route configuration reference](/consul/docs/connect/gateways/api-gateway/configuration/routes) for details about the parameters. - -- [`rules.filters.type`](/consul/docs/connect/gateways/api-gateway/configuration/routes#rules-filters-type): Set this parameter to `URLRewrite` to instruct Consul API Gateway to rewrite the URL when specific conditions are met. -- [`rules.filters.urlRewrite`](/consul/docs/connect/gateways/api-gateway/configuration/routes#rules-filters-urlrewrite): Specify the `path` configuration. -- [`rules.filters.urlRewrite.path`](/consul/docs/connect/gateways/api-gateway/configuration/routes#rules-filters-urlrewrite-path): Contains the paths that incoming requests should be rewritten to based on the match conditions. - -To configure the route to accept paths with or without a trailing slash, you must make two separate routes to handle each case. - -### Example - -In the following example, requests to` /incoming-request-prefix/` are forwarded to the `backendRef` as `/prefix-backend-receives/`. As a result, requests to `/incoming-request-prefix/request-path` are received by `backendRef` as `/prefix-backend-receives/request-path`. - - - -```yaml hideClipboard -apiVersion: gateway.networking.k8s.io/v1beta1 -kind: HTTPRoute -metadata: - name: example-route - ##... -spec: - parentRefs: - - group: gateway.networking.k8s.io - kind: Gateway - name: api-gateway - rules: - - backendRefs: - . . . 
- filters: - - type: URLRewrite - urlRewrite: - path: - replacePrefixMatch: /prefix-backend-receives/ - type: ReplacePrefixMatch - matches: - - path: - type: PathPrefix - value: /incoming–request-prefix/ -``` - \ No newline at end of file diff --git a/website/content/docs/connect/gateways/api-gateway/define-routes/route-to-peered-services.mdx b/website/content/docs/connect/gateways/api-gateway/define-routes/route-to-peered-services.mdx deleted file mode 100644 index e323f8ea9e37..000000000000 --- a/website/content/docs/connect/gateways/api-gateway/define-routes/route-to-peered-services.mdx +++ /dev/null @@ -1,76 +0,0 @@ ---- -page_title: Route Traffic to Peered Services -description: Learn how to configure Consul API Gateway to route traffic to services connected to the mesh through a peering connection. ---- - -# Route Traffic to Peered Services - -This topic describes how to configure Consul API Gateway to route traffic to services connected to the mesh through a cluster peering connection. - -## Requirements - -- Consul v1.14 or later -- Verify that the [requirements](/consul/docs/api-gateway/tech-specs) have been met. -- Verify that the Consul API Gateway CRDs and controller have been installed and applied. Refer to [Installation](/consul/docs/connect/gateways/api-gateway/deploy/install-k8s) for details. -- A peering connection must already be established between Consul clusters. Refer to [Cluster Peering on Kubernetes](/consul/docs/k8s/connect/cluster-peering/tech-specs) for instructions. -- The Consul service that you want to route traffic to must be exported to the cluster containing your `Gateway`. Refer to [Cluster Peering on Kubernetes](/consul/docs/k8s/connect/cluster-peering/tech-specs) for instructions. -- A `ServiceResolver` for the Consul service you want to route traffic to must be created in the cluster that contains your `Gateway`. Refer to [Service Resolver Configuration Entry](/consul/docs/connect/config-entries/service-resolver) for instructions. - -## Configuration - -Specify the following fields in your `MeshService` configuration to use this feature. Refer to the [MeshService configuration reference](/consul/docs/connect/gateways/api-gateway/configuration/meshservice) for details about the parameters. - -- [`name`](/consul/docs/connect/gateways/api-gateway/configuration/meshservice#name) -- [`peer`](/consul/docs/connect/gateways/api-gateway/configuration/meshservice#peer) - -## Example - -In the following example, routes that use `example-mesh-service` as a backend are configured to send requests to the `echo` service exported by the peered Consul cluster `cluster-02`. - - - -```yaml hideClipboard -apiVersion: consul.hashicorp.com/v1alpha1 -kind: ServiceResolver -metadata: - name: echo -spec: - redirect: - peer: cluster-02 - service: echo -``` - - - - -```yaml hideClipboard -apiVersion: api-gateway.consul.hashicorp.com/v1alpha1 -kind: MeshService -metadata: - name: example-mesh-service -spec: - name: echo - peer: cluster-02 -``` - - -After applying the `meshservice.yaml` configuration, an `HTTPRoute` may then reference `example-mesh-service` as its `backendRef`. - - - -```yaml hideClipboard -apiVersion: gateway.networking.k8s.io/v1beta1 -kind: HTTPRoute -metadata: - name: example-route -spec: - ... - rules: - - backendRefs: - - group: consul.hashicorp.com - kind: MeshService - name: example-mesh-service - port: 3000 - ... 
-``` - diff --git a/website/content/docs/connect/gateways/api-gateway/define-routes/routes-k8s.mdx b/website/content/docs/connect/gateways/api-gateway/define-routes/routes-k8s.mdx deleted file mode 100644 index 13413e82bd36..000000000000 --- a/website/content/docs/connect/gateways/api-gateway/define-routes/routes-k8s.mdx +++ /dev/null @@ -1,68 +0,0 @@ ---- -layout: docs -page_title: Define API gateway routes on Kubernetes -description: Learn how to define and attach HTTP and TCP routes to Consul API gateway listeners in Kubernetes-orchestrated networks. ---- - -# Define API gateway routes on Kubernetes - -This topic describes how to configure HTTP and TCP routes and attach them to Consul API gateway listeners in Kubernetes-orchestrated networks. Routes are rule-based configurations that allow external clients to send requests to services in the mesh. For information - -## Overview - -The following steps describe the general workflow for defining and deploying routes: - -1. Define a route configuration that specifies the protocol type, name of the gateway to attach to, and rules for routing requests. -1. Deploy the configuration to create the routes and attach them to the gateway. - -Routes and the gateways they are attached to are eventually-consistent objects. They provide feedback about their current state through a series of status conditions. As a result, you must manually check the route status to determine if the route successfully bound to the gateway. - -## Requirements - -Verify that your environment meets the requirements specified in [Technical specifications for Kubernetes](/consul/docs/connect/gateways/api-gateway/tech-specs). - -### OpenShift - -If your Kubernetes-orchestrated network runs on OpenShift, verify that OpenShift is enabled for your Consul installation. Refer to [OpenShift requirements](/consul/docs/connect/gateways/api-gateway/tech-specs#openshift-requirements) for additional information. - -## Define routes - -Define route configurations and bind them to listeners configured on the gateway so that Consul can route incoming requests to services in the mesh. - -1. Create a configuration file and specify the following fields: - - - `apiVersion`: Specifies the Kubernetes API gateway version. This must be set to `gateway.networking.k8s.io/v1beta1` - - `kind`: Set to `HTTPRoute` or `TCPRoute`. - - `metadata.name`: Specify a name for the route. The name is metadata that you can use to reference the configuration when performing Consul operations. - - `spec.parentRefs.name`: Specifies a list of API gateways that the route binds to. - - `spec. rules`: Specifies a list of routing rules for constructing a routing table that maps listeners to services. - - Refer to the [`Routes` configuration reference](/consul/docs/connect/gateways/api-gateway/configuration/routes) for details about configuring route rules. - -1. Configure any additional fields necessary for your use case, such as the namespace or admin partition. -1. Save the configuration. - -The following example creates a route named `example-route` associated with a listener defined in `example-gateway`. - -```yaml -apiVersion: gateway.networking.k8s.io/v1beta1 -kind: HTTPRoute -metadata: - name: example-route -spec: - parentRefs: - - name: example-gateway - rules: - - backendRefs: - - kind: Service - name: echo - port: 8080 -``` - -## Deploy the route configuration - -Apply the configuration to your cluster using the `kubectl` command. 
The following command applies the configuration to the `consul` namespace: - -```shell-session -$ kubectl apply -f my-route.yaml -n consul -``` diff --git a/website/content/docs/connect/gateways/api-gateway/define-routes/routes-vms.mdx b/website/content/docs/connect/gateways/api-gateway/define-routes/routes-vms.mdx deleted file mode 100644 index 5fe459d2062b..000000000000 --- a/website/content/docs/connect/gateways/api-gateway/define-routes/routes-vms.mdx +++ /dev/null @@ -1,121 +0,0 @@ ---- -layout: docs -page_title: Define API gateway routes on virtual machines -description: Learn how to define and attach HTTP and TCP routes to Consul API gateway listeners so that requests from external clients can reach services in the mesh. ---- - -# Define API gateway routes on virtual machines - -This topic describes how to configure HTTP and TCP routes and attach them to Consul API gateway listeners. Routes are rule-based configurations that allow external clients to send requests to services in the mesh. - -## Overview - -The following steps describe the general workflow for defining and deploying routes: - -1. Define routes in an HTTP or TCP configuration entry. The configuration entry includes rules for routing requests, target services in the mesh for the traffic, and the name of the gateway to attach to. -1. Deploy the configuration entry to create the routes and attach them to the gateway. - -Routes and the gateways they are attached to are eventually-consistent objects. They provide feedback about their current state through a series of status conditions. As a result, you must manually check the route status to determine if the route is bound to the gateway successfully. - -## Requirements - -The following requirements must be satisfied to use API gateways on VMs: - -- Consul 1.15 or later -- A Consul cluster with service mesh enabled. Refer to [`connect`](/consul/docs/agent/config/config-files#connect) -- Network connectivity between the machine deploying the API Gateway and a Consul cluster agent or server - -### ACL requirements - -If ACLs are enabled, you must present a token with the following permissions to -configure Consul and deploy API gateway routes: - -- `mesh: read` -- `mesh: write` - -Refer [Mesh Rules](/consul/docs/security/acl/acl-rules#mesh-rules) for -additional information about configuring policies that enable you to interact -with Consul API gateway configurations. - -## Define the routes - -Define route configurations and bind them to listeners configured on the gateway so that Consul can route incoming requests to services in the mesh. - -1. Create a route configuration entry file and specify the following settings: - - `Kind`: Set to `http` or `tcp`. - - `Name`: Specify a name for the route. The name is metadata that you can use to reference the configuration when performing Consul operations. - - `Parents`: Specifies a list of API gateways that the route binds to. - - `Rules`: If you are configuring HTTP routes, define a list of routing rules for constructing a routing table that maps listeners to services. Each member of the list is a map that may containing the following fields: - - `Filters` - - `Matches` - - `Services` - - Refer to the [HTTP route configuration entry](/consul/docs/connect/config-entries/http-route) and [TCP route configuration entry](/consul/docs/connect/config-entries/tcp-route) reference for details about configuring routes. - -1. Configure any additional fields necessary for your use case, such as the namespace or admin partition. -1. 
Save the configuration. - - -The following example routes requests from the listener on the API gateway at port `8443` to services in Consul based on the path of the request. When an incoming request starts at path `/`, Consul forwards 90 percent of the requests to the `ui` service and 10 percent to `experimental-ui`. Consul also forwards requests starting with `/api` to `api`. - -```hcl -Kind = "http-route" -Name = "my-http-route" - -// Rules define how requests will be routed -Rules = [ - // Send all requests to UI services with 10% going to the "experimental" UI - { - Matches = [ - { - Path = { - Match = "prefix" - Value = "/" - } - } - ] - Services = [ - { - Name = "ui" - Weight = 90 - }, - { - Name = "experimental-ui" - Weight = 10 - } - ] - }, - // Send all requests that start with the path `/api` to the API service - { - Matches = [ - { - Path = { - Match = "prefix" - Value = "/api" - } - } - ] - Services = [ - { - Name = "api" - } - ] - } -] - -Parents = [ - { - Kind = "api-gateway" - Name = "my-gateway" - SectionName = "my-http-listener" - } -] -``` - -## Deploy the route configuration - -Run the `consul config write` command to attach the routes to the specified gateways. The following example writes a configuration called `my-http-route.hcl`: - -```shell-session -$ consul config write my-http-route.hcl -``` \ No newline at end of file diff --git a/website/content/docs/connect/gateways/api-gateway/deploy/listeners-k8s.mdx b/website/content/docs/connect/gateways/api-gateway/deploy/listeners-k8s.mdx deleted file mode 100644 index 7ce6ed9c5002..000000000000 --- a/website/content/docs/connect/gateways/api-gateway/deploy/listeners-k8s.mdx +++ /dev/null @@ -1,74 +0,0 @@ ---- -layout: docs -page_title: Deploy API gateway listeners in Kubernetes -description: >- - Learn how to create API gateway configurations in Kubernetes that enable you to instantiate gateway instances. ---- - -# Deploy API gateway listeners in Kubernetes - -This topic describes how to deploy Consul API gateway listeners to Kubernetes-orchestrated environments. If you want to implement API gateway listeners on VMs, refer to [Deploy API gateway listeners to virtual machines](/consul/docs/connect/gateways/api-gateway/deploy/listeners-vms). - -## Overview - -API gateways have one or more listeners that serve as ingress points for requests to services in a Consul service mesh. Create an [API gateway configuration](/consul/docs/connect/gateways/api-gateway/configuration/gateway) and define listeners that expose ports on the endpoint for ingress. Apply the configuration to direct Kubernetes to start API gateway services. - -### Routes - -After deploying the gateway, attach HTTP or TCP [routes](/consul/docs/connect/gateways/api-gateway/configuration/routes) to listeners defined in the gateway to control how requests route to services in the network. - -### Intentions - -Configure Consul intentions to allow or prevent traffic between gateway listeners and services in the mesh. Refer to [Service intentions](/consul/docs/connect/intentions) for additional information. - - -## Requirements - -1. Verify that your environment meets the requirements specified in [Technical specifications for Kubernetes](/consul/docs/connect/gateways/api-gateway/tech-specs). -1. Verify that the Consul API Gateway CRDs were applied. Refer to [Installation](/consul/docs/connect/gateways/api-gateway/install-k8s) for details. -1. If your Kubernetes-orchestrated network runs on OpenShift, verify that OpenShift is enabled for your Consul installation. 
Refer to [OpenShift requirements](/consul/docs/connect/gateways/api-gateway/tech-specs#openshift-requirements) for additional information. - -## Define the gateway and listeners - -Create an API gateway values file that defines the gateway and listeners. - -1. Specify the following fields: - - `apiVersion`: Specifies the Kubernetes gateway API version. Must be `gateway.networking.k8s.io/v1beta1`. - - `kind`: Specifies the type of configuration entry to implement. This must be `Gateway`. - - `metadata.name`: Specify a name for the gateway configuration. The name is metadata that you can use to reference the configuration when performing Consul operations. - - `spec.gatewayClassName`: Specify the name of a `gatewayClass` configuration. Gateway classes are template-like resources in Kubernetes for instantiating gateway services. Specify `consul` to use the default gateway class shipped with Consul. Refer to the [GatewayClass configuration reference](/consul/docs/connect/gateways/api-gateway/configuration/gatewayclass) for additional information. - - `spec.listeners`: Specify a list of listener configurations. Each listener is map containing the following fields: - - `port`: Specifies the port that the listener receives traffic on. - - `name`: Specifies a unique name for the listener. - - `protocol`: You can set either `tcp` or `http` - - `allowedRoutes.namespaces`: Contains configurations for determining which namespaces are allowed to attach a route to the listener. -1. Configure any additional fields necessary for your use case, such as the namespace or admin partition. Refer to the [API gateway configuration entry reference](/consul/docs/connect/gateways/api-gateway/configuration/gateway) for additional information. -1. Save the configuration. - -In the following example, the API gateway specifies an HTTP listener on port `80`: - -```yaml -apiVersion: gateway.networking.k8s.io/v1beta1 -kind: Gateway -metadata: - name: my-gateway - namespace: consul -spec: - gatewayClassName: consul - listeners: - - protocol: HTTP - port: 80 - name: http - allowedRoutes: - namespaces: - from: "All" -``` - - -## Deploy the API gateway and listeners - -Apply the configuration to your cluster using the `kubectl` command. The following command applies the configuration to the `consul` namespace: - -```shell-session -$ kubectl apply -f my-gateway.yaml -n consul -``` \ No newline at end of file diff --git a/website/content/docs/connect/gateways/api-gateway/deploy/listeners-vms.mdx b/website/content/docs/connect/gateways/api-gateway/deploy/listeners-vms.mdx deleted file mode 100644 index 1a868ca85703..000000000000 --- a/website/content/docs/connect/gateways/api-gateway/deploy/listeners-vms.mdx +++ /dev/null @@ -1,113 +0,0 @@ ---- -layout: docs -page_title: Deploy API gateway listeners to virtual machines -description: Learn how to configure and Consul API gateways and gateway listeners on virtual machines so that you can enable ingress requests to services in your service mesh in VM environments. ---- - -# Deploy API gateway listeners to virtual machines - -This topic describes how to deploy Consul API gateway listeners to networks that operate in virtual machine (VM) environments. If you want to implement API gateway listeners in a Kubernetes environment, refer to [Deploy API gateway listeners to Kubernetes](/consul/docs/connect/gateways/api-gateway/deploy/listeners-k8s). - -## Overview - -API gateways have one or more listeners that serve as ingress points for requests to services in a Consul service mesh. 
Create an [API gateway configuration entry](/consul/docs/connect/config-entries/api-gateway) and define listeners that expose ports on the endpoint for ingress. - -The following steps describe the general workflow for deploying a Consul API gateway to a VM environment: - -1. Create an API gateway configuration entry. The configuration entry includes listener configurations and references to TLS certificates. -1. Deploy the API gateway configuration entry to create the listeners. - -### Encryption - -To encrypt traffic between the external client and the service that the API gateway routes traffic to, define an inline certificate configuration and attach it to your listeners. Refer to [Encrypt API gateway traffic on virtual machines](/consul/docs/connect/gateways/api-gateway/secure-traffic/encrypt-vms) for additional information. - -### Routes - -After deploying the gateway, attach [HTTP](/consul/docs/connect/config-entries/http-route) routes and [TCP](/consul/docs/connect/config-entries/tcp-route) routes to listeners defined in the gateway to control how requests route to services in the network. Refer to [Define API gateway routes on VMs](/consul/docs/connect/gateways/api-gateway/define-routes/routes-vms) for additional information. - -## Requirements - -The following requirements must be satisfied to use API gateways on VMs: - -- Consul 1.15 or later -- A Consul cluster with service mesh enabled. Refer to [`connect`](/consul/docs/agent/config/config-files#connect) -- Network connectivity between the machine deploying the API Gateway and a - Consul cluster agent or server - -### ACL requirements - -If ACLs are enabled, you must present a token with the following permissions to -configure Consul and deploy API gateways: - -- `mesh: read` -- `mesh: write` - -Refer to [Mesh Rules](/consul/docs/security/acl/acl-rules#mesh-rules) for -additional information about configuring policies that enable you to interact -with Consul API gateway configurations. - -## Define the gateway and listeners - -Create an API gateway configuration entry that defines listeners and TLS certificates -in the mesh. - -1. Specify the following fields: - - `Kind`: Specifies the type of configuration entry to implement. This must be `api-gateway`. - - `Name`: Specify a name for the gateway configuration. The name is metadata that you can use to reference the configuration entry when performing Consul operations. - - `Listeners`: Specify a list of listener configurations. Each listener is map containing the following fields: - - `Port`: Specifies the port that the listener receives traffic on. - - `Name`: Specifies a unique name for the listener. - - `Protocol`: You can set either `tcp` or `http` - - `TLS`: Defines TLS encryption configurations for the listener. - - Refer to the [API gateway configuration entry reference](/consul/docs/connect/config-entries/api-gateway) for details on how to define fields in the `Listeners` block. -1. Configure any additional fields necessary for your use case, such as the namespace or admin partition. Refer to the [API gateway configuration entry reference](/consul/docs/connect/config-entries/api-gateway) for additional information. -1. Save the configuration. - -In the following example, the API gateway specifies an HTTP listener on port `8443`. 
It also requires an inline-certificate configuration entry named `my-certificate` that contains a valid certificate and private key pair: - -```hcl -Kind = "api-gateway" -Name = "my-gateway" - -// Each listener configures a port which can be used to access the Consul cluster -Listeners = [ - { - Port = 8443 - Name = "my-http-listener" - Protocol = "http" - TLS = { - Certificates = [ - { - Kind = "inline-certificate" - Name = "my-certificate" - } - ] - } - } -] -``` - -Refer to [API Gateway Configuration Reference](/consul/docs/connect/gateways/api-gateway/configuration/api-gateway) for -information about all configuration fields. - -Gateways and routes are eventually-consistent objects that provide feedback -about their current state through a series of status conditions. As a result, -you must manually check the route status to determine if the route -bound to the gateway successfully. - -## Deploy the API gateway and listeners - -Use the `consul config write` command to implement the API gateway configuration entry. The following command applies the configuration entry for the main gateway object: - -```shell-session -$ consul config write gateways.hcl -``` - -Run the following command to deploy an API gateway instance: - -```shell-session -$ consul connect envoy -gateway api -register -service my-api-gateway -``` - -The command directs Consul to configure Envoy as an API gateway. Gateways and routes are eventually-consistent objects that provide feedback about their current state through a series of status conditions. As a result, you must manually check the route status to determine if the route successfully bound to the gateway successfully. diff --git a/website/content/docs/connect/gateways/api-gateway/errors.mdx b/website/content/docs/connect/gateways/api-gateway/errors.mdx deleted file mode 100644 index 96ed7364fe6e..000000000000 --- a/website/content/docs/connect/gateways/api-gateway/errors.mdx +++ /dev/null @@ -1,75 +0,0 @@ ---- -layout: docs -page_title: Consul API Gateway Error Messages -description: >- - Learn how to apply a configured Consul API Gateway to your Kubernetes cluster, review the required fields for rerouting HTTP requests, and troubleshoot an error message. ---- - -# Error Messages - -This topic provides information about potential error messages associated with Consul API Gateway. If you receive an error message that does not appear in this section, refer to the following resources: - -- [Common Consul errors](/consul/docs/troubleshoot/common-errors#common-errors-on-kubernetes) -- [Consul troubleshooting guide](/consul/docs/troubleshoot/common-errors) -- [Consul Discuss forum](https://discuss.hashicorp.com/) - - - -## Helm installation failed: "no matches for kind" - -```log -Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: [unable to recognize "": no matches for kind "GatewayClass" in version "gateway.networking.k8s.io/v1alpha2", unable to recognize "": no matches for kind "GatewayClassConfig" in version "api-gateway.consul.hashicorp.com/v1alpha1"] -``` -**Conditions:** -Consul API Gateway generates this error when the required CRD files have not been installed in Kubernetes prior to installing Consul API Gateway. - -**Impact:** -The installation process typically fails after this error message is generated. - -**Resolution:** -Install the required CRDs. Refer to the [Consul API Gateway installation instructions](/consul/docs/connect/gateways/api-gateway/deploy/install-k8s) for instructions. 
- -## Operation cannot be fulfilled, the object has been modified - -``` -{"error": "Operation cannot be fulfilled on gatewayclassconfigs.consul.hashicorp.com \"consul-api-gateway\": the object has been modified; please apply your changes to the latest version and try again"} - -``` -**Conditions:** -This error occurs when the gateway controller attempts to update an object that has been modified previously. It is a normal part of running the controller and will resolve itself by automatically retrying. - -**Impact:** -Excessive error logs are produced, but there is no impact to the functionality of the controller. - -**Resolution:** -No action needs to be taken to resolve this issue. diff --git a/website/content/docs/connect/gateways/api-gateway/index.mdx b/website/content/docs/connect/gateways/api-gateway/index.mdx deleted file mode 100644 index c6202d898b08..000000000000 --- a/website/content/docs/connect/gateways/api-gateway/index.mdx +++ /dev/null @@ -1,62 +0,0 @@ ---- -layout: docs -page_title: API gateways overview -description: API gateways provide an ingress point for service mesh traffic. Learn how API gateways add listeners for external traffic and route HTTP requests to services in the mesh. ---- - -# API gateways overview - -This topic provides overview information about API gateways in Consul. API gateways enable external network clients to access applications and services running in a Consul datacenter. Consul API gateways can also forward requests from clients to specific destinations based on path or request protocol. - -## API gateway use cases - -API gateways solve the following primary use cases: - -- **Control access at the point of entry**: Set the protocols of external connection requests and secure inbound connections with TLS certificates from trusted providers, such as Verisign and Let's Encrypt. -- **Simplify traffic management**: Load balance requests across services and route traffic to the appropriate service by matching one or more criteria, such as hostname, path, header presence or value, and HTTP method. - -## Workflows - -You can deploy API gateways to networks that implement a variety of computing environments: - -- Services hosted on VMs -- Kubernetes-orchestrated service containers -- Kubernetes-orchestrated service containers in OpenShift - -The following steps describe the general workflow for deploying a Consul API gateways: - -1. For Kubernetes-orchestrated services, install Consul on your cluster. For Kubernetes-orchestrated services on OpenShift, you must also enable the `openShift.enabled` parameter. Refer to [Install Consul on Kubernetes](/consul/docs/connect/gateways/api-gateway/install-k8s) for additional information. -1. Define and deploy the API gateway configurations to create the API gateway artifacts. For VM-hosted services, create configuration entries for the gateway service, listeners configurations, and TLS certificates. For Kubernetes-orchestrated services, configurations also include `GatewayClassConfig` and `parametersRef`. All Consul API Gateways created in Kubernetes with the `consul-k8s` Helm chart v1.5.0 or later use file system certificates when TLS is enabled. - -1. Define and deploy routes between the gateway listeners and services in the mesh. - -Gateway configurations are modular, so you can define and attach routes and inline certificates to multiple gateways. 
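For example, because a route's `Parents` field accepts a list, the same route can bind to listeners on more than one gateway. The following minimal sketch assumes two hypothetical gateways, `internal-gateway` and `public-gateway`, each with an HTTP listener named `http-listener`, and a `ui` service already registered in the mesh:

```hcl
Kind = "http-route"
Name = "shared-ui-route"

// The same route binds to listeners on two different gateways.
Parents = [
  {
    Kind        = "api-gateway"
    Name        = "internal-gateway"
    SectionName = "http-listener"
  },
  {
    Kind        = "api-gateway"
    Name        = "public-gateway"
    SectionName = "http-listener"
  }
]

// Forward requests for any path to the ui service.
Rules = [
  {
    Matches = [
      {
        Path = {
          Match = "prefix"
          Value = "/"
        }
      }
    ]
    Services = [
      {
        Name = "ui"
      }
    ]
  }
]
```

Writing this single entry with `consul config write` attaches the route to both gateways. As with any route, check the status conditions afterward to confirm that it bound to each gateway successfully.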
- -## Technical specifications - -Refer to [Technical specifications for API gateways on Kubernetes](/consul/docs/connect/gateways/api-gateway/tech-specs) for additional details and considerations about using API gateways in Kubernetes-orchestrated networks. - -## Guidance - -Refer to the following resources for help setting up and using API gateways: - -### Tutorials - -- [Control access into the service mesh with Consul API gateway](/consul/tutorials/developer-mesh/kubernetes-api-gateway) - -### Usage documentation - -- [Deploy API gateway listeners to VMs](/consul/docs/connect/gateways/api-gateway/deploy/listeners-vms) -- [Deploy API gateway listeners to Kubernetes](/consul/docs/connect/gateways/api-gateway/deploy/listeners-k8s) -- [Deploy API gateway routes to VMs](/consul/docs/connect/gateways/api-gateway/define-routes/routes-vms) -- [Deploy API gateway routes to Kubernetes](/consul/docs/connect/gateways/api-gateway/define-routes/routes-k8s) -- [Reroute HTTP requests in Kubernetes](/consul/docs/connect/gateways/api-gateway/define-routes/reroute-http-requests) -- [Route traffic to peered services in Kubernetes](/consul/docs/connect/gateways/api-gateway/define-routes/route-to-peered-services) -- [Encrypt API gateway traffic on VMs](/consul/docs/connect/gateways/api-gateway/secure-traffic/encrypt-vms) -- [Use JWTs to verify requests to API gateways on VMs](/consul/docs/connect/gateways/api-gateway/secure-traffic/verify-jwts-vms) -- [Use JWTs to verify requests to API gateways on Kubernetes](/consul/docs/connect/gateways/api-gateway/secure-traffic/verify-jwts-k8s) - -### Reference - -- [API gateway configuration reference overview](/consul/docs/connect/gateways/api-gateway/configuration/) -- [Error messages](/consul/docs/connect/gateways/api-gateway/errors) diff --git a/website/content/docs/connect/gateways/api-gateway/install-k8s.mdx b/website/content/docs/connect/gateways/api-gateway/install-k8s.mdx deleted file mode 100644 index 9362100cde04..000000000000 --- a/website/content/docs/connect/gateways/api-gateway/install-k8s.mdx +++ /dev/null @@ -1,133 +0,0 @@ ---- -layout: docs -page_title: Install API Gateway for Kubernetes -description: >- - Learn how to install custom resource definitions (CRDs) and configure the Helm chart so that you can run Consul API Gateway on your Kubernetes deployment. ---- - -# Install API gateway for Kubernetes - -The Consul API gateway ships with Consul and is automatically installed when you install Consul on Kubernetes. Before you begin the installation process, verify that the environment you are deploying Consul and the API gateway in meets the requirements listed in the [Technical Specifications](/consul/docs/connect/gateways/api-gateway/tech-specs). Refer to the [Release Notes](/consul/docs/release-notes) for any additional information about the version you are deploying. - -1. The Consul Helm chart deploys the API gateway using the configuration specified in the `values.yaml` file. Refer to [Helm Chart Configuration - `connectInject.apiGateway`](/consul/docs/k8s/helm#apigateway) for information about the Helm chart configuration options. Create a `values.yaml` file for configuring your Consul API gateway deployment and include the following settings: - - - - - - - ```yaml - global: - name: consul - connectInject: - enabled: true - apiGateway: - manageExternalCRDs: true - ``` - - - - - - - If you are installing Consul on an OpenShift Kubernetes cluster, you must include the `global.openShift.enabled` parameter and set it to `true`. 
Refer to [OpenShift requirements](/consul/docs/connect/gateways/api-gateway/tech-specs#openshift-requirements) for additional information. - - - - ```yaml - global: - openshift: - enabled: true - connectInject: - enabled: true - apiGateway: - manageExternalCRDs: true - cni: - enabled: true - logLevel: info - multus: true - cniBinDir: "/var/lib/cni/bin" - cniNetDir: "/etc/kubernetes/cni/net.d" - ``` - - - - - - By default, GKE Autopilot installs [Gateway API resources](https://gateway-api.sigs.k8s.io), so we recommend customizing the `connectInject.apiGateway` stanza to accommodate the pre-installed Gateway API CRDs. - - The following working example enables both Consul Service Mesh and Consul API Gateway on GKE Autopilot. Refer to [`connectInject.agiGateway` in the Helm chart reference](https://developer.hashicorp.com/consul/docs/k8s/helm#v-connectinject-apigateway) for additional information. - - - - ```yaml - global: - name: consul - connectInject: - enabled: true - apiGateway: - manageExternalCRDs: false - manageNonStandardCRDs: true - cni: - enabled: true - logLevel: debug - cniBinDir: "/home/kubernetes/bin" - cniNetDir: "/etc/cni/net.d" - server: - resources: - requests: - memory: "500Mi" - cpu: "500m" - limits: - memory: "500Mi" - cpu: "500m" - ``` - - - - - -1. Install Consul API Gateway using the standard Consul Helm chart or Consul K8s CLI specify the custom values file. Refer to the [Consul Helm chart](https://github.com/hashicorp/consul-k8s/releases) in GitHub releases for the available versions. - - - - - Refer to the official [Consul K8S CLI documentation](/consul/docs/k8s/k8s-cli) to find additional settings. - - ```shell-session - $ brew tap hashicorp/tap - ``` - - ```shell-session - $ brew install hashicorp/tap/consul-k8s - ``` - - ```shell-session - $ consul-k8s install -config-file=values.yaml -set global.image=hashicorp/consul:1.17.0 - ``` - - - - - Add the HashiCorp Helm repository. - - ```shell-session - $ helm repo add hashicorp https://helm.releases.hashicorp.com - ``` - - Install Consul with API Gateway on your Kubernetes cluster by specifying the `values.yaml` file. - - ```shell-session - $ helm install consul hashicorp/consul --version 1.3.0 --values values.yaml --create-namespace --namespace consul - ``` - - - - - - -[tech-specs]: /consul/docs/api-gateway/tech-specs -[rel-notes]: /consul/docs/release-notes diff --git a/website/content/docs/connect/gateways/api-gateway/secure-traffic/encrypt-vms.mdx b/website/content/docs/connect/gateways/api-gateway/secure-traffic/encrypt-vms.mdx deleted file mode 100644 index fd1cdfe86cb0..000000000000 --- a/website/content/docs/connect/gateways/api-gateway/secure-traffic/encrypt-vms.mdx +++ /dev/null @@ -1,89 +0,0 @@ ---- -layout: docs -page_title: Encrypt API gateway traffic on virtual machines -description: Learn how to define inline certificate config entries and deploy them to Consul. Inline certificate and file system certificate configuration entries enable you to attach TLS certificates and keys to gateway listeners so that traffic between external clients and gateway listeners is encrypted. ---- - -# Encrypt API gateway traffic on virtual machines - -This topic describes how to make TLS certificates available to API gateways so that requests between the user and the gateway endpoint are encrypted. 
- -## Requirements - -- Consul v1.15 or later is required to use the Consul API gateway on VMs - - Consul v1.19 or later is required to use the [file system certificate configuration entry](/consul/docs/connect/config-entries/file-system-certificate) -- You must have a certificate and key from your CA -- A Consul cluster with service mesh enabled. Refer to [`connect`](/consul/docs/agent/config/config-files#connect) -- Network connectivity between the machine deploying the API gateway and a - Consul cluster agent or server - -### ACL requirements - -If ACLs are enabled, you must present a token with the following permissions to -configure Consul and deploy API gateways: - -- `mesh: read` -- `mesh: write` - -Refer [Mesh Rules](/consul/docs/security/acl/acl-rules#mesh-rules) for -additional information about configuring policies that enable you to interact -with Consul API gateway configurations. - -## Define TLS certificates - -1. Create a [file system certificate](/consul/docs/connect/config-entries/file-system-certificate) or [inline certificate](/consul/docs/connect/config-entries/inline-certificate) and specify the following fields: - - `Kind`: Specifies the type of configuration entry. This must be set to `file-system-certificate` or `inline-certificate`. - - `Name`: Specify the name in the [API gateway listener configuration](/consul/docs/connect/gateways/api-gateway/configuration/api-gateway#listeners) to bind the certificate to that listener. - - `Certificate`: Specifies the filepath to the certificate on the local system or the inline public certificate as plain text. - - `PrivateKey`: Specifies the filepath to private key on the local system or the inline private key to as plain text. -1. Configure any additional fields necessary for your use case, such as the namespace or admin partition. Refer to the [file system certificate configuration reference](/consul/docs/connect/config-entries/file-system-certificate) or [inline certificate configuration reference](/consul/docs/connect/config-entries/inline-certificate) for more information. -1. Save the configuration. - -### Examples - - - - - -The following example defines a certificate named `my-certificate`. API gateway configurations that specify `inline-certificate` in the `Certificate.Kind` field and `my-certificate` in the `Certificate.Name` field are able to use the certificate. - -```hcl -Kind = "inline-certificate" -Name = "my-certificate" - -Certificate = < - - - -The following example defines a certificate named `my-certificate`. API gateway configurations that specify `file-system-certificate` in the `Certificate.Kind` field and `my-certificate` in the `Certificate.Name` field are able to use the certificate. - -```hcl -Kind = "file-system-certificate" -Name = "my-certificate" -Certificate = "/opt/consul/tls/api-gateway.crt" -PrivateKey = "/opt/consul/tls/api-gateway.key" -``` - - - - -## Deploy the configuration to Consul - -Run the `consul config write` command to enable listeners to use the certificate. 
The following example writes a configuration called `my-certificate.hcl`: - -```shell-session -$ consul config write my-certificate.hcl -``` diff --git a/website/content/docs/connect/gateways/api-gateway/secure-traffic/verify-jwts-k8s.mdx b/website/content/docs/connect/gateways/api-gateway/secure-traffic/verify-jwts-k8s.mdx deleted file mode 100644 index 6bd8f28ccd84..000000000000 --- a/website/content/docs/connect/gateways/api-gateway/secure-traffic/verify-jwts-k8s.mdx +++ /dev/null @@ -1,226 +0,0 @@ ---- -layout: docs -page_title: Use JWTs to verify requests to API gateways on Kubernetes -description: Learn how to use JSON web tokens (JWT) to verify requests from external clients to listeners on an API gateway on Kubernetes-orchestrated networks. ---- - -# Use JWTs to verify requests to API gateways on Kubernetes - -This topic describes how to use JSON web tokens (JWT) to verify requests to API gateways deployed to Kubernetes-orchestrated containers. If your API gateway is deployed to virtual machines, refer to [Use JWTs to verify requests to API gateways on VMs](/consul/docs/connect/gateways/api-gateway/secure-traffic/verify-jwts-vms). - - This feature is available in Consul Enterprise. - -## Overview - -You can configure API gateways to use JWTs to verify incoming requests so that you can stop unverified traffic at the gateway. You can configure JWT verification at different levels: - -- Listener defaults: Define basic defaults in a GatewayPolicy resource to apply them to all routes attached to a listener. -- HTTP route-specific settings: You can define JWT authentication settings for specific HTTP routes. Route-specific JWT settings override default listener configurations. -- Listener overrides: Define override settings in a GatewayPolicy resource that take precedence over default and route-specific configurations. Use override settings to set enforceable policies for listeners. - - -Complete the following steps to use JWTs to verify requests: - -1. Define a JWTProvider that specifies the JWT provider and claims used to verify requests to the gateway. -1. Define a GatewayPolicy that specifies default and override settings for API gateway listeners and attach it to the gateway. -1. Define a RouteAuthFilter that specifies route-specific JWT verification settings. -1. Reference the RouteAuthFilter from the HTTPRoute. -1. Apply the configurations. - - -## Requirements - -- Consul v1.17+ -- Consul on Kubernetes CLI or Helm chart v1.3.0+ -- JWT details, such as claims and provider - - -## Define a JWTProvider - -Create a `JWTProvider` CRD that defines the JWT provider to verify claims against. - -In the following example, the JWTProvider CRD contains a local JWKS. In production environments, use a production-grade JWKs endpoint instead. - - - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: JWTProvider -metadata: - name: local -spec: - issuer: local - jsonWebKeySet: - local: - jwks: "" -``` - - - -For more information about the fields you can configure in this CRD, refer to [`JWTProvider` configuration reference](/consul/docs/connect/config-entries/jwtprovider). - -## Define a GatewayPolicy - -Create a `GatewayPolicy` CRD that defines default and override settings for JWT verification. - -- `kind`: Must be set to `GatewayPolicy` -- `metadata.name`: Specifies a name for the policy. -- `spec.targetRef.name`: Specifies the name of the API gateway to attach the policy to. -- `spec.targetRef.kind`: Specifies the kind of resource to attach to the policy to. Must be set to `Gateway`. 
-- `spec.targetRef.group`: Specifies the resource group. Unless you have created a custom group, this should be set to `gateway.networking.k8s.io/v1beta1`. -- `spec.targetRef.sectionName`: Specifies a part of the gateway that the policy applies to. -- `spec.targetRef.override.jwt.providers`: Specifies a list of providers and claims used to verify requests to the gateway. The override settings take precedence over the default and route-specific JWT verification settings. -- `spec.targetRef.default.jwt.providers`: Specifies a list of default providers and claims used to verify requests to the gateway. - -The following examples configure a Gateway and the GatewayPolicy being attached to it so that every request coming through the listener must meet these conditions: - -- The request must be signed by the `local` provider -- The request must have a claim of `role` with a value of `user` unless the HTTPRoute attached to the listener overrides it - - - - - - -```yaml -apiVersion: gateway.networking.k8s.io/v1beta1 -kind: Gateway -metadata: - name: api-gateway -spec: - gatewayClassName: consul - listeners: - - protocol: HTTP - port: 30002 - name: listener-one -``` - - - - - - - - - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: GatewayPolicy -metadata: - name: gw-policy -spec: - targetRef: - name: api-gateway - sectionName: listener-one - group: gateway.networking.k8s.io/v1beta1 - kind: Gateway - override: - jwt: - providers: - - name: "local" - default: - jwt: - providers: - - name: "local" - verifyClaims: - - path: - - role - value: user -``` - - - - - - -For more information about the fields you can configure, refer to [`GatewayPolicy` configuration reference](/consul/docs/connect/gateways/api-gateway/configuration/gatewaypolicy). - -## Define a RouteAuthFilter - -Create an `RouteAuthFilter` CRD that defines overrides for the default JWT verification configured in the GatewayPolicy. - -- `kind`: Must be set to `RouteAuthFilter` -- `metadata.name`: Specifies a name for the filter. -- `metadata.namespace`: Specifies the Consul namespace the filter applies to. -- `spec.jwt.providers`: Specifies a list of providers and claims used to verify requests to the gateway. The override settings take precedence over the default and route-specific JWT verification settings. - -In the following example, the RouteAuthFilter overrides default settings set in the GatewayPolicy so that every request coming through the listener must meet these conditions: - -- The request must be signed by the `local` provider -- The request must have a `role` claim -- The value of the claim must be `admin` - - - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: RouteAuthFilter -metadata: - name: auth-filter -spec: - jwt: - providers: - - name: local - verifyClaims: - - path: - - role - value: admin -``` - - - -For more information about the fields you can configure, refer to [`RouteAuthFilter` configuration reference](/consul/docs/connect/gateways/api-gateway/configuration/routeauthfilter). - -## Attach the auth filter to your HTTP routes - -In the `filters` field of your HTTPRoute configuration, define the filter behavior that results from JWT verification. - -- `type: extensionRef`: Declare list of extension references. -- `extensionRef.group`: Specifies the resource group. Unless you have created a custom group, this should be set to `gateway.networking.k8s.io/v1beta1`. -- `extensionRef.kind`: Specifies the type of extension reference to attach to the route. 
Must be `RouteAuthFilter` -- `extensionRef.name`: Specifies the name of the auth filter. - -The following example configures an HTTPRoute so that every request to `api-gateway-fqdn:3002/admin` must meet these conditions: - -- The request be signed by the `local` provider. -- The request must have a `role` claim. -- The value of the claim must be `admin`. - -Every other request must be signed by the `local` provider and have a claim of `role` with a value of `user`, as defined in the GatewayPolicy. - - - -```yaml -apiVersion: gateway.networking.k8s.io/v1beta1 -kind: HTTPRoute -metadata: - name: http-route -spec: - parentRefs: - - name: api-gateway - rules: - - matches: - - path: - type: PathPrefix - value: /admin - filters: - - type: ExtensionRef - extensionRef: - group: consul.hashicorp.com - kind: RouteAuthFilter - name: auth-filter - backendRefs: - - kind: Service - name: admin - port: 8080 - - matches: - - path: - type: PathPrefix - value: / - backendRefs: - - kind: Service - name: user-service - port: 8081 -``` - - diff --git a/website/content/docs/connect/gateways/api-gateway/secure-traffic/verify-jwts-vms.mdx b/website/content/docs/connect/gateways/api-gateway/secure-traffic/verify-jwts-vms.mdx deleted file mode 100644 index fda579669fac..000000000000 --- a/website/content/docs/connect/gateways/api-gateway/secure-traffic/verify-jwts-vms.mdx +++ /dev/null @@ -1,184 +0,0 @@ ---- -layout: docs -page_title: Use JWTs to verify requests to API gateways on virtual machines -description: Learn how to use JSON web tokens (JWT) to verify requests from external clients to listeners on an API gateway. ---- - -# Use JWTs to verify requests to API gateways on virtual machines - -This topic describes how to use JSON web tokens (JWT) to verify requests to API gateways on virtual machines (VM). If your services are deployed to Kubernetes-orchestrated containers, refer to [Use JWTs to verify requests to API gateways on Kubernetes](/consul/docs/connect/gateways/api-gateway/secure-traffic/verify-jwts-k8s). - - This feature is available in Consul Enterprise. - -## Overview - -You can configure API gateways to use JWTs to verify incoming requests so that you can stop unverified traffic at the gateway. You can configure JWT verification at different levels: - -- Listener defaults: Define basic defaults that apply to all routes attached to a listener. -- HTTP route-specific settings: You can define JWT authentication settings for specific HTTP routes. Route-specific JWT settings override default configurations. -- Listener overrides: Define override settings that take precedence over default and route-specific configurations. This enables you to set enforceable policies for listeners. - -Complete the following steps to use JWTs to verify requests: - -1. Define a JWTProvider that specifies the JWT provider and claims used to verify requests to the gateway. -1. Configure default and override settings for listeners in the API gateway configuration entry. -1. Define route-specific JWT verification settings as filters in the HTTP route configuration entries. -1. Write the configuration entries to Consul to begin verifying requests using JWTs. - -## Requirements - -- Consul 1.17 or later -- JWT details, such as claims and provider - -## Define a JWTProvider - -Create a JWTProvider config entry that defines the JWT provider to verify claims against. -In the following example, the JWTProvider CRD contains a local JWKS. In production environments, use a production-grade JWKs endpoint instead. 
- - - -```hcl -Kind = "jwt-provider" -Name = "local" - -Issuer = "local" - -JSONWebKeySet = { - Local = { - JWKS="" - } -} -``` - - - -For more information about the fields you can configure in this CRD, refer to [`JWTProvider` configuration reference](/consul/docs/connect/config-entries/jwtprovider). - -## Configure default and override settings - -Define default and override settings for JWT verification in the [API gateway configuration entry](/consul/docs/connect/gateways/api-gateway/configuration/api-gateway). - -1. Add a `default.JWT` block to the listener that you want to apply JWT verification to. Consul applies these configurations to routes attached to the listener. Refer to the [`Listeners.default.JWT`](/consul/docs/connect/config-entries/api-gateway#listeners-default-jwt) configuration reference for details. -1. Add an `override.JWT` block to the listener that you want to apply JWT verification policies to. Consul applies these configurations to all routes attached to the listener, regardless of the `default` or route-specific settings. Refer to the [`Listeners.override.JWT`](/consul/docs/connect/config-entries/api-gateway#listeners-override-jwt) configuration reference for details. -1. Apply the settings in the API gateway configuration entry. You can use the [`/config` API endpoint](/consul/api-docs/config#apply-configuration) or the [`consul config write` command](/consul/commands/config/write). - -The following examples configure a Gateway so that every request coming through the listener must meet these conditions: -- The request must be signed by the `local` provider -- The request must have a claim of `role` with a value of `user` unless the HTTPRoute attached to the listener overrides it - - - -```hcl -Kind = "api-gateway" -Name = "api-gateway" -Listeners = [ - { - Name = "listener-one" - Port = 9001 - Protocol = "http" - Override = { - JWT = { - Providers = [ - { - Name = "local" - } - ] - } - } - default = { - JWT = { - Providers = [ - { - Name = "local" - VerifyClaims = [ - { - Path = ["role"] - Value = "pet" - } - ] - } - ] - } - } - } -] -``` - - - -## Configure verification for specific HTTP routes - -Define filters to enable route-specific JWT verification settings in the [HTTP route configuration entry](/consul/docs/connect/config-entries/http-route). - -1. Add a `JWT` configuration to the `rules.filter` block. Route-specific configurations that overlap the [default settings ](/consul/docs/connect/config-entries/api-gateway#listeners-default-jwt) in the API gateway configuration entry take precedence. Configurations defined in the [listener override settings](/consul/docs/connect/config-entries/api-gateway#listeners-override-jwt) take the highest precedence. -1. Apply the settings in the API gateway configuration entry. You can use the [`/config` API endpoint](/consul/api-docs/config#apply-configuration) or the [`consul config write` command](/consul/commands/config/write). - -The following example configures an HTTPRoute so that every request to `api-gateway-fqdn:3002/admin` must meet these conditions: -- The request be signed by the `local` provider. -- The request must have a `role` claim. -- The value of the claim must be `admin`. - -Every other request must be signed by the `local` provider and have a claim of `role` with a value of `user`, as defined in the Gateway listener. 
- - - -```hcl -Kind = "http-route" -Name = "api-gateway-route" -Parents = [ - { - SectionName = "listener-one" - Name = "api-gateway" - Kind = "api-gateway" - }, -] -Rules = [ - { - Matches = [ - { - Path = { - Match = "prefix" - Value = "/admin" - } - } - ] - Filters = { - JWT = { - Providers = [ - { - Name = "local" - VerifyClaims = [ - { - Path = ["role"] - Value = "admin" - } - ] - } - ] - } - } - Services = [ - { - Name = "admin-service" - } - ] - }, - { - Matches = [ - { - Path = { - Match = "prefix" - Value = "/" - } - } - ] - Services = [ - { - Name = "user-service" - } - ] - }, -] -``` - - diff --git a/website/content/docs/connect/gateways/api-gateway/tech-specs.mdx b/website/content/docs/connect/gateways/api-gateway/tech-specs.mdx deleted file mode 100644 index 9a79f75ca122..000000000000 --- a/website/content/docs/connect/gateways/api-gateway/tech-specs.mdx +++ /dev/null @@ -1,154 +0,0 @@ ---- -layout: docs -page_title: API gateway for Kubernetes technical specifications -description: >- - Learn about the requirements for installing and using the Consul API gateway for Kubernetes, including required ports, component version minimums, Consul Enterprise limitations, and compatible k8s cloud environments. ---- - -# API gateway for Kubernetes technical specifications - -This topic describes the requirements and technical specifications associated with using Consul API gateway. - -## Datacenter requirements - -Your datacenter must meet the following requirements prior to configuring the Consul API gateway: - -- HashiCorp Consul Helm chart v1.2.0 and later - -## TCP port requirements - -The following table describes the TCP port requirements for each component of the API gateway. - -| Port | Description | Component | -| ---- | ----------- | --------- | -| 20000 | Kubernetes readiness probe | Gateway instance pod | -| Configurable | Port for scraping Prometheus metrics. Disabled by default. | Gateway controller pod | - -## OpenShift requirements - -You can deploy API gateways to Kubernetes clusters managed by Red Hat OpenShift, which is a security-conscious, opinionated wrapper for Kubernetes. To enable OpenShift support, add the following parameters to your Consul values file and apply the configuration: - -```yaml - openshift: - enabled: true - ``` - -Refer to the following topics for additional information: - -- [Install Consul on OpenShift clusters with Helm](/consul/docs/k8s/installation/install#install-consul-on-openshift-clusters) -- [Install Consul on OpenShift clusters with the `consul-k8s` CLI](/consul/docs/k8s/installation/install-cli#install-consul-on-openshift-clusters) - -### Security context constraints - -OpenShift requires a security context constraint (SCC) configuration, which restricts pods to specific groups. You can create a custom SCC or use one of the default constraints. Refer to the [OpenShift documentation](https://docs.openshift.com/container-platform/4.13/authentication/managing-security-context-constraints.html) for additional information. - -By default, the SCC is set to `restricted-v2` for the `managedGatewayClass` that Consul automatically creates. The `restricted-v2` SCC is one of OpenShifts default SCCs, but you can specify a different SCC in the `openshiftSCCName` parameter: - -```yaml -connectInject: - apiGateway: - managedGatewayClass: - openshiftSCCName: "restricted-v2" -``` - -### Privileged container ports - -Containers cannot use privileged ports when OpenShift is enabled. 
Privileged ports are 1 through 1023, and serving applications from that range is a security risk.
-
-To allow gateway listeners to use privileged port numbers, specify an integer value in the `mapPrivilegedContainerPorts` field of your Consul values configuration. Consul adds the value to listener port numbers that are set to a number in the privileged container range and maps the configured port number to the resulting container port so that traffic sent to the configured port number is correctly forwarded to the service.
-
-For example, if a gateway listener is configured to port `80` and the `mapPrivilegedContainerPorts` field is configured to `2000`, then the actual port number on the underlying container is `2080`.
-
-You can set the `mapPrivilegedContainerPorts` parameter in the following map in your Consul values file. The value `2000` below matches the preceding example:
-
-```yaml
-connectInject:
-  apiGateway:
-    managedGatewayClass:
-      mapPrivilegedContainerPorts: 2000
-```
-
-## Supported versions of the Kubernetes gateway API specification
-
-Refer to the [release notes](/consul/docs/release-notes) for your version of Consul.
-
-## Supported Kubernetes gateway specification features
-
-Consul API gateways for Kubernetes support a subset of the Kubernetes Gateway API specification. For a complete list of features, including the list of gateway and route statuses and an explanation of how they
-are used, refer to the [documentation in our GitHub repo](https://github.com/hashicorp/consul-api-gateway/blob/main/dev/docs/supported-features.md):
-
-### `GatewayClass`
-
-The `GatewayClass` resource describes a class of gateway configurations that is used as a template for creating `Gateway` resources. You can also specify custom API gateway configurations in a `GatewayClassConfig` CRD and attach them to the `GatewayClass` using the `parametersRef` field.
-
-You must specify the `"hashicorp.com/consul-api-gateway-controller"` controller so that Consul can manage gateways generated by the `GatewayClass`. Refer to the [Kubernetes `GatewayClass` documentation](https://gateway-api.sigs.k8s.io/v1alpha2/references/spec/#gateway.networking.k8s.io/v1alpha2.GatewayClass) for additional information.
-
-### `Gateway`
-
-The `Gateway` resource is the core API gateway component. Gateways have one or more listeners that can route `HTTP`, `HTTPS`, or `TCP` traffic. You can define header-based hostname matching for listeners, but SNI is not supported.
-
-You can apply filters to add, remove, and set header values on incoming requests. Gateways support the `terminate` TLS mode and `core/v1/Secret` TLS certificates. Extended option support includes TLS version and cipher constraints. Refer to [Kubernetes `Gateway` resource configuration reference](/consul/docs/connect/gateways/api-gateway/configuration/gateway) for more information.
-
-### `HTTPRoute`
-
-`HTTPRoute` configurations determine HTTP paths between listeners defined on the gateway and services in the mesh. You can specify weights to load balance traffic, as well as define rules for matching request paths, headers, queries, and methods to ensure that traffic is routed appropriately. You can apply filters to add, remove, and set header values on requests sent through the route.
-
-Routes support the following backend types:
-
-- `core/v1/Service` backend types when the route maps to a service registered with Consul.
-- `api-gateway.consul.hashicorp.com/v1alpha1/MeshService`.
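-
-The following sketch shows a minimal `HTTPRoute` that matches a path prefix and splits traffic between two backend `Service` resources by weight. The gateway name, service names, and ports are illustrative; substitute the resources in your own cluster.
-
-```yaml
-apiVersion: gateway.networking.k8s.io/v1beta1
-kind: HTTPRoute
-metadata:
-  name: example-route
-spec:
-  parentRefs:
-    - name: example-gateway
-  rules:
-    - matches:
-        - path:
-            type: PathPrefix
-            value: /api
-      backendRefs:
-        # Weights are relative: two-thirds of matching requests go to api-v1.
-        - kind: Service
-          name: api-v1
-          port: 8080
-          weight: 2
-        - kind: Service
-          name: api-v2
-          port: 8080
-          weight: 1
-```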
- -Refer to [Kubernetes `HTTPRoute` documentation](https://gateway-api.sigs.k8s.io/v1alpha2/references/spec/#gateway.networking.k8s.io/v1alpha2.HTTPRoute) for additional information. - -### `TCPRoute` - -`TCPRoute` configurations determine TCP paths between listeners defined on the gateway and services in the mesh. Routes support the following backend types: - -- `core/v1/Service` backend types when the route maps to service registered with Consul. -- `api-gateway.consul.hashicorp.com/v1alpha1/MeshService`. - -Refer to [Kubernetes `TCPRoute` documentation](https://gateway-api.sigs.k8s.io/v1alpha2/references/spec/#gateway.networking.k8s.io/v1alpha2.TCPRoute) for additional information. - -### `ReferenceGrant` - -`ReferenceGrant` resources allow resources to reference resources in other namespaces. They are required to allow references from a `Gateway` to a Kubernetes `core/v1/Secret` in a different namespace. Without a `ReferenceGrant`, `backendRefs` attached to the gateway may not be permitted. As a result, the `ReferenceGrant` sets a `ResolvedRefs` status to `False` with the reason `InvalidCertificateRef`, which prevents the gateway from becoming ready. - -`ReferenceGrant` resources are also required for references from an `HTTPRoute` or `TCPRoute` to a Kubernetes `core/v1/Service` in a different namespace. Without a `ReferenceGrant`, `backendRefs` attached to the route may not be permitted. As a result, Kubernetes sets a `ResolvedRefs` status to `False` with the reason `RefNotPermitted`, which causes the gateway listener to reject the route. - -If a route `backendRefs` becomes unpermitted, the entire route is removed from the gateway listener. A `backendRefs` can become unpermitted when you delete a `ReferenceGrant` or add a new unpermitted `backendRefs` to an existing route. - -Refer to the [Kubernetes `ReferenceGrant` documentation](https://gateway-api.sigs.k8s.io/v1alpha2/references/spec/#gateway.networking.k8s.io/v1alpha2.ReferenceGrant) for additional information. - -## Consul server deployments - -- Consul Enterprise and the community edition are both supported. -- Supported Consul Server deployment types: - - Self-Managed - - HCP Consul Dedicated - -### Consul feature support - -API gateways on Kubernetes support all Consul features, but you can only route traffic between multiple datacenters through peered connections. Refer to [Route Traffic to Peered Services](/consul/docs/connect/gateways/api-gateway/define-routes/route-to-peered-services) for additional information. WAN federation is not supported. - -## Deployment Environments - -Consul API gateway can be deployed in the following Kubernetes-based environments: - -- Standard Kubernetes environments -- AWS Elastic Kubernetes Service (EKS) -- Google Kubernetes Engine (GKE) -- Azure Kubernetes Service (AKS) - -## Resource allocations - -The following resources are allocated for each component of the API gateway. - -### Gateway controller pod - -- **CPU**: None. Either the namespace or cluster default is allocated, depending on the Kubernetes cluster configuration. -- **Memory**: None. Either the namespace or cluster default is allocated, depending on the Kubernetes cluster configuration. - -### Gateway instance pod - -- **CPU**: None. Either the namespace or cluster default is allocated, depending on the Kubernetes cluster configuration. -- **Memory**: None. Either the namespace or cluster default is allocated, depending on the Kubernetes cluster configuration. 
diff --git a/website/content/docs/connect/gateways/api-gateway/upgrades-k8s.mdx b/website/content/docs/connect/gateways/api-gateway/upgrades-k8s.mdx
deleted file mode 100644
index 5b514340268f..000000000000
--- a/website/content/docs/connect/gateways/api-gateway/upgrades-k8s.mdx
+++ /dev/null
@@ -1,745 +0,0 @@
----
-layout: docs
-page_title: Upgrade API Gateway for Kubernetes
-description: >-
-  Upgrade Consul API Gateway to use newly supported features. Learn about the requirements, procedures, and post-configuration changes involved in standard and specific version upgrades.
----
-
-# Upgrade API gateway for Kubernetes
-
-Since Consul v1.15, the Consul API gateway is a native feature within the Consul binary and is installed during the normal Consul installation process. Since Consul on Kubernetes v1.2 (Consul v1.16), the CRDs necessary for using the Consul API gateway for Kubernetes are also included. You can install Consul v1.16 using the Consul Helm chart v1.2 and later. Refer to [Install API gateway for Kubernetes](/consul/docs/connect/gateways/api-gateway/deploy/install-k8s) for additional information.
-
-## Introduction
-
-Because Consul API gateway releases as part of Consul, it no longer has an independent version number. Instead, the API gateway inherits the same version number as the Consul binary. Refer to the [release notes](/consul/docs/release-notes) for additional information.
-
-To begin using the native API gateway, complete one of the following upgrade paths:
-
-### Upgrade from Consul on Kubernetes v1.1.x
-
-1. Complete the instructions for [upgrading to the native Consul API gateway](#upgrade-to-native-consul-api-gateway).
-
-### Upgrade from v0.4.x - v0.5.x
-
-1. Complete the [standard upgrade instructions](#standard-upgrade)
-1. Complete the instructions for [upgrading to the native Consul API gateway](#upgrade-to-native-consul-api-gateway).
-
-### Upgrade from v0.3.x
-
-1. Complete the instructions for [upgrading to v0.4.0](#upgrade-to-v0-4-0)
-1. Complete the [standard upgrade instructions](#standard-upgrade)
-1. Complete the instructions for [upgrading to the native Consul API gateway](#upgrade-to-native-consul-api-gateway).
-
-### Upgrade from v0.2.x
-
-1. Complete the instructions for [upgrading to v0.3.0](#upgrade-to-v0-3-0)
-1. Complete the instructions for [upgrading to v0.4.0](#upgrade-to-v0-4-0)
-1. Complete the [standard upgrade instructions](#standard-upgrade)
-1. Complete the instructions for [upgrading to the native Consul API gateway](#upgrade-to-native-consul-api-gateway).
-
-### Upgrade from v0.1.x
-
-1. Complete the instructions for [upgrading to v0.2.0](#upgrade-to-v0-2-0)
-1. Complete the instructions for [upgrading to v0.3.0](#upgrade-to-v0-3-0)
-1. Complete the instructions for [upgrading to v0.4.0](#upgrade-to-v0-4-0)
-1. Complete the [standard upgrade instructions](#standard-upgrade)
-1. Complete the instructions for [upgrading to the native Consul API gateway](#upgrade-to-native-consul-api-gateway).
-
-## Upgrade to native Consul API gateway
-
-You must begin the upgrade procedure with the API gateway on Consul on Kubernetes v1.1 installed. If you are currently using a version of Consul on Kubernetes older than v1.1, complete the necessary stages of the upgrade path to v1.1 before you begin upgrading to the native API gateway. Refer to the [Introduction](#introduction) for an overview of the upgrade paths.
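-
-The following sketch shows what a pre-upgrade values file for this starting point typically looks like. The image tags match the examples later in this section; your installation may use different values.
-
-```yaml
-global:
-  image: hashicorp/consul:1.15
-  imageK8S: hashicorp/consul-k8s-control-plane:1.1
-apiGateway:
-  enabled: true
-  image: hashicorp/consul-api-gateway:0.5.4
-  managedGatewayClass:
-    enabled: true
-```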
- -### Consul-managed CRDs - -If you are able to tolerate downtime for your applications, you should delete previously installed CRDs and allow Consul to install and manage them for future updates. The amount of downtime depends on how quickly you are able to install the new version of Consul. If you are unable to tolerate any downtime, refer to [Self-managed CRDs](#self-managed-crds) for instructions on how to upgrade without downtime. - -1. Run the `kubectl delete` command and reference the `kustomize` directory to delete the existing CRDs. The following example deletes the CRDs that were installed with API gateway `v0.5.1`: - - ```shell-session - $ kubectl delete --kustomize="github.com/hashicorp/consul-api-gateway/config/crd?ref=v0.5.1" - ``` - -1. Issue the following command to use the API gateway packaged in Consul. Since Consul will not detected an external CRD, it will try to install the API gateway packaged with Consul. - - ```shell-session - $ consul-k8s install -config-file values.yaml - ``` - -1. Create `ServiceIntentions` allowing `Gateways` to communicate with any backend services that they route to. Refer to [Service intentions configuration entry reference](/consul/docs/connect/config-entries/service-intentions) for additional information. - -1. Change any existing `Gateways` to reference the new `GatewayClass` `consul`. Refer to [gatewayClass](/consul/docs/connect/gateways/api-gateway/configuration/gateway#gatewayclassname) for additional information. - -1. After updating all of your `gateway` configurations to use the new controller, you can remove the `apiGateway` block from the Helm chart and upgrade your Consul cluster. This completely removes the old gateway controller. - - - - ```diff - global: - image: hashicorp/consul:1.15 - imageK8S: hashicorp/consul-k8s-control-plane:1.1 - - apiGateway: - - enabled: true - - image: hashicorp/consul-api-gateway:0.5.4 - - managedGatewayClass: - - enabled: true - ``` - - - - ```shell-session - $ consul-k8s install -config-file values.yaml - ``` - -### Self-managed CRDs - - - - This upgrade method uses `connectInject.apiGateway.manageExternalCRDs`, which was introduced in Consul on Kubernetes v1.2. As a result, you must be on at least Consul on Kubernetes v1.2 for this upgrade method. - - - -If you are unable to tolerate any downtime, you can complete the following steps to upgrade to the native Consul API gateway. If you choose this upgrade option, you must continue to manually install the CRDs necessary for operating the API gateway. - -1. Create a Helm chart that installs the version of Consul API gateway that ships with Consul and disables externally-managed CRDs: - - - - ```yaml - global: - image: hashicorp/consul:1.16 - imageK8S: hashicorp/consul-k8s-control-plane:1.2 - connectInject: - apiGateway: - manageExternalCRDs: false - apiGateway: - enabled: true - image: hashicorp/consul-api-gateway:0.5.4 - managedGatewayClass: - enabled: true - ``` - - - - You must set `connectInject.apiGateway.manageExternalCRDs` to `false`. If you have external CRDs with legacy installation and you do not set this, you will get an error when you try to upgrade because Helm will try to install CRDs that already exist. - -1. Issue the following command to install the new version of API gateway and disables externally-managed CRDs: - - ```shell-session - $ consul-k8s install -config-file values.yaml - ``` - -1. Create `ServiceIntentions` allowing `Gateways` to communicate with any backend services that they route to. 
Refer to [Service intentions configuration entry reference](/consul/docs/connect/config-entries/service-intentions) for additional information. - -1. Change any existing `Gateways` to reference the new `GatewayClass` `consul`. Refer to [gatewayClass](/consul/docs/connect/gateways/api-gateway/configuration/gateway#gatewayclassname) for additional information. - -1. After updating all of your `gateway` configurations to use the new controller, you can remove the `apiGateway` block from the Helm chart and upgrade your Consul cluster. This completely removes the old gateway controller. - - - - ```diff - global: - image: hashicorp/consul:1.16 - imageK8S: hashicorp/consul-k8s-control-plane:1.2 - connectInject: - apiGateway: - manageExternalCRDs: false - - apiGateway: - - enabled: true - - image: hashicorp/consul-api-gateway:0.5.4 - - managedGatewayClass: - - enabled: true - ``` - - - - ```shell-session - $ consul-k8s install -config-file values.yaml - ``` - -## Upgrade to v0.4.0 - -Consul API Gateway v0.4.0 adds support for [Gateway API v0.5.0](https://github.com/kubernetes-sigs/gateway-api/releases/tag/v0.5.0) and the following resources: - -- The graduated v1beta1 `GatewayClass`, `Gateway` and `HTTPRoute` resources. - -- The [`ReferenceGrant`](https://gateway-api.sigs.k8s.io/v1alpha2/references/spec/#gateway.networking.k8s.io/v1alpha2.ReferenceGrant) resource, which replaces the identical [`ReferencePolicy`](https://gateway-api.sigs.k8s.io/v1alpha2/references/spec/#gateway.networking.k8s.io/v1alpha2.ReferencePolicy) resource. - -Consul API Gateway v0.4.0 is backward-compatible with existing `ReferencePolicy` resources, but we will remove support for `ReferencePolicy` resources in a future release. We recommend that you migrate to `ReferenceGrant` after upgrading. - -### Requirements - -Ensure that the following requirements are met prior to upgrading: - -- Consul API Gateway should be running version v0.3.0. - -### Procedure - -1. Complete the [standard upgrade](#standard-upgrade). - -1. After completing the upgrade, complete the [post-upgrade configuration changes](#v0.4.0-post-upgrade-configuration-changes). The post-upgrade procedure describes how to replace your `ReferencePolicy` resources with `ReferenceGrant` resources and how to upgrade your `GatewayClass`, `Gateway`, and `HTTPRoute` resources from v1alpha2 to v1beta1. - - - -### Post-upgrade configuration changes - -Complete the following steps after performing standard upgrade procedure. - -#### Requirements - -- Consul API Gateway should be running version v0.4.0. -- Consul Helm chart should be v0.47.0 or later. -- You should have the ability to run `kubectl` CLI commands. -- `kubectl` should be configured to point to the cluster containing the installation you are upgrading. -- You should have the following permissions for your Kubernetes cluster: - - `Gateway.read` - - `ReferenceGrant.create` (Added in Consul Helm chart v0.47.0) - - `ReferencePolicy.delete` - -#### Procedure - -1. Verify the current version of the `consul-api-gateway-controller` `Deployment`: - - ```shell-session - $ kubectl get deployment --namespace consul consul-api-gateway-controller --output=jsonpath="{@.spec.template.spec.containers[?(@.name=='api-gateway-controller')].image}" - ``` - - You should receive a response similar to the following: - - ```log hideClipboard - "hashicorp/consul-api-gateway:0.4.0" - ``` - - - -1. Issue the following command to get all `ReferencePolicy` resources across all namespaces. 
- - ```shell-session - $ kubectl get referencepolicy --all-namespaces - ``` -If you have any active `ReferencePolicy` resources, you will receive output similar to the response below. - - ```log hideClipboard - Warning: ReferencePolicy has been renamed to ReferenceGrant. ReferencePolicy will be removed in v0.6.0 in favor of the identical ReferenceGrant resource. - NAMESPACE NAME - default example-reference-policy - ``` - - If your output is empty, upgrade your `GatewayClass`, `Gateway` and `HTTPRoute` resources to v1beta1 as described in [step 7](#v1beta1-gatewayclass-gateway-httproute). - -1. For each `ReferencePolicy` in the source YAML files, change the `kind` field to `ReferenceGrant`. You can optionally update the `metadata.name` field or filename if they include the term "policy". In the following example, the `kind` and `metadata.name` fields and filename have been changed to reflect the new resource. Note that updating the `kind` field prevents you from using the `kubectl edit` command to edit the remote state directly. - - - - ```yaml - apiVersion: gateway.networking.k8s.io/v1alpha2 - kind: ReferenceGrant - metadata: - name: reference-grant - namespace: web-namespace - spec: - from: - - group: gateway.networking.k8s.io - kind: HTTPRoute - namespace: example-namespace - to: - - group: "" - kind: Service - name: web-backend - ``` - - - -1. For each file, apply the updated YAML to your cluster to create a new `ReferenceGrant` resource. - - ```shell-session - $ kubectl apply --filename - ``` - -1. Check to confirm that each new `ReferenceGrant` was created successfully. - - ```shell-session - $ kubectl get referencegrant --namespace - NAME - example-reference-grant - ``` - -1. Finally, delete each corresponding old `ReferencePolicy` resource. Because replacement `ReferenceGrant` resources have already been created, there should be no interruption in the availability of any referenced `Service` or `Secret`. - - ```shell-session - $ kubectl delete referencepolicy --namespace - Warning: ReferencePolicy has been renamed to ReferenceGrant. ReferencePolicy will be removed in v0.6.0 in favor of the identical ReferenceGrant resource. - referencepolicy.gateway.networking.k8s.io "example-reference-policy" deleted - ``` - - - -1. For each `GatewayClass`, `Gateway`, and `HTTPRoute` in the source YAML, update the `apiVersion` field to `gateway.networking.k8s.io/v1beta1`. Note that updating the `apiVersion` field prevents you from using the `kubectl edit` command to edit the remote state directly. - - - - ```yaml - apiVersion: gateway.networking.k8s.io/v1beta1 - kind: Gateway - metadata: - name: example-gateway - namespace: gateway-namespace - spec: - ... - ``` - - - -1. For each file, apply the updated YAML to your cluster to update the existing `GatewayClass`, `Gateway` or `HTTPRoute` resources. - - ```shell-session - $ kubectl apply --filename - gateway.gateway.networking.k8s.io/example-gateway configured - ``` - - - -## Upgrade to v0.3.0 from v0.2.0 or lower - -Consul API Gateway v0.3.0 introduces a change for people upgrading from lower versions. Gateways with `listeners` with a `certificateRef` defined in a different namespace now require a [`ReferencePolicy`](https://gateway-api.sigs.k8s.io/v1alpha2/references/spec/#gateway.networking.k8s.io/v1alpha2.ReferencePolicy) that explicitly allows `Gateways` from the gateway's namespace to use `certificateRef` in the `certificateRef`'s namespace. 
- -### Requirements - -Ensure that the following requirements are met prior to upgrading: - -- Consul API Gateway should be running version v0.2.1 or lower. -- You should have the ability to run `kubectl` CLI commands. -- `kubectl` should be configured to point to the cluster containing the installation you are upgrading. -- You should have the following permission rights on your Kubernetes cluster: - - `Gateway.read` - - `ReferencePolicy.create` -- (Optional) The [jq](https://stedolan.github.io/jq/download/) command line processor for JSON can be installed, which will ease gateway retrieval during the upgrade process. - -### Procedure - - -1. Verify the current version of the `consul-api-gateway-controller` `Deployment`: - - ```shell-session - $ kubectl get deployment --namespace consul consul-api-gateway-controller --output=jsonpath="{@.spec.template.spec.containers[?(@.name=='api-gateway-controller')].image}" - ``` - - You should receive a response similar to the following: - - ```log hideClipboard - "hashicorp/consul-api-gateway:0.2.1" - ``` - -1. Retrieve all gateways that have a `certificateRefs` in a different namespace. If you have installed the [`jq`](https://stedolan.github.io/jq/) utility, you can skip to [step 4](#jq-command-secrets). Otherwise, issue the following command to get all `Gateways` across all namespaces: - - ```shell-session - $ kubectl get Gateway --output json --all-namespaces - ``` - - If you have any active `Gateways`, you will receive output similar to the following response. The output has been truncated to show only relevant fields: - - ```yaml - apiVersion: gateway.networking.k8s.io/v1alpha2 - kind: Gateway - metadata: - name: example-gateway - namespace: gateway-namespace - spec: - gatewayClassName: "consul-api-gateway" - listeners: - - name: https - port: 443 - protocol: HTTPS - allowedRoutes: - namespaces: - from: All - tls: - certificateRefs: - - group: "" - kind: Secret - name: example-certificate - namespace: certificate-namespace - ``` - -1. Inspect the `certificateRefs` entries for each of the routes. - - If a `namespace` field is not defined in the `certificateRefs` or if the namespace matches the namespace of the parent `Gateway`, then no additional action is required for the `certificateRefs`. Otherwise, note the `namespace` field values for `certificateRefs` configurations with a `namespace` field that do not match the namespace of the parent `Gateway`. You must also note the `namespace` of the parent gateway. You will need these to create a `ReferencePolicy` that explicitly allows each cross-namespace certificateRefs-to-gateway pair. (see [step 5](#create-secret-reference-policy)). - - After completing this step, you will have a list of all secrets similar to the following: - - - - ```yaml hideClipboard - example-certificate: - - namespace: certificate-namespace - parentNamespace: gateway-namespace - ``` - - - - Proceed with the [standard-upgrade](#standard-upgrade) if your list is empty. - - - -1. If you have installed [`jq`](https://stedolan.github.io/jq/), issue the following command to get all `Gateways` and filter for secrets that require a `ReferencePolicy`. 
- - ```shell-session - - $ kubectl get Gateway -o json -A | jq -r '.items[] | {gateway_name: .metadata.name, gateway_namespace: .metadata.namespace, kind: .kind, crossNamespaceSecrets: ( .metadata.namespace as $parentnamespace | .spec.listeners[] | select(has("tls")) | .tls.certificateRefs[] | select(.namespace != null and .namespace != $parentnamespace ) )} ' - - ``` - - The output will resemble the following response if gateways that require a new `ReferencePolicy` are returned: - - - - ```log hideClipboard - { - "gateway_name": "example-gateway", - "gateway_namespace": "gateway-namespace", - "kind": "Gateway", - "crossNamespaceSecrets": { - "group": "", - "kind": "Secret", - "name": "example-certificate", - "namespace": "certificate-namespace" - } - } - ``` - - - - If your output is empty, proceed with the [standard-upgrade](#standard-upgrade). - - -1. Using the list of secrets you created earlier as a guide, create a [`ReferencePolicy`](https://gateway-api.sigs.k8s.io/v1alpha2/references/spec/#gateway.networking.k8s.io/v1alpha2.ReferencePolicy) to allow each gateway cross namespace secret access. - The `ReferencePolicy` explicitly allows each cross-namespace gateway to secret pair. The `ReferencePolicy` must be created in the same `namespace` as the `certificateRefs`. - - Skip to the next step if you've already created a `ReferencePolicy`. - - The following example `ReferencePolicy` enables `example-gateway` in `gateway-namespace` to utilize `certificateRefs` in the `certificate-namespace` namespace: - - - - ```yaml - apiVersion: gateway.networking.k8s.io/v1alpha2 - kind: ReferencePolicy - metadata: - name: reference-policy - namespace: certificate-namespace - spec: - from: - - group: gateway.networking.k8s.io - kind: Gateway - namespace: gateway-namespace - to: - - group: "" - kind: Secret - ``` - - - -1. If you have already created a `ReferencePolicy`, modify it to allow your gateway to access your `certificateRef` and save it as `referencepolicy.yaml`. Note that each `ReferencePolicy` only supports one `to` field and one `from` field (refer the [`ReferencePolicy`](https://gateway-api.sigs.k8s.io/v1alpha2/api-types/referencegrant/#api-design-decisions) documentation). As a result, you may need to create multiple `ReferencePolicy`s. - -1. Issue the following command to apply it to your cluster: - - ```shell-session - $ kubectl apply --filename referencepolicy.yaml - ``` - - Repeat this step as needed until each of your cross-namespace `certificateRefs` have a corresponding `ReferencePolicy`. - - Proceed with the [standard-upgrade](#standard-upgrade). - -## Upgrade to v0.2.0 - -Consul API Gateway v0.2.0 introduces a change for people upgrading from Consul API Gateway v0.1.0. Routes with a `backendRef` defined in a different namespace now require a [`ReferencePolicy`](https://gateway-api.sigs.k8s.io/v1alpha2/references/spec/#gateway.networking.k8s.io/v1alpha2.ReferencePolicy) that explicitly allows traffic from the route's namespace to the `backendRef`'s namespace. - -### Requirements - -Ensure that the following requirements are met prior to upgrading: - -- Consul API Gateway should be running version v0.1.0. -- You should have the ability to run `kubectl` CLI commands. -- `kubectl` should be configured to point to the cluster containing the installation you are upgrading. 
-- You should have the following permission rights on your Kubernetes cluster: - - `HTTPRoute.read` - - `TCPRoute.read` - - `ReferencePolicy.create` -- (Optional) The [jq](https://stedolan.github.io/jq/download/) command line processor for JSON can be installed, which will ease route retrieval during the upgrade process. - -### Procedure - -1. Verify the current version of the `consul-api-gateway-controller` `Deployment`: - - ```shell-session - $ kubectl get deployment --namespace consul consul-api-gateway-controller --output=jsonpath= "{@.spec.template.spec.containers[?(@.name=='api-gateway-controller')].image}" - ``` - - You should receive the following response: - - ```log hideClipboard - "hashicorp/consul-api-gateway:0.1.0" - ``` - -1. Retrieve all routes that have a backend in a different namespace. If you have installed the [`jq`](https://stedolan.github.io/jq/) utility, you can skip to [step 4](#jq-command). Otherwise, issue the following command to get all `HTTPRoutes` and `TCPRoutes` across all namespaces: - - ```shell-session - $ kubectl get HTTPRoute,TCPRoute --output json --all-namespaces - ``` - - Note that the command only retrieves `HTTPRoutes` and `TCPRoutes`. `TLSRoutes` and `UDPRoutes` are not supported in v0.1.0. - - If you have any active `HTTPRoutes` or `TCPRoutes`, you will receive output similar to the following response. The output has been truncated to show only relevant fields: - - ```yaml - apiVersion: v1 - items: - - apiVersion: gateway.networking.k8s.io/v1alpha2 - kind: HTTPRoute - metadata: - name: example-http-route, - namespace: example-namespace, - ... - spec: - parentRefs: - - group: gateway.networking.k8s.io - kind: Gateway - name: gateway - namespace: gw-ns - rules: - - backendRefs: - - group: "" - kind: Service - name: web-backend - namespace: gateway-namespace - ... - ... - - apiVersion: gateway.networking.k8s.io/v1alpha2 - kind: TCPRoute - metadata: - name: example-tcp-route, - namespace: a-different-namespace, - ... - spec: - parentRefs: - - group: gateway.networking.k8s.io - kind: Gateway - name: gateway - namespace: gateway-namespace - rules: - - backendRefs: - - group: "" - kind: Service - name: web-backend - namespace: gateway-namespace - ... - ... - ``` - -1. Inspect the `backendRefs` entries for each of the routes. - - If a `namespace` field is not defined in the `backendRef` or if the namespace matches the namespace of the route, then no additional action is required for the `backendRef`. Otherwise, note the `group`, `kind`, `name`, and `namespace` field values for `backendRef` configurations that have a `namespace` defined that do not match the namespace of the parent route. You must also note the `kind` and `namespace` of the parent route. You will need these to create a `ReferencePolicy` that explicitly allows each cross-namespace route-to-service pair (see [step 5](#create-reference-policy)). - - After completing this step, you will have a list of all routes similar to the following: - - - - ```yaml hideClipboard - example-http-route: - - namespace: example-namespace - kind: HTTPRoute - backendReferences: - - group : "" - kind: Service - name: web-backend - namespace: gateway-namespace - - example-tcp-route: - - namespace: a-different-namespace - kind: HTTPRoute - backendReferences: - - group : "" - kind: Service - name: web-backend - namespace: gateway-namespace - ``` - - - - Proceed with [standard-upgrade](#standard-upgrade) if your list is empty. - - -1. 
If you have installed [`jq`](https://stedolan.github.io/jq/), issue the following command to get all `HTTPRoutes` and `TCPRoutes` and filter for routes that require a `ReferencePolicy`. - - ```shell-session - $ kubectl get HTTPRoute,TCPRoute -o json -A | jq -r '.items[] | {name: .metadata.name, namespace: .metadata.namespace, kind: .kind, crossNamespaceBackendReferences: ( .metadata.namespace as $parentnamespace | .spec.rules[] .backendRefs[] | select(.namespace != null and .namespace != $parentnamespace ) )} ' - ``` - - Note that the command retrieves all `HTTPRoutes` and `TCPRoutes`. `TLSRoutes` and `UDPRoutes` are not supported in v0.1.0. - - The output will resemble the following response if routes that require a new `ReferencePolicy` are returned: - - - - ```log hideClipboard - { - "name": "example-http-route", - "namespace": "example-namespace", - "kind": "HTTPRoute", - "crossNamespaceBackendReferences": { - "group": "", - "kind": "Service", - "name": "web-backend", - "namespace": "gateway-namespace", - "port": 8080, - "weight": 1 - } - } - { - "name": "example-tcp-route", - "namespace": "a-different-namespace", - "kind": "TCPRoute", - "crossNamespaceBackendReferences": { - "group": "", - "kind": "Service", - "name": "web-backend", - "namespace": "gateway-namespace", - "port": 8080, - "weight": 1 - } - } - ``` - - - - If your output is empty, proceed with the [standard-upgrade](#standard-upgrade). - - -1. Using the list of routes you created earlier as a guide, create a [`ReferencePolicy`](https://gateway-api.sigs.k8s.io/v1alpha2/references/spec/#gateway.networking.k8s.io/v1alpha2.ReferencePolicy) to allow cross namespace traffic for each route service pair. - The `ReferencePolicy` explicitly allows each cross-namespace route to service pair. The `ReferencePolicy` must be created in the same `namespace` as the backend `Service`. - - Skip to the next step if you've already created a `ReferencePolicy`. - - The following example `ReferencePolicy` enables `HTTPRoute` traffic from the `example-namespace` to Kubernetes Services in the `web-backend` namespace: - - - - ```yaml - apiVersion: gateway.networking.k8s.io/v1alpha2 - kind: ReferencePolicy - metadata: - name: reference-policy - namespace: gateway-namespace - spec: - from: - - group: gateway.networking.k8s.io - kind: HTTPRoute - namespace: example-namespace - to: - - group: "" - kind: Service - name: web-backend - ``` - - - -1. If you have already created a `ReferencePolicy`, modify it to allow your route and save it as `referencepolicy.yaml`. Note that each `ReferencePolicy` only supports one `to` field and one `from` field (refer the [`ReferencePolicy`](https://gateway-api.sigs.k8s.io/api-types/referencegrant/#api-design-decisions) documentation). As a result, you may need to create multiple `ReferencePolicy`s. - -2. Issue the following command to apply it to your cluster: - - ```shell-session - $ kubectl apply --filename referencepolicy.yaml - ``` - - Repeat this step as needed until each of your cross-namespace routes have a corresponding `ReferencePolicy`. - - Proceed with the [standard-upgrade](#standard-upgrade). - - -## Standard Upgrade - -~> **Note:** When you see `VERSION` in examples of commands or configuration settings, replace `VERSION` with the version number of the release you are installing, like `0.2.0`. 
If there is a lower case "v" in front of `VERSION` the version number needs to follow the "v" as is `v0.2.0` - -### Requirements - -Ensure that the following requirements are met prior to upgrading: - -- You should have the ability to run `kubectl` CLI commands. -- `kubectl` should be configured to point to the cluster containing the installation you are upgrading. - - -### Procedure - -This is the upgrade path to use when there are no version specific steps to take. - - - -1. Issue the following command to install the new version of CRDs into your cluster: - - ``` shell-session - $ kubectl apply --kustomize="github.com/hashicorp/consul-api-gateway/config/crd?ref=vVERSION" - ``` - -1. Update `apiGateway.image` in `values.yaml`: - - - - ```yaml - ... - apiGateway: - image: hashicorp/consul-api-gateway:VERSION - ... - ``` - - - -1. Issue the following command to upgrade your Consul installation: - - ```shell-session - $ helm upgrade --values values.yaml --namespace consul --version hashicorp/consul - ``` - - Note that the upgrade will cause the Consul API Gateway controller shut down and restart with the new version. - -1. According to the Kubernetes Gateway API specification, [Gateway Class](https://gateway-api.sigs.k8s.io/v1alpha2/references/spec/#gateway.networking.k8s.io%2fv1alpha2.GatewayClass) configurations should only be applied to a gateway upon creation. To see the effects on preexisting gateways after upgrading your CRD installation, delete and recreate any gateways by issuing the following commands: - - ```shell-session - $ kubectl delete --filename - $ kubectl create --filename - ``` - - -1. (Optional) Delete and recreate your routes. Note that it may take several minutes for attached routes to reconcile and start reporting bind errors. - - ```shell-session - $ kubectl delete --filename - $ kubectl create --filename - ``` - -### Post-Upgrade Configuration Changes - -No additional configuration changes are required for this upgrade. diff --git a/website/content/docs/connect/gateways/index.mdx b/website/content/docs/connect/gateways/index.mdx deleted file mode 100644 index 067d3b277672..000000000000 --- a/website/content/docs/connect/gateways/index.mdx +++ /dev/null @@ -1,101 +0,0 @@ ---- -layout: docs -page_title: Gateways Overview -description: >- - Gateways are proxies that direct traffic into, out of, and inside of Consul's service mesh. They secure communication with external or non-mesh network resources and enable services on different runtimes, cloud providers, or with overlapping IP addresses to communicate with each other. ---- - -# Gateways Overview - -This topic provides an overview of the gateway features shipped with Consul. Gateways provide connectivity into, out of, and between Consul service meshes. You can configure the following types of gateways: - -- [Mesh gateways](#mesh-gateways) enable service-to-service traffic between Consul datacenters or between Consul admin partitions. They also enable datacenters to be federated across wide area networks. -- [Ingress gateways](#ingress-gateways) enable connectivity within your organizational network from services outside the Consul service mesh to services in the mesh. -- [Terminating gateways](#terminating-gateways) enable connectivity within your organizational network from services in the Consul service mesh to services outside the mesh. 
-
-[![Gateway Architecture](/img/consul-connect/svgs/consul_gateway_overview.svg)](/img/consul-connect/svgs/consul_gateway_overview.svg)
-
-## Mesh Gateways
-
-Mesh gateways enable service mesh traffic to be routed between different Consul datacenters and admin partitions. The datacenters or partitions can reside
-in different clouds or runtime environments where general interconnectivity between all services in all datacenters
-isn't feasible.
-
-They operate by sniffing and extracting the server name indication (SNI) header from the service mesh session and routing the connection to the appropriate destination based on the server name requested.
-
-Mesh gateways enable the following scenarios:
-
-- **Federate multiple datacenters across a WAN**. Since Consul 1.8.0, mesh gateways can forward gossip and RPC traffic between Consul servers. See [WAN federation via mesh gateways](/consul/docs/connect/gateways/mesh-gateway/wan-federation-via-mesh-gateways) for additional information.
-- **Service-to-service communication across WAN-federated datacenters**. Refer to [Enabling Service-to-service Traffic Across Datacenters](/consul/docs/connect/gateways/mesh-gateway/service-to-service-traffic-wan-datacenters) for additional information.
-- **Service-to-service communication across admin partitions**. Since Consul 1.11.0, you can create administrative boundaries for single Consul deployments called "admin partitions". You can use mesh gateways to facilitate cross-partition communication. Refer to [Enabling Service-to-service Traffic Across Admin Partitions](/consul/docs/connect/gateways/mesh-gateway/service-to-service-traffic-partitions) for additional information.
-- **Bridge multiple datacenters using Cluster Peering**. Since Consul 1.14.0, mesh gateways can be used to route peering control-plane traffic between peered Consul servers. See [Mesh Gateways for Peering Control Plane Traffic](/consul/docs/connect/gateways/mesh-gateway/peering-via-mesh-gateways) for more information.
-- **Service-to-service communication across peered datacenters**. Refer to [Establish cluster peering connections](/consul/docs/connect/cluster-peering/usage/establish-cluster-peering) for more information.
-
--> **Mesh gateway tutorial**: Follow the [mesh gateway tutorial](/consul/tutorials/developer-mesh/service-mesh-gateways) to learn concepts associated with mesh gateways.
-
-## API Gateways
-
-API gateways enable network access from outside a service mesh to services running in a Consul service mesh. The
-systems accessing the services in the mesh may be within your organizational network or external to it. This type of
-network traffic is commonly called _north-south_ network traffic because it refers to the flow of data into and out of
-a specific environment.
-
-API gateways solve the following primary use cases:
-
-- **Control access at the point of entry**: Set the protocols of external connection
-  requests and secure inbound connections with TLS certificates from trusted
-  providers, such as Verisign and Let's Encrypt.
-- **Simplify traffic management**: Load balance requests across services and route
-  traffic to the appropriate service by matching one or more criteria, such as
-  hostname, path, header presence or value, and HTTP method.
- -Refer to the following documentation for information on how to configure and deploy API gateways: -- [API Gateways on VMs](/consul/docs/connect/gateways/api-gateway/deploy/listeners-vms) -- [API Gateways for Kubernetes](/consul/docs/connect/gateways/api-gateway/deploy/listeners-k8s). - - -## Ingress Gateways - - - -Ingress gateway is deprecated and will not be enhanced beyond its current capabilities. Ingress gateway is fully supported -in this version but will be removed in a future release of Consul. - -Consul's API gateway is the recommended alternative to ingress gateway. - - - -Ingress gateways enable connectivity within your organizational network from services outside the Consul service mesh -to services in the mesh. To accept ingress traffic from the public internet, use Consul's -[API Gateway](https://www.hashicorp.com/blog/announcing-hashicorp-consul-api-gateway) instead. - -These gateways allow you to define what services should be exposed, on what port, and by what hostname. You configure -an ingress gateway by defining a set of listeners that can map to different sets of backing services. - -Ingress gateways are tightly integrated with Consul's L7 configuration and enable dynamic routing of HTTP requests by -attributes like the request path. - -For more information about ingress gateways, review the [complete documentation](/consul/docs/connect/gateways/ingress-gateway) -and the [ingress gateway tutorial](/consul/tutorials/developer-mesh/service-mesh-ingress-gateways). - -![Ingress Gateway Architecture](/img/ingress-gateways.png) - -## Terminating Gateways - -Terminating gateways enable connectivity within your organizational network from services in the Consul service mesh -to services outside the mesh. -Services outside the mesh do not have sidecar proxies or are not [integrated natively](/consul/docs/connect/native). -These may be services running on legacy infrastructure or managed cloud services running on -infrastructure you do not control. - -Terminating gateways effectively act as egress proxies that can represent one or more services. They terminate service mesh -mTLS connections, enforce Consul intentions, and forward requests to the appropriate destination. - -These gateways also simplify authorization from dynamic service addresses. Consul's intentions determine whether -connections through the gateway are authorized. Then traditional tools like firewalls or IAM roles can authorize the -connections from the known gateway nodes to the destination services. - -For more information about terminating gateways, review the [complete documentation](/consul/docs/connect/gateways/terminating-gateway) -and the [terminating gateway tutorial](/consul/tutorials/developer-mesh/terminating-gateways-connect-external-services). - -![Terminating Gateway Architecture](/img/terminating-gateways.png) diff --git a/website/content/docs/connect/gateways/ingress-gateway/index.mdx b/website/content/docs/connect/gateways/ingress-gateway/index.mdx deleted file mode 100644 index 3f0b4ea836f9..000000000000 --- a/website/content/docs/connect/gateways/ingress-gateway/index.mdx +++ /dev/null @@ -1,35 +0,0 @@ ---- -layout: docs -page_title: Ingress gateway overview -description: >- - Ingress gateways enable you to connect external services to services in your mesh. Ingress gateways are a type of proxy that listens for requests from external network locations and route authorized traffic to destinations in the service mesh. 
----
-
-# Ingress gateways overview
-
-An ingress gateway is a type of proxy that enables network connectivity from external services to services inside the mesh. The following diagram describes the ingress gateway workflow:
-
-![Ingress Gateway Architecture](/img/ingress-gateways.png)
-
-
-
-Ingress gateway is deprecated and will not be enhanced beyond its current capabilities. Ingress gateway is fully supported
-in this version but will be removed in a future release of Consul.
-
-Consul's API gateway is the recommended alternative to ingress gateway.
-
-
-
-## Workflow
-
-The following stages describe how to add an ingress gateway to your service mesh:
-
-1. Configure ingress gateway listeners: Create an ingress gateway configuration entry and specify which services to expose to external requests. The configuration entry allows you to define what services should be exposed, on what port, and by what hostname. You can expose services registered with Consul or expose virtual services defined in other configuration entries. Refer to [Ingress gateway configuration entry reference](/consul/docs/connect/config-entries/ingress-gateway) for details on the configuration parameters you can specify.
-
-1. Define an ingress gateway proxy service: Ingress gateways are a special-purpose proxy service that you can define and register in a similar manner to other services. When you register the ingress gateway service, Consul applies the configurations defined in the ingress gateway configuration reference. Refer to [Implement an ingress gateway](/consul/docs/connect/gateways/ingress-gateway/usage) for additional information.
-
-1. Start the network proxy: The ingress gateway proxy service accepts configurations from the configuration entry and directs requests to the exposed services. When external traffic passes through the ingress gateway, your sidecar proxy handles the inbound and outbound connections between the exposed services and the gateway. Refer to [Service mesh proxy overview](/consul/docs/connect/proxies) to learn more about the proxies Consul supports.
-
-## Integrations with custom TLS management solutions
-
-You can configure the ingress gateway to retrieve and serve custom TLS certificates from external systems. This functionality is designed to help you integrate with custom TLS management software. Refer to [Serve custom TLS certificates from an external service](/consul/docs/connect/gateways/ingress-gateway/tls-external-service) for additional information.
\ No newline at end of file
diff --git a/website/content/docs/connect/gateways/ingress-gateway/tls-external-service.mdx b/website/content/docs/connect/gateways/ingress-gateway/tls-external-service.mdx
deleted file mode 100644
index d3d116761831..000000000000
--- a/website/content/docs/connect/gateways/ingress-gateway/tls-external-service.mdx
+++ /dev/null
@@ -1,253 +0,0 @@
----
-layout: docs
-page_title: Serve custom TLS certificates from an external service
-description: Learn how to configure ingress gateways to serve TLS certificates from an external service to inbound traffic using secret discovery service. The SDS feature is designed for developers building integrations with custom TLS management solutions.
----
-
-# Serve custom TLS certificates from an external service
-
-This is an advanced topic that describes how to configure ingress gateways to serve TLS certificates sourced from an external service to inbound traffic using secret discovery service (SDS).
SDS is a low-level feature designed for developers building integrations with custom TLS management solutions. For instructions on more common ingress gateway implementations, refer to [Implement an ingress gateway](/consul/docs/connect/gateways/ingress-gateway/usage).
-
-## Overview
-
-The following process describes the general procedure for configuring ingress gateways to serve TLS certificates sourced from external services:
-
-1. Configure static SDS clusters in the ingress gateway service definition.
-1. Register the service definition.
-1. Configure TLS client authentication.
-1. Start Envoy.
-1. Configure SDS settings in an ingress gateway configuration entry.
-1. Register the ingress gateway configuration entry with Consul.
-
-## Requirements
-
-- The external service must implement Envoy's [gRPC secret discovery service (SDS) API](https://www.envoyproxy.io/docs/envoy/latest/configuration/security/secret).
-- You should have some familiarity with Envoy configuration and the SDS protocol.
-- The [`connect.enabled` parameter](/consul/docs/agent/config/config-files#connect) must be set to `true` for all server agents in the Consul datacenter.
-- The [`ports.grpc` parameter](/consul/docs/agent/config/config-files#connect) must be configured for all server agents in the Consul datacenter.
-
-### ACL requirements
-
-If ACLs are enabled, you must present a token when registering ingress gateways that grants the following permissions:
-
-- `service:write` for the ingress gateway's service name
-- `service:read` for all services in the ingress gateway's configuration entry
-- `node:read` for all nodes of the services in the ingress gateway's configuration entry.
-
-These privileges authorize the token to route communications to other services in the mesh. If the Consul client agent on the gateway's node is not configured to use the default gRPC port, `8502`, then the gateway's token must also provide `agent:read` for its node's name in order to discover the agent's gRPC port. gRPC is used to expose Envoy's xDS API to Envoy proxies.
-
-## Configure static SDS clusters
-
-You must define one or more additional static clusters in the ingress gateway service definition for each Envoy proxy associated with the gateway. The additional clusters define how Envoy should connect to the required SDS services.
-
-Configure the static clusters in the [`Proxy.Config.envoy_extra_static_clusters_json`](/consul/docs/connect/proxies/envoy#envoy_extra_static_clusters_json) parameter in the service definition.
-
-The clusters must provide connection information and any necessary authentication information, such as mTLS credentials.
-
-You must manually register the ingress gateway service with Consul in order to define extra clusters in Envoy's bootstrap configuration. You cannot use the `-register` flag with `consul connect envoy -gateway=ingress` to automatically register the proxy when you define static clusters.
-
-In the following example, the `public-ingress` gateway includes a static cluster named `sds-cluster` that tells Envoy how to reach the SDS service. The cluster shown here is a minimal sketch: a complete definition also specifies paths to the SDS certificate and SDS certificate validation files that Envoy uses to authenticate to the SDS service:
-
-
-
-```hcl
-Services {
-  Name = "public-ingress"
-  Kind = "ingress-gateway"
-
-  Proxy {
-    Config {
-      # Illustrative cluster definition. Replace the address with your SDS
-      # server and add the TLS settings Envoy needs to authenticate to it.
-      # Newer Envoy versions may require typed_extension_protocol_options
-      # in place of http2_protocol_options.
-      envoy_extra_static_clusters_json = <<EOF
-{
-  "name": "sds-cluster",
-  "connect_timeout": "5s",
-  "type": "STRICT_DNS",
-  "http2_protocol_options": {},
-  "load_assignment": {
-    "cluster_name": "sds-cluster",
-    "endpoints": [
-      {
-        "lb_endpoints": [
-          {
-            "endpoint": {
-              "address": {
-                "socket_address": { "address": "sds-server.example.com", "port_value": 8080 }
-              }
-            }
-          }
-        ]
-      }
-    ]
-  }
-}
-EOF
-    }
-  }
-}
-```
- -## Register the ingress gateway service definition - -Issue the `consul services register` command on the Consul agent on the Envoy proxy's node to register the service. The following example command registers an ingress gateway proxy from a `public-ingress.hcl` file: - -```shell-session -$ consul services register public-ingress.hcl -``` - -Refer to [Register services and health checks](/consul/docs/services/usage/register-services-checks) for additional information about registering services in Consul. - -## Configure TLS client authentication - -Store TLS client authentication files, certificate files, and keys on disk where the Envoy proxy runs and ensure that they are available to Consul. Refer to the [Envoy documentation](https://www.envoyproxy.io/docs/envoy/latest/api-v3/bootstrap/bootstrap) for details on configuring authentication files. - -The following example specifies certificate chain: - - - - -```json -{ - "resources": [ - { - "@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.Secret", - "name": "tls_sds", - "tls_certificate": { - "certificate_chain": { - "filename": "/certs/sds-client-auth.crt" - }, - "private_key": { - "filename": "/certs/sds-client-auth.key" - } - } - } - ] -} -``` - - - -The following example specifies the validation context: - - - -```json -{ - "resources": [ - { - "@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.Secret", - "name": "validation_context_sds", - "validation_context": { - "trusted_ca": { - "filename": "/certs/sds-ca.crt" - } - } - } - ] -} -``` - - - -## Start Envoy - -Issue the `consul connect envoy` command to bootstrap Envoy. The following example starts Envoy and registers it as a service called `public-ingress`: - -```shell-session -$ ​​consul connect envoy -gateway=ingress -service public-ingress -``` - -Refer to [Consul Connect Envoy](/consul/commands/connect/envoy) for additional information about using the `consul connect envoy` command. - -## Define an ingress gateway configuration entry - -Create an ingress gateway configuration entry that enables the gateway to use certificates from SDS. The configuration entry also maps downstream ingress listeners to upstream services. Configure the following fields: - -- [`Kind`](/consul/docs/connect/config-entries/ingress-gateway#kind): Set the value to `ingress-gateway`. -- [`Name`](/consul/docs/connect/config-entries/ingress-gateway#name): Consul applies the configuration entry settings to ingress gateway proxies with names that match the `Name` field. -- [`TLS`](/consul/docs/connect/config-entries/ingress-gateway#tls): The main `TLS` parameter for the configuration entry holds the SDS configuration. You can also specify TLS configurations per listener and per service. - - [`TLS.SDS`](/consul/docs/connect/config-entries/ingress-gateway#tls-sds): The `SDS` map includes the following configuration settings: - - [`ClusterName`](/consul/docs/connect/config-entries/ingress-gateway#tls-sds-clustername): Specifies the name of the cluster you specified when [configuring the SDS cluster](#configure-static-SDS-clusters). - - [`CertResource`](/consul/docs/connect/config-entries/ingress-gateway#tls-sds-certresource): Specifies the name of the certificate resource to load. -- [`Listeners`](/consul/docs/connect/config-entries/ingress-gateway#listeners): Specify one or more listeners. - - [`Listeners.Port`](/consul/docs/connect/config-entries/ingress-gateway#listeners-port): Specify a port for the listener. 
Each listener is uniquely identified by its port number. - - [`Listeners.Protocol`](/consul/docs/connect/config-entries/ingress-gateway#listeners-protocol): The default protocol is `tcp`, but you must specify the protocol used by the services you want to allow traffic from. - - [`Listeners.Services`](/consul/docs/connect/config-entries/ingress-gateway#listeners-services): The `Services` field contains the services that you want to expose to upstream services. The field contains several options and sub-configurations that enable granular control over ingress traffic, such as health check and TLS configurations. - -For Consul Enterprise service meshes, you may also need to configure the [`Partition`](/consul/docs/connect/config-entries/ingress-gateway#partition) and [`Namespace`](/consul/docs/connect/config-entries/ingress-gateway#namespace) fields for the gateway and for each exposed service. - -Refer to [Ingress gateway configuration entry reference](/consul/docs/connect/config-entries/ingress-gateway) for details about the supported parameters. - -The following example directs Consul to retrieve `example.com-public-cert` certificates from an SDS cluster named `sds-cluster` and serve them to all listeners: - - - -```hcl -Kind = "ingress-gateway" -Name = "public-ingress" - -TLS { - SDS { - ClusterName = "sds-cluster" - CertResource = "example.com-public-cert" - } -} - -Listeners = [ - { - Port = 8443 - Protocol = "http" - Services = ["*"] - } -] -``` - - - -## Register the ingress gateway configuration entry - -You can register the configuration entry using the [`consul config` command](/consul/commands/config) or by calling the [`/config` API endpoint](/consul/api-docs/config). Refer to [How to Use Configuration Entries](/consul/docs/agent/config-entries) for details about applying configuration entries. - -The following example registers an ingress gateway configuration entry named `public-ingress-cfg.hcl` that is stored on the local system: - -```shell-session -$ consul config write public-ingress-cfg.hcl -``` - -The Envoy instance starts a listener on the port specified in the configuration entry and fetches the TLS certificate named from the SDS server. diff --git a/website/content/docs/connect/gateways/ingress-gateway/usage.mdx b/website/content/docs/connect/gateways/ingress-gateway/usage.mdx deleted file mode 100644 index 2b4c55e279a4..000000000000 --- a/website/content/docs/connect/gateways/ingress-gateway/usage.mdx +++ /dev/null @@ -1,127 +0,0 @@ ---- -layout: docs -page_title: Implement an ingress gateway -description: Learn how to implement ingress gateways, which are Consul service mesh constructs that listen for requests from external network locations and route authorized traffic to destinations in the service mesh. ---- - -# Implement an ingress gateway - -This topic describes how to add ingress gateways to your Consul service mesh. Ingress gateways enable connectivity within your organizational network by allowing services outside of the service mesh to send traffic to services in the mesh. Refer to [Ingress gateways overview](/consul/docs/connect/gateways/ingress-gateway/) for additional information about ingress gateways. - -This topic describes ingress gateway usage for virtual machine (VM) environments. Refer to [Configure ingress gateways for Consul on Kubernetes](/consul/docs/k8s/connect/ingress-gateways) for instructions on how to implement ingress gateways in Kubernetes environments. - -## Overview - -Ingress gateways are a type of proxy service included with Consul. 
Complete the following steps to set up an ingress gateway: - -1. Define listeners and the services they expose. Specify these details in an ingress gateway configuration entry. -1. Register an ingress gateway service. Define the services in a service definition file. -1. Start the ingress gateway. This step deploys the Envoy proxy that functions as the ingress gateway. - -After specifying listeners and services in the ingress gateway configuration entry, you can register the gateway service and start Envoy with a single CLI command instead of completing these steps separately. Refer [Register an ingress service on Envoy startup](#register-an-ingress-service-on-envoy-startup). - -## Requirements - -- Service mesh must be enabled for all agents. Set the [`connect.enabled` parameter](/consul/docs/agent/config/config-files#connect) to `true` to enable service mesh. -- The gRPC port must be configured for all server agents in the datacenter. Specify the gRPC port in the [`ports.grpc` parameter](/consul/docs/agent/config/config-files#grpc_port). We recommend setting the port to `8502` to simplify configuration when ACLs are enabled. Refer to [ACL requirements](#acl-requirements) for additional information. -- You must use Envoy for sidecar proxies in your service mesh. Refer to [Envoy Proxy Configuration for Service Mesh](/consul/docs/connect/proxies/envoy) for supported versions. - -### ACL requirements - -If ACLs are enabled, you must present a token when registering ingress gateways that grant the following permissions: - -`service:write` for the ingress gateway's service name -`service:read` for all services in the ingress gateway's configuration entry -`node:read` for all nodes of the services in the ingress gateway's configuration entry. - -These privileges authorize the token to route communications to other services in the mesh. If the Consul client agent on the gateway's node is not configured to use the default `8502` gRPC port, then the gateway's token must also provide `agent:read` for its node's name in order to discover the agent's gRPC port. gRPC is used to expose Envoy's xDS API to Envoy proxies. - -## Expose services - -Define and apply an ingress gateway configuration entry to specify which services in the mesh to expose to external services. - -### Define an ingress gateway configuration entry - -Ingress gateway configuration entries map downstream ingress listeners to upstream services. When you register an ingress gateway proxy that matches the configuration entry name, Consul applies the settings specified in the configuration entry. Configure the following fields: - -- [`Kind`](/consul/docs/connect/config-entries/ingress-gateway#kind): Set the value to `ingress-gateway`. -- [`Name`](/consul/docs/connect/config-entries/ingress-gateway#name): Consul applies the configuration entry settings to ingress gateway proxies with names that match the `Name` field. -- [`Listeners`](/consul/docs/connect/config-entries/ingress-gateway#listeners): Specify one or more listeners. - - [`Listeners.Port`](/consul/docs/connect/config-entries/ingress-gateway#listeners-port): Specify a port for the listener. Each listener is uniquely identified by its port number. - - [`Listeners.Protocol`](/consul/docs/connect/config-entries/ingress-gateway#listeners-protocol): The default protocol is `tcp`, but you must specify the protocol used by the services you want to allow traffic from. 
- - [`Listeners.Services`](/consul/docs/connect/config-entries/ingress-gateway#listeners-services): The `Services` field contains the services that you want to expose to upstream services. The field contains several options and sub-configurations that enable granular control over ingress traffic, such as health check and TLS configurations. - -For Consul Enterprise service meshes, you may also need to configure the [`Partition`](/consul/docs/connect/config-entries/ingress-gateway#partition) and [`Namespace`](/consul/docs/connect/config-entries/ingress-gateway#namespace) fields for the gateway and for each exposed service. - -Refer to [Ingress gateway configuration entry reference](/consul/docs/connect/config-entries/ingress-gateway) for details about the supported parameters. - -### Register an ingress gateway configuration entry - -You can register the configuration entry using the [`consul config` command](/consul/commands/config) or by calling the [`/config` API endpoint](/consul/api-docs/config). Refer to [How to Use Configuration Entries](/consul/docs/agent/config-entries) for details about applying configuration entries. - -The following example registers an ingress gateway configuration entry named `public-ingress.hcl` that is stored on the local system: - -```shell-session -$ consul config write public-ingress.hcl -``` - -## Deploy an ingress gateway service - -To deploy an ingress gateway service, create a service definition and register it with Consul. - -You can also define an ingress gateway service and register it with Consul while starting an Envoy proxy from the command line. Refer to [Register an ingress service on Envoy startup](#register-an-ingress-service-on-envoy-startup) for details. - -### Create a service definition for the ingress gateway - -Consul applies the settings defined in the ingress gateway configuration entry to ingress gateway services that match the configuration entry name. Refer to [Define services](/consul/docs/services/usage/define-services) for additional information about defining services in Consul. - -The following fields are required for the ingress gateway service definition: - -- [`Kind`](/consul/docs/services/configuration/services-configuration-reference#kind): The field must be set to `ingress-gateway`. -- [`Name`](/consul/docs/services/configuration/services-configuration-reference#name): The name should match the value specified for the `Name` field in the configuration entry. - -All other service definition fields are optional, but we recommend defining health checks to verify the health of the gateway. Refer to [Services configuration reference](/consul/docs/services/configuration/services-configuration-reference) for information about defining services. - -### Register the ingress gateway proxy service - -You can register the ingress gateway using API or CLI. Refer to [Register services and health checks](/consul/docs/services/usage/register-services-checks) for instructions on registering services in Consul. - -The following example registers an ingress gateway defined in `ingress-gateway.hcl` from the Consul CLI: - -```shell-session -$ consul services register ingress-service.hcl -``` - -## Start an Envoy proxy - -Run the `consul connect envoy` command to start Envoy. Specify the name of the ingress gateway service and include the `-gateway=ingress` flag. Refer to [Consul Connect Envoy](/consul/commands/connect/envoy) for details about using the command. 
- -The following example starts Envoy for the `ingress-service` gateway service: - -```shell-session -$ consul connect envoy -gateway=ingress ingress-service -``` - -### Register an ingress service on Envoy startup - -You can also automatically register the ingress gateway service when starting the Envoy proxy. Specify the following flags with the `consul connect envoy` command: - -- `-gateway=ingress` -- `-register` -- `-service=` - -The following example starts Envoy and registers an ingress gateway service named `ingress-service` bound to the agent address at port `8888`: - -```shell-session -$ consul connect envoy -gateway=ingress -register \ - -service ingress-service \ - -address '{{ GetInterfaceIP "eth0" }}:8888' -``` -You cannot register the ingress gateway service and start the proxy at the same time if you configure the gateway to retrieve and serve TLS certificates from their external downstreams. Refer to [Serve custom TLS certificates from an external service](/consul/docs/connect/gateways/ingress-gateway/tls-external-service) for more information. - -## Additional Envoy configurations - -Ingress gateways support additional Envoy gateway options and escape-hatch overrides. Specify gateway options in the ingress gateway service definition to use them. To use escape-hatch overrides, you must add them to your global proxy defaults configuration entry. Refer to the following documentation for additional information: - -- [Gateway options](/consul/docs/connect/proxies/envoy#gateway-options) -- [Escape-hatch overrides](/consul/docs/connect/proxies/envoy#escape-hatch-overrides) diff --git a/website/content/docs/connect/gateways/mesh-gateway/index.mdx b/website/content/docs/connect/gateways/mesh-gateway/index.mdx deleted file mode 100644 index efb1bc1066e3..000000000000 --- a/website/content/docs/connect/gateways/mesh-gateway/index.mdx +++ /dev/null @@ -1,309 +0,0 @@ ---- -layout: docs -page_title: Mesh Gateways -description: >- - Mesh gateways are specialized proxies that route data between services that cannot communicate directly. Learn how mesh gateways are used in different Consul configurations. ---- - -# Mesh Gateways - -Mesh gateways enable service mesh traffic to be routed between different Consul datacenters. -Datacenters can reside in different clouds or runtime environments where general interconnectivity between all services in all datacenters isn't feasible. - -## Prerequisites - -Mesh gateways can be used with any of the following Consul configurations for managing separate datacenters or partitions. - -1. WAN Federation - * [Mesh gateways can be used to route service-to-service traffic between datacenters](/consul/docs/connect/gateways/mesh-gateway/service-to-service-traffic-wan-datacenters) - * [Mesh gateways can be used to route all WAN traffic, including from Consul servers](/consul/docs/connect/gateways/mesh-gateway/wan-federation-via-mesh-gateways) -2. Cluster Peering - * [Mesh gateways can be used to route service-to-service traffic between datacenters](/consul/docs/connect/cluster-peering/usage/establish-cluster-peering) - * [Mesh gateways can be used to route control-plane traffic from Consul servers](/consul/docs/connect/gateways/mesh-gateway/peering-via-mesh-gateways) -3. 
Admin Partitions - * [Mesh gateways can be used to route service-to-service traffic between admin partitions in the same Consul datacenter](/consul/docs/connect/gateways/mesh-gateway/service-to-service-traffic-partitions) - -### Consul - -Review the [specific guide](#prerequisites) for your use case to determine the required version of Consul. - -### Network - -* General network connectivity to all services within its local Consul datacenter. -* General network connectivity to all mesh gateways within remote Consul datacenters. - -### Proxy - -Envoy is the only proxy with mesh gateway capabilities in Consul. - -Mesh gateway proxies receive their configuration through Consul, which automatically generates it based on the proxy's registration. -Consul can only translate mesh gateway registration information into Envoy configuration. - -Sidecar proxies that send traffic to an upstream service through a gateway need to know the location of that gateway. They discover the gateway based on their sidecar proxy registrations. Consul can only translate the gateway registration information into Envoy configuration. - -Sidecar proxies that do not send upstream traffic through a gateway are not affected when you deploy gateways. If you are using Consul's built-in proxy as a Connect sidecar it will continue to work for intra-datacenter traffic and will receive incoming traffic even if that traffic has passed through a gateway. - -## Configuration - -Configure the following settings to register the mesh gateway as a service in Consul. - -* Specify `mesh-gateway` in the `kind` field to register the gateway with Consul. -* Configure the `proxy.upstreams` parameters to route traffic to the correct service, namespace, and datacenter. Refer to the [`upstreams` documentation](/consul/docs/connect/proxies/proxy-config-reference#upstream-configuration-reference) for details. The service `proxy.upstreams.destination_name` is always required. The `proxy.upstreams.datacenter` must be configured to enable cross-datacenter traffic. The `proxy.upstreams.destination_namespace` configuration is only necessary if the destination service is in a different namespace. -* Define the `Proxy.Config` settings using opaque parameters compatible with your proxy (i.e., Envoy). For Envoy, refer to the [Gateway Options](/consul/docs/connect/proxies/envoy#gateway-options) and [Escape-hatch Overrides](/consul/docs/connect/proxies/envoy#escape-hatch-overrides) documentation for additional configuration information. -* If ACLs are enabled, a token granting `service:write` for the gateway's service name and `service:read` for all services in the datacenter or partition must be added to the gateway's service definition. These permissions authorize the token to route communications for other Consul service mesh services, but does not allow decrypting any of their communications. - -### Modes - -Each upstream associated with a service mesh proxy can be configured so that it is routed through a mesh gateway. -Depending on your network, the proxy's connection to the gateway can operate in one of the following modes: - -* `none` - No gateway is used and a service mesh sidecar proxy makes its outbound connections directly - to the destination services. This is the default for WAN federation. This setting is invalid for peered clusters - and will be treated as remote instead. - -* `local` - The service mesh sidecar proxy makes an outbound connection to a gateway running in the - same datacenter. 
That gateway is responsible for ensuring that the data is forwarded to gateways in the destination datacenter. - -* `remote` - The service mesh sidecar proxy makes an outbound connection to a gateway running in the destination datacenter. - The gateway forwards the data to the final destination service. This is the default for peered clusters. - -### Service Mesh Proxy Configuration - -Set the proxy to the preferred [mode](#modes) to configure the service mesh proxy. You can specify the mode globally or within child configurations to control proxy behaviors at a lower level. Consul recognizes the following order of precedence if the gateway mode is configured in multiple locations the order of precedence: - -1. Upstream definition (highest priority) -2. Service instance definition -3. Centralized `service-defaults` configuration entry -4. Centralized `proxy-defaults` configuration entry - -## Example Configurations - -Use the following example configurations to help you understand some of the common scenarios. - -### Enabling Gateways Globally - -The following `proxy-defaults` configuration will enable gateways for all mesh services in the `local` mode. - - - -```hcl -Kind = "proxy-defaults" -Name = "global" -MeshGateway { - Mode = "local" -} -``` - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: ProxyDefaults -metadata: - name: global -spec: - meshGateway: - mode: local -``` - -```json -{ - "Kind": "proxy-defaults", - "Name": "global", - "MeshGateway": { - "Mode": "local" - } -} -``` - - - -### Enabling Gateways Per Service - -The following `service-defaults` configuration will enable gateways for all mesh services with the name `web`. - - - -```hcl -Kind = "service-defaults" -Name = "web" -MeshGateway { - Mode = "local" -} -``` - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: ServiceDefaults -metadata: - name: web -spec: - meshGateway: - mode: local -``` - -```json -{ - "Kind": "service-defaults", - "Name": "web", - "MeshGateway": { - "Mode": "local" - } -} -``` - - - -### Enabling Gateways for a Service Instance - -The following [proxy service configuration](/consul/docs/connect/proxies/deploy-service-mesh-proxies) - enables gateways for the service instance in the `remote` mode. - - - -```hcl -service { - name = "web-sidecar-proxy" - kind = "connect-proxy" - port = 8181 - proxy { - destination_service_name = "web" - mesh_gateway { - mode = "remote" - } - upstreams = [ - { - destination_name = "api" - datacenter = "secondary" - local_bind_port = 10000 - } - ] - } -} - -# Or alternatively inline with the service definition: - -service { - name = "web" - port = 8181 - connect { - sidecar_service { - proxy { - mesh_gateway { - mode = "remote" - } - upstreams = [ - { - destination_name = "api" - datacenter = "secondary" - local_bind_port = 10000 - } - ] - } - } - } -} -``` - -```json -{ - "service": { - "kind": "connect-proxy", - "name": "web-sidecar-proxy", - "port": 8181, - "proxy": { - "destination_service_name": "web", - "mesh_gateway": { - "mode": "remote" - }, - "upstreams": [ - { - "destination_name": "api", - "datacenter": "secondary", - "local_bind_port": 10000 - } - ] - } - } -} -``` - - - -### Enabling Gateways for a Proxy Upstream - -The following service definition will enable gateways in the `local` mode for one upstream, the `remote` mode for a second upstream and will disable gateways for a third upstream. 
- - - -```hcl -service { - name = "web-sidecar-proxy" - kind = "connect-proxy" - port = 8181 - proxy { - destination_service_name = "web" - upstreams = [ - { - destination_name = "api" - destination_peer = "cluster-01" - local_bind_port = 10000 - mesh_gateway { - mode = "remote" - } - }, - { - destination_name = "db" - datacenter = "secondary" - local_bind_port = 10001 - mesh_gateway { - mode = "local" - } - }, - { - destination_name = "logging" - datacenter = "secondary" - local_bind_port = 10002 - mesh_gateway { - mode = "none" - } - }, - ] - } -} -``` -```json -{ - "service": { - "kind": "connect-proxy", - "name": "web-sidecar-proxy", - "port": 8181, - "proxy": { - "destination_service_name": "web", - "upstreams": [ - { - "destination_name": "api", - "local_bind_port": 10000, - "mesh_gateway": { - "mode": "remote" - } - }, - { - "destination_name": "db", - "local_bind_port": 10001, - "mesh_gateway": { - "mode": "local" - } - }, - { - "destination_name": "logging", - "local_bind_port": 10002, - "mesh_gateway": { - "mode": "none" - } - } - ] - } - } -} -``` - - diff --git a/website/content/docs/connect/gateways/mesh-gateway/peering-via-mesh-gateways.mdx b/website/content/docs/connect/gateways/mesh-gateway/peering-via-mesh-gateways.mdx deleted file mode 100644 index 97334950f62a..000000000000 --- a/website/content/docs/connect/gateways/mesh-gateway/peering-via-mesh-gateways.mdx +++ /dev/null @@ -1,139 +0,0 @@ ---- -layout: docs -page_title: Enabling Peering Control Plane Traffic -description: >- - Mesh gateways are specialized proxies that route data between services that cannot communicate directly. Learn how to enable traffic across clusters in different datacenters or admin partitions that have an established peering connection. ---- - -# Enabling Peering Control Plane Traffic - -This topic describes how to configure a mesh gateway to route control plane traffic between Consul clusters that share a peer connection. For information about routing service traffic between cluster peers through a mesh gateway, refer to [Enabling Service-to-service Traffic Across Admin Partitions](/consul/docs/connect/gateways/mesh-gateway/service-to-service-traffic-partitions). - -Control plane traffic between cluster peers includes -the initial secret handshake and the bi-directional stream replicating peering data. -This data is not decrypted by the mesh gateway(s). -Instead, it is transmitted end-to-end using the accepting cluster’s auto-generated TLS certificate on the gRPC TLS port. - - - - -[![Cluster peering with mesh gateways](/img/consul-connect/mesh-gateway/cluster-peering-connectivity-with-mesh-gateways.png)](/img/consul-connect/mesh-gateway/cluster-peering-connectivity-with-mesh-gateways.png) - - - - - -[![Cluster peering without mesh gateways](/img/consul-connect/mesh-gateway/cluster-peering-connectivity-without-mesh-gateways.png)](/img/consul-connect/mesh-gateway/cluster-peering-connectivity-without-mesh-gateways.png) - - - - -## Prerequisites - -To configure mesh gateways for cluster peering control plane traffic, make sure your Consul environment meets the following requirements: - -- Consul version 1.14.0 or newer. -- A local Consul agent in both clusters is required to manage mesh gateway configuration. -- Use [Envoy proxies](/consul/docs/connect/proxies/envoy). Envoy is the only proxy with mesh gateway capabilities in Consul. - -## Configuration - -Configure the following settings to register and use the mesh gateway as a service in Consul. 
- -### Gateway registration - -Register a mesh gateway in each of cluster that will be peered. - -- Specify `mesh-gateway` in the `kind` field to register the gateway with Consul. -- Define the `Proxy.Config` settings using opaque parameters compatible with your proxy. For Envoy, refer to the [Gateway Options](/consul/docs/connect/proxies/envoy#gateway-options) and [Escape-hatch Overrides](/consul/docs/connect/proxies/envoy#escape-hatch-overrides) documentation for additional configuration information. -- Apply a [Mesh config entry](/consul/docs/connect/config-entries/mesh#peer-through-mesh-gateways) with `PeerThroughMeshGateways = true`. See [modes](#modes) for a discussion of when to apply this. - -Alternatively, you can also use the CLI to spin up and register a gateway in Consul. For additional information, refer to the [`consul connect envoy` command](/consul/commands/connect/envoy#mesh-gateways). - -For Consul Enterprise clusters, mesh gateways must be registered in the "default" partition because this is implicitly where Consul servers are assigned. - -### ACL configuration - - - - -In addition to the [ACL Configuration](/consul/docs/connect/cluster-peering/tech-specs#acl-specifications) necessary for service-to-service traffic, mesh gateways that route peering control plane traffic must be granted `peering:read` access to all peerings. - -This access allows the mesh gateway to list all peerings in a Consul cluster and generate unique routing per peered datacenter. - - - -```hcl -peering = "read" -``` - -```json -{ - "peering": "read" -} -``` - - - - - - - -In addition to the [ACL Configuration](/consul/docs/connect/cluster-peering/tech-specs#acl-specifications) necessary for service-to-service traffic, mesh gateways that route peering control plane traffic must be granted `peering:read` access to all peerings in all partitions. - -This access allows the mesh gateway to list all peerings in a Consul cluster and generate unique routing per peered partition. - - - -```hcl -partition_prefix "" { - peering = "read" -} -``` - -```json -{ - "partition_prefix": { - "": { - "peering": "read" - } - } -} -``` - - - - - - -### Modes - -Connect proxy configuration [Modes](/consul/docs/connect/gateways/mesh-gateway#connect-proxy-configuration#modes) are not applicable to peering control plane traffic. -The flow of control plane traffic through the gateway is implied by the presence of a [Mesh config entry](/consul/docs/connect/config-entries/mesh#peer-through-mesh-gateways) with `PeerThroughMeshGateways = true`. - - - -```hcl -Kind = "mesh" -Peering { - PeerThroughMeshGateways = true -} -``` - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: Mesh -metadata: - name: mesh -spec: - peering: - peerThroughMeshGateways: true -``` - - -By setting this mesh config on a cluster before [creating a peering token](/consul/docs/connect/cluster-peering/usage/establish-cluster-peering#create-a-peering-token), inbound control plane traffic will be sent through the mesh gateway registered this cluster, also known the accepting cluster. -As mesh gateway instances are registered at the accepting cluster, their addresses will be exposed to the dialing cluster over the bi-directional peering stream. - -Setting this mesh config on a cluster before [establishing a connection](/consul/docs/connect/cluster-peering/usage/establish-cluster-peering#establish-a-connection-between-clusters) will cause the outbound control plane traffic to flow through the mesh gateway. 
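If the mesh configuration entry shown above is saved to a file, you can apply it with the Consul CLI on each cluster before you create the peering token or establish the connection. The filename `mesh-peering.hcl` is illustrative:

```shell-session
$ consul config write mesh-peering.hcl
```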
- -To route all peering control plane traffic though mesh gateways, both the accepting and dialing cluster must have the mesh config entry applied. diff --git a/website/content/docs/connect/gateways/mesh-gateway/service-to-service-traffic-partitions.mdx b/website/content/docs/connect/gateways/mesh-gateway/service-to-service-traffic-partitions.mdx deleted file mode 100644 index 4c7fe3ba2aa1..000000000000 --- a/website/content/docs/connect/gateways/mesh-gateway/service-to-service-traffic-partitions.mdx +++ /dev/null @@ -1,292 +0,0 @@ ---- -layout: docs -page_title: Enabling Service-to-service Traffic Across Admin Partitions -description: >- - Mesh gateways are specialized proxies that route data between services that cannot communicate directly with upstreams. Learn how to enable service-to-service traffic across admin partitions and review example configuration entries. ---- - -# Enabling Service-to-service Traffic Across Admin Partitions - --> **Consul Enterprise 1.11.0+:** Admin partitions are supported in Consul Enterprise versions 1.11.0 and newer. - -Mesh gateways enable you to route service mesh traffic between different Consul [admin partitions](/consul/docs/enterprise/admin-partitions). -Partitions can reside in different clouds or runtime environments where general interconnectivity between all services -in all partitions isn't feasible. - -Mesh gateways operate by sniffing and extracting the server name indication (SNI) header from the service mesh session and routing the connection to the appropriate destination based on the server name requested. The gateway does not decrypt the data within the mTLS session. - -## Prerequisites - -Ensure that your Consul environment meets the following requirements. - -### Consul - -* Consul Enterprise version 1.11.0 or newer. -* A local Consul agent is required to manage its configuration. -* Consul service mesh must be enabled in all partitions. Refer to the [`connect` documentation](/consul/docs/agent/config/config-files#connect) for details. -* Each partition must have a unique name. Refer to the [admin partitions documentation](/consul/docs/enterprise/admin-partitions) for details. -* If you want to [enable gateways globally](/consul/docs/connect/gateways/mesh-gateway#enabling-gateways-globally) you must enable [centralized configuration](/consul/docs/agent/config/config-files#enable_central_service_config). - -### Proxy - -Envoy is the only proxy with mesh gateway capabilities in Consul. - -Mesh gateway proxies receive their configuration through Consul, which automatically generates it based on the proxy's registration. -Consul can only translate mesh gateway registration information into Envoy configuration. - -Sidecar proxies that send traffic to an upstream service through a gateway need to know the location of that gateway. They discover the gateway based on their sidecar proxy registrations. Consul can only translate the gateway registration information into Envoy configuration. - -Sidecar proxies that do not send upstream traffic through a gateway are not affected when you deploy gateways. If you are using Consul's built-in proxy as a service mesh sidecar it will continue to work for intra-datacenter traffic and will receive incoming traffic even if that traffic has passed through a gateway. - -## Configuration - -Configure the following settings to register the mesh gateway as a service in Consul. - -* Specify `mesh-gateway` in the `kind` field to register the gateway with Consul. 
-* Configure the `proxy.upstreams` parameters to route traffic to the correct service, namespace, and partition. Refer to the [`upstreams` documentation](/consul/docs/connect/proxies/proxy-config-reference#upstream-configuration-reference) for details. The service `proxy.upstreams.destination_name` is always required. The `proxy.upstreams.destination_partition` must be configured to enable cross-partition traffic. The `proxy.upstreams.destination_namespace` configuration is only necessary if the destination service is in a different namespace. -* Configure the `exported-services` configuration entry to enable Consul to export services contained in an admin partition to one or more additional partitions. Refer to the [Exported Services documentation](/consul/docs/connect/config-entries/exported-services) for details. -* Define the `Proxy.Config` settings using opaque parameters compatible with your proxy, i.e., Envoy. For Envoy, refer to the [Gateway Options](/consul/docs/connect/proxies/envoy#gateway-options) and [Escape-hatch Overrides](/consul/docs/connect/proxies/envoy#escape-hatch-overrides) documentation for additional configuration information. -* If ACLs are enabled, a token granting `service:write` for the gateway's service name and `service:read` for all services in the datacenter or partition must be added to the gateway's service definition. These permissions authorize the token to route communications for other Consul service mesh services, but does not allow decrypting any of their communications. - -### Modes - -Each upstream associated with a service mesh proxy can be configured so that it is routed through a mesh gateway. -Depending on your network, the proxy's connection to the gateway can operate in one of the following modes: - -* `none` - (Default) No gateway is used and a service mesh connect proxy makes its outbound connections directly - to the destination services. - -* `local` - The service mesh connect proxy makes an outbound connection to a gateway running in the same datacenter. The gateway at the outbound connection is responsible for ensuring that the data is forwarded to gateways in the destination partition. - -* `remote` - The service mesh connect proxy makes an outbound connection to a gateway running in the destination datacenter. - The gateway forwards the data to the final destination service. - -### Service Mesh Proxy Configuration - -Set the proxy to the preferred [mode](#modes) to configure the service mesh proxy. You can specify the mode globally or within child configurations to control proxy behaviors at a lower level. Consul recognizes the following order of precedence if the gateway mode is configured in multiple locations the order of precedence: - -1. Upstream definition (highest priority) -2. Service instance definition -3. Centralized `service-defaults` configuration entry -4. Centralized `proxy-defaults` configuration entry - -## Example Configurations - -Use the following example configurations to help you understand some of the common scenarios. - -### Enabling Gateways Globally - -The following `proxy-defaults` configuration will enable gateways for all mesh services in the `local` mode. 
- - - -```hcl -Kind = "proxy-defaults" -Name = "global" -MeshGateway { - Mode = "local" -} -``` - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: ProxyDefaults -metadata: - name: global -spec: - meshGateway: - mode: local -``` - -```json -{ - "Kind": "proxy-defaults", - "Name": "global", - "MeshGateway": { - "Mode": "local" - } -} -``` - - - -### Enabling Gateways Per Service - -The following `service-defaults` configuration will enable gateways for all mesh services with the name `web`. - - - -```hcl -Kind = "service-defaults" -Name = "web" -MeshGateway { - Mode = "local" -} -``` - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: ServiceDefaults -metadata: - name: web -spec: - meshGateway: - mode: local -``` - -```json -{ - "Kind": "service-defaults", - "Name": "web", - "MeshGateway": { - "Mode": "local" - } -} -``` - - - -### Enabling Gateways for a Service Instance - -The following [proxy service configuration](/consul/docs/connect/proxies/deploy-service-mesh-proxies) -enables gateways for `web` service instances in the `finance` partition. - - - -```hcl -service { - name = "web-sidecar-proxy" - kind = "connect-proxy" - port = 8181 - proxy { - destination_service_name = "web" - mesh_gateway { - mode = "local" - } - upstreams = [ - { - destination_partition = "finance" - destination_namespace = "default" - destination_type = "service" - destination_name = "billing" - local_bind_port = 9090 - } - ] - } -} -``` - -```json -{ - "service": { - "kind": "connect-proxy", - "name": "web-sidecar-proxy", - "port": 8181, - "proxy": { - "destination_service_name": "web", - "mesh_gateway": { - "mode": "local" - }, - "upstreams": [ - { - "destination_name": "billing", - "destination_namespace": "default", - "destination_partition": "finance", - "destination_type": "service", - "local_bind_port": 9090 - } - ] - } - } -} -``` - - -### Enabling Gateways for a Proxy Upstream - -The following service definition will enable gateways in `local` mode for three different partitions. Note that each service exists in the same namespace, but are separated by admin partition. 
- - - -```hcl -service { - name = "web-sidecar-proxy" - kind = "connect-proxy" - port = 8181 - proxy { - destination_service_name = "web" - upstreams = [ - { - destination_name = "api" - destination_namespace = "dev" - destination_partition = "api" - local_bind_port = 10000 - mesh_gateway { - mode = "local" - } - }, - { - destination_name = "db" - destination_namespace = "dev" - destination_partition = "db" - local_bind_port = 10001 - mesh_gateway { - mode = "local" - } - }, - { - destination_name = "logging" - destination_namespace = "dev" - destination_partition = "logging" - local_bind_port = 10002 - mesh_gateway { - mode = "local" - } - }, - ] - } -} -``` - -```json -{ - "service": { - "kind": "connect-proxy", - "name": "web-sidecar-proxy", - "port": 8181, - "proxy": { - "destination_service_name": "web", - "upstreams": [ - { - "destination_name": "api", - "destination_namespace": "dev", - "destination_partition": "api", - "local_bind_port": 10000, - "mesh_gateway": { - "mode": "local" - } - }, - { - "destination_name": "db", - "destination_namespace": "dev", - "destination_partition": "db", - "local_bind_port": 10001, - "mesh_gateway": { - "mode": "local" - } - }, - { - "destination_name": "logging", - "destination_namespace": "dev", - "destination_partition": "logging", - "local_bind_port": 10002, - "mesh_gateway": { - "mode": "local" - } - } - ] - } - } -} -``` - diff --git a/website/content/docs/connect/gateways/mesh-gateway/service-to-service-traffic-wan-datacenters.mdx b/website/content/docs/connect/gateways/mesh-gateway/service-to-service-traffic-wan-datacenters.mdx deleted file mode 100644 index d9df2de8f18c..000000000000 --- a/website/content/docs/connect/gateways/mesh-gateway/service-to-service-traffic-wan-datacenters.mdx +++ /dev/null @@ -1,313 +0,0 @@ ---- -layout: docs -page_title: Enabling Service-to-service Traffic Across WAN Federated Datacenters -description: >- - Mesh gateways are specialized proxies that route data between services that cannot communicate directly. Learn how to enable service-to-service traffic across wan-federated datacenters and review example configuration entries. ---- - -# Enabling Service-to-service Traffic Across WAN Federated Datacenters - --> **1.6.0+:** This feature is available in Consul versions 1.6.0 and newer. - -Mesh gateways enable service mesh traffic to be routed between different Consul datacenters. -Datacenters can reside in different clouds or runtime environments where general interconnectivity between all services -in all datacenters isn't feasible. - -Mesh gateways operate by sniffing and extracting the server name indication (SNI) header from the service mesh session and routing the connection to the appropriate destination based on the server name requested. The gateway does not decrypt the data within the mTLS session. - -The following diagram describes the architecture for using mesh gateways for cross-datacenter communication: - -![Mesh Gateway Architecture](/img/mesh-gateways.png) - --> **Mesh Gateway Tutorial**: Follow the [mesh gateway tutorial](/consul/tutorials/developer-mesh/service-mesh-gateways) to learn important concepts associated with using mesh gateways for connecting services across datacenters. - -## Prerequisites - -Ensure that your Consul environment meets the following requirements. - -### Consul - -* Consul version 1.6.0 or newer. -* A local Consul agent is required to manage its configuration. -* Consul [service mesh](/consul/docs/agent/config/config-files#connect) must be enabled in both datacenters. 
-* Each [datacenter](/consul/docs/agent/config/config-files#datacenter) must have a unique name. -* Each datacenters must be [WAN joined](/consul/tutorials/networking/federation-gossip-wan). -* The [primary datacenter](/consul/docs/agent/config/config-files#primary_datacenter) must be set to the same value in both datacenters. This specifies which datacenter is the authority for service mesh certificates and is required for services in all datacenters to establish mutual TLS with each other. -* [gRPC](/consul/docs/agent/config/config-files#grpc_port) must be enabled. -* If you want to [enable gateways globally](/consul/docs/connect/gateways/mesh-gateway/service-to-service-traffic-wan-datacenters#enabling-gateways-globally) you must enable [centralized configuration](/consul/docs/agent/config/config-files#enable_central_service_config). - -### Network - -* General network connectivity to all services within its local Consul datacenter. -* General network connectivity to all mesh gateways within remote Consul datacenters. - -### Proxy - -Envoy is the only proxy with mesh gateway capabilities in Consul. - -Mesh gateway proxies receive their configuration through Consul, which automatically generates it based on the proxy's registration. -Consul can only translate mesh gateway registration information into Envoy configuration. - -Sidecar proxies that send traffic to an upstream service through a gateway need to know the location of that gateway. They discover the gateway based on their sidecar proxy registrations. Consul can only translate the gateway registration information into Envoy configuration. - -Sidecar proxies that do not send upstream traffic through a gateway are not affected when you deploy gateways. If you are using Consul's built-in proxy as a service mesh sidecar it will continue to work for intra-datacenter traffic and will receive incoming traffic even if that traffic has passed through a gateway. - -## Configuration - -Configure the following settings to register the mesh gateway as a service in Consul. - -* Specify `mesh-gateway` in the `kind` field to register the gateway with Consul. -* Configure the `proxy.upstreams` parameters to route traffic to the correct service, namespace, and datacenter. Refer to the [`upstreams` documentation](/consul/docs/connect/proxies/proxy-config-reference#upstream-configuration-reference) for details. The service `proxy.upstreams.destination_name` is always required. The `proxy.upstreams.datacenter` must be configured to enable cross-datacenter traffic. The `proxy.upstreams.destination_namespace` configuration is only necessary if the destination service is in a different namespace. -* Define the `Proxy.Config` settings using opaque parameters compatible with your proxy (i.e., Envoy). For Envoy, refer to the [Gateway Options](/consul/docs/connect/proxies/envoy#gateway-options) and [Escape-hatch Overrides](/consul/docs/connect/proxies/envoy#escape-hatch-overrides) documentation for additional configuration information. -* If ACLs are enabled, a token granting `service:write` for the gateway's service name and `service:read` for all services in the datacenter or partition must be added to the gateway's service definition. These permissions authorize the token to route communications for other Consul service mesh services, but does not allow decrypting any of their communications. - -### Modes - -Each upstream associated with a service mesh proxy can be configured so that it is routed through a mesh gateway. 
-Depending on your network, the proxy's connection to the gateway can operate in one of the following modes (refer to the [mesh-architecture-diagram](#mesh-architecture-diagram)): - -* `none` - (Default) No gateway is used and a service mesh sidecar proxy makes its outbound connections directly - to the destination services. - -* `local` - The service mesh sidecar proxy makes an outbound connection to a gateway running in the - same datacenter. That gateway is responsible for ensuring that the data is forwarded to gateways in the destination datacenter. - Refer to the flow labeled `local` in the [mesh-architecture-diagram](#mesh-architecture-diagram). - -* `remote` - The service mesh sidecar proxy makes an outbound connection to a gateway running in the destination datacenter. - The gateway forwards the data to the final destination service. - Refer to the flow labeled `remote` in the [mesh-architecture-diagram](#mesh-architecture-diagram). - -### Service Mesh Proxy Configuration - -Set the proxy to the preferred [mode](#modes) to configure the service mesh proxy. You can specify the mode globally or within child configurations to control proxy behaviors at a lower level. Consul recognizes the following order of precedence if the gateway mode is configured in multiple locations the order of precedence: - -1. Upstream definition (highest priority) -2. Service instance definition -3. Centralized `service-defaults` configuration entry -4. Centralized `proxy-defaults` configuration entry - -## Example Configurations - -Use the following example configurations to help you understand some of the common scenarios. - -### Enabling Gateways Globally - -The following `proxy-defaults` configuration will enable gateways for all mesh services in the `local` mode. - - - -```hcl -Kind = "proxy-defaults" -Name = "global" -MeshGateway { - Mode = "local" -} -``` - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: ProxyDefaults -metadata: - name: global -spec: - meshGateway: - mode: local -``` - -```json -{ - "Kind": "proxy-defaults", - "Name": "global", - "MeshGateway": { - "Mode": "local" - } -} -``` - - -### Enabling Gateways Per Service - -The following `service-defaults` configuration will enable gateways for all mesh services with the name `web`. - - - -```hcl -Kind = "service-defaults" -Name = "web" -MeshGateway { - Mode = "local" -} -``` - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: ServiceDefaults -metadata: - name: web -spec: - meshGateway: - mode: local -``` - -```json -{ - "Kind": "service-defaults", - "Name": "web", - "MeshGateway": { - "Mode": "local" - } -} - - - -### Enabling Gateways for a Service Instance - -The following [proxy service configuration](/consul/docs/connect/proxies/deploy-service-mesh-proxies) -enables gateways for the service instance in the `remote` mode. 
- - - -```hcl -service { - name = "web-sidecar-proxy" - kind = "connect-proxy" - port = 8181 - proxy { - destination_service_name = "web" - mesh_gateway { - mode = "remote" - } - upstreams = [ - { - destination_name = "api" - datacenter = "secondary" - local_bind_port = 10000 - } - ] - } -} - -# Or alternatively inline with the service definition: - -service { - name = "web" - port = 8181 - connect { - sidecar_service { - proxy { - mesh_gateway { - mode = "remote" - } - upstreams = [ - { - destination_name = "api" - datacenter = "secondary" - local_bind_port = 10000 - } - ] - } - } - } -} -``` - -```json -{ - "service": { - "kind": "connect-proxy", - "name": "web-sidecar-proxy", - "port": 8181, - "proxy": { - "destination_service_name": "web", - "mesh_gateway": { - "mode": "remote" - }, - "upstreams": [ - { - "destination_name": "api", - "datacenter": "secondary", - "local_bind_port": 10000 - } - ] - } - } -} -``` - - - -### Enabling Gateways for a Proxy Upstream - -The following service definition will enable gateways in the `local` mode for one upstream, the `remote` mode for a second upstream and will disable gateways for a third upstream. - - - -```hcl -service { - name = "web-sidecar-proxy" - kind = "connect-proxy" - port = 8181 - proxy { - destination_service_name = "web" - upstreams = [ - { - destination_name = "api" - local_bind_port = 10000 - mesh_gateway { - mode = "remote" - } - }, - { - destination_name = "db" - local_bind_port = 10001 - mesh_gateway { - mode = "local" - } - }, - { - destination_name = "logging" - local_bind_port = 10002 - mesh_gateway { - mode = "none" - } - }, - ] - } -} -``` -```json -{ - "service": { - "kind": "connect-proxy", - "name": "web-sidecar-proxy", - "port": 8181, - "proxy": { - "destination_service_name": "web", - "upstreams": [ - { - "destination_name": "api", - "local_bind_port": 10000, - "mesh_gateway": { - "mode": "remote" - } - }, - { - "destination_name": "db", - "local_bind_port": 10001, - "mesh_gateway": { - "mode": "local" - } - }, - { - "destination_name": "logging", - "local_bind_port": 10002, - "mesh_gateway": { - "mode": "none" - } - } - ] - } - } -} -``` - diff --git a/website/content/docs/connect/gateways/mesh-gateway/wan-federation-via-mesh-gateways.mdx b/website/content/docs/connect/gateways/mesh-gateway/wan-federation-via-mesh-gateways.mdx deleted file mode 100644 index d637a8f13461..000000000000 --- a/website/content/docs/connect/gateways/mesh-gateway/wan-federation-via-mesh-gateways.mdx +++ /dev/null @@ -1,200 +0,0 @@ ---- -layout: docs -page_title: Enabling WAN Federation Control Plane Traffic -description: >- - You can use mesh gateways to simplify the networking requirements for WAN federated Consul datacenters. Mesh gateways reduce cross-datacenter connection paths, ports, and communication protocols. ---- - -# Enabling WAN Federation Control Plane Traffic - --> **1.8.0+:** This feature is available in Consul versions 1.8.0 and higher - -~> This topic requires familiarity with [mesh gateways](/consul/docs/connect/gateways/mesh-gateway/service-to-service-traffic-wan-datacenters). - -WAN federation via mesh gateways allows for Consul servers in different datacenters -to be federated exclusively through mesh gateways. - -When setting up a -[multi-datacenter](/consul/tutorials/networking/federation-gossip-wan) -Consul cluster, operators must ensure that all Consul servers in every -datacenter must be directly connectable over their WAN-advertised network -address from each other. 
- -[![WAN federation without mesh gateways](/img/wan-federation-connectivity-traditional.png)](/img/wan-federation-connectivity-traditional.png) - -This requires that operators setting up the virtual machines or containers -hosting the servers take additional steps to ensure the necessary routing and -firewall rules are in place to allow the servers to speak to each other over -the WAN. - -Sometimes this prerequisite is difficult or undesirable to meet: - -- **Difficult:** The datacenters may exist in multiple Kubernetes clusters that - unfortunately have overlapping pod IP subnets, or may exist in different - cloud provider VPCs that have overlapping subnets. - -- **Undesirable:** Network security teams may not approve of granting so many - firewall rules. When using platform autoscaling, keeping rules up to date becomes untenable. - -Operators looking to simplify their WAN deployment and minimize the exposed -security surface area can elect to join these datacenters together using [mesh -gateways](/consul/docs/connect/gateways/mesh-gateway/service-to-service-traffic-wan-datacenters) to do so. - -[![WAN federation with mesh gateways](/img/wan-federation-connectivity-mesh-gateways.png)](/img/wan-federation-connectivity-mesh-gateways.png) - -## Architecture - -There are two main kinds of communication that occur over the WAN link spanning -the gulf between disparate Consul datacenters: - -- **WAN gossip:** We leverage the serf and memberlist libraries to gossip - around failure detector knowledge about Consul servers in each datacenter. - By default this operates point to point between servers over `8302/udp` with - a fallback to `8302/tcp` (which logs a warning indicating the network is - misconfigured). - -- **Cross-datacenter RPCs:** Consul servers expose a special multiplexed port - over `8300/tcp`. Several distinct kinds of messages can be received on this - port, such as RPC requests forwarded from servers in other datacenters. - -In this network topology individual Consul client agents on a LAN in one -datacenter never need to directly dial servers in other datacenters. This -means you could introduce a set of firewall rules prohibiting `10.0.0.0/24` -from sending any traffic at all to `10.1.2.0/24` for security isolation. - -You may already have configured [mesh -gateways](/consul/tutorials/developer-mesh/service-mesh-gateways) -to allow for services in the service mesh to freely connect between datacenters -regardless of the lateral connectivity of the nodes hosting the Consul client -agents. - -By activating WAN federation via mesh gateways the servers -can similarly use the existing mesh gateways to reach each other without -themselves being directly reachable. - -## Configuration - -### TLS - -All Consul servers in all datacenters should have TLS configured with certificates containing -these SAN fields: - - server.. (normal) - .server.. (needed for wan federation) - -This can be achieved using any number of tools, including `consul tls cert create` with the `-node` flag. - -### Mesh Gateways - -There needs to be at least one mesh gateway configured to opt-in to exposing -the servers in its configuration. When using the `consul connect envoy` CLI -this is done by using the flag `-expose-servers`. All this does is to register -the mesh gateway into the catalog with the additional piece of service metadata -of `{"consul-wan-federation":"1"}`. If you are registering the mesh gateways -into the catalog out of band you may simply add this to your existing -registration payload. 
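For example, the following command starts an Envoy mesh gateway, registers it, and tags it with the `consul-wan-federation` service metadata by way of the `-expose-servers` flag. The service name and the LAN and WAN addresses are illustrative and depend on your environment:

```shell-session
$ consul connect envoy -gateway=mesh -register -expose-servers \
    -service "mesh-gateway" \
    -address '{{ GetInterfaceIP "eth0" }}:8443' \
    -wan-address '203.0.113.10:8443'
```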
- -!> Before activating the feature on an existing cluster you should ensure that -there is at least one mesh gateway prepared to expose the servers registered in -each datacenter otherwise the WAN will become only partly connected. - -### Consul Server Options - -There are a few necessary additional pieces of configuration beyond those -required for standing up a -[multi-datacenter](/consul/tutorials/networking/federation-gossip-wan) -Consul cluster. - -Consul servers in the _primary_ datacenter should add this snippet to the -configuration file: - -```hcl -connect { - enabled = true - enable_mesh_gateway_wan_federation = true -} -``` - -Consul servers in all _secondary_ datacenters should add this snippet to the -configuration file: - -```hcl -primary_gateways = [ ":", ... ] -connect { - enabled = true - enable_mesh_gateway_wan_federation = true -} -``` - -The [`retry_join_wan`](/consul/docs/agent/config/config-files#retry_join_wan) addresses are -only used for the [traditional federation process](/consul/docs/k8s/deployment-configurations/multi-cluster#traditional-wan-federation). -They must be omitted when federating Consul servers via gateways. - --> The `primary_gateways` configuration can also use `go-discover` syntax just -like `retry_join_wan`. - -### Bootstrapping - -For ease of debugging (such as avoiding a flurry of misleading error messages) -when intending to activate WAN federation via mesh gateways it is best to -follow this general procedure: - -### New secondary - -1. Upgrade to the desired version of the consul binary for all servers, - clients, and CLI. -2. Start all consul servers and clients on the new version in the primary - datacenter. -3. Ensure the primary datacenter has at least one running, registered mesh gateway with - the service metadata key of `{"consul-wan-federation":"1"}` set. -4. Ensure you are _prepared_ to launch corresponding mesh gateways in all - secondaries. When ACLs are enabled actually registering these requires - upstream connectivity to the primary datacenter to authorize catalog - registration. -5. Ensure all servers in the primary datacenter have updated configuration and - restart. -6. Ensure all servers in the secondary datacenter have updated configuration. -7. Start all consul servers and clients on the new version in the secondary - datacenter. -8. When ACLs are enabled, shortly afterwards it should become possible to - resolve ACL tokens from the secondary, at which time it should be possible - to launch the mesh gateways in the secondary datacenter. - -### Existing secondary - -1. Upgrade to the desired version of the consul binary for all servers, - clients, and CLI. -2. Restart all consul servers and clients on the new version. -3. Ensure each datacenter has at least one running, registered mesh gateway with the - service metadata key of `{"consul-wan-federation":"1"}` set. -4. Ensure all servers in the primary datacenter have updated configuration and - restart. -5. Ensure all servers in the secondary datacenter have updated configuration and - restart. - -### Verification - -From any two datacenters joined together double check the following give you an -expected result: - -- Check that `consul members -wan` lists all servers in all datacenters with - their _local_ ip addresses and are listed as `alive`. - -- Ensure any API request that activates datacenter request forwarding. such as - [`/v1/catalog/services?dc=`](/consul/api-docs/catalog#dc-1) - succeeds. 
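As a sketch, assuming a secondary datacenter named `dc2`, the checks could be run as follows:

```shell-session
# Servers from every datacenter should appear with their local addresses and be listed as "alive".
$ consul members -wan

# A request that forwards to another datacenter should succeed.
$ curl "http://localhost:8500/v1/catalog/services?dc=dc2"
```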
- -### Upgrading the primary gateways - -Once federation is established, secondary datacenters will continuously request -updated mesh gateway addresses from the primary datacenter. Consul routes the requests - through the primary datacenter's mesh gateways. This is because -secondary datacenters cannot directly dial the primary datacenter's Consul servers. -If the primary gateways are upgraded, and their previous instances are decommissioned -before the updates are propagated, then the primary datacenter will become unreachable. - -To safely upgrade primary gateways, we recommend that you apply one of the following policies: -- Avoid decommissioning primary gateway IP addresses. This is because the [primary_gateways](/consul/docs/agent/config/config-files#primary_gateways) addresses configured on the secondary servers act as a fallback mechanism for re-establishing connectivity to the primary. - -- Verify that addresses of the new mesh gateways in the primary were propagated -to the secondary datacenters before decommissioning the old mesh gateways in the primary. diff --git a/website/content/docs/connect/gateways/terminating-gateway.mdx b/website/content/docs/connect/gateways/terminating-gateway.mdx deleted file mode 100644 index 86014850eb20..000000000000 --- a/website/content/docs/connect/gateways/terminating-gateway.mdx +++ /dev/null @@ -1,130 +0,0 @@ ---- -layout: docs -page_title: Terminating Gateway | Service Mesh -description: >- - Terminating gateways send requests from inside the service mesh to external network locations and services outside the mesh. Learn about requirements and terminating gateway interactions with Consul's service catalog. ---- - -# Terminating Gateways - --> **1.8.0+:** This feature is available in Consul versions 1.8.0 and newer. - -Terminating gateways enable connectivity within your organizational network from services in the Consul service mesh to -services and [destinations](/consul/docs/connect/config-entries/service-defaults#terminating-gateway-destination) outside the mesh. These gateways effectively act as service mesh proxies that can -represent more than one service. They terminate service mesh mTLS connections, enforce intentions, -and forward requests to the appropriate destination. - -![Terminating Gateway Architecture](/img/terminating-gateways.png) - -For additional use cases and usage patterns, review the tutorial for -[understanding terminating gateways](/consul/tutorials/developer-mesh/service-mesh-terminating-gateways?utm_source=docs). - -~> **Known limitations:** Terminating gateways currently do not support targeting service subsets with -[L7 configuration](/consul/docs/connect/manage-traffic). They route to all instances of a service with no capabilities -for filtering by instance. - -## Security Considerations - -~> We recommend that terminating gateways are not exposed to the WAN or open internet. This is because terminating gateways -hold certificates to decrypt Consul service mesh traffic directed at them and may be configured with credentials to connect -to linked services. Connections over the WAN or open internet should flow through [mesh gateways](/consul/docs/connect/gateways/mesh-gateway) -whenever possible since they are not capable of decrypting traffic or connecting directly to services. - -By specifying a path to a [CA file](/consul/docs/connect/config-entries/terminating-gateway#cafile) connections -from the terminating gateway will be encrypted using one-way TLS authentication. 
If a path to a -[client certificate](/consul/docs/connect/config-entries/terminating-gateway#certfile) -and [private key](/consul/docs/connect/config-entries/terminating-gateway#keyfile) are also specified connections -from the terminating gateway will be encrypted using mutual TLS authentication. - -If none of these are provided, Consul will **only** encrypt connections to the gateway and not -from the gateway to the destination service. - -When certificates for linked services are rotated, the gateway must be restarted to pick up the new certificates from disk. -To avoid downtime, perform a rolling restart to reload the certificates. Registering multiple terminating gateway instances -with the same [name](/consul/commands/connect/envoy#service) provides additional fault tolerance -as well as the ability to perform rolling restarts. - --> **Note:** If certificates and keys are configured the terminating gateway will upgrade HTTP connections to TLS. -Client applications can issue plain HTTP requests even when connecting to servers that require HTTPS. - -## Prerequisites - -Each terminating gateway needs: - -1. A local Consul client agent to manage its configuration. -2. General network connectivity to services within its local Consul datacenter. -3. General network connectivity to services and destinations outside the mesh that are part of the gateway services list. - -Terminating gateways also require that your Consul datacenters are configured correctly: - -- You'll need to use Consul version 1.8.0 or newer. -- Consul [service mesh](/consul/docs/agent/config/config-files#connect) must be enabled on the datacenter's Consul servers. -- [gRPC](/consul/docs/agent/config/config-files#grpc_port) must be enabled on all client agents. - -Currently, [Envoy](https://www.envoyproxy.io/) is the only proxy with terminating gateway capabilities in Consul. - -- Terminating gateway proxies receive their configuration through Consul, which - automatically generates it based on the gateway's registration. Currently Consul - can only translate terminating gateway registration information into Envoy - configuration, therefore the proxies acting as terminating gateways must be Envoy. - -Service mesh proxies that send upstream traffic through a gateway aren't -affected when you deploy terminating gateways. If you are using non-Envoy proxies as -Service mesh proxies they will continue to work for traffic directed at services linked to -a terminating gateway as long as they discover upstreams with the -[/health/connect](/consul/api-docs/health#list-nodes-for-connect-capable-service) endpoint. - -## Running and Using a Terminating Gateway - -For a complete example of how to enable connections from services in the Consul service mesh to -services outside the mesh, review the [terminating gateway tutorial](/consul/tutorials/developer-mesh/terminating-gateways-connect-external-services). - -## Terminating Gateway Configuration - -Terminating gateways are configured in service definitions and registered with Consul like other services, with two exceptions. -The first is that the [kind](/consul/api-docs/agent/service#kind) must be "terminating-gateway". Second, -the terminating gateway service definition may contain a `Proxy.Config` entry just like a -service mesh proxy service, to define opaque configuration parameters useful for the actual proxy software. 
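As a sketch, a registration with these two exceptions could look like the following; the name and port are placeholders, and the `config` block only marks where opaque proxy settings would go:

```hcl
service {
  name = "example-terminating-gateway"  # placeholder service name
  kind = "terminating-gateway"
  port = 8443

  proxy {
    config {
      # Opaque, proxy-specific parameters (for example, Envoy gateway
      # options) can be supplied here.
    }
  }
}
```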
-For Envoy there are some supported [gateway options](/consul/docs/connect/proxies/envoy#gateway-options) as well as -[escape-hatch overrides](/consul/docs/connect/proxies/envoy#escape-hatch-overrides). - --> **Note:** If ACLs are enabled, terminating gateways must be registered with a token granting `node:read` on the nodes -of all services in its configuration entry. The token must also grant `service:write` for the terminating gateway's service name **and** -the names of all services in the terminating gateway's configuration entry. These privileges will authorize the gateway -to terminate mTLS connections on behalf of the linked services and then route the traffic to its final destination. -If the Consul client agent on the gateway's node is not configured to use the default gRPC port, 8502, then the gateway's token -must also provide `agent:read` for its node's name in order to discover the agent's gRPC port. gRPC is used to expose Envoy's xDS API to Envoy proxies. - -You can link services and destinations to a terminating gateway with a `terminating-gateway` -[configuration entry](/consul/docs/connect/config-entries/terminating-gateway). This config entry can be applied via the -[CLI](/consul/commands/config/write) or [API](/consul/api-docs/config#apply-configuration). - -Gateways with the same name in Consul's service catalog are configured with a single configuration entry. -This means that additional gateway instances registered with the same name will determine their routing based on the existing configuration entry. -Adding replicas of a gateway that routes to a particular set of services requires running the -[envoy subcommand](/consul/commands/connect/envoy#terminating-gateways) on additional hosts and specifying -the same gateway name with the `service` flag. - -~> [Configuration entries](/consul/docs/agent/config-entries) are global in scope. A configuration entry for a gateway name applies -across all federated Consul datacenters. If terminating gateways in different Consul datacenters need to route to different -sets of services within their datacenter then the terminating gateways **must** be registered with different names. - -The services that the terminating gateway will proxy for must be registered with Consul, even the services outside the mesh. They must also be registered -in the same Consul datacenter as the terminating gateway. Otherwise the terminating gateway will not be able to -discover the services' addresses. These services can be registered with a local Consul agent. -If there is no agent present, the services can be registered [directly in the catalog](/consul/api-docs/catalog#register-entity) -by sending the registration request to a client or server agent on a different host. - -All services registered in the Consul catalog must be associated with a node, even when their node is -not managed by a Consul client agent. All agent-less services with the same address can be registered under the same node name and address. -However, ensure that the [node name](/consul/api-docs/catalog#node) for external services registered directly in the catalog -does not match the node name of any Consul client agent node. If the node name overlaps with the node name of a Consul client agent, -Consul's [anti-entropy sync](/consul/docs/architecture/anti-entropy) will delete the services registered via the `/catalog/register` HTTP API endpoint. 
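For illustration, a direct catalog registration for an external service might look like the following sketch; the node name, address, and service details are placeholders, and the node name intentionally does not match any Consul client agent's node name:

```shell-session
$ curl --request PUT \
    --data '{
      "Node": "legacy-db-node",
      "Address": "10.20.0.25",
      "Service": {
        "ID": "legacy-db",
        "Service": "legacy-db",
        "Port": 5432
      }
    }' \
    http://localhost:8500/v1/catalog/register
```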
- -Service-defaults [destinations](/consul/docs/connect/config-entries/service-defaults#destination) let you -define endpoints external to the mesh and routable through a terminating gateway in transparent mode. -After you define a service-defaults configuration entry for each destination, you can use the service-default name as part of the terminating gateway services list. -If a service and a destination service-defaults have the same name, the terminating gateway will use the service. - -For a complete example of how to register external services review the -[external services tutorial](/consul/tutorials/developer-discovery/service-registration-external-services). diff --git a/website/content/docs/connect/index.mdx b/website/content/docs/connect/index.mdx index 6bdc9989f52b..3531850d87b9 100644 --- a/website/content/docs/connect/index.mdx +++ b/website/content/docs/connect/index.mdx @@ -1,77 +1,50 @@ --- layout: docs -page_title: Service Mesh on Consul +page_title: Connect workloads to Consul service mesh description: >- - Consul’s service mesh makes application and microservice networking secure and observable with identity-based authentication, mutual TLS (mTLS) encryption, and explicit service-to-service authorization enforced by sidecar proxies. Learn how Consul’s service mesh works and get started on VMs or Kubernetes. + Consul's service mesh makes application and microservice networking secure and observable with identity-based authentication, mutual TLS (mTLS) encryption, and explicit service-to-service authorization enforced by sidecar proxies. Learn how to enable and configure Consul's service mesh and proxies. --- -# Consul service mesh - -Consul service mesh provides service-to-service connection authorization and -encryption using mutual Transport Layer Security (TLS). - -Applications can use [sidecar proxies](/consul/docs/connect/proxies) in a service mesh configuration to -establish TLS connections for inbound and outbound connections without being aware of the service mesh at all. -Applications may also [natively integrate with Consul service mesh](/consul/docs/connect/native) for optimal performance and security. -Consul service mesh can help you secure your services and provide data about service-to-service communications. - -The noun _connect_ is used throughout this documentation to refer to the connect -subsystem that provides Consul's service mesh capabilities. -Where you encounter the _noun_ connect, it is usually functionality specific to -service mesh. - -Review the video below to learn more about Consul service mesh from HashiCorp's co-founder Armon. - - - -## Application security - -Consul service mesh enables secure deployment best-practices with automatic -service-to-service encryption, and identity-based authorization. -Consul uses the registered service identity, rather than IP addresses, to -enforce access control with [intentions](/consul/docs/connect/intentions). This -makes it easier to control access and enables services to be -rescheduled by orchestrators, including Kubernetes and Nomad. Intention -enforcement is network agnostic, so Consul service mesh works with physical networks, cloud -networks, software-defined networks, cross-cloud, and more. - -## Observability - -One of the key benefits of Consul service mesh is the uniform and consistent view it can -provide of all the services on your network, irrespective of their different -programming languages and frameworks. 
When you configure Consul service mesh to use -sidecar proxies, those proxies see all service-to-service traffic and can -collect data about it. Consul service mesh can configure Envoy proxies to collect -layer 7 metrics and export them to tools like Prometheus. Correctly instrumented -applications can also send open tracing data through Envoy. - -## Getting started with Consul service mesh - -Complete the following tutorials try Consul service mesh in different environments: - -- The [Getting Started with Consul Service Mesh collection](/consul/tutorials/kubernetes-deploy/service-mesh?utm_source=docs) - walks you through installing Consul as service mesh for Kubernetes using the Helm - chart, deploying services in the service mesh, and using intentions to secure service - communications. - -- The [Getting Started With Consul for Kubernetes](/consul/tutorials/get-started-kubernetes?utm_source=docs) tutorials guides you through installing Consul on Kubernetes to set up a service mesh for establishing communication between Kubernetes services. - -- The [Secure Service-to-Service Communication tutorial](/consul/tutorials/developer-mesh/service-mesh-with-envoy-proxy?utm_source=docs) - is a simple walk through of connecting two services on your local machine - and configuring your first intention. - -- The [Kubernetes tutorial](/consul/tutorials/kubernetes/kubernetes-minikube?utm_source=docs) - walks you through configuring Consul service mesh in Kubernetes using the Helm - chart, and using intentions. You can run the guide on Minikube or an existing - Kubernetes cluster. - -- The [observability tutorial](/consul/tutorials/kubernetes/kubernetes-layer7-observability) - shows how to deploy a basic metrics collection and visualization pipeline on - a Minikube or Kubernetes cluster using the official Helm charts for Consul, - Prometheus, and Grafana. +# Connect workloads to Consul service mesh + +This page provides an overview of Consul's service mesh features and their configuration. Service mesh is enabled by default on Consul server agents. + +## Introduction + + + +In addition to the service discovery operations available to the Consul instance +that runs on the same node as your workload, you can use Consul to deploy Envoy +sidecar proxies to control traffic between each service and the rest of the +network. Consul includes a built-in certificate authority that can enforce mTLS +encryption between sidecar proxies. Use [Consul configuration entries](/consul/docs/fundamentals/config-entry) to further secure and monitor +service-to-service communication. + +## Service mesh configuration + +The `connect` block of a Consul server agent contains the configurations for the CA provider and locality information for the node. Refer to [Service mesh parameters](/consul/docs/reference/agent/configuration-file/service-mesh) for more information. + +To learn how to turn the service mesh off or back on again, refer to [enable service mesh](/consul/docs/connect/enable). + +## Envoy proxies + +Consul includes built-in support for Envoy proxies to manage service mesh operations. Configure behavior for individual proxies, or configure default behavior for proxies according to service identity. For more information about proxies and their specialized operations in the service mesh, refer to [Service mesh proxy overview](/consul/docs/connect/proxy). 
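As an illustrative sketch, default behavior for every proxy in the mesh is typically set with a proxy defaults configuration entry; the protocol value here is only an example:

```hcl
Kind = "proxy-defaults"
Name = "global"

Config {
  protocol = "http"
}
```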
+ +## Guidance + +Runtime-specific guidance is also available: + +- [Connect workloads to service mesh on VMs](/consul/docs/connect/vm) +- [Connect workloads to service mesh on Kubernetes](/consul/docs/connect/k8s) +- [Connect workloads to service mesh on ECS](/consul/docs/connect/ecs) +- [Connect Consul service mesh to AWS Lambda](/consul/docs/connect/lambda) +- [Connect workloads to service mesh on Nomad](/consul/docs/connect/nomad) + +## Debug and troubleshoot + +If you experience errors when connecting Consul's service mesh to your workloads, refer to the following resources: + +- [Consul service mesh troubleshooting overview](/consul/docs/connect/troubleshoot) +- [Debug Consul service mesh](/consul/docs/connect/troubleshoot/debug) +- [Troubleshoot service-to-service + communication](/consul/docs/connect/troubleshoot/service-to-service) diff --git a/website/content/docs/connect/intentions/create-manage-intentions.mdx b/website/content/docs/connect/intentions/create-manage-intentions.mdx deleted file mode 100644 index 46ec146824fb..000000000000 --- a/website/content/docs/connect/intentions/create-manage-intentions.mdx +++ /dev/null @@ -1,178 +0,0 @@ ---- -layout: docs -page_title: Create and manage service intentions -description: >- - Learn how to create and manage Consul service mesh intentions using service-intentions config entries, the `consul intentions` command, and `/connect/intentions` API endpoint. ---- - -# Create and manage intentions - -This topic describes how to create and manage service intentions, which are configurations for controlling access between services in the service mesh. - -## Overview - -You can create single intentions or create them in batches using the Consul API, CLI, or UI. You can also define a service intention configuration entry that sets default intentions for all services in the mesh. Refer to [Service intentions overview](/consul/docs/connect/intentions/) for additional background information about intentions. - -## Requirements - -- At least two services must be registered in the datacenter. -- TLS must be enabled to enforce L4 intentions. Refer to [Encryption](/consul/docs/security/encryption) for additional information. - -### ACL requirements - -Consul grants permissions for creating and managing intentions based on the destination, not the source. When ACLs are enabled, services and operators must present a token linked to a policy that grants read and write permissions to the destination service. - -Consul implicitly grants `intentions:read` permissions to destination services when they are configured with `service:read` or `service:write` permissions. This is so that the services can allow or deny inbound connections when they attempt to join the service mesh. Refer to [Service rules](/consul/docs/security/acl/acl-rules#service-rules) for additional information about configuring ACLs for intentions. - -The default ACL policy configuration determines the default behavior for intentions. If the policy is set to `deny`, then all connections or requests are denied and you must enable them explicitly. Refer to [`default_policy`](/consul/docs/agent/config/config-files#acl_default_policy) for details. - -## Create an intention - -You can create and manage intentions one at a time using the Consul API, CLI, or UI You can specify one destination or multiple destinations in a single intention. 
- -### API - -Send a `PUT` request to the `/connect/intentions/exact` HTTP API endpoint and specify the following query parameters: - -- `source`: Service sending the request -- `destination`: Service responding to the request -- `ns`: Namespace of the destination service - -For L4 intentions, you must also specify the intention action in the request payload. - -The following example creates an intention that allows `web` to send request to `db`: - -```shell-session -$ curl --request PUT \ ---data ' { "Action": "allow" } ' \ -http://localhost:8500/v1/connect/intentions/exact\?source\=web\&destination\=db -``` - -Refer to the `/connect/intentions/exact` [HTTP API endpoint documentation](/consul/api-docs/connect/intentions) for additional information request payload parameters. - -For L7 intentions, specify the `Permissions` in the request payload to configure attributes for dynamically enforcing intentions. In the following example payload, Consul allows HTTP GET requests if the request body is empty: - - - -```json -{ - "Permissions": [ - { - "Action": "allow", - "HTTP": { - "Methods": ["GET"], - "Header": [ - { - "Name": "Content-Length", - "Exact": "0" - } - ] - } - } - ] -} - -``` - - - -The `Permissions` object specifies a list of permissions for L7 traffic sources. The list contains one or more actions and a set of match criteria for each action. Refer to the [`Sources[].Permissions[]` parameter](/consul/docs/connect/config-entries/service-intentions#sources-permissions) in the service intentions configuration entry reference for configuration details. - -To apply the intention, call the endpoint and pass the configuration file containing the attributes to the endpoint: - -```shell-session -$ curl --request PUT \ ---data @payload.json \ -http://localhost:8500/v1/connect/intentions/exact\?source\=svc1\&destination\=sv2 -``` -### CLI - -Use the `consul intention create` command according to the following syntax to create a new intention: - -```shell-session -$ consul intention create - -``` - -The following example creates an intention that allows `web` service instances to connect to `db` service instances: - -```shell-session -$ consul intention create -allow web db -``` - -You can use the asterisk (`*`) wildcard to specify multiple destination services. Refer to [Precedence and match order](/consul/docs/connect/intentions/create-manage-intentions#precedence-and-match-order) for additional information. - -### Consul UI - -1. Log into the Consul UI and choose **Services** from the sidebar menu. -1. Click on a service and then click the **Intentions* tab. -1. Click **Create** and choose the source service from the drop-down menu. -1. You can add an optional description. -1. Choose one of the following options: - 1. **Allow**: Allows the source service to send requests to the destination. - 1. **Deny**: Prevents the source service from sending requests to the destination. - 1. **Application Aware**: Enables you to specify L7 criteria for dynamically enforcing intentions. Refer to [Configure application aware settings](#configure-application-aware-settings) for additional information. -1. Click **Save**. - -Repeat the procedure as necessary to create additional intentions. - -#### Configure application aware settings - -You can use the Consul UI to configure L7 permissions. - -1. Click **Add permission** to open the permission editor. -1. Enable the **Allow** or **Deny** option. -1. You can specify a path, request method, and request headers to match. 
All criteria must be satisfied for Consul to enforce the permission. Refer to the [`Sources[].Permissions[]` parameter](/consul/docs/connect/config-entries/service-intentions#sources-permissions) in the service intentions configuration entry reference for information about the available configuration fields. -1. Click **Save**. - -Repeat the procedure as necessary to create additional permissions. - -## Create multiple intentions - -You can create a service intentions configuration entry to specify default intentions for your service mesh. You can specify default settings for L4 or L7 application-aware traffic. - -### Define a service intention configuration entry - -Configure the following fields: - - - - - -- [`Kind`](/consul/docs/connect/config-entries/service-intentions#kind): Declares the type of configuration entry. Must be set to `service-intentions`. -- [`Name`](/consul/docs/connect/config-entries/service-intentions#kind): Specifies the name of the destination service for intentions defined in the configuration entry. You can use a wildcard character (*) to set L4 intentions for all services that are not protected by specific intentions. Wildcards are not supported for L7 intentions. -- [`Sources`](/consul/docs/connect/config-entries/service-intentions#sources): Specifies an unordered list of all intention sources and the authorizations granted to those sources. Consul stores and evaluates the list in reverse order sorted by intention precedence. -- [`Sources.Action`](/consul/docs/connect/config-entries/service-intentions#sources-action) or [`Sources.Permissions`](/consul/docs/connect/config-entries/service-intentions#sources-permissions): For L4 intentions, set the `Action` field to "allow" or "deny" so that Consul can enforce intentions that match the source service. For L7 intentions, configure the `Permissions` settings, which define a set of application-aware attributes for dynamically matching incoming requests. The `Actions` and `Permissions` settings are mutually exclusive. - - - - - -- [`apiVersion`](/consul/docs/connect/config-entries/service-intentions#apiversion): Specifies the Consul API version. Must be set to `consul.hashicorp.com/v1alpha1`. -- [`kind`](/consul/docs/connect/config-entries/service-intentions#kind): Declares the type of configuration entry. Must be set to `ServiceIntentions`. -- [`spec.destination.name`](/consul/docs/connect/config-entries/service-intentions#spec-destination-name): Specifies the name of the destination service for intentions defined in the configuration entry. You can use a wildcard character (*) to set L4 intentions for all services that are not protected by specific intentions. Wildcards are not supported for L7 intentions. -- [`spec.sources`](/consul/docs/connect/config-entries/service-intentions#spec-sources): Specifies an unordered list of all intention sources and the authorizations granted to those sources. Consul stores and evaluates the list in reverse order sorted by intention precedence. -- [`spec.sources.action`](/consul/docs/connect/config-entries/service-intentions#spec-sources-action) or [`spec.sources.permissions`](/consul/docs/connect/config-entries/service-intentions#spec-sources-permissions): For L4 intentions, set the `action` field to "allow" or "deny" so that Consul can enforce intentions that match the source service. For L7 intentions, configure the `permissions` settings, which define a set of application-aware attributes for dynamically matching incoming requests. 
The `actions` and `permissions` settings are mutually exclusive. - - - - - -Refer to the [service intentions configuration entry](/consul/docs/connect/config-entries/service-intentions) reference documentation for details about all configuration options. - -Refer to the [example service intentions configurations](/consul/docs/connect/config-entries/service-intentions#examples) for additional guidance. - -#### Interaction with other configuration entries - -L7 intentions defined in a configuration entry are restricted to destination services -configured with an HTTP-based protocol as defined in a corresponding -[service defaults configuration entry](/consul/docs/connect/config-entries/service-defaults) -or globally in a [proxy defaults configuration entry](/consul/docs/connect/config-entries/proxy-defaults). - -### Apply the service intentions configuration entry - -You can apply the configuration entry using the [`consul config` command](/consul/commands/config) or by calling the [`/config` API endpoint](/consul/api-docs/config). In Kubernetes environments, apply the `ServiceIntentions` custom resource definitions (CRD) to implement and manage Consul configuration entries. - -Refer to the following topics for details about applying configuration entries: - -- [How to Use Configuration Entries](/consul/docs/agent/config-entries) -- [Custom Resource Definitions for Consul on Kubernetes](/consul/docs/k8s/crds) diff --git a/website/content/docs/connect/intentions/index.mdx b/website/content/docs/connect/intentions/index.mdx deleted file mode 100644 index 8d6364638a0a..000000000000 --- a/website/content/docs/connect/intentions/index.mdx +++ /dev/null @@ -1,95 +0,0 @@ ---- -layout: docs -page_title: Service mesh intentions overview -description: >- - Intentions are access controls that allow or deny incoming requests to services in the mesh. ---- - -# Service intentions overview - -This topic provides overview information about Consul intentions, which are mechanisms that control traffic communication between services in the Consul service mesh. - -![Diagram showing how service intentions control access between services](/img/consul-connect/consul-service-mesh-intentions-overview.svg) - -## Intention types - -Intentions control traffic communication between services at the network layer, also called _L4_ traffic, or the application layer, also called _L7 traffic_. The protocol that the destination service uses to send and receive traffic determines the type of authorization the intention can enforce. - -### L4 traffic intentions - -If the destination service uses TCP or any non-HTTP-based protocol, then intentions can control traffic based on identities encoded in mTLS certificates. Refer to [Mutual transport layer security (mTLS)](/consul/docs/connect/connect-internals#mutual-transport-layer-security-mtls) for additional information. - -This implementation allows broad all-or-nothing access control between pairs of services. The only requirement is that the service is aware of the TLS handshake that wraps the opaque TCP connection. - -### L7 traffic intentions - -If the destination service uses an HTTP-based protocol, then intentions can enforce access based on application-aware request attributes, in addition to identity-based enforcement, to control traffic between services. Refer to [Service intentions configuration reference](/consul/docs/connect/config-entries/service-intentions#permissions) for additional information. 
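As a sketch, an L7 intention of this kind could look like the following service intentions configuration entry; the service names, path, and method are placeholders:

```hcl
Kind = "service-intentions"
Name = "api"          # destination service (placeholder)

Sources = [
  {
    Name = "web"      # source service (placeholder)
    Permissions = [
      {
        Action = "allow"
        HTTP = {
          PathPrefix = "/v1"
          Methods    = ["GET"]
        }
      }
    ]
  }
]
```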
- -## Workflow - -You can manually create intentions from the Consul UI, API, or CLI. You can also enable Consul to dynamically create them by defining traffic routes in service intention configuration entries. Refer to [Create and manage intentions](/consul/docs/connect/intentions/create-manage-intentions) for details. - -### Enforcement - -The [proxy](/consul/docs/connect/proxies) or [natively-integrated -application](/consul/docs/connect/native) enforces intentions on inbound connections or requests. Only one intention can control authorization between a pair of services at any single point in time. - -L4 intentions mediate the ability to establish new connections. Modifying an intention does not have an effect on existing connections. As a result, changing a connection from `allow` to `deny` does not sever the connection. - -L7 intentions mediate the ability to issue new requests. When an intention is modified, requests received after the modification use the latest intention rules to enforce access. Changing a connection from `allow` to `deny` does not sever the connection, but doing so blocks new requests from being processed. - -When using L7 intentions, we recommend that you review and update the [Mesh request normalization configuration](/consul/docs/connect/security#request-normalization-and-configured) to avoid unintended match rule circumvention. More details are available in the [Mesh configuration entry reference](/consul/docs/connect/config-entries/mesh#request-normalization). - -When you use L7 intentions with header matching and it is possible for a header to contain multiple values, we recommend using `contains` or `regex` instead of `exact`, `prefix`, or `suffix`. For more information, refer to the [service intentions configuration entry reference](/consul/docs/connect/config-entries/service-intentions#spec-sources-permissions-http-header). - -### Caching - -The intentions for services registered with a Consul agent are cached locally on the agent. Supported proxies also cache intention data in their own configurations so that they can authorize inbound connections or requests without relying on the Consul agent. All actions in the data path of connections take place within the proxy. - -### Updates - -Consul propagates updates to intentions almost instantly as a result of the continuous blocking query the agent uses. A _blocking query_ is a Consul API feature that uses long polling to wait for potential changes. Refer to [Blocking Queries](/consul/api-docs/features/blocking) for additional information. Proxies also use blocking queries to quickly update their local configurations. - -Because all intention data is cached locally, authorizations for inbound connection persist, even if the agents are completely severed from the Consul servers or if the proxies are completely severed from their local Consul agent. If the connection is severed, Consul automatically applies changes to intentions when connectivity is restored. - -### Intention maintenance - -Services should periodically call the [intention match API](/consul/api-docs/connect/intentions#list-matching-intentions) to retrieve all relevant intentions for the target destination. After verifying the TLS client certificate, the cached intentions for each incoming connection or request determine if it should be accepted or rejected. - -## Precedence and match order - -Consul processes criteria defined in the service intention configuration entry to match incoming requests. 
When Consul finds a match, it applies the corresponding action specified in the configuration entry. The match criteria may include specific HTTP headers, request methods, or other attributes. Additionally, you can use regular expressions to programmatically match attributes. Refer to [Service intention configuration entry reference](/consul/docs/connect/config-entries/service-intentions) for details. - -Consul orders the matches based the following factors: - -- Specificity: Incoming requests that match attributes directly have the highest precedence. For example, intentions that are configured to deny traffic from services that send `POST` requests take precedence over intentions that allow traffic from methods configured with the wildcard value `*`. -- Authorization: Consul enforces `deny` over `allow` if match criteria are weighted equally. - -The following table shows match precedence in descending order: - -| Precedence | Source Namespace | Source Name | Destination Namespace | Destination Name | -| -----------| ---------------- | ------------| --------------------- | ---------------- | -| 9 | Exact | Exact | Exact | Exact | -| 8 | Exact | `*` | Exact | Exact | -| 7 | `*` | `*` | Exact | Exact | -| 6 | Exact | Exact | Exact | `*` | -| 5 | Exact | `*` | Exact | `*` | -| 4 | `*` | `*` | Exact | `*` | -| 3 | Exact | Exact | `*` | `*` | -| 2 | Exact | `*` | `*` | `*` | -| 1 | `*` | `*` | `*` | `*` | - -Consul prints the precedence value to the service intentions configuration entry after it processes the matching criteria. The value is read-only. Refer to -[`Precedence`](/consul/docs/connect/config-entries/service-intentions#precedence) for additional information. - -Namespaces are an Enterprise feature. In Consul CE, the only allowable value for either namespace field is `"default"`. Other rows in the table are not applicable. - -The [intention match API](/consul/api-docs/connect/intentions#list-matching-intentions) -should be periodically called to retrieve all relevant intentions for the -target destination. After verifying the TLS client certificate, the cached -intentions should be consulted for each incoming connection/request to -determine if it should be accepted or rejected. - -The default intention behavior is defined by the [`default_policy`](/consul/docs/agent/config/config-files#acl_default_policy) configuration. -If the configuration is set `allow`, then all service-to-service connections in the mesh will be allowed by default. -If is set to `deny`, then all connections or requests will be denied by default. diff --git a/website/content/docs/connect/intentions/jwt-authorization.mdx b/website/content/docs/connect/intentions/jwt-authorization.mdx deleted file mode 100644 index 1c1c0f994e89..000000000000 --- a/website/content/docs/connect/intentions/jwt-authorization.mdx +++ /dev/null @@ -1,105 +0,0 @@ ---- -page_title: JWT authorization overview -description: |- - Consul can use service mesh proxies to check and validate JSON Web Tokens (JWT) to enable additional identify-based access security for both human and machine users. Learn how to configure a JWT provider configuration entry and a service intentions configuration entry to authorize requests. ---- - -# Use JWT authorization with service intentions - -JSON Web Tokens (JWT) are a method for identity-based access to services for both humans and machines. 
The [JWT provider configuration entry](/consul/docs/connect/config-entries/jwt-provider) enables you to define JWTs as part of a JSON Web Key Set (JWKS), which contains the information necessary for Consul to validate access and configure behavior for requests that include JWTs. - -By specifying a JSON Web Key Set (JWKS) in the configuration entry and referencing the key set in a service intention, Consul can enforce service intentions based on the presence of a JWT. This security configuration is not related to the [JSON Web Token Auth Method](/consul/docs/security/acl/auth-methods/jwt), which associates JWTs with the Consul ACLs instead of service intentions. - -## Workflow - -The process to configure your network to enforce service intentions based on JSON web tokens consists of the following steps: - -1. **Create a JWT provider configuration entry**. This configuration entry defines rules and behaviors for verifying tokens. These configurations apply to admin partitions in Consul Enterprise, which is functionally equivalent to a datacenter in Consul CE. Then, write the `jwt-provider` configuration entry to Consul. The ACL policy requirement to read and modify this configuration entry is `mesh:write`. - -1. **Create or update a service intentions configuration entry to reference the JWT provider**. This configuration invokes the name of the `jwt-provider` configuration entry you created, which causes the Envoy proxy to verify the token and the permissions it authorizes before the incoming request is accepted. Then, write the `service-intentions` configuration entry that references the JWT to Consul. The ACL policy requirement to read and modify this configuration entry is `mesh:write`. - -### Wildcards and intention defaults - -Because intentions without tokens are authorized when they arrive at the destination proxy, a [common pattern for the service-intentions configuration entry](/consul/docs/connect/config-entries/service-intentions#l4-intentions-for-all-destinations) sets the entry’s `Name` field as a wildcard, `*`. This pattern enables you to apply incoming requests from specific services to every service in the datacenter. - -When configuring your deployment to enforce service intentions with JSON Web Tokens, it is possible for multiple tokens with different permissions to apply to a single service’s incoming request based on attributes such as HTTP path or the request method. Because the `service-intentions` configuration entry applies the intention that most closely matches the request, using the `Name` wildcard with specific JWT authorization configurations can lead to unintended results. - -When you set the `JWT{}.Providers` field in a service intentions configuration entry to the wildcard `*`, you can configure default behavior for all services that present a token that matches an existing JWT provider configuration entry. In this configuration, services that have a valid token but do not have a more specific matching intention default to the behavior defined in the wildcard intention. - -## Requirements - -* **Enable ACLs**. Verify that ACLs are enabled and that the default_policy is set to deny. 
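As a sketch, the corresponding agent configuration is shown below; it only illustrates this requirement rather than a complete ACL setup:

```hcl
acl = {
  enabled        = true
  default_policy = "deny"
}
```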
- -## Usage - -To configure Envoy proxies in the service mesh to validate JWTs before forwarding requests to servers, complete the following steps: - -### Create a JWT provider configuration entry - -The `jwt-provider` configuration requires the following fields: - -- `Kind`: This field must be set to `"jwt-provider"` -- `Name`: We recommend naming the configuration file after the JWT provider used in the configuration. -- `Issuer`: This field must match the token's `iss` claim - -You must also specify a JSON Web Key Set in the `JSONWebKeySet` field. You can specify the JWKS as one of the following: - -- A local string -- A path to a local file -- A remote location specified with a URI - -A JWKS can be made available locally or remotely, but not both. In addition, a local JWKS must be specified as either a string or a path to the file containing the token. - -You can also specify where the JWT is located, a retry policy, and text to append to the header when forwarding the request after token validation. - -The following example configures Consul to fetch a JSON Web Token issued by Okta. Consul fetches the token from the URI and keeps it in its cache for 30 minutes before the token expires. After validation, the token is forwarded to the backend with `user-token` appended to the HTTP header. - -```hcl -Kind = "jwt-provider" -Name = "okta" - -Issuer = "okta" - -JSONWebKeySet = { - Remote = { - URI = "https://.okta.com/oauth2/default/v1/keys" - CacheDuration = "30m" - } -} - -Forwarding = { - HeaderName = "user-token" -} -``` - -Refer to [JWT provider configuration entry](/consul/docs/connect/config-entries/jwt-provider) for more information about the fields you can configure. - -To write the configuration entry to Consul, use the [`consul config write` command](/consul/commands/config/write): - -```shell-session -$ consul config write okta-provider.hcl -``` - -### Update service intentions - -After you create the JWT provider entry, you can update your service intentions so that proxies validate the token before authorizing a request. The following example includes the minimum required configuration to enable JWT authorization with service intentions: - -```hcl -Kind = "service-intentions" -Name = "web" -JWT = { - Providers = [ - { - Name = "okta" - } - ] -} -``` - -You can include additional configuration information to require the token to match specific claims. You can also configure the `JWT` field to apply only to requests that come from certain HTTP paths. Refer to [JWT validations with intentions](/consul/docs/connect/config-entries/service-intentions#jwt-validations-with-intentions) for an example configuration. - -After you update the service intention, write the configuration to Consul so that it takes effect: - -```shell-session -$ consul config write web-intention.hcl -``` diff --git a/website/content/docs/connect/intentions/legacy.mdx b/website/content/docs/connect/intentions/legacy.mdx deleted file mode 100644 index 9151b1e8ca62..000000000000 --- a/website/content/docs/connect/intentions/legacy.mdx +++ /dev/null @@ -1,188 +0,0 @@ ---- -layout: docs -page_title: Intentions (Legacy Mode) -description: >- - Intentions define service communication permissions in the service mesh. As of version 1.9, Consul uses a new system for creating and managing intentions. Learn how intentions worked in earlier versions of Consul with this legacy documentation. ---- - -# Intentions in Legacy Mode - -~> **1.8.x and earlier:** This document only applies in Consul versions 1.8.x -and before. 
If you are using version 1.9.0 or later, refer to the [current intentions documentation](/consul/docs/connect/intentions). - -Intentions define access control for service-to-service connections in the service mesh. Intentions can be -managed via the API, CLI, or UI. - -Intentions are enforced by the [proxy](/consul/docs/connect/proxies) -or [natively integrated application](/consul/docs/connect/native) on -inbound connections. After verifying the TLS client certificate, the -[authorize API endpoint](/consul/api-docs/agent/connect#authorize) is called which verifies the connection -is allowed by testing the intentions. If authorize returns false the -connection must be terminated. - -The default intention behavior is defined by the default [ACL -policy](/consul/docs/agent/config/config-files#acl_default_policy). If the default ACL policy is -"allow all", then all service-to-service connections in the mesh are allowed by default. If the -default ACL policy is "deny all", then all service-to-service connections are denied by -default. - -## Intention Basics - -Intentions can be managed via the [API](/consul/api-docs/connect/intentions), -[CLI](/consul/commands/intention), or UI. Please see the respective documentation for -each for full details on options, flags, etc. Below is an example of a basic -intention to show the basic attributes of an intention. The full data model of -an intention can be found in the [API -documentation](/consul/api-docs/connect/intentions). - -```shell-session -$ consul intention create -deny web db -Created: web => db (deny) -``` - -The intention above is a deny intention with a source of "web" and -destination of "db". This says that connections from web to db are not -allowed and the connection will be rejected. - -When an intention is modified, existing connections will not be affected. -This means that changing a connection from "allow" to "deny" today -_will not_ kill the connection. Addressing this shortcoming is on -the near term roadmap for Consul. - -### Wildcard Intentions - -An intention source or destination may also be the special wildcard -value `*`. This matches _any_ value and is used as a catch-all. Example: - -```shell-session -$ consul intention create -deny web '*' -Created: web => * (deny) -``` - -This example says that the "web" service cannot connect to _any_ service. - -### Metadata - -Arbitrary string key/value data may be associated with intentions. This -is unused by Consul but can be used by external systems or for visibility -in the UI. - -```shell-session -$ consul intention create \ - -deny \ - -meta description='Hello there' \ - web db -... - -$ consul intention get web db -Source: web -Destination: db -Action: deny -ID: 31449e02-c787-f7f4-aa92-72b5d9b0d9ec -Meta[description]: Hello there -Created At: Friday, 25-May-18 02:07:51 CEST -``` - -## Precedence and Match Order - -Intentions are matched in an implicit order based on specificity, preferring -deny over allow. Specificity is determined by whether a value is an exact -specified value or is the wildcard value `*`. -The full precedence table is shown below and is evaluated -top to bottom, with larger numbers being evaluated first. 
- -| Source Namespace | Source Name | Destination Namespace | Destination Name | Precedence | -| ---------------- | ----------- | --------------------- | ---------------- | ---------- | -| Exact | Exact | Exact | Exact | 9 | -| Exact | `*` | Exact | Exact | 8 | -| `*` | `*` | Exact | Exact | 7 | -| Exact | Exact | Exact | `*` | 6 | -| Exact | `*` | Exact | `*` | 5 | -| `*` | `*` | Exact | `*` | 4 | -| Exact | Exact | `*` | `*` | 3 | -| Exact | `*` | `*` | `*` | 2 | -| `*` | `*` | `*` | `*` | 1 | - -The precedence value can be read from the [API](/consul/api-docs/connect/intentions) -after an intention is created. -Precedence cannot be manually overridden today. This is a feature that will -be added in a later version of Consul. - -In the case the two precedence values match, Consul will evaluate -intentions based on lexicographical ordering of the destination then -source name. In practice, this is a moot point since authorizing a connection -has an exact source and destination value so its impossible for two -valid non-wildcard intentions to match. - -The numbers in the table above are not stable. Their ordering will remain -fixed but the actual number values may change in the future. - --> **Consul Enterprise** - Namespaces are an Enterprise feature. In Consul CE, any of the rows in -the table with a `*` for either the source namespace or destination namespace are not applicable. - -## Intention Management Permissions - -Intention management can be protected by [ACLs](/consul/docs/security/acl). -Permissions for intentions are _destination-oriented_, meaning the ACLs -for managing intentions are looked up based on the destination value -of the intention, not the source. - -Intention permissions are by default implicitly granted at `read` level -when granting `service:read` or `service:write`. This is because a -service registered that wants to use service mesh needs `intentions:read` -for its own service name in order to know whether or not to authorize -connections. The following ACL policy will implicitly grant `intentions:read` -(note _read_) for service `web`. - -```hcl -service "web" { - policy = "write" -} -``` - -It is possible to explicitly specify intention permissions. For example, -the following policy will allow a service to be discovered without granting -access to read intentions for it. - -```hcl -service "web" { - policy = "read" - intentions = "deny" -} -``` - -Note that `intentions:read` is required for a token that a mesh-enabled -service uses to register itself or its proxy. If the token used does not -have `intentions:read` then the agent will be unable to resolve intentions -for the service and so will not be able to authorize any incoming connections. - -~> **Security Note:** Explicitly allowing `intentions:write` on the token you -provide to a service instance at registration time opens up a significant -additional vulnerability. Although you may trust the service _team_ to define -which inbound connections they accept, using a combined token for registration -allows a compromised instance to to redefine the intentions which allows many -additional attack vectors and may be hard to detect. We strongly recommend only -delegating `intentions:write` using tokens that are used by operations teams or -orchestrators rather than spread via application config, or only manage -intentions with management tokens. - -## Performance and Intention Updates - -The intentions for services registered with a Consul agent are cached -locally on that agent. 
They are then updated via a background blocking query -against the Consul servers. - -Service mesh connection attempts require only local agent -communication for authorization and generally only impose microseconds -of latency to the connection. All actions in the data path of connections -require only local data to ensure minimal performance overhead. - -Updates to intentions are propagated nearly instantly to agents since agents -maintain a continuous blocking query in the background for intention updates -for registered services. - -Because all the intention data is cached locally, the agents can fail static. -Even if the agents are severed completely from the Consul servers, inbound -connection authorization continues to work for a configured amount of time. -Changes to intentions will not be picked up until the partition heals, but -will then automatically take effect when connectivity is restored. diff --git a/website/content/docs/connect/k8s/crds.mdx b/website/content/docs/connect/k8s/crds.mdx new file mode 100644 index 000000000000..73aaa00674d6 --- /dev/null +++ b/website/content/docs/connect/k8s/crds.mdx @@ -0,0 +1,389 @@ +--- +layout: docs +page_title: Custom Resource Definitions (CRDs) for Consul on Kubernetes +description: >- + Configuration entries define service mesh behaviors in order to secure and manage traffic. Learn about Consul's different config entry kinds and get links to configuration reference pages. +--- + +# Custom Resource Definitions (CRDs) for Consul on Kubernetes + +This topic describes how to manage Consul [configuration +entries](/consul/docs/fundamentals/config-entry) with Kubernetes Custom +Resources. Configuration entries provide cluster-wide defaults for the service +mesh. + +## Supported configuration entries + +You may specify the following values in the `kind` field: + +- [`Mesh`](/consul/docs/reference/config-entry/mesh) +- [`ExportedServices`](/consul/docs/reference/config-entry/exported-services) +- [`PeeringAcceptor`](/consul/docs/east-west/cluster-peering/tech-specs/k8s#crd-specifications) +- [`PeeringDialer`](/consul/docs/east-west/cluster-peering/tech-specs/k8s#crd-specifications) +- [`ProxyDefaults`](/consul/docs/reference/config-entry/proxy-defaults) +- [`Registration`](/consul/docs/reference/config-entry/registration) +- [`SamenessGroup`](/consul/docs/reference/config-entry/sameness-group) +- [`ServiceDefaults`](/consul/docs/reference/config-entry/service-defaults) +- [`ServiceSplitter`](/consul/docs/reference/config-entry/service-splitter) +- [`ServiceRouter`](/consul/docs/reference/config-entry/service-router) +- [`ServiceResolver`](/consul/docs/reference/config-entry/service-resolver) +- [`ServiceIntentions`](/consul/docs/reference/config-entry/service-intentions) +- [`IngressGateway`](/consul/docs/reference/config-entry/ingress-gateway) +- [`TerminatingGateway`](/consul/docs/reference/config-entry/terminating-gateway) + +## Installation + +Verify that you have installed the minimum version of the Helm chart (`0.28.0`). + +```shell-session +$ Helm search repo hashicorp/consul +NAME CHART VERSION APP VERSION DESCRIPTION +hashicorp/consul 0.28.0 1.9.1 Official HashiCorp Consul Chart +``` + +Update your Helm repository cache if necessary. + +```shell-session +$ helm repo update +Hang tight while we grab the latest from your chart repositories... +...Successfully got an update from the "hashicorp" chart repository +Update Complete. 
⎈Happy Helming!⎈ +``` + +Refer to [Install with Helm Chart](/consul/docs/deploy/server/k8s/helm) for +further installation instructions. + +**Note**: Configuration entries require `connectInject` to be enabled, which is +a default behavior in the official Helm Chart. If you disabled this setting, you +must re-enable it to use CRDs. + +## Usage + +Once installed, use `kubectl` to create and manage Consul's configuration entries. + +### Create + +Create configuration entries with `kubectl apply`. + +```shell-session +$ cat < protocol: tcp +servicedefaults.consul.hashicorp.com/foo edited +``` + +You can then use `kubectl get` to ensure the change was synced to Consul. + +```shell-session +$ kubectl get servicedefaults foo +NAME SYNCED +foo True +``` + +### Delete + +Use `kubectl delete [kind] [name]` to delete the configuration entry. + +```shell-session +$ kubectl delete servicedefaults foo +servicedefaults.consul.hashicorp.com "foo" deleted +``` + +Use `kubectl get` to ensure the configuration entry was deleted. + +```shell-session +$ kubectl get servicedefaults foo +Error from server (NotFound): servicedefaults.consul.hashicorp.com "foo" not found +``` + +#### Delete hanging + +If running `kubectl delete` hangs without exiting, there may be a dependent +configuration entry registered with Consul that prevents the target +configuration entry from being deleted. For example, if you set the protocol of +your service to `http` in `ServiceDefaults` and then create a `ServiceSplitter`, +you are not be able to delete `ServiceDefaults`. This is because by deleting the +`ServiceDefaults` config, you are setting the protocol back to the default, +which is `tcp`. Because `ServiceSplitter` requires that the service has an +`http` protocol, Consul does not allow you to delete the `ServiceDefaults` since +that would put Consul into a broken state. + +In order to delete the `ServiceDefaults` config, you would need to first delete +the `ServiceSplitter`. + +## Kubernetes namespaces + +### Consul CE ((#consul_oss)) + +Consul Community Edition (Consul CE) ignores Kubernetes namespaces and registers all services into the same +global Consul registry based on their names. For example, service `web` in Kubernetes namespace +`web-ns` and service `admin` in Kubernetes namespace `admin-ns` are registered into +Consul as `web` and `admin` with the Kubernetes source namespace ignored. + +When creating custom resources to configure these services, the namespace of the +custom resource is also ignored. For example, you can create a `ServiceDefaults` +custom resource for service `web` in the Kubernetes namespace `admin-ns` even though +the `web` service is actually running in the `web-ns` namespace (although this is not recommended): + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: ServiceDefaults +metadata: + name: web + namespace: admin-ns +spec: + protocol: http +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: web + namespace: web-ns +spec: ... +``` + +~> **Note:** If you create two custom resources with identical `kind` and `name` values in different Kubernetes namespaces, the last one you create is not able to sync. + +#### ServiceIntentions special case + +`ServiceIntentions` are different from the other custom resources because the +name of the resource doesn't matter. For other resources, the name of the resource +determines which service it configures. 
For example, this resource configures +the service `web`: + + + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: ServiceDefaults +metadata: + name: web +spec: + protocol: http +``` + + + +For `ServiceIntentions`, because we need to support the ability to create +wildcard intentions (e.g. `foo => * (allow)` meaning that `foo` can talk to **any** service), +and because `*` is not a valid Kubernetes resource name, we instead use the field `spec.destination.name` +to configure the destination service for the intention: + + + +```yaml +# foo => * (allow) +apiVersion: consul.hashicorp.com/v1alpha1 +kind: ServiceIntentions +metadata: + name: name-does-not-matter +spec: + destination: + name: '*' + sources: + - name: foo + action: allow +--- +# foo => web (allow) +apiVersion: consul.hashicorp.com/v1alpha1 +kind: ServiceIntentions +metadata: + name: name-does-not-matter +spec: + destination: + name: web + sources: + - name: foo + action: allow +``` + + + +If two `ServiceIntentions` resources set the same `spec.destination.name`, the +last one created is not synced. + +### Consul Enterprise + +Consul Enterprise supports multiple configurations for how Kubernetes namespaces are mapped +to Consul namespaces. The Consul namespace that the custom resource is registered +into depends on the configuration being used but in general, you should create your +custom resources in the same Kubernetes namespace as the service they configure. + +The details on each configuration are: + +1. **Mirroring** - The Kubernetes namespace is mirrored into Consul. For example, the + service `web` in Kubernetes namespace `web-ns` is registered as service `web` + in the Consul namespace `web-ns`. In the same vein, a `ServiceDefaults` custom resource with + name `web` in Kubernetes namespace `web-ns` configures that same service. + + This is configured with [`connectInject.consulNamespaces`](/consul/docs/reference/k8s/helm#v-connectinject-consulnamespaces): + + + + ```yaml + global: + name: consul + enableConsulNamespaces: true + image: hashicorp/consul-enterprise:-ent + connectInject: + consulNamespaces: + mirroringK8S: true + ``` + + + +1. **Mirroring with prefix** - The Kubernetes namespace is mirrored into Consul + with a prefix added to the Consul namespace. For example, if the prefix is `k8s-` then service `web` in Kubernetes namespace `web-ns` will be registered as service `web` + in the Consul namespace `k8s-web-ns`. In the same vein, a `ServiceDefaults` custom resource with + name `web` in Kubernetes namespace `web-ns` configures that same service. + + This is configured with [`connectInject.consulNamespaces`](/consul/docs/reference/k8s/helm#v-connectinject-consulnamespaces): + + + + ```yaml + global: + name: consul + enableConsulNamespaces: true + image: hashicorp/consul-enterprise:-ent + connectInject: + consulNamespaces: + mirroringK8S: true + mirroringK8SPrefix: k8s- + ``` + + + +1. **Single destination namespace** - The Kubernetes namespace is ignored and all services + are registered into the same Consul namespace. For example, if the destination Consul + namespace is `my-ns` then service `web` in Kubernetes namespace `web-ns` is registered as service `web` in Consul namespace `my-ns`. + + In this configuration, the Kubernetes namespace of the custom resource is ignored. 
+ For example, a `ServiceDefaults` custom resource with the name `web` in Kubernetes + namespace `admin-ns` configures the service with name `web` even though that + service is running in Kubernetes namespace `web-ns` because the `ServiceDefaults` + resource ends up registered into the same Consul namespace `my-ns`. + + This is configured with [`connectInject.consulNamespaces`](/consul/docs/reference/k8s/helm#v-connectinject-consulnamespaces): + + + + ```yaml + global: + name: consul + enableConsulNamespaces: true + image: hashicorp/consul-enterprise:-ent + connectInject: + consulNamespaces: + consulDestinationNamespace: 'my-ns' + ``` + + + + ~> **Note:** In this configuration, if two custom resources are created in two Kubernetes namespaces with identical `name` and `kind` values, the last one created is not synced. + +#### ServiceIntentions Special Case (Enterprise) + +`ServiceIntentions` are different from the other custom resources because the +name of the resource does not matter. For other resources, the name of the resource +determines which service it configures. For example, this resource configures +the service `web`: + + + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: ServiceDefaults +metadata: + name: web +spec: + protocol: http +``` + + + +For `ServiceIntentions`, because we need to support the ability to create +wildcard intentions (e.g. `foo => * (allow)` meaning that `foo` can talk to any service), +and because `*` is not a valid Kubernetes resource name, we instead use the field `spec.destination.name` +to configure the destination service for the intention: + + + +```yaml +# foo => * (allow) +apiVersion: consul.hashicorp.com/v1alpha1 +kind: ServiceIntentions +metadata: + name: name-does-not-matter +spec: + destination: + name: '*' + sources: + - name: foo + action: allow +--- +# foo => web (allow) +apiVersion: consul.hashicorp.com/v1alpha1 +kind: ServiceIntentions +metadata: + name: name-does-not-matter +spec: + destination: + name: web + sources: + - name: foo + action: allow +``` + + + +In addition, we support the field `spec.destination.namespace` to configure +the destination service's Consul namespace. If `spec.destination.namespace` +is empty, then the Consul namespace used is the same as the other +config entries as outlined above. diff --git a/website/content/docs/connect/k8s/index.mdx b/website/content/docs/connect/k8s/index.mdx new file mode 100644 index 000000000000..43dae42002a2 --- /dev/null +++ b/website/content/docs/connect/k8s/index.mdx @@ -0,0 +1,43 @@ +--- +layout: docs +page_title: Connect Kubernetes services with Consul +description: >- + Consul documentation provides reference material for all features and options available in Consul. +--- + +# Connect Kubernetes service mesh with Consul + +This page describes the process to deploy sidecar proxies on Kubernetes so that your services can connect to Consul's service mesh. + +## Introduction + +Consul service mesh is enabled by default when you install Consul on Kubernetes using the Consul Helm chart. Consul also automatically injects sidecars into the pods in your clusters that run Envoy. These sidecar proxies, called Consul dataplanes, are enabled when `connectInject.default` is set to `false` in the Helm chart. + +## Workflows + +To get started with the Consul service mesh on Kubernetes, [enable and configure the connect injector](/consul/docs/connect/k8s/inject). 
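The injector is controlled through the Helm chart's `connectInject` values. As a minimal sketch (the file name `values.yaml` and the exact values shown are illustrative, not a complete production configuration), the following overrides enable the injector while keeping injection opt-in per workload:

```yaml
# values.yaml (sketch): enable the connect injector, but leave the injection
# default off so that workloads opt in with the
# consul.hashicorp.com/connect-inject annotation.
connectInject:
  enabled: true
  default: false
```

Apply the values with `helm install` for a new installation or `helm upgrade` for an existing one.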
+ +If `connectInject.default` is set to `false` or you want to explicitly enable service mesh sidecar proxy injection for a specific deployment, add the `consul.hashicorp.com/connect-inject` annotation to the pod specification template and set it to `true` when connecting services to the mesh. + +Additional configuration examples are available to help you configure your workloads: + +- [Kubernetes Pods running as a deployment](/consul/docs/connect/k8s/workload#kubernetes-pods-running-as-a-deployment) +- [Connecting to mesh-enabled Services](/consul/docs/connect/k8s/workload#connecting-to-mesh-enabled-services) +- [Kubernetes Jobs](/consul/docs/connect/k8s/workload#kubernetes-jobs) +- [Kubernetes Pods with multiple ports](/consul/docs/connect/k8s/workload#kubernetes-pods-with-multiple-ports) + +## Service names + +When the service is onboarded, the name registered in Consul is set to the name of the Kubernetes Service associated with the Pod. You can use the [`consul.hashicorp.com/connect-service` annotation](/consul/docs/k8s/annotations-and-labels#consul-hashicorp-com-connect-service) to specify a custom name for the service, but if ACLs are enabled then the name of the service registered in Consul must match the Pod's `ServiceAccount` name. + +## Transparent proxy mode + +By default, the Consul service mesh runs in transparent proxy mode. This mode forces inbound and outbound traffic through the sidecar proxy even though the service binds to all interfaces. Transparent proxy infers the location of upstream services using Consul service intentions, and also allows you to use Kubernetes DNS as you normally would for your workloads. + +When transparent proxy mode is enabled, all service-to-service traffic is required to use mTLS. When onboarding new services to service mesh, your network may have mixed mTLS and non-mTLS traffic, which can result in broken service-to-service communication. You can temporarily enable permissive mTLS mode during the onboarding process so that existing mesh services can accept traffic from services that are not yet fully onboarded. Permissive mTLS enables sidecar proxies to access both mTLS and non-mTLS traffic. Refer to [Onboard mesh services in transparent proxy mode](/consul/docs/register/service/k8s/transparent-proxy) for additional information. + +## Next steps + +After you start the sidecar proxies, the rest of Consul's service mesh features are available. You can now use Consul to [manage traffic between services](/consul/docs/manage-traffic/k8s) and [observe service mesh telemetry](/consul/docs/observe/telemetry/k8s). + +Your current service mesh is not ready for production environments. To secure north/south access from external sources into the service mesh, [Deploy the Consul API gateway](/consul/docs/north-south/api-gateway). Then, you must secure service-to-service communication with mTLS certificates and service intentions. Refer to [secure the service mesh](/consul/docs/secure-mesh/k8s) for more information. \ No newline at end of file diff --git a/website/content/docs/connect/k8s/inject.mdx b/website/content/docs/connect/k8s/inject.mdx new file mode 100644 index 000000000000..038b4a590dc0 --- /dev/null +++ b/website/content/docs/connect/k8s/inject.mdx @@ -0,0 +1,200 @@ +--- +layout: docs +page_title: Connect Kubernetes services with Consul +description: >- + Consul documentation provides reference material for all features and options available in Consul. 
+--- + +# Custom Consul injection behavior + +This page describes the process to enable the Consul injector so that you can use Consul's service mesh features on Kubernetes, and then configure custom injection behavior such as defaults for Consul and Kubernetes namespaces. + +## Enable connect injector + +The service mesh sidecar proxy is injected via a +[mutating admission webhook](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks) +call the connect injector provided by the +[consul-k8s project](https://github.com/hashicorp/consul-k8s). +This enables the automatic pod mutation shown in the usage section above. +Installation of the mutating admission webhook is automated using the +[Helm chart](/consul/docs/deploy/server/k8s/helm). + +To install the connect injector, enable the connect injection feature using +[Helm values](/consul/docs/reference/k8s/helm#configuration-values) and +upgrade the installation using `helm upgrade` for existing installs or +`helm install` for a fresh install. + +```yaml +connectInject: + enabled: true +``` + +This will configure the injector to inject when the +[injection annotation](#consul-hashicorp-com-connect-inject) +is set to `true`. Other values in the Helm chart can be used to limit the namespaces +the injector runs in, enable injection by default, and more. + +## Verify the injection + +To verify the installation, run the +["Accepting Inbound Connections"](/consul/docs/k8s/connect#accepting-inbound-connections) +example from the "Usage" section above. After running this example, run +`kubectl get pod static-server --output yaml`. In the raw YAML output, you should +see connect injected containers and an annotation +`consul.hashicorp.com/connect-inject-status` set to `injected`. This +confirms that injection is working properly. + +If you do not see this, then use `kubectl logs` against the injector pod +and note any errors. + +## Controlling Injection with Annotations + +By default, the injector will inject only when the +[injection annotation](#consul-hashicorp-com-connect-inject) +on the pod (not the deployment) is set to `true`: + +```yaml +annotations: + 'consul.hashicorp.com/connect-inject': 'true' +``` + +### Injection Defaults + +If you wish for the injector to always inject, you can set the default to `true` +in the Helm chart: + +```yaml +connectInject: + enabled: true + default: true +``` + +You can then exclude specific pods via annotation: + +```yaml +annotations: + 'consul.hashicorp.com/connect-inject': 'false' +``` + +## Controlling Injection for Namespace + +You can control which Kubernetes namespaces are allowed to be injected via +the `k8sAllowNamespaces` and `k8sDenyNamespaces` keys: + +```yaml +connectInject: + enabled: true + k8sAllowNamespaces: ['*'] + k8sDenyNamespaces: [] +``` + +In the default configuration (shown above), services from all namespaces are allowed +to be injected. Whether or not they're injected depends on the value of `connectInject.default` +and the `consul.hashicorp.com/connect-inject` annotation. + +If you wish to only enable injection in specific namespaces, you can list only those +namespaces in the `k8sAllowNamespaces` key. In the configuration below +only the `my-ns-1` and `my-ns-2` namespaces will be enabled for injection. +All other namespaces will be ignored, even if the connect inject [annotation](#consul-hashicorp-com-connect-inject) +is set. 
+ +```yaml +connectInject: + enabled: true + k8sAllowNamespaces: ['my-ns-1', 'my-ns-2'] + k8sDenyNamespaces: [] +``` + +If you wish to enable injection in every namespace _except_ specific namespaces, you can +use `*` in the allow list to allow all namespaces and then specify the namespaces to exclude in the deny list: + +```yaml +connectInject: + enabled: true + k8sAllowNamespaces: ['*'] + k8sDenyNamespaces: ['no-inject-ns-1', 'no-inject-ns-2'] +``` + +-> **NOTE:** The deny list takes precedence over the allow list. If a namespace +is listed in both lists, it will **not** be synced. + +~> **NOTE:** The `kube-system` and `kube-public` namespaces will never be injected. + +### Consul Enterprise Namespaces + +Consul Enterprise 1.7+ supports Consul namespaces. When Kubernetes pods are registered +into Consul, you can control which Consul namespace they are registered into. + +There are three options available: + +1. **Single Destination Namespace** – Register all Kubernetes pods, regardless of namespace, + into the same Consul namespace. + + This can be configured with: + + ```yaml + global: + enableConsulNamespaces: true + + connectInject: + enabled: true + consulNamespaces: + consulDestinationNamespace: 'my-consul-ns' + ``` + + -> **NOTE:** If the destination namespace does not exist we will create it. + +1. **Mirror Namespaces** - Register each Kubernetes pod into a Consul namespace with the same name as its Kubernetes namespace. + For example, pod `foo` in Kubernetes namespace `ns-1` will be synced to the Consul namespace `ns-1`. + If a mirrored namespace does not exist in Consul, it will be created. + + This can be configured with: + + ```yaml + global: + enableConsulNamespaces: true + + connectInject: + enabled: true + consulNamespaces: + mirroringK8S: true + ``` + +1. **Mirror Namespaces With Prefix** - Register each Kubernetes pod into a Consul namespace with the same name as its Kubernetes + namespace **with a prefix**. + For example, given a prefix `k8s-`, pod `foo` in Kubernetes namespace `ns-1` will be synced to the Consul namespace `k8s-ns-1`. + + This can be configured with: + + ```yaml + global: + enableConsulNamespaces: true + + connectInject: + enabled: true + consulNamespaces: + mirroringK8S: true + mirroringK8SPrefix: 'k8s-' + ``` + +### Consul Enterprise Namespace Upstreams + +When [transparent proxy](/consul/docs/connect/transparent-proxy) is enabled and ACLs are disabled, +the upstreams will be configured automatically across Consul namespaces. +When ACLs are enabled, you must configure it by specifying an [intention](/consul/docs/secure-mesh/intention), +allowing services across Consul namespaces to talk to each other. + +If you wish to specify an upstream explicitly via the `consul.hashicorp.com/connect-service-upstreams` annotation, +use the format `[service-name].[namespace]:[port]:[optional datacenter]`: + +```yaml +annotations: + 'consul.hashicorp.com/connect-inject': 'true' + 'consul.hashicorp.com/connect-service-upstreams': '[service-name].[namespace]:[port]:[optional datacenter]' +``` + +See [consul.hashicorp.com/connect-service-upstreams](/consul/docs/k8s/annotations-and-labels#consul-hashicorp-com-connect-service-upstreams) for more details. + +-> **Note:** When you specify upstreams via an upstreams annotation, you will need to use +`localhost:` with the port from the upstreams annotation instead of KubeDNS to connect to your upstream +application. 
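For example, assuming a workload annotated with the upstream value `'backend:1234'` (the service name `backend` and port `1234` are placeholders for this sketch), the application reaches that upstream through the local proxy listener rather than through KubeDNS:

```shell-session
$ curl http://localhost:1234
```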
\ No newline at end of file diff --git a/website/content/docs/connect/k8s/workload.mdx b/website/content/docs/connect/k8s/workload.mdx new file mode 100644 index 000000000000..2d7a9059ad7e --- /dev/null +++ b/website/content/docs/connect/k8s/workload.mdx @@ -0,0 +1,486 @@ +--- +layout: docs +page_title: Kubernetes service mesh workload scenarios +description: >- + An injection annotation allows Consul to automatically deploy sidecar proxies on Kubernetes pods, enabling Consul's service mesh for containers running on k8s. Learn how to configure sidecars, enable services with multiple ports (multiport or multi-port Services), change default injection settings. +--- + +# Kubernetes service mesh workload scenarios + +This page provides example workflows for registering workloads on Kubernetes into Consul's service mesh in different scenarios, including multiport deployments. Each scenario provides an example Kubernetes manifest to demonstrate how to use Consul's service mesh with a specific Kubernetes workload type. + +-> **Note:** A Kubernetes Service is required in order to register services on the Consul service mesh. Consul monitors the lifecycle of the Kubernetes Service and its service instances using the service object. In addition, the Kubernetes service is used to register and de-register the service from Consul's catalog. + +## Kubernetes Pods running as a deployment + +The following example shows a Kubernetes configuration that specifically enables service mesh connections for the `static-server` service. Consul starts and registers a sidecar proxy that listens on port 20000 by default and proxies valid inbound connections to port 8080. + + + +```yaml +apiVersion: v1 +kind: Service +metadata: + # This name will be the service name in Consul. + name: static-server +spec: + selector: + app: static-server + ports: + - protocol: TCP + port: 80 + targetPort: 8080 +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: static-server +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: static-server +spec: + replicas: 1 + selector: + matchLabels: + app: static-server + template: + metadata: + name: static-server + labels: + app: static-server + annotations: + 'consul.hashicorp.com/connect-inject': 'true' + spec: + containers: + - name: static-server + image: hashicorp/http-echo:latest + args: + - -text="hello world" + - -listen=:8080 + ports: + - containerPort: 8080 + name: http + # If ACLs are enabled, the serviceAccountName must match the Consul service name. + serviceAccountName: static-server +``` + + + +To establish a connection to the upstream Pod using service mesh, a client must dial the upstream workload using a mesh proxy. The client mesh proxy will use Consul service discovery to find all available upstream proxies and their public ports. + +## Connecting to mesh-enabled Services + +The example Deployment specification below configures a Deployment that is capable +of establishing connections to our previous example "static-server" service. The +connection to this static text service happens over an authorized and encrypted +connection via service mesh. + + + +```yaml +apiVersion: v1 +kind: Service +metadata: + # This name will be the service name in Consul. 
+ name: static-client +spec: + selector: + app: static-client + ports: + - port: 80 +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: static-client +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: static-client +spec: + replicas: 1 + selector: + matchLabels: + app: static-client + template: + metadata: + name: static-client + labels: + app: static-client + annotations: + 'consul.hashicorp.com/connect-inject': 'true' + spec: + containers: + - name: static-client + image: curlimages/curl:latest + # Just spin & wait forever, we'll use `kubectl exec` to demo + command: ['/bin/sh', '-c', '--'] + args: ['while true; do sleep 30; done;'] + # If ACLs are enabled, the serviceAccountName must match the Consul service name. + serviceAccountName: static-client +``` + + + +By default when ACLs are enabled or when ACLs default policy is `allow`, +Consul will automatically configure proxies with all upstreams from the same datacenter. +When ACLs are enabled with default `deny` policy, +you must supply an [intention](/consul/docs/secure-mesh/intention) to tell Consul which upstream you need to talk to. + +When upstreams are specified explicitly with the +[`consul.hashicorp.com/connect-service-upstreams` annotation](/consul/docs/k8s/annotations-and-labels#consul-hashicorp-com-connect-service-upstreams), +the injector will also set environment variables `_CONNECT_SERVICE_HOST` +and `_CONNECT_SERVICE_PORT` in every container in the Pod for every defined +upstream. This is analogous to the standard Kubernetes service environment variables, but +point instead to the correct local proxy port to establish connections via +service mesh. + +You cannot reference auto-generated environment variables when the upstream annotation contains a dot. This is because Consul also renders the environment variables to include a dot. For example, Consul renders the variables generated for `static-server.svc:8080` as `STATIC-SERVER.SVC_CONNECT_SERVICE_HOST` and `STATIC_SERVER.SVC_CONNECT_SERVICE_PORT`, which makes the variables unusable. +You can verify access to the static text server using `kubectl exec`. +Because transparent proxy is enabled by default, +use Kubernetes DNS to connect to your desired upstream. + +```shell-session +$ kubectl exec deploy/static-client -- curl --silent http://static-server/ +"hello world" +``` + +You can control access to the server using [intentions](/consul/docs/secure-mesh/intention). +If you use the Consul UI or [CLI](/consul/commands/intention/create) to +deny communication between +"static-client" and "static-server", connections are immediately rejected +without updating either of the running pods. You can then remove this +intention to allow connections again. + +```shell-session +$ kubectl exec deploy/static-client -- curl --silent http://static-server/ +command terminated with exit code 52 +``` + +## Kubernetes Jobs + +Kubernetes Jobs run pods that only make outbound requests to services on the mesh and successfully terminate when they are complete. In order to register a Kubernetes Job with the mesh, you must provide an integer value for the `consul.hashicorp.com/sidecar-proxy-lifecycle-shutdown-grace-period-seconds` annotation. Then, issue a request to the `http://127.0.0.1:20600/graceful_shutdown` API endpoint so that Kubernetes gracefully shuts down the `consul-dataplane` sidecar after the job is complete. + +Below is an example Kubernetes manifest that deploys a job correctly. 
+ + + +```yaml +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: test-job + namespace: default +--- +apiVersion: v1 +kind: Service +metadata: + name: test-job + namespace: default +spec: + selector: + app: test-job + ports: + - port: 80 +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: test-job + namespace: default + labels: + app: test-job +spec: + template: + metadata: + annotations: + 'consul.hashicorp.com/connect-inject': 'true' + 'consul.hashicorp.com/sidecar-proxy-lifecycle-shutdown-grace-period-seconds': '5' + labels: + app: test-job + spec: + containers: + - name: test-job + image: alpine/curl:3.14 + ports: + - containerPort: 80 + command: + - /bin/sh + - -c + - | + echo "Started test job" + sleep 10 + echo "Killing proxy" + curl --max-time 2 -s -f -X POST http://127.0.0.1:20600/graceful_shutdown + sleep 10 + echo "Ended test job" + serviceAccountName: test-job + restartPolicy: Never +``` + + + +Upon completing the job you should be able to verify that all containers are shut down within the pod. + +```shell-session +$ kubectl get pods +NAME READY STATUS RESTARTS AGE +test-job-49st7 0/2 Completed 0 3m55s +``` + +```shell-session +$ kubectl get job +NAME COMPLETIONS DURATION AGE +test-job 1/1 30s 4m31s +``` + +In addition, based on the logs emitted by the pod you can verify that the proxy was shut down before the Job completed. + +```shell-session +$ kubectl logs test-job-49st7 -c test-job +Started test job +Killing proxy +Ended test job +``` + +## Kubernetes Pods with multiple ports + +To configure a pod with multiple ports to be a part of the service mesh and receive and send service mesh traffic, you +will need to add configuration so that a Consul service can be registered per port. This is because services in Consul +currently support a single port per service instance. + +In the following example, suppose we have a pod which exposes 2 ports, `8080` and `9090`, both of which will need to +receive service mesh traffic. + +First, decide on the names for the two Consul services that will correspond to those ports. In this example, the user +chooses the names `web` for `8080` and `web-admin` for `9090`. + +Create two service accounts for `web` and `web-admin`: + + + +```yaml +apiVersion: v1 +kind: ServiceAccount +metadata: + name: web +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: web-admin +``` + + + + +Create two Service objects for `web` and `web-admin`: + + + +```yaml +apiVersion: v1 +kind: Service +metadata: + name: web +spec: + selector: + app: web + ports: + - protocol: TCP + port: 80 + targetPort: 8080 +--- +apiVersion: v1 +kind: Service +metadata: + name: web-admin +spec: + selector: + app: web + ports: + - protocol: TCP + port: 80 + targetPort: 9090 +``` + + + +`web` will target `containerPort` `8080` and select pods labeled `app: web`. `web-admin` will target `containerPort` +`9090` and will also select the same pods. + +~> Kubernetes 1.24+ only +In Kubernetes 1.24+ you need to [create a Kubernetes secret](https://kubernetes.io/docs/concepts/configuration/secret/#service-account-token-secrets) for each additional Consul service associated with the pod in order to expose the Kubernetes ServiceAccount token to the Consul dataplane container running under the pod serviceAccount. 
The Kubernetes secret name must match the ServiceAccount name: + + + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: web + annotations: + kubernetes.io/service-account.name: web +type: kubernetes.io/service-account-token +--- +apiVersion: v1 +kind: Secret +metadata: + name: web-admin + annotations: + kubernetes.io/service-account.name: web-admin +type: kubernetes.io/service-account-token +``` + + + +Create a Deployment with any chosen name, and use the following annotations: +```yaml +annotations: + 'consul.hashicorp.com/connect-inject': 'true' + 'consul.hashicorp.com/transparent-proxy': 'false' + 'consul.hashicorp.com/connect-service': 'web,web-admin' + 'consul.hashicorp.com/connect-service-port': '8080,9090' +``` +Note that the order the ports are listed in the same order as the service names, i.e. the first service name `web` +corresponds to the first port, `8080`, and the second service name `web-admin` corresponds to the second port, `9090`. + +The service account on the pod spec for the deployment should be set to the first service name `web`: +```yaml +serviceAccountName: web +``` + +The following deployment example demonstrates the required annotations for the manifest. In addition, the previous YAML manifests can also be combined into a single manifest for easier deployment. + + + +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: web +spec: + replicas: 1 + selector: + matchLabels: + app: web + template: + metadata: + name: web + labels: + app: web + annotations: + 'consul.hashicorp.com/connect-inject': 'true' + 'consul.hashicorp.com/transparent-proxy': 'false' + 'consul.hashicorp.com/connect-service': 'web,web-admin' + 'consul.hashicorp.com/connect-service-port': '8080,9090' + spec: + containers: + - name: web + image: hashicorp/http-echo:latest + args: + - -text="hello world" + - -listen=:8080 + ports: + - containerPort: 8080 + name: http + - name: web-admin + image: hashicorp/http-echo:latest + args: + - -text="hello world from 9090" + - -listen=:9090 + ports: + - containerPort: 9090 + name: http + serviceAccountName: web +``` + + + +After deploying the `web` application, you can test service mesh connections by deploying the `static-client` +application with the configuration in the [previous section](#connecting-to-mesh-enabled-services) and add the +`consul.hashicorp.com/connect-service-upstreams: 'web:1234,web-admin:2234'` annotation to the pod template on `static-client`: + + + +```yaml +apiVersion: v1 +kind: Service +metadata: + # This name will be the service name in Consul. + name: static-client +spec: + selector: + app: static-client + ports: + - port: 80 +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: static-client +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: static-client +spec: + replicas: 1 + selector: + matchLabels: + app: static-client + template: + metadata: + name: static-client + labels: + app: static-client + annotations: + 'consul.hashicorp.com/connect-inject': 'true' + 'consul.hashicorp.com/connect-service-upstreams': 'web:1234,web-admin:2234' + spec: + containers: + - name: static-client + image: curlimages/curl:latest + # Just spin & wait forever, we'll use `kubectl exec` to demo + command: ['/bin/sh', '-c', '--'] + args: ['while true; do sleep 30; done;'] + # If ACLs are enabled, the serviceAccountName must match the Consul service name. 
+ serviceAccountName: static-client +``` + + + +If you exec on to a static-client pod, using a command like: +```shell-session +$ kubectl exec -it static-client-5bd667fbd6-kk6xs -- /bin/sh +``` +you can then run: +```shell-session +$ curl localhost:1234 +``` +to see the output `hello world` and run: +```shell-session +$ curl localhost:2234 +``` +to see the output `hello world from 9090`. + +The way this works is that a Consul service instance is being registered per port on the Pod, so there are 2 Consul +services in this case. An additional Envoy sidecar proxy and `connect-init` init container are also deployed per port in +the Pod. So the upstream configuration can use the individual service names to reach each port as seen in the example. + +### Caveats for Multi-port Pods + +- Transparent proxy is not supported for multi-port Pods. +- Metrics and metrics merging is not supported for multi-port Pods. +- Upstreams will only be set on the first service's Envoy sidecar proxy for the pod. + - This means that ServiceIntentions from a multi-port pod to elsewhere, will need to use the first service's name, + `web` in the example above to accept connections from either `web` or `web-admin`. ServiceIntentions from elsewhere + to a multi-port pod can use the individual service names within the multi-port Pod. +- Health checking is done on a per-Pod basis, so if any Kubernetes health checks (like readiness, liveness, etc) are + failing for any container on the Pod, the entire Pod is marked unhealthy, and any Consul service referencing that Pod + will also be marked as unhealthy. So, if `web` has a failing health check, `web-admin` would also be marked as + unhealthy for service mesh traffic. \ No newline at end of file diff --git a/website/content/docs/connect/lambda/function.mdx b/website/content/docs/connect/lambda/function.mdx new file mode 100644 index 000000000000..1600a67d8952 --- /dev/null +++ b/website/content/docs/connect/lambda/function.mdx @@ -0,0 +1,82 @@ +--- +layout: docs +page_title: Invoke AWS Lambda Functions +description: >- + You can invoke an Amazon Web Services Lambda function in your Consul service mesh by configuring terminating gateways or sidecar proxies. Learn how to declare a registered function as an upstream and why we recommend using terminating gateways with Lambda. +--- + +# Invoke Lambda Functions from Mesh Services + +This topic describes how to invoke AWS Lambda functions from the Consul service mesh. + +## Overview + +You can invoke Lambda functions from the Consul service mesh through terminating gateways (recommended) or directly from service mesh proxies. + +### Terminating Gateway + +We recommend invoking Lambda functions through terminating gateways. This method supports cross-datacenter communication, transparent +proxies, intentions, and all other Consul service mesh features. + +The terminating gateway must have [the appropriate IAM permissions](/consul/docs/lambda/registration#configure-iam-permissions-for-envoy) +to invoke the function. + +The following diagram shows the invocation procedure: + + + +![Terminating Gateway to Lambda](/img/terminating_gateway_to_lambda.svg) + + + +1. Make an HTTP request to the local service mesh proxy. +1. The service mesh proxy forwards the request to the terminating gateway. +1. The terminating gateway invokes the function. + +### Service Mesh Proxy + +You can invoke Lambda functions directly from a service's mesh sidecar proxy. +This method has the following limitations: +- Intentions are unsupported. 
Consul enforces intentions by validating the client certificates presented when a connection is received. Lambda does not support client certificate validation, which prevents Consul from supporting intentions using this method.
+- Transparent proxies are unsupported. This is because Lambda services are not
+  registered to a proxy.
+
+This method is secure because AWS IAM permissions are required to invoke Lambda functions. Additionally, all communication is encrypted with TLS when invoking Lambda resources.
+
+The Envoy sidecar proxy must have the correct AWS IAM credentials to invoke the function. You can define the credentials in environment variables, EC2 metadata, or ECS task metadata.
+
+The following diagram shows the invocation procedure:
+
+![Service Mesh Proxy to Lambda](/img/connect_proxy_to_lambda.svg)
+
+1. Make an HTTP request to the local service mesh proxy.
+2. The service mesh proxy invokes the Lambda function.
+
+## Invoke a Lambda Function
+
+Before you can invoke a Lambda function, register the service used to invoke the Lambda function and the service running in Lambda with Consul (refer to [registration](/consul/docs/register/service/lambda) for instructions). The service used to invoke the function must be deployed to the service mesh.
+
+1. Update the invoking service to use the Lambda service as an upstream. In the following example, the `destination_name` for the invoking service (`api`) points to a Lambda service called `authentication`:
+
+   ```hcl
+   upstreams {
+     local_bind_port = 2345
+     destination_name = "authentication"
+   }
+   ```
+
+1. Issue the `consul services register` command to store the configuration:
+
+   ```shell-session
+   $ consul services register api-sidecar-proxy.hcl
+   ```
+
+1. Call the upstream service to invoke the Lambda function. In the following example, the `api` service invokes the `authentication` service at `localhost:2345`:
+
+   ```shell-session
+   $ curl https://localhost:2345
+   ```
diff --git a/website/content/docs/connect/lambda/index.mdx b/website/content/docs/connect/lambda/index.mdx
new file mode 100644
index 000000000000..06eb9ba0a93f
--- /dev/null
+++ b/website/content/docs/connect/lambda/index.mdx
@@ -0,0 +1,40 @@
+---
+layout: docs
+page_title: Connect Lambda services with Consul
+description: >-
+  Consul documentation provides reference material for all features and options available in Consul.
+---
+
+# Connect Lambda services with Consul
+
+You can configure Consul to allow services in your mesh to invoke Lambda functions, as well as allow Lambda functions to invoke services in your mesh. Lambda functions are programs or scripts that run in AWS Lambda. Refer to the [AWS Lambda website](https://aws.amazon.com/lambda/) for additional information.
+
+## Register Lambda functions into Consul
+
+The first step is to register your Lambda functions into Consul. We recommend using the [Lambda registrator module](https://github.com/hashicorp/terraform-aws-consul-lambda/tree/main/modules/lambda-registrator) to automatically synchronize Lambda functions into Consul. You can also manually register Lambda functions into Consul if you are unable to use the Lambda registrator.
+
+Refer to [Lambda Function Registration Requirements](/consul/docs/register/service/lambda) for additional information about registering Lambda functions into Consul.
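The Lambda registrator can discover functions to synchronize based on their AWS tags. The following Terraform sketch is illustrative only: the function name, role, and runtime settings are placeholders, and the tag key shown is the one used by the deployment example later in this document.

```hcl
# Sketch: tag a Lambda function so that the Lambda registrator synchronizes it
# into the Consul catalog. All function arguments here are placeholders.
resource "aws_lambda_function" "authentication" {
  function_name = "authentication"
  role          = aws_iam_role.lambda.arn
  handler       = "bootstrap"
  runtime       = "provided.al2"
  filename      = "authentication.zip"

  tags = {
    "serverless.consul.hashicorp.com/v1alpha1/lambda/enabled" = "true"
  }
}
```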
+
+## Invoke Lambda functions from Consul service mesh
+
+After registering AWS Lambda functions, you can invoke Lambda functions from the Consul service mesh through terminating gateways (recommended) or directly from connected proxies.
+
+Refer to [Invoke Lambda Functions from Services](/consul/docs/connect/lambda/function) for details.
+
+## Invoke mesh services from Lambda function
+
+Functionality associated with beta features is subject to change. You should never use the beta release in secure environments or production scenarios. Features in beta may have performance issues, scaling issues, and limited support.
+
+You can also add the `consul-lambda-extension` plugin as a layer in your Lambda functions, which enables them to send requests to services in the mesh. The plugin starts a lightweight sidecar proxy that directs requests from Lambda functions to [mesh gateways](/consul/docs/connect/gateways#mesh-gateways). The gateways route traffic to the destination service to complete the request.
+
+![Invoke mesh service from Lambda function](/img/invoke-service-from-lambda-flow.svg)
+
+Refer to [Invoke Services from Lambda Functions](/consul/docs/connect/lambda/service) for additional information about invoking mesh services from Lambda functions.
+
+Consul mesh gateways are required to send requests from Lambda functions to mesh services. Refer to [Mesh Gateways](/consul/docs/east-west/mesh-gateway/) for additional information.
+
+Note that L7 traffic management features are not supported. As a result, requests from Lambda functions ignore service routes and splitters.
diff --git a/website/content/docs/connect/lambda/service.mdx b/website/content/docs/connect/lambda/service.mdx
new file mode 100644
index 000000000000..7cd6b736609b
--- /dev/null
+++ b/website/content/docs/connect/lambda/service.mdx
@@ -0,0 +1,273 @@
+---
+layout: docs
+page_title: Invoke Services from Lambda Functions
+description: >-
+  This topic describes how to invoke services in the mesh from Lambda functions registered with Consul.
+---
+
+# Invoke Services from Lambda Functions
+
+This topic describes how to invoke services in the mesh from Lambda functions registered with Consul.
+
+~> **Lambda-to-mesh functionality is currently in beta**: Functionality associated with beta features is subject to change. You should never use the beta release in secure environments or production scenarios. Features in beta may have performance issues, scaling issues, and limited support.
+
+## Introduction
+
+The following steps describe the process:
+
+1. Deploy the destination service and mesh gateway.
+1. Deploy the Lambda extension layer.
+1. Deploy the Lambda registrator.
+1. Write the Lambda function code.
+1. Deploy the Lambda function.
+1. Invoke the Lambda function.
+
+You must add the `consul-lambda-extension` extension as a Lambda layer to enable Lambda functions to send requests to mesh services. Refer to the [AWS Lambda documentation](https://docs.aws.amazon.com/lambda/latest/dg/invocation-layers.html) for instructions on how to add layers to your Lambda functions.
+
+The layer runs an external Lambda extension that starts a sidecar proxy. The proxy listens on one port for each upstream service and upgrades the outgoing connections to mTLS. It then proxies the requests through to [mesh gateways](/consul/docs/connect/gateways#mesh-gateways).
+
+## Prerequisites
+
+You must deploy the destination services and mesh gateway prior to deploying your Lambda service with the `consul-lambda-extension` layer.
+ +### Deploy the destination service + +There are several methods for deploying services to Consul service mesh. The following example configuration deploys a service named `static-server` with Consul on Kubernetes. + +```yaml +kind: Service +apiVersion: v1 +metadata: + # Specifies the service name in Consul. + name: static-server +spec: + selector: + app: static-server + ports: + - protocol: TCP + port: 80 + targetPort: 8080 +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: static-server +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: static-server +spec: + replicas: 1 + selector: + matchLabels: + app: static-server + template: + metadata: + name: static-server + labels: + app: static-server + annotations: + 'consul.hashicorp.com/connect-inject': 'true' + spec: + containers: + - name: static-server + image: hashicorp/http-echo:latest + args: + - -text="hello world" + - -listen=:8080 + ports: + - containerPort: 8080 + name: http + serviceAccountName: static-server +``` + +### Deploy the mesh gateway + +The mesh gateway must be running and registered to the Lambda function’s Consul datacenter. Refer to the following documentation and tutorials for instructions: + +- [Mesh Gateways between WAN-Federated Datacenters](/consul/docs/east-west/mesh-gateway/federation) +- [Mesh Gateways between Admin Partitions](/consul/docs/east-west/mesh-gateway/admin-partition) +- [Establish cluster peering connections](/consul/docs/east-west/cluster-peering/establish/vm) +- [Connect Services Across Datacenters with Mesh Gateways](/consul/tutorials/developer-mesh/service-mesh-gateways) + +## Deploy the Lambda extension layer + +The `consul-lambda-extension` extension runs during the `Init` phase of the Lambda function execution. The extension retrieves the data that the Lambda registrator has been configured to store from AWS Parameter Store and creates a lightweight TCP proxy. The proxy creates a local listener for each upstream defined in the `CONSUL_SERVICE_UPSTREAMS` environment variable. + +The extension periodically retrieves the data from the AWS Parameter Store so that the function can process requests. When the Lambda function receives a shutdown event, the extension also stops. + +1. Download the `consul-lambda-extension` extension from [releases.hashicorp.com](https://releases.hashicorp.com/): + + ```shell-session + curl -o consul-lambda-extension__linux_amd64.zip https://releases.hashicorp.com/consul-lambda//consul-lambda-extension__linux_amd64.zip + ``` +1. Create the AWS Lambda layer in the same AWS region as the Lambda function. You can create the layer manually using the AWS CLI or AWS Console, but we recommend using Terraform: + + + + ```hcl + resource "aws_lambda_layer_version" "consul_lambda_extension" { + layer_name = "consul-lambda-extension" + filename = "consul-lambda-extension__linux_amd64.zip" + source_code_hash = filebase64sha256("consul-lambda-extension__linux_amd64.zip") + description = "Consul service mesh extension for AWS Lambda" + } + ``` + + + +## Deploy the Lambda registrator + +Configure and deploy the Lambda registrator. Refer to the [registrator configuration documentation](/consul/docs/lambda/registration/automate#configuration) and the [registrator deployment documentation](/consul/docs/lambda/registration/automate#deploy-the-lambda-registrator) for instructions. 
+ +## Write the Lambda function code + +Refer to the [AWS Lambda documentation](https://docs.aws.amazon.com/lambda/latest/dg/getting-started.html) for instructions on how to write a Lambda function. In the following example, the function calls an upstream service on port `2345`: + + +```go +package main + +import ( + "context" + "io" + "fmt" + "net/http" + "github.com/aws/aws-lambda-go/lambda" +) + +type Response struct { + StatusCode int `json:"statusCode"` + Body string `json:"body"` +} + +func HandleRequest(ctx context.Context, _ interface{}) (Response, error) { + resp, err := http.Get("http://localhost:2345") + fmt.Println("Got response", resp) + if err != nil { + return Response{StatusCode: 500, Body: "Something bad happened"}, err + } + + if resp.StatusCode != 200 { + return Response{StatusCode: resp.StatusCode, Body: resp.Status}, err + } + + defer resp.Body.Close() + + b, err := io.ReadAll(resp.Body) + if err != nil { + return Response{StatusCode: 500, Body: "Error decoding body"}, err + } + + return Response{StatusCode: 200, Body: string(b)}, nil +} + +func main() { + lambda.Start(HandleRequest) +} +``` + +## Deploy the Lambda function + +1. Create and apply an IAM policy that allows the Lambda function’s role to fetch the Lambda extension’s data from the AWS Parameter Store. The following example, creates an IAM role for the Lambda function, creates an IAM policy with the necessary permissions and attaches the policy to the role: + + + + ```hcl + resource "aws_iam_role" "lambda" { + name = "lambda-role" + + assume_role_policy = < + +1. Configure and deploy the Lambda function. Refer to the [Lambda extension configuration](#lambda-extension-configuration) reference for information about all available options. There are several methods for deploying Lambda functions. The following example uses Terraform to deploy a function that can invoke the `static-server` upstream service using mTLS data stored under the `/lambda_extension_data` prefix: + + + + ```hcl + resource "aws_lambda_function" "example" { + … + function_name = "lambda" + role = aws_iam_role.lambda.arn + tags = { + "serverless.consul.hashicorp.com/v1alpha1/lambda/enabled" = "true" + } + variables = { + environment = { + CONSUL_MESH_GATEWAY_URI = var.mesh_gateway_http_addr + CONSUL_SERVICE_UPSTREAMS = "static-server:2345:dc1" + CONSUL_EXTENSION_DATA_PREFIX = "/lambda_extension_data" + } + } + layers = [aws_lambda_layer_version.consul_lambda_extension.arn] + ``` + + + +1. Run the `terraform apply` command and Consul automatically configures a service for the Lambda function. + +### Lambda extension configuration + +Define the following environment variables in your Lambda functions to configure the Lambda extension. The variables apply to each Lambda function in your environment: + +| Variable | Description | Default | +| --- | --- | --- | +| `CONSUL_MESH_GATEWAY_URI` | Specifies the URI where the mesh gateways that the plugin makes requests are running. The mesh gateway should be registered in the same Consul datacenter and partition that the service is running in. For optimal performance, this mesh gateway should run in the same AWS region. | none | +| `CONSUL_EXTENSION_DATA_PREFIX` | Specifies the prefix that the plugin pulls configuration data from. The data must be located in the following directory:
`"${CONSUL_EXTENSION_DATA_PREFIX}/${CONSUL_SERVICE_PARTITION}/${CONSUL_SERVICE_NAMESPACE}/"` | none | +| `CONSUL_SERVICE_NAMESPACE` | Specifies the Consul namespace the service is registered into. | `default` | +| `CONSUL_SERVICE_PARTITION` | Specifies the Consul partition the service is registered into. | `default` | +| `CONSUL_REFRESH_FREQUENCY` | Specifies the amount of time the extension waits before re-pulling data from the Parameter Store. Use [Go `time.Duration`](https://pkg.go.dev/time@go1.19.1#ParseDuration) string values, for example, `"30s"`.
The time is added to the duration configured in the Lambda registrator `sync_frequency_in_minutes` configuration. Refer to [Lambda registrator configuration options](/consul/docs/lambda/registration/automate#lambda-registrator-configuration-options). The combined configurations determine how stale the data may become. Lambda functions can run for up to 14 hours, so we recommend configuring a value that results in acceptable staleness for certificates. | `"5m"` | +| `CONSUL_SERVICE_UPSTREAMS` | Specifies a comma-separated list of upstream services that the Lambda function can call. Specify the value as an unlabelled annotation according to the [`consul.hashicorp.com/connect-service-upstreams` annotation format](/consul/docs/k8s/annotations-and-labels#consul-hashicorp-com-connect-service-upstreams) in Consul on Kubernetes. For example, `"[service-name]:[port]:[optional-datacenter]"` | none | + +## Invoke the Lambda function + +If _intentions_ are enabled in the Consul service mesh, you must create an intention that allows the Lambda function's Consul service to invoke all upstream services prior to invoking the Lambda function. Refer to [Service mesh intentions](/consul/docs/secure-mesh/intention) for additional information. + +There are several ways to invoke Lambda functions. In the following example, the `aws lambda invoke` CLI command invokes the function: + +```shell-session +$ aws lambda invoke --function-name lambda /dev/stdout | cat +``` diff --git a/website/content/docs/connect/manage-traffic/failover/index.mdx b/website/content/docs/connect/manage-traffic/failover/index.mdx deleted file mode 100644 index 52030e40689d..000000000000 --- a/website/content/docs/connect/manage-traffic/failover/index.mdx +++ /dev/null @@ -1,54 +0,0 @@ ---- -layout: docs -page_title: Failover configuration overview -description: Learn about failover strategies and service mesh features you can implement to route traffic if services become unhealthy or unreachable, including sameness groups, prepared queries, and service resolvers. ---- - -# Failover overview - -Services in your mesh may become unhealthy or unreachable for many reasons, but you can mitigate some of the effects associated with infrastructure issues by configuring Consul to automatically route traffic to and from failover service instances. This topic provides an overview of the failover strategies you can implement with Consul. - -## Service failover strategies in Consul - -There are several methods for implementing failover strategies between datacenters in Consul. You can adopt one of the following strategies based on your deployment configuration and network requirements: - -- Configure the `Failover` stanza in a service resolver configuration entry to explicitly define which services should failover and the targeting logic they should follow. -- Make a prepared query for each service that you can use to automate geo-failover. -- Create a sameness group to identify partitions with identical namespaces and service names to establish default failover targets. - -The following table compares these strategies in deployments with multiple datacenters to help you determine the best approach for your service: - -| | `Failover` stanza | Prepared
query | Sameness groups | -| --- | :---: | :---: | :---: | -| **Supports WAN federation** | ✅ | ✅ | ❌ | -| **Supports cluster peering** | ✅ | ❌ | ✅ | -| **Supports locality-aware routing** | ✅ | ❌ | ✅ | -| **Multi-datacenter failover strength** | ✅ | ❌ | ✅ | -| **Multi-datacenter usage scenario** | Enables more granular logic for failover targeting. | Central policies that can automatically target the nearest datacenter. | Group size changes without edits to existing member configurations. | -| **Multi-datacenter usage scenario** | Configuring failover for a single service or service subset, especially for testing or debugging purposes | WAN-federated deployments where a primary datacenter is configured. Prepared queries are not replicated over peer connections. | Cluster peering deployments with consistently named services and namespaces. | - -Although cluster peering connections support the [`Failover` field of the prepared query request schema](/consul/api-docs/query#failover) when using Consul's service discovery features to [perform dynamic DNS queries](/consul/docs/services/discovery/dns-dynamic-lookups), they do not support prepared queries for service mesh failover scenarios. - -### Failover configurations for a service mesh with a single datacenter - -You can implement a service resolver configuration entry and specify a pool of failover service instances that other services can exchange messages with when the primary service becomes unhealthy or unreachable. We recommend adopting this strategy as a minimum baseline when implementing Consul service mesh and layering additional failover strategies to build resilience into your application network. - -Refer to the [`Failover` configuration ](/consul/docs/connect/config-entries/service-resolver#failover) for examples of how to configure failover services in the service resolver configuration entry on both VMs and Kubernetes deployments. - -### Failover configuration for WAN-federated datacenters - -If your network has multiple Consul datacenters that are WAN-federated, you can configure your applications to look for failover services with prepared queries. [Prepared queries](/consul/api-docs/) are configurations that enable you to define complex service discovery lookups. This strategy hinges on the secondary datacenter containing service instances that have the same name and residing in the same namespace as their counterparts in the primary datacenter. - -Refer to the [Automate geo-failover with prepared queries tutorial](/consul/tutorials/developer-discovery/automate-geo-failover) for additional information. - -### Failover configuration for peered clusters and partitions - -In networks with multiple datacenters or partitions that share a peer connection, each datacenter or partition functions as an independent unit. As a result, Consul does not correlate services that have the same name, even if they are in the same namespace. - -You can configure sameness groups for this type of network. Sameness groups allow you to define a group of admin partitions where identical services are deployed in identical namespaces. After you configure the sameness group, you can reference the `SamenessGroup` parameter in service resolver, exported service, and service intention configuration entries, enabling you to add or remove cluster peers from the group without making changes to every cluster peer every time. - -You can configure a sameness group so that it functions as the default for failover behavior. 
You can also reference sameness groups in a service resolver's `Failover` stanza or in a prepared query. Refer to [Failover with sameness groups](/consul/docs/connect/manage-traffic/failover/sameness) for more information. - -## Locality-aware routing - -By default, Consul balances traffic to all healthy upstream instances in the cluster, even if the instances are in different network regions and zones. You can configure Consul to route requests to upstreams in the same region and zone, which reduces latency and transfer costs. Refer to [Route traffic to local upstreams](/consul/docs/connect/manage-traffic/route-to-local-upstreams) for additional information. \ No newline at end of file diff --git a/website/content/docs/connect/manage-traffic/failover/sameness.mdx b/website/content/docs/connect/manage-traffic/failover/sameness.mdx deleted file mode 100644 index ac8c8745fecc..000000000000 --- a/website/content/docs/connect/manage-traffic/failover/sameness.mdx +++ /dev/null @@ -1,203 +0,0 @@ ---- -layout: docs -page_title: Failover with sameness groups -description: You can configure sameness groups so that when a service instance fails, traffic automatically routes to an identical service instance. Learn how to use sameness groups to create a failover strategy for deployments with multiple datacenters and cluster peering connections. ---- - -# Failover with sameness groups - -This page describes how to use sameness groups to automatically redirect service traffic to healthy instances in failover scenarios. Sameness groups are a user-defined set of Consul admin partitions with identical registered services. These admin partitions typically belong to Consul datacenters in different cloud regions, which enables sameness groups to participate in several service failover configuration strategies. - -To create a sameness group and configure each Consul datacenter to allow traffic from other members of the group, refer to [create sameness groups](/consul/docs/connect/cluster-peering/usage/create-sameness-groups). - -## Failover strategies - -You can edit a sameness group configuration entry so that all services failover to healthy instances on other members of a sameness group by default. You can also reference the sameness group in other configuration entries to enact other failover strategies for your network. - -You can establish a failover strategy by configuring sameness group behavior in the following locations: - -- Sameness group configuration entry -- Service resolver configuration entry -- Prepared queries - -You can also configure service instances to route to upstreams in the same availability region during a failover. Refer to [Route traffic to local upstreams](/consul/docs/connect/manage-traffic/route-to-local-upstreams) for additional information. - -### Failover with a sameness group configuration entry - -To define failover behavior using a sameness group configuration entry, set `DefaultForFailover=true` and then apply the updated configuration to all clusters that are members of the group. - -In the following example configuration entry, datacenter `dc1` has two partitions, `partition-1` and `partition-2`. A second datacenter, `dc2`, has a single partition named `partition-1`. All three partitions have identically configured services and established cluster peering connections. The configuration entry defines a sameness group, `example-sg` in `dc1`. 
When redirecting traffic during a failover scenario, Consul attempts to find a healthy instance in a specific order: `dc1-partition-1`, then `dc1-partition-2`, then `dc2-partition-1`. - - - - - -```hcl -Kind = "sameness-group" -Name = "example-sg" -Partition = "partition-1" -DefaultForFailover = true -Members = [ - {Partition = "partition-1"}, - {Partition = "partition-2"}, - {Peer = "dc2-partition-1"} - ] -``` - - - - - -``` -{ - "Kind": "sameness-group", - "Name": "example-sg", - "Partition": "partition-1", - "DefaultForFailover": true, - "Members": [ - { - "Partition": "partition-1" - }, - { - "Partition": "partition-2" - }, - { - "Peer": "dc2-partition-1" - } - ] -} -``` - - - - - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: SamenessGroup -metadata: - name: example-sg -spec: - defaultForFailover: true - members: - - partition: partition-1 - - partition: partition-2 - - peer: dc2-partition-1 -``` - - - - -When a sameness group is configured as the failover default, sameness group failover takes place when a service resolver configuration entry does not implement more specific failover behavior. When a service resolver is defined for an upstream, it is used instead of the sameness group for default failover behavior. - -All services registered in the admin partition must failover to another member of the sameness group. You cannot choose subsets of services to use the sameness group as the failover default. If groups do not have identical services, or if a service is registered to some group members but not all members, this failover strategy may produce errors. - -For more information about specifying sameness group members and failover, refer to [sameness group configuration entry reference](/consul/docs/connect/config-entries/sameness-group). - -### Failover with a service resolver configuration entry - -When the sameness group is not configured as the failover default, you can reference the sameness group in a service resolver configuration entry. This approach enables you to use the sameness group as the failover destination for some services registered to group members. - -In the following example configuration, a database service called `db` is filtered into subsets based on a user-defined `version` tag. Services with a `v1` tag belong to the default subset, which uses the `product-group` sameness group for its failover. Instances of `db` with the `v2` tag, meanwhile, fail over to a service named `canary-db`. 
- - - - - -```hcl -Kind = "service-resolver" -Name = "db" -DefaultSubset = "v1" -Subsets = { - v1 = { - Filter = "Service.Meta.version == v1" - } - v2 = { - Filter = "Service.Meta.version == v2" - } -} -Failover { - v1 = { - SamenessGroup = "product-group" - } - v2 = { - Service = "canary-db" - } -} -``` - - - - - -``` -{ - "Kind": "service-resolver", - "Name": "db", - "DefaultSubset": "v1", - "Subsets": { - "v1": { - "Filter": "Service.Meta.version == v1" - }, - "v2": { - "Filter": "Service.Meta.version == v2" - } - }, - "Failover": { - "v1": { - "SamenessGroup": "product-group" - }, - "v2": { - "Service": "canary-db" - } - } -} -``` - - - - - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: ServiceResolver -metadata: - name: db -spec: - defaultSubset: v1 - subsets: - v1: - filter: 'Service.Meta.version == v1' - v2: - filter: 'Service.Meta.version == v2' - failover: - v1: - samenessGroup: "product-group" - v2: - service: "canary-db" -``` - - - - -For more information, including additional examples, refer to [service resolver configuration entry reference](/consul/docs/connect/config-entries/service-resolver). - -### Failover with a prepared query - -You can specify a sameness group in a prepared query to return service instances from the first member that has healthy instances. When a member does not have healthy instances, Consul queries group members in the order defined in the list of members in the sameness group configuration entry. - -The following example demonstrates a prepared query that can be referenced with the name `query-1`. It queries members of the sameness group for healthy instances of `db` that are registered to the `store-ns` namespace on partitions named `partition-1`. - -```json -{ - "Name": "query-1", - "Service": { - "Service": "db", - "SamenessGroup": "product-group", - "Partition": "partition-1", - "Namespace": "store-ns" - } -} -``` - -In prepared queries, the sameness group is mutually exclusive with the [`Failover`](/consul/api-docs/query#failover) field because the sameness group includes failover targets based on the sameness group’s members. For more information about using prepared queries, refer to [Enable dynamic DNS queries](/consul/docs/services/discovery/dns-dynamic-lookups). diff --git a/website/content/docs/connect/manage-traffic/index.mdx b/website/content/docs/connect/manage-traffic/index.mdx deleted file mode 100644 index 29ff68d9cef2..000000000000 --- a/website/content/docs/connect/manage-traffic/index.mdx +++ /dev/null @@ -1,84 +0,0 @@ ---- -layout: docs -page_title: Service mesh traffic management overview -description: >- - Consul can route, split, and resolve Layer 7 traffic in a service mesh to support workflows like canary testing and blue/green deployments. Learn about the three configuration entry kinds that define L7 traffic management behavior in Consul. ---- - -# Service mesh traffic management overview - -This topic provides overview information about the application layer traffic management capabilities available in Consul service mesh. These capabilities are also referred to as *Layer 7* or *L7 traffic management*. - -## Introduction - -Consul service mesh allows you to divide application layer traffic between different subsets of service instances. 
You can leverage L7 traffic management capabilities to perform complex processes, such as configuring backup services for failover scenarios, canary and A-B testing, blue-green deployments, and soft multi-tenancy in which production, QA, and staging environments share compute resources. L7 traffic management with Consul service mesh allows you to designate groups of service instances in the Consul catalog smaller than all instances of a single service and configure when that subset should receive traffic. - -You cannot manage L7 traffic with the [built-in proxy](/consul/docs/connect/proxies/built-in), -[native proxies](/consul/docs/connect/native), or some [Envoy proxy escape hatches](/consul/docs/connect/proxies/envoy#escape-hatch-overrides). - -## Discovery chain - -Consul uses a series of stages to discover service mesh proxy upstreams. Each stage represents different ways of managing L7 traffic. They are referred to as the _discovery chain_: - -- routing -- splitting -- resolution - -For information about integrating service mesh proxy upstream discovery using the discovery chain, refer to [Discovery Chain for Service Mesh Traffic Management](/consul/docs/connect/manage-traffic/discovery-chain). - -The Consul UI shows discovery chain stages in the **Routing** tab of the **Services** page: - -![screenshot of L7 traffic visualization in the UI](/img/l7-routing/full.png) - -You can define how Consul manages each stage of the discovery chain in a Consul _configuration entry_. [Configuration entries](/consul/docs/connect/config-entries) modify the default behavior of the Consul service mesh. - -When managing L7 traffic with cluster peering, there are additional configuration requirements to resolve peers in the discovery chain. Refer to [Cluster peering L7 traffic management](/consul/docs/connect/cluster-peering/usage/peering-traffic-management) for more information. - -### Routing - -The first stage of the discovery chain is the service router. Routers intercept traffic according to a set of L7 attributes, such as path prefixes and HTTP headers, and route the traffic to a different service or service subset. - -Apply a [service router configuration entry](/consul/docs/connect/config-entries/service-router) to implement a router. Service router configuration entries can only reference service splitter or service resolver configuration entries. - -![screenshot of service router in the UI](/img/l7-routing/Router.png) - -### Splitting - -The second stage of the discovery chain is the service splitter. Service splitters split incoming requests and route them to different services or service subsets. Splitters enable staged canary rollouts, versioned releases, and similar use cases. - -Apply a [service splitter configuration entry](/consul/docs/connect/config-entries/service-splitter) to implement a splitter. Service splitter configuration entries can only reference other service splitters or service resolver configuration entries. - -![screenshot of service splitter in the UI](/img/l7-routing/Splitter.png) - -If multiple service splitters are chained, Consul flattens the splits so that they behave as a single service splitter. In the following equation, `splitter[B]` references `splitter[A]`: - -```text -splitter[A]: A_v1=50%, A_v2=50% -splitter[B]: A=50%, B=50% ---------------------- -splitter[effective_B]: A_v1=25%, A_v2=25%, B=50% -``` - - -### Resolution - -The third stage of the discovery chain is the service resolver.
Service resolvers specify which instances of a service satisfy discovery requests for the provided service name. Service resolvers enable several use cases, including: - -- Designate failovers when service instances become unhealthy or unreachable. -- Configure service subsets based on DNS values. -- Route traffic to the latest version of a service. -- Route traffic to specific Consul datacenters. -- Create virtual services that route traffic to instances of the actual service in specific Consul datacenters. - -Apply a [service resolver configuration entry](/consul/docs/connect/config-entries/service-resolver) to implement a resolver. Service resolver configuration entries can only reference other service resolvers. - - -![screenshot of service resolver in the UI](/img/l7-routing/Resolver.png) - -If no resolver is configured for a service, Consul sends all traffic to healthy instances of the service that have the same name in the current datacenter or specified namespace and ends the discovery chain. - -Service resolver configuration entries can also process network layer, also called level 4 (L4), traffic. As a result, you can implement service resolvers for services that communicate over `tcp` and other non-HTTP protocols. - -## Locality-aware routing - -By default, Consul balances traffic to all healthy upstream instances in the cluster, even if the instances are in different network regions and zones. You can configure Consul to route requests to upstreams in the same region and zone, which reduces latency and transfer costs. Refer to [Route traffic to local upstreams](/consul/docs/connect/manage-traffic/route-to-local-upstreams) for additional information. diff --git a/website/content/docs/connect/manage-traffic/limit-request-rates.mdx b/website/content/docs/connect/manage-traffic/limit-request-rates.mdx deleted file mode 100644 index adf2af5b706d..000000000000 --- a/website/content/docs/connect/manage-traffic/limit-request-rates.mdx +++ /dev/null @@ -1,144 +0,0 @@ ---- -layout: docs -page_title: Limit request rates to services in the mesh -description: Learn how to limit the rate of requests to services in a Consul service mesh. Rate limits on requests improves network resilience and availability. ---- - -# Limit request rates to services in the mesh - -This topic describes how to configure Consul to limit the request rate to services in the mesh. - - This feature is available in Consul Enterprise. - -## Introduction - -Consul allows you to configure settings to limit the rate of HTTP requests a service receives from sources in the mesh. Limiting request rates is one strategy for building a resilient and highly-available network. - -Consul applies rate limits per service instance. As an example, if you specify a rate limit of 100 requests per second (RPS) for a service and five instances of the service are available, the service accepts a total of 500 RPS, which equals 100 RPS per instance. - -You can limit request rates for all traffic to a service, as well as set rate limits for specific URL paths on a service. When multiple rate limits are configured on a service, Consul applies the limit configured for the first matching path. As a result, the maximum RPS for a service is equal to the number of service instances deployed for a service multiplied by either the rate limit configured for that service or the rate limit for the path. 
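Because Consul applies the limit for the first matching path, list more specific paths before broader prefixes. The following minimal sketch illustrates that ordering; it reuses the `RateLimit` block layout and the `billing` service name from the examples later in this topic, and the paths and limits are illustrative only.

```hcl
Kind     = "service-defaults"
Name     = "billing"
Protocol = "http"

RateLimit {
  InstanceLevel {
    # Fallback limit for requests that do not match a route below.
    RequestsPerSecond = 1000

    Routes = [
      {
        # Listed first so requests to /api/export are capped at 50 RPS
        # instead of matching the broader /api prefix below.
        PathPrefix        = "/api/export"
        RequestsPerSecond = 50
      },
      {
        PathPrefix        = "/api"
        RequestsPerSecond = 500
      }
    ]
  }
}
```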
- -## Requirements - -Consul Enterprise v1.17.0 or later - -## Limit request rates to a service on all paths - -Specify request rate limits in the service defaults configuration entry. Create or edit the existing service defaults configuration entry for your service and specify the following fields: - - - - -1. `RateLimits.InstanceLevel.RequestPerSecond`: Set an average number of requests per second that Consul should allow to the service. The number of requests may momentarily exceed this value up to the value specified in the `RequestsMaxBurst` parameter, but Consul temporarily lowers the speed of the transactions. -1. `RateLimits.InstanceLevel.RequestsMaxBurst`: Set the maximum number of concurrent requests that Consul momentarily allows to the service. Consul blocks any additional requests over this limit. - -The following example configures the default behavior for a service named `billing`. This configuration limits each instance of the billing service to an average of 1000 requests per second, but allows the service to accept up to 1500 concurrent requests. - -```hcl -Kind = "service-defaults" -Name = "billing" -Protocol = "http" - -RateLimit { - InstanceLevel { - RequestsPerSecond = 1000 - RequestsMaxBurst = 1500 - } -} -``` - - - - -1. `spec.rateLimits.instanceLevel.requestPerSecond`: Set an average number of requests per second that Consul should allow to the service. The number of requests may momentarily exceed this value up to the value specified in the `requestsMaxBurst` parameter, but Consul temporarily lowers the speed of the transactions. -1. `spec.rateLimits.instanceLevel.requestsMaxBurst`: Set the maximum number of concurrent requests that Consul momentarily allows to the service. Consul blocks any additional requests over this limit. - -The following example configures the default behavior for a service named `billing`. This configuration limits each instance of the billing service to an average of 1000 requests per second, but allows the service to accept up to 1500 concurrent requests. - -```yaml -kind: ServiceDefaults -name: billing -protocol: http -rateLimit: - instanceLevel: - requestsPerSecond: 1000 - requestsMaxBurst: 1500 -``` - - - - -Refer to the [service defaults configuration entry reference](/consul/docs/connect/config-entries/service-defaults) for additional specifications and example configurations. - -## Specify request rate limits for specific paths - -Specify request rate limits in the service defaults configuration entry. Create or edit the existing service defaults configuration entry for your service and configure the following parameters: - - - - -1. Add a `RateLimits.InstanceLevel.Routes` block to the configuration entry. The block contains the limits and matching criteria for determining which paths to set limits on. -1. In the `Routes` block, configure one of the following match criteria to determine which path to set the limits on: - - `PathExact`: Specifies the exact path to match on the request path. - - `PathPrefix`: Specifies the path prefix to match on the request path. - - `PathRegex`: Specifies a regular expression to match on the request path. -1. Configure the limits you want to enforce in the `Routes` block as well. You can configure the following parameters: - - `RequestsPerSecond`: Set an average number of requests per second that Consul should allow to the service through the matching path. 
The number of requests may momentarily exceed this value up to the value specified in the `RequestsMaxBurst` parameter, but Consul temporarily lowers the speed of the transactions. This configuration overrides the value specified in `RateLimits.InstanceLevel.RequestPerSecond` field of the configuration entry. - - `RequestsMaxBurst`: Set the maximum number of concurrent requests that Consul momentarily allows to the service through the matching path. Consul blocks any additional requests over this limit. This configuration overrides the value specified in `RateLimits.InstanceLevel.RequestsMaxBurst` field of the configuration entry. - -The following example configures the default behavior for a service named `billing`. This configuration limits each instance of the billing service depending on the path it received the request on. The service is limited to an average of 500 requests when the request is made on an HTTP path with the `/api` prefix. When an instance of the billing service receives a request from the `/login` path, it is limited to an average of 100 requests per second and 500 concurrent connections. - -```hcl -Kind = "service-defaults" -Name = "billing" -Protocol = "http" - -RateLimit { - InstanceLevel { - Routes = [ - { - PathPrefix = "/api" - RequestsPerSecond = 500 - }, - { - PathPrefix = "/login" - RequestsPerSecond = 100 - RequestsMaxBurst = 500 - } - ] - } -} -``` - - - - -1. Add a `spec.rateLimits.instanceLevel.routes` block to the configuration entry. The block contains the limits and matching criteria for determining which paths to set limits on. -1. In the `routes` block, configure one of the following match criteria for enabling Consul to determine which path to set the limits on: - - `pathExact`: Specifies the exact path to match on the request path. When using this field. - - `pathPrefix`: Specifies the path prefix to match on the request path. - - `pathRegex`: Specifies a regular expression to match on the request path. -1. Configure the limits you want to enforce in the `routes` block as well. You can configure the following parameters: - - `requestsPerSecond`: Set an average number of requests per second that Consul should allow to the service through the matching path. The number of requests may momentarily exceed this value up to the value specified in the `requestsMaxBurst` parameter, but Consul temporarily lowers the speed of the transactions. This configuration overrides the value specified in `spec.rateLimits.instanceLevel.requestPerSecond` field of the CRD. - - `requestsMaxBurst`: Set the maximum number of concurrent requests that Consul momentarily allows to the service through the matching path. Consul blocks any additional requests over this limit. This configuration overrides the value specified in `spec.rateLimits.instanceLevel.requestsMaxBurst` field of the CRD. - -The following example configures the default behavior for a service named `billing`. This configuration limits each instance of the billing service depending on the path it received the request on. The service is limited to an average of 500 requests when the request is made on an HTTP path with the `/api` prefix. When an instance of the billing service receives a request from the `/login` path, it is limited to an average of 100 requests per second and 500 concurrent connections. 
- -```yaml -kind: service-defaults -name: billing -protocol: http -rateLimit: - instanceLevel: - routes: - - pathPrefix: /api - requestsPerSecond: 500 - - pathPrefix: /login - requestsPerSecond: 100 - requestsMaxBurst: 500 -``` - - - - -Refer to the [service defaults configuration entry reference](/consul/docs/connect/config-entries/service-defaults) for additional specifications and example configurations. \ No newline at end of file diff --git a/website/content/docs/connect/manage-traffic/route-to-local-upstreams.mdx b/website/content/docs/connect/manage-traffic/route-to-local-upstreams.mdx deleted file mode 100644 index 4d5be2e5b55d..000000000000 --- a/website/content/docs/connect/manage-traffic/route-to-local-upstreams.mdx +++ /dev/null @@ -1,361 +0,0 @@ ---- -layout: docs -page_title: Route traffic to local upstreams -description: Learn how to enable locality-aware routing in Consul so that proxies can send traffic to upstreams in the same region and zone as the downstream service. Routing traffic based on locality can reduce latency and cost. ---- - -# Route traffic to local upstreams - -This topic describes how to enable locality-aware routing so that Consul can prioritize sending traffic to upstream services that are in the same region and zone as the downstream service. - - This feature is available in Consul Enterprise. - -## Introduction - -By default, Consul balances traffic to all healthy upstream instances in the cluster, even if the instances are in different network zones. You can specify the cloud service provider (CSP) locality for Consul server agents and services registered to the service mesh, which enables several benefits: - -- Consul prioritizes the nearest upstream instances when routing traffic through the mesh. -- When upstream service instances become unhealthy, Consul prioritizes failing over to instances that are in the same region as the downstream service. Refer to [Failover](/consul/docs/connect/traffic-management/failover) for additional information about failover strategies in Consul. - -When properly implemented, routing traffic to local upstreams can reduce latency and transfer costs associated with sending requests to other regions. - - -### Workflow - -For networks deployed to virtual machines, complete the following steps to route traffic to local upstream services: - -1. Specify the region and zone for your Consul client agents. This allows services to inherit the region and zone configured for the Consul agent that the services are registered with. -1. Specify the localities of your service instances. This step is optional and is only necessary when defining a custom network topology or when your deployed environment requires explicitly set localities for certain services' instances. -1. Configure service mesh proxies to route traffic locally within the partition. - -#### Container orchestration platforms - -If you deployed Consul to a Kubernetes or ECS environment using `consul-k8s` or `consul-ecs`, service instance locality information is inherited from the host machine. As a result, you do not need to specify the regions and zones on containerized platforms unless you are implementing a custom deployment. - -On Kubernetes, Consul automatically populates geographic information about service instances using the `topology.kubernetes.io/region` and `topology.kubernetes.io/zone` labels from the Kubernetes nodes. On AWS ECS, Consul uses the `AWS_REGION` environment variable and `AvailabilityZone` attribute of the ECS task meta.
- -### Requirements - -You should only enable locality-aware routing when each set of external upstream instances within the same zone and region have enough capacity to handle requests from downstream service instances in their respective zones. Locality-aware routing is an advanced feature that may adversely impact service capacity if used incorrectly. When enabled, Consul routes all traffic to the nearest set of service instances and only fails over when no healthy instances are available in the nearest set. - -## Specify the locality of your Consul agents - -The `locality` configuration on a Consul client applies to all services registered to the client. - -1. Configure the `locality` block in your Consul client agent configuration files. The `locality` block is a map containing the `region` and `zone` parameters. - - The parameters should match the values for regions and zones defined in your network. Refer to [`locality`](/consul/docs/agent/config/config-files#locality) in the agent configuration reference for additional information. - -1. Start or restart the agent to apply the configuration. Refer to [Starting a Consul agent](/consul/docs/agent#starting-the-consul-agent) for instructions. - -In the following example, the agent is running in the `us-west-1` region and `us-west-1a` zone on AWS: - -```hcl -locality = { - region = "us-west-1" - zone = "us-west-1a" -} -``` - -## Specify the localities of your service instances (optional) - -This step is optional in most scenarios. Refer to [Workflow](#workflow) for additional information. - -1. Configure the `locality` block in your service definition for both downstream (client) and upstream services. The `locality` block is a map containing the `region` and `zone` parameters. When you start a proxy for the service, Consul passes the locality to the proxy so that it can route traffic accordingly. - - The parameters should match the values for regions and zones defined in your network. Refer to [`locality`](/consul/docs/services/configuration/services-configuration-reference#locality) in the services configuration reference for additional information. - -1. Verify that your service is also configured with a proxy. Refer to [Define service mesh proxy](/consul/docs/connect/proxies/deploy-sidecar-services#define-service-mesh-proxy) for additional information. -Register or re-register the service to apply the configuration. Refer to [Register services and health checks](/consul/docs/services/usage/register-services-checks) for instructions. - -In the following example, the `web` service is available in the `us-west-1` region and `us-west-1a` zone on AWS: - -```hcl -service { - id = "web" - locality = { - region = "us-west-1" - zone = "us-west-1a" - } - connect = { sidecar_service = {} } -} -``` - -If registering services manually via the `/agent/service/register` API endpoint, you can specify the `locality` configuration in the payload. Refer to [Register Service](/consul/api-docs/agent/service#register-service) in the API documentation for additional information. - -## Enable service mesh proxies to route traffic locally - -You can configure the default routing behavior for all proxies in the mesh as well as configure the routing behavior for specific services. - -### Configure default routing behavior - -Configure the `PrioritizeByLocality` block in the proxy defaults configuration entry and specify the `failover` mode. 
This configuration enables proxies in the mesh to use the region and zone defined in the service configuration to route traffic. Refer to [`PrioritizeByLocality`](/consul/docs/connect/config-entries/proxy-defaults#prioritizebylocality) in the proxy defaults reference for details about the configuration. - - - - - -```hcl -Kind = "proxy-defaults" -Name = "global" -PrioritizeByLocality = { - Mode = "failover" -} -``` - - - - - - -```json -{ - "kind": "proxy-defaults", - "name": "global", - "prioritizeByLocality": { - "mode": "failover" - } -} -``` - - - - - -```yaml -apiversion: consul.hashicorp.com/v1alpha1 -kind: ProxyDefaults -metadata: - name: global -spec: - prioritizeByLocality: - mode: failover -``` - - - - - -Apply the configuration by either running the [`consul config write` CLI command](/consul/commands/config/write), applying the Kubernetes CRD, or calling the [`/config` HTTP API endpoint](/consul/api-docs/config). - - - - - ```shell-session - $ consul config write proxy-defaults.hcl - ``` - - - - - - ```shell-session - $ kubectl apply -f proxy-defaults.yaml - ``` - - - - - ```shell-session - $ curl --request PUT --data @proxy-defaults.hcl http://127.0.0.1:8500/v1/config - ``` - - - - -### Configure routing behavior for individual service - -1. Create a service resolver configuration entry and specify the following fields: - - `Name`: The name of the target upstream service for which downstream clients should use locality-aware routing. - - `PrioritizeByLocality`: This block enables proxies in the mesh to use the region and zone defined in the service configuration to route traffic. Set the `mode` inside the block to `failover`. Refer to [`PrioritizeByLocality`](/consul/docs/connect/config-entries/service-resolver#prioritizebylocality) in the service resolver reference for details about the configuration. - - - - - - ```hcl - Kind = "service-resolver" - Name = "api" - PrioritizeByLocality = { - Mode = "failover" - } - ``` - - - - - - - ```json - { - "kind": "service-resolver", - "name": "api", - "prioritizeByLocality": { - "mode": "failover" - } - } - ``` - - - - - - ```yaml - apiversion: consul.hashicorp.com/v1alpha1 - kind: ServiceResolver - metadata: - name: api - spec: - prioritizeByLocality: - mode: failover - ``` - - - - - -1. Apply the configuration by either running the [`consul config write` CLI command](/consul/commands/config/write), applying the Kubernetes CRD, or calling the [`/config` HTTP API endpoint](/consul/api-docs/config). - - - - - ```shell-session - $ consul config write api-resolver.hcl - ``` - - - - - - ```shell-session - $ kubectl apply -f api-resolver.yaml - ``` - - - - - ```shell-session - $ curl --request PUT --data @api-resolver.hcl http://127.0.0.1:8500/v1/config - ``` - - - - -### Configure locality on Kubernetes test clusters explicitly - -You can explicitly configure locality for each Kubernetes node so that you can test locality-aware routing with a local Kubernetes cluster or in an environment where `topology.kubernetes.io` labels are not set. - -Run the `kubectl label node` command and specify the locality as arguments. The following example specifies the `us-east-1` region and `us-east-1a` zone for the node: - -```shell-session -kubectl label node $K8S_NODE topology.kubernetes.io/region="us-east-1" topology.kubernetes.io/zone="us-east-1a" -``` - -After setting these values, subsequent service and proxy registrations in your cluster inherit the values from their local Kubernetes node. 
- -## Verify routes - -The routes from each downstream service instance to the nearest set of healthy upstream instances are the most immediately observable routing changes. - -Consul configures Envoy's built-in [`overprovisioning_factor`](https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/endpoint/v3/endpoint.proto#config-endpoint-v3-clusterloadassignment) and [outlier detection](https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/cluster/v3/outlier_detection.proto#config-cluster-v3-outlierdetection) settings to enforce failover behavior. However, Envoy does not provide granular metrics specific to failover or endpoint traffic within a cluster. As a result, using external observability tools that expose network traffic within your environment is the best method for observing route changes. - -To verify that locality-aware routing and failover configurations, you can inspect Envoy's xDS configuration dump for a downstream proxy. Refer to the [consul-k8s CLI docs](https://developer.hashicorp.com/consul/docs/k8s/k8s-cli#proxy-read) for details on how to obtain the xDS configuration dump on Kubernetes. For other workloads, use the Envoy [admin interface](https://www.envoyproxy.io/docs/envoy/latest/operations/admin) and ensure that you [include EDS](https://www.envoyproxy.io/docs/envoy/latest/operations/admin#get--config_dump?include_eds). - -Inspect the [priority](https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/load_balancing/priority#arch-overview-load-balancing-priority-levels) on each set of endpoints under the upstream `ClusterLoadAssignment` in the `EndpointsConfigDump`. Alternatively, the same priorities should be visible within the output of the [`/clusters?format=json`](https://www.envoyproxy.io/docs/envoy/latest/operations/admin#get--clusters?format=json) admin endpoint. - -```json -{ - "@type": "type.googleapis.com/envoy.config.endpoint.v3.ClusterLoadAssignment", - "cluster_name": "web.default.dc1.internal.161d7b5a-bb5f-379c-7d5a-1fc7504f95da.consul", - "endpoints": [ - { - "lb_endpoints": [ - { - "endpoint": { - "address": { - "socket_address": { - "address": "10.42.2.6", - "port_value": 20000 - } - }, - "health_check_config": {} - }, - ... - }, - ... - ], - "locality": {} - }, - { - "lb_endpoints": [ - { - "endpoint": { - "address": { - "socket_address": { - "address": "10.42.3.6", - "port_value": 20000 - } - }, - "health_check_config": {} - }, - ... - }, - ... - ], - "locality": {}, - "priority": 1 - }, - { - "lb_endpoints": [ - { - "endpoint": { - "address": { - "socket_address": { - "address": "10.42.0.6", - "port_value": 20000 - } - }, - "health_check_config": {} - }, - ... - }, - ... - ], - "locality": {}, - "priority": 2 - } - ], - ... -} -``` - -### Force an observable failover - -To force a failover for testing purposes, scale the upstream service instances in the downstream's local zone or region, if no local zone instances are available, to `0`. - -Note the following behaviors: - - - Consul prioritizes failovers in ascending order starting with `0`. The highest priority, `0`, is not explicitly visible in xDS output. This is because `0` is the default value for that field. - - After Envoy failover configuration is in place, the specific timing of failover is determined by the downstream Envoy proxy, not Consul. Consul health status may not directly correspond to Envoy's failover behavior, which is also dependent on outlier detection. - -Refer to [Troubleshooting](#troubleshooting) if you do not observe the expected behavior. 
- -## Adjust load balancing and failover behavior - -You can adjust the global or per-service load balancing and failover behaviors by applying the property override Envoy extension. The property override extension allows you to set and remove individual properties on the Envoy resources Consul generates. Refer to [Configure Envoy proxy properties](/consul/docs/connect/proxies/envoy-extensions/usage/property-override) for additional information. - -1. Add the `EnvoyExtensions` configuration block to the service defaults or proxy defaults configuration entry. -1. Configure the following settings in the `EnvoyExtensions` configuration: - - [`overprovisioning_factor`](https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/endpoint/v3/endpoint.proto#config-endpoint-v3-clusterloadassignment) - - [outlier detection](https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/cluster/v3/outlier_detection.proto#config-cluster-v3-outlierdetection) configuration. -1. Apply the configuration. Refer to [Apply the configuration entry](/consul/docs/connect/proxies/envoy-extensions/usage/property-override#apply-the-configuration-entry) for details. - -By default, Consul sets `overprovisioning_factor` to `100000`, which enforces total failover, and `max_ejection_percent` to `100`. Refer to the Envoy documentation about these fields before attempting to modify them. - -## Troubleshooting - -If you do not see the expected priorities, verify that locality is configured in the Consul agent and that `PrioritizeByLocality` is enabled in your proxy defaults or service resolver configuration entry. When `PrioritizeByLocality` is enabled but the local proxy lacks locality configuration, Consul emits a warning log to indicate that the policy could not be applied: - -``` -`no local service locality provided, skipping locality failover policy` -``` diff --git a/website/content/docs/connect/native/go.mdx b/website/content/docs/connect/native/go.mdx deleted file mode 100644 index e3068058fd1e..000000000000 --- a/website/content/docs/connect/native/go.mdx +++ /dev/null @@ -1,253 +0,0 @@ ---- -layout: docs -page_title: Service Mesh Native App Integration - Go Apps -description: >- - Consul's service mesh supports native integrations of Go applications into the service mesh through a Go library. Example code demonstrates how to connect your Go applications to the service mesh. ---- - -# Service Mesh Native Integration for Go Applications - - - -The Connect Native golang SDK is currently deprecated and will be removed in a future Consul release. -The SDK will be removed when the long term replacement to native application integration (such as a proxyless gRPC service mesh integration) is delivered. Refer to [GH-10339](https://github.com/hashicorp/consul/issues/10339) for additional information and to track progress toward one potential solution that is tracked as replacement functionality. - - - -We provide a library that makes it drop-in simple to integrate Consul service mesh -with most [Go](https://golang.org/) applications. This page shows examples -of integrating this library for accepting or establishing mesh-based -connections. For most Go applications, Consul service mesh can be natively integrated -in just a single line of code excluding imports and struct initialization. - -In addition to this, please read and understand the -[overview of service mesh native integrations](/consul/docs/connect/native). 
-In particular, after natively integrating applications with Consul service mesh, -they must declare that they accept mesh-based connections via their service definitions. - -The noun _connect_ is used throughout this documentation and the Go API -to refer to the connect subsystem that provides Consul's service mesh capabilities. - -## Accepting Connections - --> **Note:** When calling `ConnectAuthorize()` on incoming connections this library -will return _deny_ if `Permissions` are defined on the matching intention. -The method is currently only suited for networking layer 4 (e.g. TCP) integration. - -Any server that supports TLS (HTTP, gRPC, net/rpc, etc.) can begin -accepting mesh-based connections in just a few lines of code. For most -existing applications, converting the server to accept mesh-based -connections will require only a one-line change excluding imports and -structure initialization. - -The -Go library exposes a `*tls.Config` that _automatically_ communicates with -Consul to load certificates and authorize inbound connections during the -TLS handshake. This also automatically starts goroutines to update any -changing certs. - -Example, followed by more details: - -```go -import( - "net/http" - - "github.com/hashicorp/consul/api" - "github.com/hashicorp/consul/connect" -) - -func main() { - // Create a Consul API client - client, _ := api.NewClient(api.DefaultConfig()) - - // Create an instance representing this service. "my-service" is the - // name of _this_ service. The service should be cleaned up via Close. - svc, _ := connect.NewService("my-service", client) - defer svc.Close() - - // Creating an HTTP server that serves via service mesh - server := &http.Server{ - Addr: ":8080", - TLSConfig: svc.ServerTLSConfig(), - // ... other standard fields - } - - // Serve! - server.ListenAndServeTLS("", "") -} -``` - -The first step is to create a Consul API client. This is almost always the -default configuration with an ACL token set, since you want to communicate -to the local agent. The default configuration will also read the ACL token -from environment variables if set. The Go library will use this client to request certificates, -authorize connections, and more. - -Next, `connect.NewService` is called to create a service structure representing -the _currently running service_. This structure maintains all the state -for accepting and establishing connections. An application should generally -create one service and reuse that one service for all servers and clients. - -Finally, a standard `*http.Server` is created. The magic line is the `TLSConfig` -value. This is set to a TLS configuration returned by the service structure. -This TLS configuration is configured to automatically load certificates -in the background, cache them, and authorize inbound connections. The service -structure automatically handles maintaining blocking queries to update certificates -in the background if they change. - -Since the service returns a standard `*tls.Config`, _any_ server that supports -TLS can be configured. This includes gRPC, net/rpc, basic TCP, and more. -Another example is shown below with just a plain TLS listener: - -```go -import( - "crypto/tls" - - "github.com/hashicorp/consul/api" - "github.com/hashicorp/consul/connect" -) - -func main() { - // Create a Consul API client - client, _ := api.NewClient(api.DefaultConfig()) - - // Create an instance representing this service. "my-service" is the - // name of _this_ service. The service should be cleaned up via Close. 
- svc, _ := connect.NewService("my-service", client) - defer svc.Close() - - // Creating an HTTP server that serves via service mesh - listener, _ := tls.Listen("tcp", ":8080", svc.ServerTLSConfig()) - defer listener.Close() - - // Accept - go acceptLoop(listener) -} -``` - -## HTTP Clients - -For Go applications that need to connect to HTTP-based upstream dependencies, -the Go library can construct an `*http.Client` that automatically establishes -mesh-based connections as long as Consul-based service discovery is used. - -Example, followed by more details: - -```go -import( - "github.com/hashicorp/consul/api" - "github.com/hashicorp/consul/connect" -) - -func main() { - // Create a Consul API client - client, _ := api.NewClient(api.DefaultConfig()) - - // Create an instance representing this service. "my-service" is the - // name of _this_ service. The service should be cleaned up via Close. - svc, _ := connect.NewService("my-service", client) - defer svc.Close() - - // Get an HTTP client - httpClient := svc.HTTPClient() - - // Perform a request, then use the standard response - resp, _ := httpClient.Get("https://userinfo.service.consul/user/mitchellh") -} -``` - -The first step is to create a Consul API client and service. These are the -same steps as accepting connections and are explained in detail in the -section above. If your application is both a client and server, both the -API client and service structure can be shared and reused. - -Next, we call `svc.HTTPClient()` to return a specially configured -`*http.Client`. This client will automatically established mesh-based -connections using Consul service discovery. - -Finally, we perform an HTTP `GET` request to a hypothetical userinfo service. -The HTTP client configuration automatically sends the correct client -certificate, verifies the server certificate, and manages background -goroutines for updating our certificates as necessary. - -If the application already uses a manually constructed `*http.Client`, -the `svc.HTTPDialTLS` function can be used to configure the -`http.Transport.DialTLS` field to achieve equivalent behavior. - -### Hostname Requirements - -The hostname used in the request URL is used to identify the logical service -discovery mechanism for the target. **It's not actually resolved via DNS** but -used as a logical identifier for a Consul service discovery mechanism. It has -the following specific limitations: - -- The scheme must be `https://`. -- It must be a Consul DNS name in one of the following forms: - - `.service[.].consul` to discover a healthy service - instance for a given service. - - `.query[.].consul` to discover an instance via - [Prepared Query](/consul/api-docs/query). -- The top-level domain _must_ be `.consul` even if your cluster has a custom - `domain` configured for its DNS interface. This might be relaxed in the - future. -- Tag filters for services are not currently supported (i.e. - `tag1.web.service.consul`) however the same behavior can be achieved using a - prepared query. -- External DNS names, raw IP addresses and so on will cause an error and should - be fetched using a separate `HTTPClient`. - -## Raw TLS Connection - -For a raw `net.Conn` TLS connection, the `svc.Dial` function can be used. -This will establish a connection to the desired service via the service mesh and -return the `net.Conn`. This connection can then be used as desired. 
- -Example: - -```go -import ( - "context" - - "github.com/hashicorp/consul/api" - "github.com/hashicorp/consul/connect" -) - -func main() { - // Create a Consul API client - client, _ := api.NewClient(api.DefaultConfig()) - - // Create an instance representing this service. "my-service" is the - // name of _this_ service. The service should be cleaned up via Close. - svc, _ := connect.NewService("my-service", client) - defer svc.Close() - - // Connect to the "userinfo" Consul service. - conn, _ := svc.Dial(context.Background(), &connect.ConsulResolver{ - Client: client, - Name: "userinfo", - }) -} -``` - -This uses a familiar `Dial`-like function to establish raw `net.Conn` values. -The second parameter to dial is an implementation of the `connect.Resolver` -interface. The example above uses the `*connect.ConsulResolver` implementation -to perform Consul-based service discovery. This also automatically determines -the correct certificate metadata we expect the remote service to serve. - -## Static Addresses, Custom Resolvers - -In the raw TLS connection example, you see the use of a `connect.Resolver` -implementation. This interface can be implemented to perform address -resolution. This must return the address and also the URI SAN expected -in the TLS certificate served by the remote service. - -The Go library provides two built-in resolvers: - -- `*connect.StaticResolver` can be used for static addresses where no - service discovery is required. The expected cert URI SAN must be - manually specified. - -- `*connect.ConsulResolver` which resolves services and prepared queries - via the Consul API. This also automatically determines the expected - cert URI SAN. diff --git a/website/content/docs/connect/native/index.mdx b/website/content/docs/connect/native/index.mdx deleted file mode 100644 index 3cf64f346c2e..000000000000 --- a/website/content/docs/connect/native/index.mdx +++ /dev/null @@ -1,165 +0,0 @@ ---- -layout: docs -page_title: Service Mesh Native App Integration - Overview -description: >- - When using sidecar proxies is not possible, applications can natively integrate with Consul service mesh, but have reduced access to service mesh features. Learn how "mesh-native" or "connect-native" apps use mTLS to authenticate with Consul and how to add integrations to service registrations. ---- - -# Service Mesh Native App Integration Overview - - - -The Connect Native Golang SDK and `v1/agent/connect/authorize`, `v1/agent/connect/ca/leaf`, -and `v1/agent/connect/ca/roots` APIs are deprecated and will be removed in a future release. Although Connect Native -will still operate as designed, we do not recommend leveraging this feature because it is deprecated and will be removed when the long term replacement to native application integration (such as a proxyless gRPC service mesh integration) is delivered. Refer to [GH-10339](https://github.com/hashicorp/consul/issues/10339) for additional information and to track progress toward one potential solution that is tracked as replacement functionality. - -The Native App Integration does not support many of Consul's service mesh features, and is not under active development. -The [Envoy proxy](/consul/docs/connect/proxies/envoy) should be used for most production environments. - - - -Applications can natively integrate with Consul's service mesh API to support accepting -and establishing connections to other mesh services without the overhead of a -[proxy sidecar](/consul/docs/connect/proxies).
This option is especially useful -for applications that may be experiencing performance issues with the proxy -sidecar deployment. This page will cover the high-level overview of -integration, registering the service, etc. For language-specific examples, see -the sidebar navigation to the left. It is also required if your service -relies on a dynamic set of upstream services. - -Service mesh traffic is just basic mutual TLS. This means that almost any application -can easily integrate with Consul service mesh. There is no custom protocol in use; -any language that supports TLS can accept and establish mesh-based -connections. - -We currently provide an easy-to-use [Go integration](/consul/docs/connect/native/go) -to assist with getting the proper certificates, verifying connections, -etc. We plan to add helper libraries for other languages in the future. -However, without library support, it is still possible for any major language -to integrate with Consul service mesh. - -The noun _connect_ is used throughout this documentation to refer to the connect -subsystem that provides Consul's service mesh capabilities. - -## Overview - -The primary work involved in natively integrating with service mesh is -[acquiring the proper TLS certificate](/consul/api-docs/agent/connect#service-leaf-certificate), -[verifying TLS certificates](/consul/api-docs/agent/connect#certificate-authority-ca-roots), -and [authorizing inbound connections or requests](/consul/api-docs/connect/intentions#list-matching-intentions). - -All of this is done using the Consul HTTP APIs linked above. - -An overview of the sequence is shown below. The diagram and the following -details may seem complex, but this is a _regular mutual TLS connection_ with -an API call to verify the incoming client certificate. - -![Native Integration Overview](/img/connect-native-overview.png) - --> **Note:** This diagram depicts the simpler networking layer 4 (e.g. TCP) [integration -mechanism](/consul/api-docs/agent/connect#authorize). - -Details on the steps are below: - -- **Service discovery** - This is normal service discovery using Consul, - a static IP, or any other mechanism. If you're using Consul DNS, use the - [`.connect`](/consul/docs/services/discovery/dns-static-lookups#service-mesh-enabled-service-lookups) - syntax to find mesh-capable endpoints for a service. After service - discovery, choose one address from the list of **service addresses**. - -- **Mutual TLS** - As a client, connect to the discovered service address - over normal TLS. As part of the TLS connection, provide the - [service certificate](/consul/api-docs/agent/connect#service-leaf-certificate) - as the client certificate. Verify the remote certificate against the - [public CA roots](/consul/api-docs/agent/connect#certificate-authority-ca-roots). - As a client, if the connection is established then you've established - a mesh-based connection and there are no further steps! - -- **Authorization** - As a server accepting connections, verify the client - certificate against the [public CA - roots](/consul/api-docs/agent/connect#certificate-authority-ca-roots). After verifying - the certificate, parse some basic fields from it and use those to determine - if the connection should be allowed. How this is done is dependent on - the level of integration desired: - - - **Simple integration (TCP-only)** - Call the [authorizing - API](/consul/api-docs/agent/connect#authorize) against the local agent.
If this returns - successfully, complete the TLS handshake and establish the connection. If - authorization fails, close the connection. - - -> **NOTE:** This API call is expected to be called in the connection path, - so if the local Consul agent is down or unresponsive it will affect the - success rate of new connections. The agent uses locally cached data to - authorize the connection and typically responds in microseconds. Therefore, - the impact on the TLS handshake is typically microseconds. - - - **Complete integration** - Like the calls to acquire the leaf - certificate and CA roots, calls to the [intention match - API](/consul/api-docs/connect/intentions#list-matching-intentions) are expected to be made out of band and their results reused. With all of the - relevant intentions cached for the destination, all enforcement operations - can be done entirely by the service without calling any Consul APIs in the - connection or request path. If the service is networking layer 7 (e.g. - HTTP) aware it can safely enforce intentions per _request_ instead of the - coarser per _connection_ model. - -## Update certificates and certificate roots - -The leaf certificate and CA roots can be updated at any time and the -natively integrated application must react to this relatively quickly -so that new connections are not disrupted. This can be done through -Consul blocking queries (HTTP long polling) or through periodic polling. - -The API calls for -[acquiring a service mesh TLS certificate](/consul/api-docs/agent/connect#service-leaf-certificate) -and [reading service mesh CA roots](/consul/api-docs/agent/connect#certificate-authority-ca-roots) -both support -[blocking queries](/consul/api-docs/features/blocking). By using blocking -queries, an application can efficiently wait for an updated value. For example, -the leaf certificate API will block until the certificate is near expiration -or the signing certificates have changed and will issue and return a new -certificate. - -In some languages, using blocking queries may not be simple. In that case, -we still recommend using the blocking query parameters but with a very short -`timeout` value set. Doing this is documented with -[blocking queries](/consul/api-docs/features/blocking). The low timeout will -ensure the API responds quickly. We recommend that applications poll the -certificate endpoints frequently, such as multiple times per minute. - -The overhead for the blocking queries (long or periodic polling) is minimal. -The API calls are to the local agent and the local agent uses locally -cached data multiplexed over a single TCP connection to the Consul leader. -Even if a single machine has 1,000 mesh-enabled services all blocking -on certificate updates, this translates to only one TCP connection to the -Consul server. - -Some language libraries such as the -[Go library](/consul/docs/connect/native/go) automatically handle updating -and locally caching the certificates. - -## Service registration - -Mesh-native applications must tell Consul that they support service mesh -natively. This enables the service to be returned as part of service -discovery for service mesh-capable services used by other mesh-native applications -and client [proxies](/consul/docs/connect/proxies). - -You can enable native service mesh support directly in the [service definition](/consul/docs/services/configuration/services-configuration-reference#connect) by configuring the `connect` block.
In the following example, the `redis` service is configured to support service mesh natively: - -```json -{ - "service": { - "name": "redis", - "port": 8000, - "connect": { - "native": true - } - } -} -``` - -Services that support service mesh natively are still returned through the standard -service discovery mechanisms in addition to the mesh-only service discovery -mechanisms. diff --git a/website/content/docs/connect/nomad.mdx b/website/content/docs/connect/nomad.mdx index c65f07bc9162..48b14f0d2c43 100644 --- a/website/content/docs/connect/nomad.mdx +++ b/website/content/docs/connect/nomad.mdx @@ -1,11 +1,11 @@ --- layout: docs -page_title: Service Mesh - Nomad Integration +page_title: Connect Nomad services with Consul description: >- Consul's service mesh can be applied to provide secure communication between services managed by Nomad's scheduler and orchestrator functions, including Nomad jobs and task groups. Use the guide and reference documentation to learn more. --- -# Consul and Nomad Integration +# Connect Nomad services with Consul Consul service mesh can be used with [Nomad](https://www.nomadproject.io/) to provide secure service-to-service communication between Nomad jobs and task groups. @@ -26,4 +26,4 @@ For reference information about configuring Nomad jobs to use Consul service mes - [Nomad Job Specification - `sidecar_service`](/nomad/docs/job-specification/sidecar_service) - [Nomad Job Specification - `sidecar_task`](/nomad/docs/job-specification/sidecar_task) - [Nomad Job Specification - `proxy`](/nomad/docs/job-specification/proxy) -- [Nomad Job Specification - `upstreams`](/nomad/docs/job-specification/upstreams) +- [Nomad Job Specification - `upstreams`](/nomad/docs/job-specification/upstreams) \ No newline at end of file diff --git a/website/content/docs/connect/observability/access-logs.mdx b/website/content/docs/connect/observability/access-logs.mdx deleted file mode 100644 index 377b32b517c2..000000000000 --- a/website/content/docs/connect/observability/access-logs.mdx +++ /dev/null @@ -1,253 +0,0 @@ ---- -layout: docs -page_title: Service Mesh Observability - Access Logs -description: >- - Consul can emit access logs for application connections and requests that pass through Envoy proxies in the service mesh. Learn how to configure access logs, including minimum configuration requirements and the default log format. ---- - -# Access Logs - -This topic describes configuration and usage for access logs. Consul can emit access logs to record application connections and requests that pass through proxies in a service mesh, including sidecar proxies and gateways. -You can use the application traffic records in access logs to help you perform the following operations: - - - **Diagnosing and Troubleshooting Issues**: Operators and application owners can identify configuration issues in the service mesh or the application by analyzing failed connections and requests. - - **Threat Detection**: Operators can review details about unauthorized attempts to access the service mesh and their origins. - - **Audit Compliance**: Operators can use access logs to meet security compliance requirements for traffic entering and exiting the service mesh through gateways. - -Consul supports access log capture through Envoy proxies started through the [`consul connect envoy`](/consul/commands/connect/envoy) CLI command and [`consul-dataplane`](/consul/docs/connect/dataplane). Other proxies are not supported.
- -## Enable access logs - -Access log configurations are defined globally in the [`proxy-defaults`](/consul/docs/connect/config-entries/proxy-defaults#accesslogs) configuration entry. - -The following example is a minimal configuration for enabling access logs: - - - -```hcl -Kind = "proxy-defaults" -Name = "global" -AccessLogs { - Enabled = true -} -``` - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: ProxyDefaults -metadata: - name: global -spec: - accessLogs: - enabled: true -``` - -```json -{ - "Kind": "proxy-defaults", - "Name": "global", - "AccessLogs": { - "Enabled": true - } -} -``` - - - -All proxies, including sidecars and gateways, emit access logs when the behavior is enabled. -Both inbound and outbound traffic through the proxy is logged, including requests made directly to [Envoy's administration interface](https://www.envoyproxy.io/docs/envoy/latest/operations/admin.html?highlight=administration%20logs#administration-interface). - -If you enable access logs after the Envoy proxy has started, access logs for the administration interface are not captured until you restart the proxy. - -## Default log format - -Access logs use the following format when no additional customization is provided: - -~> **Security warning:** The following log format contains IP addresses which may be a data compliance issue, depending on your regulatory environment. -Operators should carefully inspect their chosen access log format to prevent leaking sensitive or personally identifiable information. - -```json -{ - "start_time": "%START_TIME%", - "route_name": "%ROUTE_NAME%", - "method": "%REQ(:METHOD)%", - "path": "%REQ(X-ENVOY-ORIGINAL-PATH?:PATH)%", - "protocol": "%PROTOCOL%", - "response_code": "%RESPONSE_CODE%", - "response_flags": "%RESPONSE_FLAGS%", - "response_code_details": "%RESPONSE_CODE_DETAILS%", - "connection_termination_details": "%CONNECTION_TERMINATION_DETAILS%", - "bytes_received": "%BYTES_RECEIVED%", - "bytes_sent": "%BYTES_SENT%", - "duration": "%DURATION%", - "upstream_service_time": "%RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)%", - "x_forwarded_for": "%REQ(X-FORWARDED-FOR)%", - "user_agent": "%REQ(USER-AGENT)%", - "request_id": "%REQ(X-REQUEST-ID)%", - "authority": "%REQ(:AUTHORITY)%", - "upstream_host": "%UPSTREAM_HOST%", - "upstream_cluster": "%UPSTREAM_CLUSTER%", - "upstream_local_address": "%UPSTREAM_LOCAL_ADDRESS%", - "downstream_local_address": "%DOWNSTREAM_LOCAL_ADDRESS%", - "downstream_remote_address": "%DOWNSTREAM_REMOTE_ADDRESS%", - "requested_server_name": "%REQUESTED_SERVER_NAME%", - "upstream_transport_failure_reason": "%UPSTREAM_TRANSPORT_FAILURE_REASON%" -} -``` - -Depending on the connection type, such as TCP or HTTP, some of these fields may be empty. - -## Custom log format - -Envoy uses [command operators](https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage#command-operators) to expose information about application traffic. -You can use these fields to customize the access logs that proxies emit. - -Custom logs can be either JSON format or text format. - -### JSON format - -You can format access logs in JSON so that you can parse them with Application Monitoring Platforms (APMs). - -To use a custom access log, in the `proxy-defaults` configuration entry, set [`JSONFormat`](/consul/docs/connect/config-entries/proxy-defaults#jsonformat) to the string representation of the desired JSON. - -Nesting is supported.
- - - -```hcl -Kind = "proxy-defaults" -Name = "global" -AccessLogs { - Enabled = true - JSONFormat = < - -### Text format - -To use a custom access log formatted in plaintext, in the `proxy-defaults` configuration entry, set [`TextFormat`](/consul/docs/connect/config-entries/proxy-defaults#textformat) to the desired customized string. - -New lines are automatically added to the end of the log to keep each access log on its own line in the output. - - - -```hcl -Kind = "proxy-defaults" -Name = "global" -AccessLogs { - Enabled = true - TextFormat = "MY START TIME: %START_TIME%, THIS CONNECTION'S PROTOCOL IS %PROTOCOL%" -} -``` - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: ProxyDefaults -metadata: - name: global -spec: - accessLogs: - enabled: true - textFormat: "MY START TIME: %START_TIME%, THIS CONNECTION'S PROTOCOL IS %PROTOCOL%" -``` - -```json -{ - "Kind": "proxy-defaults", - "Name": "global", - "AccessLogs": { - "Enabled": true, - "TextFormat": "MY START TIME: %START_TIME%, THIS CONNECTION'S PROTOCOL IS %PROTOCOL%" - } -} -``` - - - - -## Kubernetes - -As part of its normal operation, the Envoy debugging logs for the `consul-dataplane`, `envoy`, or `envoy-sidecar` containers are written to `stderr`. -The access log [`Type`](/consul/docs/connect/config-entries/proxy-defaults#type) is set to `stdout` by default when access logs are enabled. -Use a log aggregation solution to separate the machine-readable access logs from the Envoy process debug logs. - -## Write to a file - -You can configure Consul to write access logs to a file on the host where Envoy runs. - -Envoy does not rotate log files. A log rotation solution, such as [logrotate](https://www.redhat.com/sysadmin/setting-logrotate), can prevent access logs from consuming too much of the host's disk space when writing to a file. - - - -```hcl -Kind = "proxy-defaults" -Name = "global" -AccessLogs { - Enabled = true - Type = "file" - Path = "/var/log/envoy/access-logs.txt" -} -``` - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: ProxyDefaults -metadata: - name: global -spec: - accessLogs: - enabled: true - type: file - path: "/var/log/envoy/access-logs.txt" -``` - -```json -{ - "Kind": "proxy-defaults", - "Name": "global", - "AccessLogs": { - "Enabled": true, - "Type": "file", - "Path": "/var/log/envoy/access-logs.txt" - } -} -``` - - diff --git a/website/content/docs/connect/observability/grafanadashboards/index.mdx b/website/content/docs/connect/observability/grafanadashboards/index.mdx deleted file mode 100644 index 2a21ec6f2fcf..000000000000 --- a/website/content/docs/connect/observability/grafanadashboards/index.mdx +++ /dev/null @@ -1,91 +0,0 @@ ---- -layout: docs -page_title: Service Mesh Observability - Dashboards -description: >- - This documentation provides an overview of several dashboards designed for monitoring and managing services within a Consul-managed Envoy service mesh. Learn how to enable access logs and configure key performance and operational metrics to ensure the reliability and performance of services in the service mesh. ---- - -# Dashboards for service mesh observability - -This topic describes the configuration and usage of dashboards for monitoring and managing services within a Consul-managed Envoy service mesh. These dashboards provide critical insights into the health, performance, and resource utilization of services. The dashboards described here are essential tools for ensuring the stability, efficiency, and reliability of your service mesh environment.
- -This page provides reference information about the Grafana dashboard configurations included in the [`grafana` directory in the `hashicorp/consul` GitHub repository](https://github.com/hashicorp/consul/tree/main/grafana). - -## Dashboards overview - -The repository includes the following dashboards: - - - **Consul service-to-service dashboard**: Provides a detailed view of service-to-service communications, monitoring key metrics like access logs, HTTP requests, error counts, response code distributions, and request success rates. The dashboard includes customizable filters for focusing on specific services and namespaces. - - - **Consul service dashboard**: Tracks key metrics for Envoy proxies at the cluster and service levels, ensuring the performance and reliability of individual services within the mesh. - - - **Consul dataplane dashboard**: Offers a comprehensive overview of service health and performance, including request success rates, resource utilization (CPU and memory), active connections, and cluster health. It helps operators maintain service reliability and optimize resource usage. - - - **Consul k8s dashboard**: Focuses on monitoring the health and resource usage of the Consul control plane within a Kubernetes environment, ensuring the stability of the control plane. - - - **Consul server dashboard**: Provides detailed monitoring of Consul servers, tracking key metrics like server health, CPU and memory usage, disk I/O, and network performance. This dashboard is critical for ensuring the stability and performance of Consul servers within the service mesh. - -## Enabling prometheus - -Add the following configurations to your Consul Helm chart to enable the prometheus tools. - - - -```yaml -global: - metrics: - enabled: true - provider: "prometheus" - enableAgentMetrics: true - agentMetricsRetentionTime: "10m" - -prometheus: - enabled: true - -ui: - enabled: true - metrics: - enabled: true - provider: "prometheus" - baseURL: http://prometheus-server.consul -``` - - - -## Enable access logs - -Access logs configurations are defined globally in the [`proxy-defaults`](/consul/docs/connect/config-entries/proxy-defaults#accesslogs) configuration entry. - -The following example is a minimal configuration for enabling access logs: - - - -```hcl -Kind = "proxy-defaults" -Name = "global" -AccessLogs { - Enabled = true -} -``` - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: ProxyDefaults -metadata: - name: global -spec: - accessLogs: - enabled: true -``` - -```json -{ - "Kind": "proxy-defaults", - "Name": "global", - "AccessLogs": { - "Enabled": true - } -} -``` - - diff --git a/website/content/docs/connect/observability/index.mdx b/website/content/docs/connect/observability/index.mdx deleted file mode 100644 index 95dce806a0ab..000000000000 --- a/website/content/docs/connect/observability/index.mdx +++ /dev/null @@ -1,58 +0,0 @@ ---- -layout: docs -page_title: Service Mesh Observability - Overview -description: >- - To use Consul's observability features, configure sidecar proxies in the service mesh to collect and emit L7 metrics. Learn about configuring metrics destinations and a service's protocol and upstreams. ---- - -# Service Mesh Observability Overview - -In order to take advantage of the service mesh's L7 observability features you will need -to: - -- Deploy sidecar proxies that are capable of emitting metrics with each of your - services. We have first class support for Envoy. -- Define where your proxies should send metrics that they collect. 
-- Define the protocols for each of your services. -- Define the upstreams for each of your services. - -If you are using Envoy as your sidecar proxy, you will need to [enable -gRPC](/consul/docs/agent/config/config-files#grpc_port) on your client agents. To define the -metrics destination and service protocol you may want to enable [configuration -entries](/consul/docs/agent/config/config-files#config_entries) and [centralized service -configuration](/consul/docs/agent/config/config-files#enable_central_service_config). - -### Kubernetes -If you are using Kubernetes, the Helm chart can simplify much of the configuration needed to enable observability. See -our [Kubernetes observability docs](/consul/docs/k8s/connect/observability/metrics) for more information. - -### Metrics destination - -For Envoy the metrics destination can be configured in the proxy configuration -entry's `config` section. - -``` -kind = "proxy-defaults" -name = "global" -config { - envoy_dogstatsd_url = "udp://127.0.0.1:9125" -} -``` - -Find other possible metrics sinks in the [Envoy documentation](/consul/docs/connect/proxies/envoy#bootstrap-configuration). - -### Service protocol - -You can specify the [`protocol`](/consul/docs/connect/config-entries/service-defaults#protocol) -for all service instances in the `service-defaults` configuration entry. You can also override the default protocol when defining and registering proxies in a service definition file. Refer to [Expose Paths Configuration Reference](/consul/docs/connect/proxies/proxy-config-reference#expose-paths-configuration-reference) for additional information. - -By default, proxies only provide L4 metrics. -Defining the protocol allows proxies to handle requests at the L7 -protocol and emit L7 metrics. It also allows proxies to make per-request -load balancing and routing decisions. - -### Service upstreams - -You can set the upstream for each service using the proxy's -[`upstreams`](/consul/docs/connect/proxies/proxy-config-reference#upstreams) -sidecar parameter, which can be defined in a service's [sidecar registration](/consul/docs/connect/proxies/deploy-sidecar-services). diff --git a/website/content/docs/connect/observability/service.mdx b/website/content/docs/connect/observability/service.mdx deleted file mode 100644 index 55b4580aff7a..000000000000 --- a/website/content/docs/connect/observability/service.mdx +++ /dev/null @@ -1,212 +0,0 @@ ---- -layout: docs -page_title: Monitoring service-to-service communication with Envoy -description: >- - Learn to monitor the appropriate metrics when using Envoy proxy. ---- - -# Monitoring service-to-service communication with Envoy - -When running a service mesh with Envoy as the proxy, there is a wide array of possible metrics produced from traffic flowing through the data plane. This document covers a set of scenarios and key baseline metrics and potential alerts that will help you maintain the overall health and resilience of the mesh for HTTP services. In addition, it provides examples of using these metrics in specific ways to generate a Grafana dashboard using a Prometheus backend to better understand how the metrics behave. - -When collecting metrics, it is important to establish a baseline. This baseline ensures your Consul deployment is healthy, and serves as a reference point when troubleshooting abnormal cluster behavior. Once you have established a baseline for your metrics, use them and the following recommendations to configure reasonable alerts for your Consul agent.
- - - - The following examples assume that the operator adds the cluster name (i.e. datacenter) using the label “cluster” and the node name (i.e. machine or pod) using the label “node” to all scrape targets. - - - -## General scenarios - -### Is Envoy's configuration growing stale? - -When Envoy connects to the Consul control plane over xDS, it will rapidly converge to the current configuration that the control plane expects it to have. If the xDS stream terminates and does not reconnect for an extended period, then the xDS configuration currently in the Envoy instances will “fail static” and slowly grow out of date. - -##### Metric - -`envoy_control_plane_connected_state` - -#### Alerting - -If the value for a given node/pod/machine was 0 for an extended period of time. - -#### Example dashboard (table) - -``` -group(last_over_time(envoy_control_plane_connected_state{cluster="$cluster"}[1m] ) == 0) by (node) -``` - -## Inbound traffic scenarios - -### Is this service being sent requests? - -Within a mesh, a request travels from one service to another. You may choose to measure many relevant metrics from the calling-side, the serving-side, or both. - -It is useful to track the perceived request rate of requests from the calling-side as that would include all requests, even those that fail to arrive at the serving-side due to any failures. - -Any measurement of the request rate is also generally useful for capacity planning purposes as increased traffic typically correlates with a need for a scale-up event in the near future. - -##### Metric - -`envoy_cluster_upstream_rq_total` - -#### Alerting - -If the value has a significant change, check if services are properly interacting with each other and if you need to increase your Consul agent resource requirements. - -#### Example dashboard (plot; rate) - -``` -sum(irate(envoy_cluster_upstream_rq_total{consul_destination_datacenter="$cluster", -consul_destination_service="$service"}[1m])) by (cluster, local_cluster) -``` - -### Are requests sent to this service mostly successful? - -A service mesh is about communication between services, so it is important to track the perceived success rate of requests witnessed by the calling services. - -##### Metric - -`envoy_cluster_upstream_rq_xx` - -#### Alerting - -If the value crosses a user defined baseline. - -#### Example dashboard (plot; %) - -``` -sum(irate(envoy_cluster_upstream_rq_xx{envoy_response_code_class!="5",consul_destination_datacenter="$cluster",consul_destination_service="$service"}[1m])) by (cluster, local_cluster) / sum(irate(envoy_cluster_upstream_rq_xx{consul_destination_datacenter="$cluster",consul_destination_service="$service"}[1m])) by (cluster, local_cluster) -``` - -### Are requests sent to this service handled in a timely manner? - -If you undersize your infrastructure from a resource perspective, then you may expect a decline in response speed over time. You can track this by plotting the 95th percentile of the latency as experienced by the clients. - -##### Metric - -`envoy_cluster_upstream_rq_time_bucket` - -#### Alerting - -If the value crosses a user defined baseline. - -#### Example dashboard (plot; value) - -``` -histogram_quantile(0.95, sum(rate(envoy_cluster_upstream_rq_time_bucket{consul_destination_datacenter="$cluster",consul_destination_service="$service",local_cluster!=""}[1m])) by (le, cluster, local_cluster)) -``` - -### Is this service responding to requests that it receives? 
- -Unlike the perceived request rate, which is measured from the calling side, this is the real request rate measured on the serving-side. This is a serving-side parallel metric that can help clarify underlying causes of problems in the calling-side equivalent metric. Ideally this metric should roughly track the calling side values in a 1-1 manner. - -##### Metric - -`envoy_http_downstream_rq_total` - -#### Alerting - -If the value crosses a user defined baseline. - -#### Example dashboard (plot; rate) - -``` -sum(irate(envoy_http_downstream_rq_total{cluster="$cluster",local_cluster="$service",envoy_http_conn_manager_prefix="public_listener"}[1m])) -``` - -### Are responses from this service mostly successful? - -Unlike the perceived success rate of requests, which is measured from the calling side, this is the real success rate of requests measured on the serving-side. This is a serving-side parallel metric that can help clarify underlying causes of problems in the calling-side equivalent metric. Ideally this metric should roughly track the calling side values in a 1-1 manner. - -##### Metrics - -`envoy_http_downstream_rq_total` - -`envoy_http_downstream_rq_xx` - -#### Alerting - -If the value crosses a user defined baseline. - -#### Example dashboard (plot; %) - -##### Total - -``` -sum(increase(envoy_http_downstream_rq_total{cluster="$cluster",local_cluster="$service",envoy_http_conn_manager_prefix="public_listener"}[1m])) -``` - -##### BY STATUS CODE: - -``` -sum(increase(envoy_http_downstream_rq_xx{cluster="$cluster",local_cluster="$service",envoy_http_conn_manager_prefix="public_listener"}[1m])) by (envoy_response_code_class) -``` - -## Outbound traffic scenarios - -### Is this service sending traffic to its upstreams? - -Similar to the real request rate for requests arriving at a service, it may be helpful to view the perceived request rate departing from a service through its upstreams. - -##### Metric - -`envoy_cluster_upstream_rq_total` - -#### Alerting - -If the value crosses a user defined success threshold. - -#### Example dashboard (plot; rate) - -``` -sum(irate(envoy_cluster_upstream_rq_total{cluster="$cluster", -local_cluster="$service", -consul_destination_target!=""}[1m])) by (consul_destination_target) -``` - -### Are requests from this service to its upstreams mostly successful? - -Similar to the real success rate of requests arriving at a service, it is also important to track the perceived success rate of requests departing from a service through its upstreams. - -##### Metric - -`envoy_cluster_upstream_rq_xx` - -#### Alerting - -If the value crosses a user defined success threshold. - -#### Example dashboard (plot; value) - -``` -sum(irate(envoy_cluster_upstream_rq_xx{envoy_response_code_class!="5", -cluster="$cluster",local_cluster="$service", -consul_destination_target!=""}[1m])) by (consul_destination_target) / sum(irate(envoy_cluster_upstream_rq_xx{cluster="$cluster",local_cluster="$service",consul_destination_target!=""}[1m])) by (consul_destination_target) -``` - -### Are requests from this service to its upstreams handled in a timely manner? - -Similar to the latency of requests departing for a service, it is useful to track the 95th percentile of the latency of requests departing from a service through its upstreams. - -##### Metric - -`envoy_cluster_upstream_rq_time_bucket` - -#### Alerting - -If the value crosses a user defined success threshold. 
- -#### Example dashboard (plot; value) - -``` -histogram_quantile(0.95, sum(rate(envoy_cluster_upstream_rq_time_bucket{cluster="$cluster", -local_cluster="$service",consul_target!=""}[1m])) by (le, consul_destination_target)) -``` - -## Next steps - -In this guide, you learned recommendations for monitoring your Envoy metrics, and why monitoring these metrics is important for your Consul deployment. - -To learn about monitoring Consul components, visit our [Monitoring Consul components](/well-architected-framework/reliability/reliability-consul-monitoring-consul-components) documentation. diff --git a/website/content/docs/connect/observability/ui-visualization.mdx b/website/content/docs/connect/observability/ui-visualization.mdx deleted file mode 100644 index 2cbf0b0ae26b..000000000000 --- a/website/content/docs/connect/observability/ui-visualization.mdx +++ /dev/null @@ -1,726 +0,0 @@ ---- -layout: docs -page_title: Service Mesh Observability - UI Visualization -description: >- - Consul's UI can display a service's topology and associated metrics from the service mesh. Learn how to configure the UI to collect metrics from your metrics provider, modify access for metrics proxies, and integrate custom metrics providers. ---- - -# Service Mesh Observability: UI Visualization - --> Coming here from "Configure metrics dashboard" or "Configure dashboard"? See [Configuring Dashboard URLs](#configuring-dashboard-urls). - -Since Consul 1.9.0, Consul's built in UI includes a topology visualization to -show a service's immediate connectivity at a glance. It is not intended as a -replacement for dedicated monitoring solutions, but rather as a quick overview -of the state of a service and its connections within the Service Mesh. - -The topology visualization requires services to be using [service mesh](/consul/docs/connect) via [sidecar proxies](/consul/docs/connect/proxies). - -The visualization may optionally be configured to include a link to an external -per-service dashboard. This is designed to provide convenient deep links to your -existing monitoring or Application Performance Monitoring (APM) solution for -each service. More information can be found in [Configuring Dashboard -URLs](#configuring-dashboard-urls). - -It is possible to configure the UI to fetch basic metrics from your metrics -provider storage to augment the visualization as displayed below. - -![Consul UI Service Mesh Visualization](/img/ui-service-topology-view-hover.png) - -Consul has built-in support for overlaying metrics from a -[Prometheus](https://prometheus.io) backend. Alternative metrics providers may -be supported using a new and experimental JavaScript API. See [Custom Metrics -Providers](#custom-metrics-providers). - -## Kubernetes - -If running Consul in Kubernetes, the Helm chart can automatically configure Consul's UI to display topology -visualizations. See our [Kubernetes observability docs](/consul/docs/k8s/connect/observability/metrics) for more information. - -## Configuring the UI To Display Metrics - -To configure Consul's UI to fetch metrics there are two required configuration settings. -These need to be set on each Consul Agent that is responsible for serving the -UI. If there are multiple clients with the UI enabled in a datacenter for -redundancy these configurations must be added to all of them. - -We assume that the UI is already enabled by setting -[`ui_config.enabled`](/consul/docs/agent/config/config-files#ui_config_enabled) to `true` in the -agent's configuration file. 
- -To use the built-in Prometheus provider -[`ui_config.metrics_provider`](/consul/docs/agent/config/config-files#ui_config_metrics_provider) -must be set to `prometheus`. - -The UI must query the metrics provider through a proxy endpoint. This simplifies -deployment where Prometheus is not exposed externally to UI user's browsers. - -To set this up, provide the URL that the _Consul agent_ should use to reach the -Prometheus server in -[`ui_config.metrics_proxy.base_url`](/consul/docs/agent/config/config-files#ui_config_metrics_proxy_base_url). -For example in Kubernetes, the Prometheus helm chart by default installs a -service named `prometheus-server` so each Consul agent can reach it on -`http://prometheus-server` (using Kubernetes' DNS resolution). - -A full configuration to enable Prometheus is given below. - - - - - -```hcl -ui_config { - enabled = true - metrics_provider = "prometheus" - metrics_proxy { - base_url = "http://prometheus-server" - } -} -``` - - - - - -```yaml -ui: - enabled: true - metrics: - enabled: true # by default, this inherits from the value global.metrics.enabled - provider: "prometheus" - baseURL: http://prometheus-server -``` - - - - - -```json -{ - "ui_config": { - "enabled": true, - "metrics_provider": "prometheus", - "metrics_proxy": { - "base_url": "http://prometheus-server" - } - } -} -``` - - - - - --> **Note**: For more information on configuring the observability UI on Kubernetes, use this [reference](/consul/docs/k8s/connect/observability/metrics). - -## Configuring Dashboard URLs - -Since Consul's visualization is intended as an overview of your mesh and not a -comprehensive monitoring tool, you can configure a service dashboard URL -template which allows users to click directly through to the relevant -service-specific dashboard in an external tool like -[Grafana](https://grafana.com) or a hosted provider. - -To configure this, you must provide a URL template in the [agent configuration -file](/consul/docs/agent/config/config-files#ui_config_dashboard_url_templates) for all agents that -have the UI enabled. The template is essentially the URL to the external -dashboard, but can have placeholder values which will be replaced with the -service name, namespace and datacenter where appropriate to allow deep-linking -to the relevant information. - -An example with Grafana is shown below. - - - - - -```hcl -ui_config { - enabled = true - dashboard_url_templates { - service = "https://grafana.example.com/d/lDlaj-NGz/service-overview?orgId=1&var-service={{Service.Name}}&var-namespace={{Service.Namespace}}&var-partition={{Service.Partition}}&var-dc={{Datacenter}}" - } -} -``` - - - - - -```yaml -# The UI is enabled by default so this stanza is not required. -ui: - enabled: true - # This configuration requires version 0.40.0 or later of the Helm chart. - dashboardURLTemplates: - service: "https://grafana.example.com/d/lDlaj-NGz/service-overview?orgId=1&var-service={{Service.Name}}&var-namespace={{Service.Namespace}}&var-dc={{Datacenter}}" - -# If you are using a version of the Helm chart older than 0.40.0, you must -# configure the dashboard URL template using the `server.extraConfig` parameter -# in the Helm chart's values file. 
-server: - extraConfig: | - { - "ui_config": { - "dashboard_url_templates": { - "service": "https://grafana.example.com/d/lDlaj-NGz/service-overview?orgId=1&var-service={{ "{{" }}Service.Name}}&var-namespace={{ "{{" }}Service.Namespace}}&var-dc={{ "{{" }}Datacenter}}" - } - } - } -``` - - - - - -```json -{ - "ui_config": { - "enabled": true, - "dashboard_url_templates": { - "service": "https://grafana.example.com/d/lDlaj-NGz/service-overview?orgId=1\u0026var-service={{Service.Name}}\u0026var-namespace={{Service.Namespace}}\u0026var-partition={{Service.Partition}}\u0026var-dc={{Datacenter}}" - } - } -} -``` - - - - - -~> **Note**: On Kubernetes, the Consul Server configuration set in the Helm config's -[`server.extraConfig`](/consul/docs/k8s/helm#v-server-extraconfig) key must be specified -as JSON. The `{{` characters in the URL must be escaped using `{{ "{{" }}` so that Helm -doesn't try to template them. - -![Consul UI Service Dashboard Link](/img/ui-dashboard-url-template.png) - -### Metrics Proxy - -In many cases the metrics backend may be inaccessible to UI user's browsers or -may be on a different domain and so subject to CORS restrictions. To make it -simpler to serve the metrics to the UI in these cases, the Consul agent can -proxy requests for metrics from the UI to the backend. - -**This is intended to simplify setup in test and demo environments. Careful -consideration should be given towards using this in production.** - -The simplest configuration is described in [Configuring the UI for -metrics](#configuring-the-ui-for-metrics). - -#### Metrics Proxy Security - -~> **Security Note**: Exposing a backend metrics service to potentially -un-authenticated network traffic via the proxy should be _carefully_ considered -in production. - -The metrics proxy endpoint is internal and intended only for UI use. However by -enabling it anyone with network access to the agent's API port may use it to -access metrics from the backend. - -**If ACLs are not enabled, full access to metrics will be exposed to -un-authenticated workloads on the network**. - -With ACLs enabled, the proxy endpoint requires a valid token with read access -to all nodes and services (across all namespaces in Enterprise): - - - - - - -```hcl -service_prefix "" { - policy = "read" -} -node_prefix "" { - policy = "read" -} -``` - -```json -{ - "service_prefix": { - "": { - "policy": "read" - } - }, - "node_prefix": { - "": { - "policy": "read" - } - } -} -``` - - - - - - - -```hcl -namespace_prefix "" { - service_prefix "" { - policy = "read" - } - node_prefix "" { - policy = "read" - } -} -``` - -```json -{ - "namespace_prefix": { - "": { - "service_prefix": { - "": { - "policy": "read" - } - }, - "node_prefix": { - "": { - "policy": "read" - } - } - } - } -} -``` - - - - - - -It's typical for most authenticated users to have this level of access in Consul -as it's required for viewing the catalog or discovering services. If you use a -[Single Sign-On integration](/consul/docs/security/acl/auth-methods/oidc) (Consul -Enterprise) users of the UI can be automatically issued an ACL token with the -privileges above to be allowed access to the metrics through the proxy. - -Even with ACLs enabled, the proxy endpoint doesn't deeply understand the query -language of the backend so there is no way it can enforce least-privilege access -to only specific service-related metrics. 
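If you manage ACLs from the command line, the following sketch shows one way the read policy above might be turned into a token for UI users. It assumes the node and service read rules shown earlier are saved locally as `ui-metrics-policy.hcl`; the policy name, token description, and file name are illustrative values, not required ones.

```shell-session
$ consul acl policy create -name "ui-metrics" -rules @ui-metrics-policy.hcl
$ consul acl token create -description "UI metrics proxy access" -policy-name "ui-metrics"
```

The resulting token secret can then be used to log in to the UI so that requests made through the metrics proxy are authorized.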
- -_If you are not comfortable with all users of Consul having full access to the -metrics backend, you should not use the proxy and find an alternative like using -a custom provider that can query the metrics backend directly_. - -##### Path Allowlist - -To limit exposure of the metrics backend, paths must be explicitly added to an -allowlist to avoid exposing unintended parts of the API. For example with -Prometheus, both the `/api/v1/query_range` and `/api/v1/query` endpoints are -needed to load time-series and individual stats. If the proxy had the `base_url` -set to `http://prometheus-server` then the proxy would also expose read access -to several other endpoints such as `/api/v1/status/config` which includes all -Prometheus configuration which might include sensitive information. - -If you use the built-in `prometheus` provider the proxy is limited to the -essential endpoints. The default value for `metrics_proxy.path_allowlist` is -`["/api/v1/query_range", "/api/v1/query"]` as required by the built-in -`prometheus` provider . - -If you use a custom provider that uses the metrics proxy, you'll need to -explicitly set the allowlist based on the endpoints the provider needs to -access. - -#### Adding Headers - -It is also possible to configure the proxy to add one or more headers to -requests as they pass through. This is useful when the metrics backend requires -authentication. For example if your metrics are shipped to a hosted provider, -you could provision an API token specifically for the Consul UI and configure -the proxy to add it as in the example below. This keeps the API token only -visible to Consul operators in the configuration file while UI users can query -the metrics they need without separately obtaining a token for that provider or -having a token exposed to them that they might be able to use elsewhere. - - - - - -```hcl -ui_config { - enabled = true - metrics_provider = "example-apm" - metrics_proxy { - base_url = "https://example-apm.com/api/v1/metrics" - add_headers = [ - { - name = "Authorization" - value = "Bearer " - } - ] - } -} -``` - - - - - -```json -{ - "ui_config": { - "enabled": true, - "metrics_provider": "example-apm", - "metrics_proxy": { - "base_url": "https://example-apm.com/api/v1/metrics", - "add_headers": [ - { - "name": "Authorization", - "value": "Bearer \u003ctoken\u003e" - } - ] - } - } -} -``` - - - - - -## Custom Metrics Providers - -Consul 1.9.0 includes a built-in provider for fetching metrics from -[Prometheus](https://prometheus.io). To enable the UI visualization feature -to work with other existing metrics stores and hosted services, we created a -"metrics provider" interface in JavaScript. A custom provider may be written and -the JavaScript file served by the Consul agent. - -~> **Note**: this interface is _experimental_ and may change in breaking ways or -be removed entirely as we discover the needs of the community. Please provide -feedback on [GitHub](https://github.com/hashicorp/consul) or -[Discuss](https://discuss.hashicorp.com/) on how you'd like to use this. - -The template for a complete provider JavaScript file is given below. - - - -```javascript -(function () { - var provider = { - /** - * init is called when the provider is first loaded. - * - * options.providerOptions contains any operator configured parameters - * specified in the `metrics_provider_options_json` field of the Consul - * agent configuration file. - * - * Consul will provide: - * - * 1. 
A boolean field options.metrics_proxy_enabled to indicate whether the - * agent has a metrics proxy configured. - * - * 2. A function options.fetch which is a thin wrapper around the browser's - * [Fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API) - * that prefixes any url with the url of Consul's internal metrics proxy - * endpoint and adds your current Consul ACL token to the request - * headers. Otherwise it functions like the browser's native fetch. - * - * The provider should throw an Exception if the options are not valid, for - * example because it requires a metrics proxy and one is not configured. - */ - init: function(options) {}, - - /** - * serviceRecentSummarySeries should return time series for a recent time - * period summarizing the usage of the named service in the indicated - * datacenter. In Consul Enterprise a non-empty namespace is also provided. - * - * If these metrics aren't available then an empty series array may be - * returned. - * - * The period may (later) be specified in options.startTime and - * options.endTime. - * - * The service's protocol must be given as one of Consul's supported - * protocols e.g. "tcp", "http", "http2", "grpc". If it is empty or the - * provider doesn't recognize the protocol, it should treat it as "tcp" and - * provide basic connection stats. - * - * The expected return value is a JavaScript promise which resolves to an - * object that should look like the following: - * - * { - * // The unitSuffix is shown after the value in tooltips. Values will be - * // rounded and shortened. Larger values will already have a suffix - * // like "10k". The suffix provided here is concatenated directly - * // allowing for suffixes like "mbps/kbps" by using a suffix of "bps". - * // If the unit doesn't make sense in this format, include a - * // leading space for example " rps" would show as "1.2k rps". - * unitSuffix: " rps", - * - * // The set of labels to graph. The key should exactly correspond to a - * // property of every data point in the array below except for the - * // special case "Total" which is used to show the sum of all the - * // stacked graph values. The key is displayed in the tooltip so it - * // should be human-friendly but as concise as possible. The value is a - * // longer description that is displayed in the graph's key on request - * // to explain exactly what the metrics mean. - * labels: { - * "Total": "Total inbound requests per second.", - * "Successes": "Successful responses (with an HTTP response code ...", - * "Errors": "Error responses (with an HTTP response code in the ...", - * }, - * - * data: [ - * { - * time: 1600944516286, // milliseconds since Unix epoch - * "Successes": 1234.5, - * "Errors": 2.3, - * }, - * ... - * ] - * } - * - * Every data point object should have a value for every series label - * (except for "Total") otherwise it will be assumed to be "0". - */ - serviceRecentSummarySeries: function(serviceDC, namespace, serviceName, protocol, options) {}, - - /** - * serviceRecentSummaryStats should return four summary statistics for a - * recent time period for the named service in the indicated datacenter. In - * Consul Enterprise a non-empty namespace is also provided. - * - * If these metrics aren't available then an empty array may be returned. - * - * The period may (later) be specified in options.startTime and - * options.endTime. - * - * The service's protocol must be given as one of Consul's supported - * protocols e.g. "tcp", "http", "http2", "grpc". 
If it is empty or the - * provider doesn't recognize it it should treat it as "tcp" and provide - * just basic connection stats. - * - * The expected return value is a JavaScript promise which resolves to an - * object that should look like the following: - * - * { - // stats is an array of stats to show. The first four of these will be - // displayed. Fewer may be returned if not available. - * stats: [ - * { - * // label should be 3 chars or fewer as an abbreviation - * label: "SR", - * - * // desc describes the stat in a tooltip - * desc: "Success Rate - the percentage of all requests that were not 5xx status", - * - * // value is a string allowing the provider to format it and add - * // units as appropriate. It should be as compact as possible. - * value: "98%", - * } - * ] - * } - */ - serviceRecentSummaryStats: function(serviceDC, namespace, serviceName, protocol, options) {}, - - /** - * upstreamRecentSummaryStats should return four summary statistics for each - * upstream service over a recent time period, relative to the named service - * in the indicated datacenter. In Consul Enterprise a non-empty namespace - * is also provided. - * - * Note that the upstreams themselves might be in different datacenters but - * we only pass the target service DC since typically these metrics should - * be from the outbound listener of the target service in this DC even if - * the requests eventually end up in another DC. - * - * If these metrics aren't available then an empty array may be returned. - * - * The period may (later) be specified in options.startTime and - * options.endTime. - * - * The expected return value is a JavaScript promise which resolves to an - * object that should look like the following: - * - * { - * stats: { - * // Each upstream will appear as an entry keyed by the upstream - * // service name. The value is an array of stats with the same - * // format as serviceRecentSummaryStats response.stats. Note that - * // different upstreams might show different stats depending on - * // their protocol. - * "upstream_name": [ - * {label: "SR", desc: "...", value: "99%"}, - * ... - * ], - * ... - * } - * } - */ - upstreamRecentSummaryStats: function(serviceDC, namespace, serviceName, upstreamName, options) {}, - - /** - * downstreamRecentSummaryStats should return four summary statistics for - * each downstream service over a recent time period, relative to the named - * service in the indicated datacenter. In Consul Enterprise a non-empty - * namespace is also provided. - * - * Note that the service may have downstreams in different datacenters. For - * some metrics systems which are per-datacenter this makes it hard to query - * for all downstream metrics from one source. For now the UI will only show - * downstreams in the same datacenter as the target service. In the future - * this method may be called multiple times, once for each DC that contains - * downstream services to gather metrics from each. In that case a separate - * option for target datacenter will be used since the target service's DC - * is still needed to correctly identify the outbound clusters that will - * route to it from the remote DC. - * - * If these metrics aren't available then an empty array may be returned. - * - * The period may (later) be specified in options.startTime and - * options.endTime. 
- * - * The expected return value is a JavaScript promise which resolves to an - * object that should look like the following: - * - * { - * stats: { - * // Each downstream will appear as an entry keyed by the downstream - * // service name. The value is an array of stats with the same - * // format as serviceRecentSummaryStats response.stats. Different - * // downstreams may display different stats if required although the - * // protocol should be the same for all as it is the target - * // service's protocol that matters here. - * "downstream_name": [ - * {label: "SR", desc: "...", value: "99%"}, - * ... - * ], - * ... - * } - * } - */ - downstreamRecentSummaryStats: function(serviceDC, namespace, serviceName, options) {} - } - - // Register the provider with Consul for use. This example would be usable by - // configuring the agent with `ui_config.metrics_provider = "example-provider". - window.consul.registerMetricsProvider("example-provider", provider) - -}()); -``` - - - -Additionally, the built in [Prometheus -provider code](https://github.com/hashicorp/consul/blob/main/ui/packages/consul-ui/vendor/metrics-providers/prometheus.js) -can be used as a reference. - -### Configuring the Agent With a Custom Metrics Provider. - -In the example below, we configure the Consul agent to use a metrics provider -named `example-provider`, which is defined in -`/usr/local/bin/example-metrics-provider.js`. The name `example-provider` must -have been specified in the call to `consul.registerMetricsProvider` as in the -code listing in the last section. - - - - - -```hcl -ui_config { - enabled = true - metrics_provider = "example-provider" - metrics_provider_files = ["/usr/local/bin/example-metrics-provider.js"] - metrics_provider_options_json = <<-EOT - { - "foo": "bar" - } - EOT -} -``` - - - - - -```json -{ - "ui_config": { - "enabled": true, - "metrics_provider": "example-provider", - "metrics_provide_files": ["/usr/local/bin/example-metrics-provider.js"], - "metrics_provider_options_json": "{\"foo\":\"bar\"}" - } -} -``` - - - - -More than one JavaScript file may be specified in -[`metrics_provider_files`](/consul/docs/agent/config/config-files#ui_config_metrics_provider_files) -and all will be served allowing flexibility if needed to include dependencies. -Only one metrics provider can be configured and used at one time. - -The -[`metrics_provider_options_json`](/consul/docs/agent/config/config-files#ui_config_metrics_provider_options_json) -field is an optional literal JSON object which is passed to the provider's -`init` method at startup time. This allows configuring arbitrary parameters for -the provider in config rather than hard coding them into the provider itself to -make providers more reusable. - -The provider may fetch metrics directly from another source although in this -case the agent will probably need to serve the correct CORS headers to prevent -browsers from blocking these requests. These may be configured with -[`http_config.response_headers`](/consul/docs/agent/config/config-files#response_headers). - -Alternatively, the provider may choose to use the [built-in metrics -proxy](#metrics-proxy) to avoid cross domain issues or to inject additional -authorization headers without requiring each UI user to be separately -authenticated to the metrics backend. - -A function that behaves like the browser's [Fetch -API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API) is provided to -the metrics provider JavaScript during `init` as `options.fetch`. 
This is a thin -wrapper that prefixes any url with the url of Consul's metrics proxy endpoint -and adds your current Consul ACL token to the request headers. Otherwise it -functions like the browser's native fetch and will forward your request on to the -metrics backend. The response will be returned without any modification to be -interpreted by the provider and converted into the format as described in the -interface above. - -Provider authors should make it clear to users which paths are required so they -can correctly configure the [path allowlist](#path-allowlist) in the metrics -proxy to avoid exposing more than needed of the metrics backend. - -### Custom Provider Security Model - -Since the JavaScript file(s) are included in Consul's UI verbatim, the code in -them must be treated as fully trusted by the operator. Typically they will have -authored this or will need to carefully vet providers written by third parties. - -This is equivalent to using the existing `-ui-dir` flag to serve an alternative -version of the UI - in either model the operator takes full responsibility for -the provenance of the code being served since it has the power to intercept ACL -tokens, access cookies and local storage for the Consul UI domain and possibly -more. - -## Current Limitations - -Currently there are some limitations to this feature. - -- **No cross-datacenter support** The initial metrics provider integration is - with Prometheus which is popular and easy to set up within one Kubernetes - cluster. However, when using the Consul UI in a multi-datacenter deployment, - the UI allows users to select any datacenter to view. - - This means that the Prometheus server that the Consul agent serving the UI can - access likely only has metrics for the local datacenter and a full solution - would need additional proxying or exposing remote Prometheus servers on the - network in remote datacenters. Later we may support an easy way to set this up - via Consul service mesh but initially we don't attempt to fetch metrics in the UI - if you are browsing a remote datacenter. - -- **Built-in provider requires metrics proxy** Initially the built-in - `prometheus` provider only supports querying Prometheus via the [metrics - proxy](#metrics-proxy). Later it may be possible to configure it for direct - access to an exposed Prometheus. diff --git a/website/content/docs/connect/proxies/built-in.mdx b/website/content/docs/connect/proxies/built-in.mdx deleted file mode 100644 index 93ec90bcb4e3..000000000000 --- a/website/content/docs/connect/proxies/built-in.mdx +++ /dev/null @@ -1,90 +0,0 @@ ---- -layout: docs -page_title: Built-in Proxy Configuration | Service Mesh -description: >- - Consul includes a built-in L4 proxy with limited capabilities to use for development and testing only. Use the built-in proxy config key reference to learn about the options you can configure. ---- - -# Built-in Proxy Configuration for Service Mesh - -~> **Note:** The built-in proxy is not supported for production deployments. It does not -support many of Consul's service mesh features, and is not under active development. -The [Envoy proxy](/consul/docs/connect/proxies/envoy) should be used for production deployments. - -Consul comes with a built-in L4 proxy for testing and development with Consul -service mesh. - -## Proxy Config Key Reference - -Below is a complete example of all the configuration options available -for the built-in proxy.
- -```json -{ - "service": { - "name": "example-service", - "connect": { - "sidecar_service": { - "proxy": { - "config": { - "bind_address": "0.0.0.0", - "bind_port": 20000, - "local_service_address": "127.0.0.1:1234", - "local_connect_timeout_ms": 1000, - "handshake_timeout_ms": 10000, - "upstreams": [] - }, - "upstreams": [ - { - "destination_name": "example-upstream", - "config": { - "connect_timeout_ms": 1000 - } - } - ] - } - } - } - } -} -``` - -All fields are optional with a reasonable default. - -- `bind_address` - The address the proxy will bind its - _public_ mTLS listener to. It defaults to the same address the agent binds to. - -- `bind_port` - The port the proxy will bind its _public_ - mTLS listener to. If not provided, the agent will assign a random port from its - configured proxy port range specified by [`sidecar_min_port`](/consul/docs/agent/config/config-files#sidecar_min_port) - and [`sidecar_max_port`](/consul/docs/agent/config/config-files#sidecar_max_port). - -- `local_service_address`- The `[address]:port` - that the proxy should use to connect to the local application instance. By default - it assumes `127.0.0.1` as the address and takes the port from the service definition's - `port` field. Note that allowing the application to listen on any non-loopback - address may expose it externally and bypass the service mesh's access enforcement. It may - be useful though to allow non-standard loopback addresses or where an alternative - known-private IP is available for example when using internal networking between - containers. - -- `local_connect_timeout_ms` - The number - of milliseconds the proxy will wait to establish a connection to the _local application_ - before giving up. Defaults to `1000` or 1 second. - -- `handshake_timeout_ms` - The number of milliseconds - the proxy will wait for _incoming_ mTLS connections to complete the TLS handshake. - Defaults to `10000` or 10 seconds. - -- `upstreams`- **Deprecated** Upstreams are now specified - in the `connect.proxy` definition. Upstreams specified in the opaque config map - here will continue to work for compatibility but it's strongly recommended that - you move to using the higher level [upstream configuration](/consul/docs/connect/proxies/proxy-config-reference#upstream-configuration-reference). - -## Proxy Upstream Config Key Reference - -All fields are optional with a reasonable default. - -- `connect_timeout_ms` - The number of milliseconds - the proxy will wait to establish a TLS connection to the discovered upstream instance - before giving up. Defaults to `10000` or 10 seconds. diff --git a/website/content/docs/connect/proxies/deploy-service-mesh-proxies.mdx b/website/content/docs/connect/proxies/deploy-service-mesh-proxies.mdx deleted file mode 100644 index 0bb3f7df0389..000000000000 --- a/website/content/docs/connect/proxies/deploy-service-mesh-proxies.mdx +++ /dev/null @@ -1,79 +0,0 @@ ---- -layout: docs -page_title: Deploy service mesh proxies -description: >- - Envoy and other proxies in Consul service mesh enable service-to-service communication across your network. Learn how to deploy service mesh proxies in this topic. ---- - -# Deploy service mesh proxies services - -This topic describes how to create, register, and start service mesh proxies in Consul. Refer to [Service mesh proxies overview](/consul/docs/connect/proxies) for additional information about how proxies enable Consul functionalities. 
- -For information about deploying proxies as sidecars for service instances, refer to [Deploy sidecar proxy services](/consul/docs/connect/proxies/deploy-sidecar-services). - -## Overview - -Complete the following steps to deploy a service mesh proxy: - -1. Optionally, create a proxy defaults configuration entry that contains global passthrough settings for all Envoy proxies. -1. Create a service definition file and specify the proxy configurations in the `proxy` block. -1. Register the service using the API or CLI. -1. Start the proxy service. Proxies appear in the list of services registered to Consul, but they must be started before they begin to route traffic in your service mesh. - -## Requirements - -If ACLs are enabled and you want to configure global Envoy settings using the [proxy defaults configuration entry](/consul/docs/connect/config-entries/proxy-defaults), you must present a token with `operator:write` permissions. Refer to [Create a service token](/consul/docs/security/acl/tokens/create/create-a-service-token) for additional information. - -## Configure global Envoy passthrough settings - -If you want to define global passthrough settings for all Envoy proxies, create a proxy defaults configuration entry and specify default settings, such as access log configuration. Note that [service defaults configuration entries](/consul/docs/connect/config-entries/service-defaults) override proxy defaults, and individual service configurations override both configuration entries. - -1. Create a proxy defaults configuration entry and specify the following parameters: - - `Kind`: Must be set to `proxy-defaults` - - `Name`: Must be set to `global` -1. Configure any additional settings you want to apply to all proxies. Refer to [Proxy defaults configuration entry reference](/consul/docs/connect/config-entries/proxy-defaults) for details about all settings available in the configuration entry. -1. Apply the configuration by either calling the [`/config` HTTP API endpoint](/consul/api-docs/config) or running the [`consul config write` CLI command](/consul/commands/config/write). The following example writes a proxy defaults configuration entry from a local HCL file using the CLI: - -```shell-session -$ consul config write proxy-defaults.hcl -``` - -## Define service mesh proxy - -Create a service definition file and configure the following fields to define a service mesh proxy: - -1. Set the `kind` field to `connect-proxy`. Refer to the [services configuration reference](/consul/docs/services/configuration/services-configuration-reference#kind) for information about other kinds of proxies you can declare. -1. Specify a name for the proxy service in the `name` field. Consul applies the configurations to any proxies you bootstrap with the same name. -1. In the `proxy.destination_service_name` field, specify the name of the service that the proxy represents. -1. Configure any additional proxy behaviors that you want to implement in the `proxy` block. Refer to the [Service mesh proxy configuration reference](/consul/docs/connect/proxies/proxy-config-reference) for information about all parameters. -1. In the `port` field, specify the port number where other services registered with Consul can discover and connect to the proxy service. To ensure that services only allow external connections established through the service mesh protocol, you should configure all services to only accept connections on a loopback address.
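As a minimal sketch of the fields described in the preceding steps, the following service definition registers a hypothetical proxy named `web-proxy` that represents a `web` application listening locally on port 8080. The service names, ports, and the file name `proxy.hcl` are placeholders; adjust them for your environment.

```hcl
# proxy.hcl
service {
  # Declares this service as a dedicated service mesh proxy.
  kind = "connect-proxy"
  name = "web-proxy"

  # Port where other services in the mesh discover and connect to this proxy.
  port = 20000

  proxy {
    # Name of the service instance that this proxy represents.
    destination_service_name = "web"

    # Address and port of the local application the proxy forwards traffic to.
    local_service_address = "127.0.0.1"
    local_service_port    = 8080
  }
}
```

Register the file using one of the methods described in the next section.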
- -Refer to the [Service mesh proxy configuration reference](/consul/docs/connect/proxies/proxy-config-reference) for example configurations. - -## Register the service - -Provide the service definition to the Consul agent to register your proxy service. You can use the same methods for registering proxy services as you do for registering application services: - -- Place the service definition in a Consul agent's configuration directory and start, restart, or reload the agent. Use this method when implementing changes to an existing proxy service. -- Use the `consul services register` command to register the proxy service with a running Consul agent. -- Call the `/agent/service/register` HTTP API endpoint to register the proxy service with a running Consul agent. - -Refer to [Register services and health checks](/consul/docs/services/usage/register-services-checks) for instructions. - -In the following example, the `consul services register` command registers a proxy service stored in `proxy.hcl`: - -```shell-session -$ consul services register proxy.hcl -``` - -## Start the proxy - -Envoy requires a bootstrap configuration file before it can start. Use the [`consul connect envoy` command](/consul/commands/connect/envoy) to create the Envoy bootstrap configuration and start the proxy service. Specify the ID of the proxy you want to start with the `-proxy-id` option. - -The following example command starts an Envoy proxy for the `web-proxy` service: - -```shell-session -$ consul connect envoy -proxy-id=web-proxy -``` - -For details about operating an Envoy proxy in Consul, refer to the [Envoy proxy reference](/consul/docs/connect/proxies/envoy). diff --git a/website/content/docs/connect/proxies/deploy-sidecar-services.mdx b/website/content/docs/connect/proxies/deploy-sidecar-services.mdx deleted file mode 100644 index c42a5b2c7f5f..000000000000 --- a/website/content/docs/connect/proxies/deploy-sidecar-services.mdx +++ /dev/null @@ -1,284 +0,0 @@ ---- -layout: docs -page_title: Deploy proxies as sidecar services -description: >- - You can register a service instance and its sidecar proxy at the same time. Learn about default settings, customizable parameters, limitations, and lifecycle behaviors of the sidecar proxy. ---- - -# Deploy sidecar services - -This topic describes how to create, register, and start sidecar proxy services in Consul. Refer to [Service mesh proxies overview](/consul/docs/connect/proxies) for additional information about how proxies enable Consul's functions and operations. For information about deploying service mesh proxies, refer to [Deploy service mesh proxies](/consul/docs/connect/proxies/deploy-service-mesh-proxies). - -## Overview - -Sidecar proxies run on the same node as the single service instance that they handle traffic for. -They may be on the same VM or running as a separate container in the same network namespace. - -You can attach a sidecar proxy to a service you want to deploy to your mesh: - -1. It is not required, but you can create a proxy defaults configuration entry that contains global passthrough settings for all Envoy proxies. -1. Create the service definition and include the `connect` block. The `connect` block contains the sidecar proxy configurations that allow the service to interact with other services in the mesh. -1. Register the service using either the API or CLI. -1. Start the sidecar proxy service. 
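The first step in this overview mentions an optional proxy defaults configuration entry. As a minimal sketch, such an entry could be saved as `proxy-defaults.hcl` and applied with the `consul config write` command shown later in this topic; the access log setting is an assumption used only for illustration:

```hcl
# Global defaults applied to every Envoy proxy in the mesh.
Kind = "proxy-defaults"
Name = "global"

AccessLogs {
  Enabled = true
}
```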
- -## Requirements - -If ACLs are enabled and you want to configure global Envoy settings in the [proxy defaults configuration entry](/consul/docs/connect/config-entries/proxy-defaults), you must present a token with `operator:write` permissions. Refer to [Create a service token](/consul/docs/security/acl/tokens/create/create-a-service-token) for additional information. - -## Configure global Envoy passthrough settings - -If you want to define global passthrough settings for all Envoy proxies, create a proxy defaults configuration entry and specify default settings, such as access log configuration. [Service defaults configuration entries](/consul/docs/connect/config-entries/service-defaults) override proxy defaults and individual service configurations override both configuration entries. - -1. Create a proxy defaults configuration entry and specify the following parameters: - - `Kind`: Must be set to `proxy-defaults` - - `Name`: Must be set to `global` -1. Configure any additional settings you want to apply to all proxies. Refer to [Proxy defaults configuration entry reference](/consul/docs/connect/config-entries/proxy-defaults) for details about all settings available in the configuration entry. -1. Apply the configuration by either calling the [`/config` API endpoint](/consul/api-docs/config) or running the [`consul config write` CLI command](/consul/commands/config/write). The following example writes a proxy defaults configuration entry from a local HCL file using the CLI: - -```shell-session -$ consul config write proxy-defaults.hcl -``` - -## Define service mesh proxy - -Create a service definition and configure the following fields: - -1. `name`: Specify a name for the service you want to attach a sidecar proxy to in the `name` field. This field is required for all services you want to register in Consul. -1. `port`: Specify a port number where other services registered with Consul can discover and connect to the service in the `port` field. This field is required for all services you want to register in Consul. -1. `connect`: Set the `connect` field to `{ sidecar_service: {} }`. The `{ sidecar_service: {} }` value is a macro that applies a set of default configurations that enable you to quickly implement a sidecar. Refer to [Sidecar service defaults](#sidecar-service-defaults) for additional information. -1. Configure any additional options for your service. Refer to [Services configuration reference](/consul/docs/services/configuration/services-configuration-reference) for details. - -In the following example, a service named `web` is configured with a sidecar proxy: - - - - - -```hcl -service = { - name = "web" - port = 8080 - connect = { sidecar_service = {} } -} -``` - - - - - -```json - -{ - "service": { - "name": "web", - "port": 8080, - "connect": { "sidecar_service": {} } - } -} - -``` - - - - - -When Consul processes the service definition, it generates the following configuration in place of the `sidecar_service` macro. 
Note that sidecar proxies services are based on the `connect-proxy` type: - - - - - -```hcl -services = [ - { - name = "web" - port = 8080 - } - checks = { - Interval = "10s" - Name = "Connect Sidecar Listening" - TCP = "127.0.0.1:20000" - } - checks = { - alias_service = "web" - name = "Connect Sidecar Aliasing web" - } - kind = "connect-proxy" - name = "web-sidecar-proxy" - port = 20000 - proxy = { - destination_service_id = "web" - destination_service_name = "web" - local_service_address = "127.0.0.1" - local_service_port = 8080 - } -] - -``` - - - - - -```json -{ - "services": [ - { - "name": "web", - "port": 8080 - }, - { - "name": "web-sidecar-proxy", - "port": 20000, - "kind": "connect-proxy", - "checks": [ - { - "Name": "Connect Sidecar Listening", - "TCP": "127.0.0.1:20000", - "Interval": "10s" - }, - { - "name": "Connect Sidecar Aliasing web", - "alias_service": "web" - } - ], - "proxy": { - "destination_service_name": "web", - "destination_service_id": "web", - "local_service_address": "127.0.0.1", - "local_service_port": 8080 - } - } - ] -} - -``` - - - - - -## Register the service - -Provide the service definition to the Consul agent to register your proxy service. You can use the same methods for registering proxy services as you do for registering application services: - -- Place the service definition in a Consul agent's configuration directory and start, restart, or reload the agent. Use this method when implementing changes to an existing proxy service. -- Use the `consul services register` command to register the proxy service with a running Consul agent. -- Call the `/agent/service/register` HTTP API endpoint to register the proxy service with a running Consul agent. - -Refer to [Register services and health checks](/consul/docs/services/usage/register-services-checks) for instructions. - -In the following example, the `consul services register` command registers a proxy service stored in `proxy.hcl`: - -```shell-session -$ consul services register proxy.hcl -``` - -## Start the proxy - -Envoy requires a bootstrap configuration file before it can start. Use the [`consul connect envoy` command](/consul/commands/connect/envoy) to create the Envoy bootstrap configuration and start the proxy service. Specify the name of the service with the attached proxy with the `-sidecar-for` option. - -The following example command starts an Envoy sidecar proxy for the `web` service: - -```shell-session -$ consul connect envoy -sidecar-for=web -``` - -For details about operating an Envoy proxy in Consul, refer to [](/consul/docs/connect/proxies/envoy) - -## Configuration reference - -The `sidecar_service` block is a service definition that can contain most regular service definition fields. Refer to [Limitations](#limitations) for information about unsupported service definition fields for sidecar proxies. - -Consul treats sidecar proxy service definitions as a root-level service definition. All fields are optional in nested definitions, which default to opinionated settings that are intended to reduce burden of setting up a sidecar proxy. - -## Sidecar service defaults - -The following fields are set by default on a sidecar service registration. With -[the exceptions noted](#limitations) any field may be overridden explicitly in -the `connect.sidecar_service` definition to customize the proxy registration. -The "parent" service refers to the service definition that embeds the sidecar -proxy. - -- `id` - ID defaults to `-sidecar-proxy`. 
This value cannot - be overridden as it is used to [manage the lifecycle](#lifecycle) of the - registration. -- `name` - Defaults to `-sidecar-proxy`. -- `tags` - Defaults to the tags of the parent service. -- `meta` - Defaults to the service metadata of the parent service. -- `port` - Defaults to being auto-assigned from a configurable - range specified by [`sidecar_min_port`](/consul/docs/agent/config/config-files#sidecar_min_port) - and [`sidecar_max_port`](/consul/docs/agent/config/config-files#sidecar_max_port). -- `kind` - Defaults to `connect-proxy`. This value cannot be overridden. -- `check`, `checks` - By default we add a TCP check on the local address and - port for the proxy, and a [service alias - check](/consul/docs/services/usage/checks#alias-checks) for the parent service. If either - `check` or `checks` fields are set, only the provided checks are registered. -- `proxy.destination_service_name` - Defaults to the parent service name. -- `proxy.destination_service_id` - Defaults to the parent service ID. -- `proxy.local_service_address` - Defaults to `127.0.0.1`. -- `proxy.local_service_port` - Defaults to the parent service port. - -### Example with overwritten configurations - -In the following example, the `sidecar_service` macro sets baseline configurations for the proxy, but the [proxy -upstreams](/consul/docs/connect/proxies/proxy-config-reference#upstream-configuration-reference) -and [built-in proxy -configuration](/consul/docs/connect/proxies/built-in) fields contain custom values: - -```json -{ - "name": "web", - "port": 8080, - "connect": { - "sidecar_service": { - "proxy": { - "upstreams": [ - { - "destination_name": "db", - "local_bind_port": 9191 - } - ], - "config": { - "handshake_timeout_ms": 1000 - } - } - } - } -} -``` - -## Limitations - -The following fields are not supported in the `connect.sidecar_service` block: - -- `id` - Sidecar services get an ID assigned and it is an error to override - this value. This ID is required to ensure that the agent can correctly deregister the sidecar service - later when the parent service is removed. -- `kind` - Kind defaults to `connect-proxy` and there is no way to - unset this behavior. -- `connect.sidecar_service` - Service definitions cannot be nested recursively. -- `connect.native` - The `kind` is fixed to `connect-proxy` and it is - an error to register a `connect-proxy` that is also service mesh-native. - -## Lifecycle - -Sidecar service registration is mostly a configuration syntax helper to avoid - adding lots of boilerplate for basic sidecar options. However, the agent does - have some specific behavior around the sidecar lifecycle that makes sidecars easier to - work with. - -The agent fixes the ID of the sidecar service to be based on the parent - service's ID, which enables the following behavior. - -- A service instance can only ever have one sidecar service registered. -- When re-registering through the HTTP API or reloading from a configuration file: - - If something changes in the nested sidecar service definition, the update is applied to the current sidecar registration instead of creating a new - one. - - If a service registration removes the nested `sidecar_service` then the - previously registered sidecar for that service is deregistered - automatically. -- When reloading the configuration files, if a service definition changes its - ID, then a new service instance and a new sidecar instance are - registered. The old instance and proxy are removed because they are no longer found in - the configuration files. - -The sketch after this list shows a re-registration that triggers an in-place update of an existing sidecar.
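As a sketch of the in-place update behavior, re-registering the `web` service with an upstream added to the nested sidecar definition causes the agent to update the existing `web-sidecar-proxy` registration rather than create a second sidecar. The `db` upstream and port are reused from the earlier example for illustration:

```hcl
# Re-registering "web" with a modified nested sidecar definition
# updates the existing web-sidecar-proxy in place.
service {
  name = "web"
  port = 8080

  connect {
    sidecar_service {
      proxy {
        upstreams = [
          {
            destination_name = "db"
            local_bind_port  = 9191
          }
        ]
      }
    }
  }
}
```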
diff --git a/website/content/docs/connect/proxies/envoy-extensions/configuration/ext-authz.mdx b/website/content/docs/connect/proxies/envoy-extensions/configuration/ext-authz.mdx deleted file mode 100644 index ebe7f99a96e3..000000000000 --- a/website/content/docs/connect/proxies/envoy-extensions/configuration/ext-authz.mdx +++ /dev/null @@ -1,739 +0,0 @@ ---- -layout: docs -page_title: External authorization extension configuration reference -description: Learn how to configure the ext-authz Envoy extension, which is a builtin Consul extension that configures Envoy proxies to request authorization from an external service. ---- - -# External authorization extension configuration reference - -This topic describes how to configure the external authorization Envoy extension, which configures Envoy proxies to request authorization from an external service. Refer to [Delegate authorization to an external service](/consul/docs/connect/proxies/envoy-extensions/usage/ext-authz) for usage information. - -## Configuration model - -The following list outlines the field hierarchy, data types, and requirements for the external authorization configuration. Place the configuration inside the `EnvoyExtension.Arguments` field in the proxy defaults or service defaults configuration entry. Refer to the following documentation for additional information: - -- [`EnvoyExtensions` in proxy defaults](/consul/docs/connect/config-entries/proxy-defaults#envoyextensions) -- [`EnvoyExtensions` in service defaults](/consul/docs/connect/config-entries/service-defaults#envoyextensions) - - [Envoy External Authorization documentation](https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/filters/http/ext_authz/v3/ext_authz.proto) - -Click on a property name to view additional details, including default values. 
- -- [`Name`](#name): string | required | must be set to `builtin/ext-authz` -- [`Arguments`](#arguments): map | required - - [`ProxyType`](#arguments-proxytype): string | required | `connect-proxy` - - [`ListenerType`](#arguments-listenertype): string | required | `inbound` - - [`InsertOptions`](#arguments-insertoptions): map - - [`Location`](#arguments-insertoptions-location): string - - [FilterName](#arguments-insertoptions-filtername): string - - [`Config`](#arguments-config): map | required - - [`BootstrapMetadataLabelsKey`](#arguments-config-bootstrapmetadatalabelskey): string - - [`ClearRouteCache`](#arguments-config-grpcservice): boolean | `false` | HTTP only - - [`GrpcService`](#arguments-config-grpcservice): map - - [`Target`](#arguments-config-grpcservice-target): map | required - - [`Service`](#arguments-config-grpcservice-target-service): map - - [`Name`](#arguments-config-grpcservice-target-service): string - - [`Namespace`](#arguments-config-grpcservice-target-service): string | - - [`Partition`](#arguments-config-grpcservice-target-service): string | - - [`URI`](#arguments-config-grpcservice-target-uri): string - - [`Timeout`](#arguments-config-grpcservice-target-uri): string | `1s` - - [`Authority`](#arguments-config-grpcservice-authority): string - - [`InitialMetadata`](#arguments-config-grpcservice-initialmetadata): list - - [`Key`](#arguments-config-grpcservice-initialmetadata): string - - [`Value`](#arguments-config-grpcservice-initialmetadata): string - - [`HttpService`](#arguments-config-httpservice): map - - [`Target`](#arguments-config-httpservice-target): map | required - - [`Service`](#arguments-config-httpservice): string - - [`Name`](#arguments-config-httpservice-target-service): string - - [`Namespace`](#arguments-config-httpservice-target-service): string | - - [`Partition`](#arguments-config-httpservice-target-service): string | - - [`URI`](#arguments-config-httpservice): string - - [`Timeout`](#arguments-config-httpservice): string | `1s` - - [`PathPrefix`](#arguments-config-httpservice-pathprefix): string - - [`AuthorizationRequest`](#arguments-config-httpservice-authorizationrequest): map - - [`AllowedHeaders`](#arguments-config-httpservice-authorizationrequest-allowedheaders): list - - [`Contains`](#arguments-config-httpservice-authorizationrequest-allowedheaders): string - - [`Exact`](#arguments-config-httpservice-authorizationrequest-allowedheaders): string - - [`IgnoreCase`](#arguments-config-httpservice-authorizationrequest-allowedheaders): boolean - - [`Prefix`](#arguments-config-httpservice-authorizationrequest-allowedheaders): string - - [`SafeRegex`](#arguments-config-httpservice-authorizationrequest-allowedheaders): string - - [`HeadersToAdd`](#arguments-config-httpservice-authorizationrequest-headerstoadd): list - - [`Key`](#arguments-config-httpservice-authorizationrequest-headerstoadd): string - - [`Value`](#arguments-config-httpservice-authorizationrequest-headerstoadd): string - - [`AuthorizationResponse`](#arguments-config-httpservice-authorizationresponse): map - - [`AllowedUpstreamHeaders`](#arguments-config-httpservice-authorizationresponse-allowedupstreamheaders): list - - [`Contains`](#arguments-config-httpservice-authorizationresponse-allowedupstreamheaders): string - - [`Exact`](#arguments-config-httpservice-authorizationresponse-allowedheaders): string - - [`IgnoreCase`](#arguments-config-httpservice-authorizationresponse-allowedheaders): boolean - - [`Prefix`](#arguments-config-httpservice-authorizationresponse-allowedheaders): 
string - - [`SafeRegex`](#arguments-config-httpservice-authorizationresponse-allowedheaders): string - - [`Suffix`](#arguments-config-httpservice-authorizationresponse-allowedheaders): string - - [`AllowedUpstreamHeadersToAppend`](#arguments-config-httpservice-authorizationresponse-allowedupstreamheaderstoappend): list - - [`Contains`](#arguments-config-httpservice-authorizationresponse-allowedupstreamheaderstoappend): string - - [`Exact`](#arguments-config-httpservice-authorizationresponse-allowedupstreamheaderstoappend): string - - [`IgnoreCase`](#arguments-config-httpservice-authorizationresponse-allowedupstreamheaderstoappend): boolean - - [`Prefix`](#arguments-config-httpservice-authorizationresponse-allowedupstreamheaderstoappend): string - - [`SafeRegex`](#arguments-config-httpservice-authorizationresponse-allowedupstreamheaderstoappend): string - - [`Suffix`](#arguments-config-httpservice-authorizationresponse-allowedupstreamheaderstoappend): string - - [`AllowedClientHeaders`](#arguments-config-httpservice-authorizationresponse-allowedclientheaders): list - - [`Contains`](#arguments-config-httpservice-authorizationresponse-allowedclientheaders): string - - [`Exact`](#arguments-config-httpservice-authorizationresponse-allowedclientheaders): string - - [`IgnoreCase`](#arguments-config-httpservice-authorizationresponse-allowedclientheaders): boolean - - [`Prefix`](#arguments-config-httpservice-authorizationresponse-allowedclientheaders): string - - [`SafeRegex`](#arguments-config-httpservice-authorizationresponse-allowedclientheaders): string - - [`Suffix`](#arguments-config-httpservice-authorizationresponse-allowedclientheaders): string - - [`AllowedClientHeadersOnSuccess`](#arguments-config-httpservice-authorizationresponse-allowedclientheadersonsuccess): list - - [`Contains`](#arguments-config-httpservice-authorizationresponse-allowedclientheadersonsuccess): string - - [`Exact`](#arguments-config-httpservice-authorizationresponse-allowedclientheadersonsuccess): string - - [`IgnoreCase`](#arguments-config-httpservice-authorizationresponse-allowedclientheadersonsuccess): boolean - - [`Prefix`](#arguments-config-httpservice-authorizationresponse-allowedclientheadersonsuccess): string - - [`SafeRegex`](#arguments-config-httpservice-authorizationresponse-allowedclientheadersonsuccess): string - - [`Suffix`](#arguments-config-httpservice-authorizationresponse-allowedclientheadersonsuccess): string - - [`DynamicMetadataFromHeaders`](#arguments-config-httpservice-authorizationresponse-dynamicmetadatafromheaders): list - - [`Contains`](#arguments-config-httpservice-authorizationresponse-dynamicmetadatafromheaders): string - - [`Exact`](#arguments-config-httpservice-authorizationresponse-dynamicmetadatafromheaders): string - - [`IgnoreCase`](#arguments-config-httpservice-authorizationresponse-dynamicmetadatafromheaders): boolean - - [`Prefix`](#arguments-config-httpservice-authorizationresponse-dynamicmetadatafromheaders): string - - [`SafeRegex`](#arguments-config-httpservice-authorizationresponse-dynamicmetadatafromheaders): string - - [`Suffix`](#arguments-config-httpservice-authorizationresponse-dynamicmetadatafromheaders): string - - [`IncludePeerCertificate`](#arguments-config-includepeercertificate): boolean | `false` - - [`MetadataContextNamespaces`](#arguments-config-metadatacontextnamespaces): list of strings | HTTP only - - [`StatusOnError`](#arguments-config-statusonerror): number | `403` | HTTP only - - [`StatPrefix`](#arguments-config-statprefix): string | `response` - - 
[`WithRequestBody`](#arguments-config-withrequestbody): map | HTTP only - - [`MaxRequestBytes`](#arguments-config-withrequestbody-maxrequestbytes): number - - [`AllowPartialMessage`](#arguments-config-withrequestbody-allowpartialmessage): boolean | `false` - - [`PackAsBytes`](#arguments-config-withrequestbody-packasbytes): boolean | `false` - -## Complete configuration - -When each field is defined, an `ext-authz` configuration has the following form: - -```hcl -Name = "builtin/ext-authz" -Arguments = { - ProxyType = "connect-proxy" - InsertOptions = { - Location = "" - FilterName = "" - } - Config = { - BootstrapMetadataLabelsKey = "" - ClearRouteCache = false // HTTP only - GrpcService = { - Target = { - Service = { - Name = "" - Namespace = "" - Partition = "" - URI = "" - Timeout = "1s" - Authority = "" - InitialMetadata = [ - "" : "" - HttpService = { - Target = { - Service = { - Name = "" - Namespace = "" - Partition = "" - URI = "" - Timeout = "1s" - } - } - PathPrefix = "//" - AuthorizationRequest = { - AllowedHeaders = [ - Contains = "", - Exact = "", - IgnoreCase = false, - Prefix = "", - SafeRegex = "" - ] - HeadersToAdd = [ - "
" = "
" - ] - } - AuthorizationResponse = { - AllowedUpstreamHeaders = [ - Contains = "", - Exact = "", - IgnoreCase = false, - Prefix = "", - SafeRegex = "" - Suffix = "" - ] - AllowedUpstreamHeadersToAppend = [ - Contains = "", - Exact = "", - IgnoreCase = false, - Prefix = "", - SafeRegex = "" - Suffix = "" - ] - AllowedClientHeaders = [ - Contains = "", - Exact = "", - IgnoreCase = false, - Prefix = "", - SafeRegex = "" - Suffix = "" - ] - AllowedClientHeadersOnSuccess = [ - Contains = "", - Exact = "", - IgnoreCase = false, - Prefix = "", - SafeRegex = "" - Suffix = "" - DynamicMetadataFromHeaders = [ - Contains = "", - Exact = "", - IgnoreCase = false, - Prefix = "", - SafeRegex = "" - Suffix = "" - ] - IncludePeerCertificate = false - MetadataContextNamespaces = [ - "" - ] - StatusOnError = 403 // HTTP only - StatPrefix = "response" - WithRequestBody = { //HTTP only - MaxRequestBytes = - AllowPartialMessage = false - PackAsBytes = false -``` - -## Specification - -This section provides details about the fields you can configure for the external authorization extension. -### `Name` - -Specifies the name of the extension. Must be set to `builtin/ext-authz`. - -#### Values - -- Default: None -- This field is required. -- Data type: String value set to `builtin/ext-authz`. - -### `Arguments` - -Contains the global configuration for the extension. - -#### Values - -- Default: None -- This field is required. -- Data type: Map - -### `Arguments.ProxyType` - -Specifies the type of Envoy proxy that this extension applies to. The extension only applies to proxies that match this type and is ignored for all other proxy types. The only supported value is `connect-proxy`. - -#### Values - -- Default: `connect-proxy` -- This field is required. -- Data type: String - -### `Arguments.ListenerType` - -Specifies the type of listener the extension applies to. The listener type is either `inbound` or `outbound`. If the listener type is set to `inbound`, Consul applies the extension so the external authorization is enabled when other services in the mesh send messages to the service attached to the proxy. If the listener type is set to `outbound`, Consul applies the extension so the external authorization is enabled when the attached proxy sends messages to other services in the mesh. - -#### Values - -- Default: `inbound` -- This field is required. -- Data type is one of the following string values: - - `inbound` - - `outbound` - -### `Arguments.InsertOptions` - -Specifies options for defining the insertion point for the external authorization filter in the Envoy filter chain. By default, the external authorization filter is inserted as the first filter in the filter chain per the default setting for the [`Location`](#arguments-insertoptions-location) field. - -#### Values - -- Default: None -- Data type: Map - -### `Arguments.InsertOptions.Location` - -Specifies the insertion point for the external authorization filter in the Envoy filter chain. You can specify one of the following string values: - -- `First`: Inserts the filter as the first filter in the filter chain, regardless of the filter specified in the `FilterName` field. -- `BeforeLast`: Inserts the filter before the last filter in the chain, regardless of the filter specified in the `FilterName` field. This allows the filter to be inserted after all other filters and immediately before the terminal filter. -- `AfterFirstMatch`: Inserts the filter after the first filter in the chain that has a name matching the value of the `FilterName` field. 
-- `AfterLastMatch`: Inserts the filter after the last filter in the chain that has a name matching the value of the `FilterName` field. -- `BeforeFirstMatch`: Inserts the filter before the first filter in the chain that has a name matching the value of the `FilterName` field. -- `BeforeLastMatch`: Inserts the filter before the last filter in the chain that has a name matching the value of the `FilterName` field. - -#### Values - -- Default: `BeforeFirstMatch` -- Data type: String - -### `Arguments.InsertOptions.FilterName` - -Specifies the name of an existing filter in the chain to match when inserting the external authorization filter. Specifying a filter name enables you to configure an insertion point relative to the position of another filter in the chain. - -#### Values - -- Default: `envoy.filters.network.tcp_proxy` for TCP services. `envoy.filters.http.router` for HTTP services. -- Data type: String - -### `Arguments.Config` - -Contains the configuration settings for the extension. - -#### Values - -- Default: None -- This field is required. -- Data type: Map - -### `Arguments.Config.BootstrapMetadataLabelsKey` - -Specifies a key from the Envoy bootstrap metadata. Envoy adds labels associated with the key to the authorization request context. - -#### Values - -- Default: None -- Data type: String - -### `Arguments.Config.ClearRouteCache` - -Directs Envoy to clear the route cache so that the external authorization service correctly affects routing decisions. If set to `true`, the filter clears all cached routes. - -Envoy also clears cached routes if the status returned from the authorization service is `200` for HTTP responses or `0` for gRPC responses. Envoy also clears cached routes if at least one authorization response header is added to the client request or is used for altering another client request header. - -#### Values - -- Default: `false` -- Data type: Boolean - - -### `Arguments.Config.GrpcService` - -Specifies the external authorization configuration for gRPC requests. Configure the `GrpcService` or the [`HttpService`](#arguments-config-httpservice) settings, but not both. - -#### Values - -- Default: None -- Either the `GrpcService` or the `HttpService` configuration is required. -- Data type: Map - -### `Arguments.Config.GrpcService.Target` - -Configuration for specifying the service to send gRPC authorization requests to. The `Target` field may contain the following fields: - -- [`Service`](#arguments-config-grpcservice-target-service) or [`Uri`](#arguments-config-grpcservice-target-uri) -- [`Timeout`](#arguments-config-grpcservice-target-timeout) - - -#### Values - -- Default: None -- This field is required. -- Data type: Map - -### `Arguments{}.Config{}.GrpcService{}.Target{}.Service{}` - -Specifies the upstream external authorization service. Configure this field when authorization requests are sent to an upstream service within the service mesh. The service must be configured as an upstream of the service that the filter is applied to. - -Configure either the `Service` field or the [`Uri`](#arguments-config-grpcservice-target-uri) field, but not both. - -#### Values - -- Default: None -- This field or [`Uri`](#arguments-config-grpcservice-target-uri) is required. -- Data type: Map - -The following table describes how to configure parameters for the `Service` field: - -| Parameter | Description | Data type | Default | -| --- | --- | --- | --- | -| `Name` | Specifies the name of the upstream service. 
| String | None | -| `Namespace` | Specifies the Consul namespace that the upstream service belongs to. | String | `default` | -| `Partition` | Specifies the Consul admin partition that the upstream service belongs to. | String | `default` | - -### `Arguments.Config.GrpcService.Target.Uri` - -Specifies the URI of the external authorization service. Configure this field when you must provide an explicit URI to the external authorization service, such as cases in which the authorization service is running on the same host or pod. If set, the value of this field must be one of `localhost:`, `127.0.0.1:`, or `::1:`. - -Configure either the `Uri` field or the [`Service`](#arguments-config-grpcservice-target-service) field, but not both. - -#### Values - -- Default: None -- This field or [`Service`](#arguments-config-grpcservice-target-service) is required. -- Data type: String - -### `Arguments.Config.GrpcService.Target.Timeout` - -Specifies the maximum duration that a response can take to arrive upon request. - -#### Values - -- Default: `1s` -- Data type: String - -### `Arguments.Config.GrpcService.Authority` - -Specifies the authority header to send in the gRPC request. If this field is not set, the authority field is set to the cluster name. This field does not override the SNI that Envoy sends to the external authorization service. - -#### Values - -- Default: Cluster name -- Data type: String - -### `Arguments.Config.GrpcService.InitialMetadata[]` - -Specifies additional metadata to include in streams initiated to the `GrpcService`. You can specify metadata for injecting additional ad-hoc authorization headers, for example, `x-foo-bar: baz-key`. For more information, including details on header value syntax, refer to the [Envoy documentation on custom request headers](https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_conn_man/headers#config-http-conn-man-headers-custom-request-headers). - -#### Values - -- Default: None -- Data type: List of one or more key-value pairs: - - - KEY: String - - VALUE: String - -### `Arguments{}.Config{}.HttpService{}` - -Contains the configuration for raw HTTP communication between the filter and the external authorization service. Configure the `HttpService` or the [`GrpcService`](#arguments-config-grpcservice) settings, but not both. - -#### Values - -- Default: None -- Either the `HttpService` or the `GrpcService` configuration is required. -- Data type: Map - -### `Arguments{}.Config{}.HttpService{}.Target{}` - -Configuration for specifying the service to send HTTP authorization requests to. The `Target` field may contain the following fields: - -- [`Service`](#arguments-config-httpservice-target-service) or [`Uri`](#arguments-config-httpservice-target-uri) -- [`Timeout`](#arguments-config-httpservice-target-timeout) - - -#### Values - -- Default: None -- This field is required. -- Data type: Map - -### `Arguments{}.Config{}.HttpService{}.Target{}.Service{}` - -Specifies the upstream external authorization service. Configure this field when HTTP authorization requests are sent to an upstream service within the service mesh. The service must be configured as an upstream of the service that the filter is applied to. - -Configure either the `Service` field or the [`Uri`](#arguments-config-httpservice-target-uri) field, but not both. - -#### Values - -- Default: None -- This field or [`Uri`](#arguments-config-httpservice-target-uri) is required. 
-- Data type: Map - -The following table describes how to configure parameters for the `Service` field: - -| Parameter | Description | Data type | Default | -| --- | --- | --- | --- | -| `Name` | Specifies the name of the upstream service. | String | None | -| `Namespace` | Specifies the Consul namespace that the upstream service belongs to. | String | `default` | -| `Partition` | Specifies the Consul admin partition that the upstream service belongs to. | String | `default` | - -### `Arguments{}.Config{}.HttpService{}.Target{}.Uri` - -Specifies the URI of the external authorization service. Configure this field when you must provide an explicit URI to the external authorization service, such as cases in which the authorization service is running on the same host or pod. If set, the value of this field must be one of `localhost:`, `127.0.0.1:`, or `::1:`. - -Configure either the `Uri` field or the [`Service`](#arguments-config-httpservice-target-service) field, but not both. - -#### Values - -- Default: None -- This field or [`Service`](#arguments-config-httpservice-target-service) is required. -- Data type: String - -### `Arguments{}.Config{}.HttpService{}.Target{}.Timeout` - -Specifies the maximum duration that a response can take to arrive upon request. - -#### Values - -- Default: `1s` -- Data type: String - -### `Arguments{}.Config{}.HttpService{}.PathPrefix` - -Specifies a prefix for the value of the authorization request header `Path`. You must include the preceding forward slash (`/`). - -#### Values - -- Default: None -- Data type: String - -### `Arguments{}.Config{}.HttpService{}.AuthorizationRequest{}` - -HTTP-only configuration that controls the HTTP authorization request metadata. The `AuthorizationRequest` field may contain the following parameters: - -- [`AllowHeaders`](#arguments-config-httpservice-authorizationrequest-allowheaders) -- [`HeadersToAdd`](#arguments-config-httpservice-authorizationrequest-headerstoadd) - -#### Values - -- Default: None -- Data type: Map - -### `Arguments{}.Config{}.HttpService{}.AuthorizationRequest{}.AllowHeaders[]` - -Specifies a set of rules for matching client request headers. The request to the external authorization service includes any client request headers that satisfy any of the rules. Refer to the [Envoy documentation](https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/filters/http/ext_authz/v3/ext_authz.proto#extensions-filters-http-ext-authz-v3-extauthz) for a detailed explanation. - -#### Values - -- Default: None -- Data type: List of key-value pairs - -The following table describes the matching rules you can configure in the `AllowHeaders` field: - -@include 'envoy_ext_rule_matcher.mdx' - -### `Arguments{}.Config{}.HttpService{}.AuthorizationRequest{}.HeadersToAdd[]` - -Specifies a list of headers to include in the request to the authorization service. Note that Envoy overwrites client request headers with the same key. - -#### Values - -- Default: None -- Data type: List of one or more key-value pairs: - - - KEY: String - - VALUE: String - -### `Arguments{}.Config{}.HttpService{}.AuthorizationResponse{}` - -HTTP-only configuration that controls HTTP authorization response metadata. 
The `AuthorizationResponse` field may contain the following parameters: - -- [`AllowedUpstreamHeaders`](#arguments-config-httpservice-authorizationresponse-allowedupstreamheaders) -- [`AllowedUpstreamHeadersToAppend`](#arguments-config-httpservice-authorizationresponse-allowedupstreamheaderstoappend) -- [`AllowedClientHeaders`](#arguments-config-httpservice-authorizationresponse-allowedclientheaders) -- [`AllowedClientHeadersOnSuccess`](#arguments-config-httpservice-authorizationresponse-allowedclientheadersonsuccess) -- [`DynamicMetadataFromHeaders`](#arguments-config-httpservice-authorizationresponse-dynamicmetadatafromheaders) - -#### Values - -- Default: None -- Data type: Map - -### `Arguments{}.Config{}.HttpService{}.AuthorizationResponse{}.AllowedUpstreamHeaders[]` - -Specifies a set of rules for matching authorization response headers. Envoy adds any headers from the external authorization service to the client response that satisfy the rules. Envoy overwrites existing headers. - -#### Values - -- Default: None -- Data type: Map - -The following table describes the matching rules you can configure in the `AllowedUpstreamHeaders` field: - -@include 'envoy_ext_rule_matcher.mdx' - -### `Arguments{}.Config{}.HttpService{}.AuthorizationResponse{}.AllowedUpstreamHeadersToAppend[]` - -Specifies a set of rules for matching authorization response headers. Envoy appends any headers from the external authorization service to the client response that satisfy the rules. Envoy appends existing headers. - -#### Values - -- Default: None -- Data type: Map - -The following table describes the matching rules you can configure in the `AllowedUpstreamHeadersToAppend` field: - -@include 'envoy_ext_rule_matcher.mdx' - -### `Arguments{}.Config{}.HttpService{}.AuthorizationResponse{}.AllowedClientHeaders[]` - -Specifies a set of rules for matching client response headers. Envoy adds any headers from the external authorization service to the client response that satisfy the rules. When the list is not set, Envoy includes all authorization response headers except `Authority (Host)`. When a header is included in this list, Envoy automatically adds the following headers: - -- `Path` -- `Status` -- `Content-Length` -- `WWWAuthenticate` -- `Location` - -#### Values - -- Default: None -- Data type: Map - -The following table describes the matching rules you can configure in the `AllowedClientHeaders` field: - -@include 'envoy_ext_rule_matcher.mdx' - -### `Arguments{}.Config{}.HttpService{}.AuthorizationResponse{}.AllowedClientHeadersOnSuccess[]` - -Specifies a set of rules for matching client response headers. Envoy adds headers from the external authorization service to the client response when the headers satisfy the rules and the authorization is successful. If the headers match the rules but the authorization fails or is denied, the headers are not added. If this field is not set, Envoy does not add any additional headers to the client's response on success. - -#### Values - -- Default: None -- Data type: Map - -The following table describes the matching rules you can configure in the `AllowedClientHeadersOnSuccess` field: - -@include 'envoy_ext_rule_matcher.mdx' - -### `Arguments{}.Config{}.HttpService{}.AuthorizationResponse{}.DynamicMetadataFromHeaders[]` - -Specifies a set of rules for matching authorization response headers. Envoy emits headers from the external authorization service as dynamic metadata that the next filter in the chain can consume. 
- -#### Values - -- Default: None -- Data type: Map - -The following table describes the matching rules you can configure in the `DynamicMetadataFromHeaders` field: - -@include 'envoy_ext_rule_matcher.mdx' - -### `Arguments{}.Config{}.IncludePeerCertificate` - -If set to `true`, Envoy includes the peer X.509 certificate in the authorization request if the certificate is available. - -#### Values - -- Default: `false` -- Data type: Boolean - -### `Arguments{}.Config{}.MetadataContextNamespace[]` - -HTTP only field that specifies a list of metadata namespaces. The values of the namespaces are included in the authorization request context. The `consul` namespace is always included in addition to the namespaces you configure. - -#### Values - -- Default: `["consul"]` -- Data type: List of string values - -### `Arguments{}.Config{}.StatusOnError` - -HTTP only field that specifies a return code status to respond with on error. Refer to the [Envoy documentation](https://www.envoyproxy.io/docs/envoy/latest/api-v3/type/v3/http_status.proto#enum-type-v3-statuscode) for additional information. - -#### Values - -- Default: `403` -- Data type: Integer - -### `Arguments{}.Config{}.StatPrefix` - -Specifies a prefix to add when writing statistics. - -#### Values - -- Default: `response` -- Data type: String - -### `Arguments{}.Config{}.WithRequestBody{}` - -HTTP only field that configures Envoy to buffer the client request body and send it with the authorization request. If unset, the request body is not sent with the authorization request. - -#### Values - -- Default: None -- Data type: Map - -The following table describes the parameters that you can include in the `WithRequestBody` field: - -| Parameter | Description | Data type | Default | -| --- | --- | --- | --- | -| `MaxRequestBytes` | Specifies the maximum size of the message body that the filter holds in memory. Envoy returns HTTP `403` and does not initiate the authorization process when the buffer reaches the number set in this field unless `AllowPartialMessage` is set to `true`. | uint32 | None | -| `AllowPartialMessage` | If set to `true`, Envoy buffers the request body until the value of `MaxRequestBytes` is reached. The authorization request is dispatched with a partial body and no `413` HTTP error returns by the filter. | Boolean | `false` | -| `PackAsBytes` | If set to `true`, Envoy sends the request body to the external authorization as raw bytes. Otherwise, Envoy sends the request body as a UTF-8 encoded string. | Boolean | `false` | - -## Examples - -The following examples demonstrate common configuration patterns for specific use cases. - -### Authorize gRPC requests to a URI - -In the following example, a service defaults configuration entry contains an `ext-authz` configuration. The configuration allows the `api` service to make gRPC authorization requests to a service at `localhost:9191`: - -```hcl -Kind = "service-defaults" -Name = "api" -EnvoyExtensions = [ - { - Name = "builtin/ext-authz" - Arguments = { - ProxyType = "connect-proxy" - Config = { - GrpcService = { - Target = { - URI = "127.0.0.1:9191" - } - } - } - } - } -] -``` - -### Upstream authorization - -In the following example, a service defaults configuration entry contains an `ext-authz` configuration. 
The configuration allows the `api` service to make gRPC authorization requests to a service named `authz`: - -```hcl -Kind = "service-defaults" -Name = "api" -EnvoyExtensions = [ - { - Name = "builtin/ext-authz" - Arguments = { - ProxyType = "connect-proxy" - Config = { - GrpcService = { - Target = { - Service = { - Name = "authz" - } - } - } - } - } - } -] -``` - -### Authorization requests after service intentions for Consul Enterprise - -In the following example for Consul Enterprise, the `api` service is configured to make an HTTP authorization requests to a service named `authz` in the `foo` namespace and `bar` partition. Envoy also inserts the external authorization filter after the `envoy.filters.http.rbac` filter: - -```hcl -Kind = "service-defaults" -Name = "api" -Protocol = "http" -EnvoyExtensions = [ - { - Name = "builtin/ext-authz" - Arguments = { - ProxyType = "connect-proxy" - InsertOptions = { - Location = "AfterLastMatch" - FilterName = "envoy.filters.http.rbac" - } - Config = { - HttpService = { - Target = { - Service = { - Name = "authz" - Namespace = "foo" - Partition = "bar" - } - } - } - } - } - } -] -``` diff --git a/website/content/docs/connect/proxies/envoy-extensions/configuration/otel-access-logging.mdx b/website/content/docs/connect/proxies/envoy-extensions/configuration/otel-access-logging.mdx deleted file mode 100644 index 9cef3a563650..000000000000 --- a/website/content/docs/connect/proxies/envoy-extensions/configuration/otel-access-logging.mdx +++ /dev/null @@ -1,390 +0,0 @@ ---- -layout: docs -page_title: OpenTelemetry Access Logging extension configuration reference -description: Learn how to configure the otel-access-logging Envoy extension, which is a builtin Consul extension that configures Envoy proxies to send access logs to OpenTelemetry collector service. ---- - -# OpenTelemetry Access Logging extension configuration reference - -This topic describes how to configure the OpenTelemetry access logging Envoy extension, which configures Envoy proxies to send access logs to OpenTelemetry collector service. Refer to [Send access logs to OpenTelemetry collector service](/consul/docs/connect/proxies/envoy-extensions/usage/otel-access-logging) for usage information. - -## Configuration model - -The following list outlines the field hierarchy, data types, and requirements for the OpenTelemetry access logging configuration. Place the configuration inside the `EnvoyExtension.Arguments` field in the proxy defaults or service defaults configuration entry. Refer to the following documentation for additional information: - -- [`EnvoyExtensions` in proxy defaults](/consul/docs/connect/config-entries/proxy-defaults#envoyextensions) -- [`EnvoyExtensions` in service defaults](/consul/docs/connect/config-entries/service-defaults#envoyextensions) -- [Envoy OpenTelemetry Access Logging Configuration documentation](https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/access_loggers/open_telemetry/v3/logs_service.proto#extensions-access-loggers-open-telemetry-v3-opentelemetryaccesslogconfig) - -Click on a property name to view additional details, including default values. 
- -- [`Name`](#name): string | required | must be set to `builtin/otel-access-logging` -- [`Arguments`](#arguments): map | required - - [`ProxyType`](#arguments-proxytype): string | required | `connect-proxy` - - [`ListenerType`](#arguments-listenertype): string | required | `inbound` - - [`Config`](#arguments-config): map | required - - [`LogName`](#arguments-config-logname): string - - [`GrpcService`](#arguments-config-grpcservice): map - - [`Target`](#arguments-config-grpcservice-target): map | required - - [`Service`](#arguments-config-grpcservice-target-service): map - - [`Name`](#arguments-config-grpcservice-target-service): string - - [`Namespace`](#arguments-config-grpcservice-target-service): string | - - [`Partition`](#arguments-config-grpcservice-target-service): string | - - [`URI`](#arguments-config-grpcservice-target-uri): string - - [`Timeout`](#arguments-config-grpcservice-target-timeout): string | `1s` - - [`Authority`](#arguments-config-grpcservice-authority): string - - [`InitialMetadata`](#arguments-config-grpcservice-initialmetadata): list - - [`Key`](#arguments-config-grpcservice-initialmetadata): string - - [`Value`](#arguments-config-grpcservice-initialmetadata): string - - [`BufferFlushInterval`](#arguments-config-bufferflushinterval): string - - [`BufferSizeBytes`](#arguments-config-buffersizebytes): number - - [`FilterStateObjectsToLog`](#arguments-config-filterstateobjectstolog): list of strings - - [`RetryPolicy`](#arguments-config-retrypolicy): map - - [`RetryBackOff`](#arguments-config-retrypolicy-retrybackoff): map - - [`BaseInterval`](#arguments-config-retrypolicy-retrybackoff): string | `1s` - - [`MaxInterval`](#arguments-config-retrypolicy-retrybackoff): string | `30s` - - [`NumRetries`](#arguments-config-retrypolicy-numretries): number - - [`Body`](#arguments-config-body): string, number, boolean or list of bytes - - [`Attributes`](#arguments-config-attributes): map of string to string, number, boolean or list of bytes - - [`ResourceAttributes`](#arguments-config-resourceattributes): map of string to string, number, boolean or list of bytes - -## Complete configuration - -When each field is defined, an `otel-access-logging` configuration has the following form: - -```hcl -Name = "builtin/otel-access-logging" -Arguments = { - ProxyType = "connect-proxy" - ListenerType = "" - Config = { - LogName = "" - GrpcService = { - Target = { - Service = { - Name = "" - Namespace = "" - Partition = "" - } - URI = "" - Timeout = "1s" - } - Authority = "" - InitialMetadata = [ - "" : "" - ] - } - BufferFlushInterval = "1s" - BufferSizeBytes = 16384 - FilterStateObjectsToLog = [ - "Additional filter state objects to log in filter_state_objects" - ] - RetryPolicy = { - RetryBackOff = { - BaseInterval = "1s" - MaxInterval = "30s" - } - NumRetries = - } - Body = "Log Request Body" - Attributes = { - "" : "" - } - ResourceAttributes = { - "" : "" - } -``` - -## Specification - -This section provides details about the fields you can configure for the OpenTelemetry Access Logging extension. -### `Name` - -Specifies the name of the extension. Must be set to `builtin/otel-access-logging`. - -#### Values - -- Default: None -- This field is required. -- Data type: String value set to `builtin/otel-access-logging`. - -### `Arguments` - -Contains the global configuration for the extension. - -#### Values - -- Default: None -- This field is required. -- Data type: Map - -### `Arguments.ProxyType` - -Specifies the type of Envoy proxy that this extension applies to. 
The extension only applies to proxies that match this type and is ignored for all other proxy types. The only supported value is `connect-proxy`. - -#### Values - -- Default: `connect-proxy` -- This field is required. -- Data type: String - -### `Arguments.ListenerType` - -Specifies the type of listener the extension applies to. The listener type is either `inbound` or `outbound`. If the listener type is set to `inbound`, Consul applies the extension so the access logging is enabled when other services in the mesh send messages to the service attached to the proxy. If the listener type is set to `outbound`, Consul applies the extension so the access logging is enabled when the attached proxy sends messages to other services in the mesh. - -#### Values - -- Default: `inbound` -- This field is required. -- Data type is one of the following string values: - - `inbound` - - `outbound` - -### `Arguments.Config` - -Contains the configuration settings for the extension. - -#### Values - -- Default: None -- This field is required. -- Data type: Map - -### `Arguments.Config.LogName` - -Specifies the user-readable name of the access log to be returned in `StreamAccessLogsMessage.Identifier`. This allows the access log server to differentiate between different access logs coming from the same Envoy. If you leave it empty, it inherits the value from `ListenerType`. - -#### Values - -- Default: None -- Data type: String - -### `Arguments.Config.GrpcService` - -Specifies the OpenTelemetry Access Logging configuration for gRPC requests. - -#### Values - -- Default: None -- This field is required. -- Data type: Map - -### `Arguments.Config.GrpcService.Target` - -Configuration for specifying the service to send gRPC access logging requests to. The `Target` field may contain the following fields: - -- [`Service`](#arguments-config-grpcservice-target-service) or [`Uri`](#arguments-config-grpcservice-target-uri) -- [`Timeout`](#arguments-config-grpcservice-target-timeout) - -#### Values - -- Default: None -- This field is required. -- Data type: Map - -### `Arguments.Config.GrpcService.Target.Service` - -Specifies the upstream OpenTelemetry collector service. Configure this field when access logging requests are sent to an upstream service within the service mesh. The service must be configured as an upstream of the service that the filter is applied to. - -Configure either the `Service` field or the [`Uri`](#arguments-config-grpcservice-target-uri) field, but not both. - -#### Values - -- Default: None -- This field or [`Uri`](#arguments-config-grpcservice-target-uri) is required. -- Data type: Map - -The following table describes how to configure parameters for the `Service` field: - -| Parameter | Description | Data type | Default | -| ----------- | ---------------------------------------------------------------------------------------------------- | --------- | --------- | -| `Name` | Specifies the name of the upstream service. | String | None | -| `Namespace` | Specifies the Consul namespace that the upstream service belongs to. | String | `default` | -| `Partition` | Specifies the Consul admin partition that the upstream service belongs to. | String | `default` | - -### `Arguments.Config.GrpcService.Target.Uri` - -Specifies the URI of the OpenTelemetry collector service. Configure this field when you must provide an explicit URI to the OpenTelemetry collector service, such as cases in which the access logging service is running on the same host or pod. 
If set, the value of this field must be one of `localhost:`, `127.0.0.1:`, or `::1:`. - -Configure either the `Uri` field or the [`Service`](#arguments-config-grpcservice-target-service) field, but not both. - -#### Values - -- Default: None -- This field or [`Service`](#arguments-config-grpcservice-target-service) is required. -- Data type: String - -### `Arguments.Config.GrpcService.Target.Timeout` - -Specifies the maximum duration that a response can take to arrive upon request. - -#### Values - -- Default: `1s` -- Data type: String - -### `Arguments.Config.GrpcService.Authority` - -Specifies the authority header to send in the gRPC request. If this field is not set, the authority field is set to the cluster name. This field does not override the SNI that Envoy sends to the OpenTelemetry collector service. - -#### Values - -- Default: Cluster name -- Data type: String - -### `Arguments.Config.GrpcService.InitialMetadata` - -Specifies additional metadata to include in streams initiated to the `GrpcService`. You can specify metadata for injecting additional ad-hoc authorization headers, for example, `x-foo-bar: baz-key`. For more information, including details on header value syntax, refer to the [Envoy documentation on custom request headers](https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_conn_man/headers#config-http-conn-man-headers-custom-request-headers). - -#### Values - -- Default: None -- Data type: List of one or more key-value pairs: - - - KEY: String - - VALUE: String - -### `Arguments.Config.BufferFlushInterval` - -Specifies an interval for flushing access logs to the gRPC stream. The logger flushes requests at the end of every interval or when the log reaches the batch size limit, whichever comes first. - -#### Values - -- Default: `1s` -- Data type: String - -### `Arguments.Config.BufferSizeBytes` - -Specifies the soft size limit in bytes for the access log entries buffer. The logger buffers requests until it reaches this limit or every time the flush interval elapses, whichever comes first. Set this field to `0` to disable batching. - -#### Values - -- Default: `16384` -- Data type: Integer - -### `Arguments.Config.FilterStateObjectsToLog` - -Specifies additional filter state objects to log in `filter_state_objects`. The logger calls `FilterState::Object::serializeAsProto` to serialize the filter state object. - -#### Values - -- Default: None -- Data type: List of String - -### `Arguments.Config.RetryPolicy` - -Defines a policy for retrying requests to the upstream service when fetching the plugin data. The `RetryPolicy` field is a map containing the following parameters: - -- [`RetryBackoff`](#pluginconfig-vmconfig-code-remote-retrypolicy) -- [`NumRetries`](#pluginconfig-vmconfig-code-remote-numretries) - -#### Values - -- Default: None -- Data type: Map - -### `Arguments.Config.RetryPolicy.RetryBackOff` - -Specifies parameters that control retry backoff strategy. - -#### Values - -- Default: None -- Data type: Map - -The following table describes the fields you can specify in the `RetryBackOff` map: - -| Parameter | Description | Data type | Default | -| -------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------- | --------- | ------- | -| `BaseInterval` | Specifies the base interval for determining the next backoff computation. Set a value greater than `0` and less than or equal to the `MaxInterval` value. 
| String | `1s` | -| `MaxInterval` | Specifies the maximum interval between retries. Set the value greater than or equal to the `BaseInterval` value. | String | `10s` | - -### `Arguments.Config.RetryPolicy.NumRetries` - -Specifies the number of times Envoy retries the access logging request if the initial attempt is unsuccessful. - -#### Values - -- Default: `1` -- Data type: Integer - -### `Arguments.Config.Body` - -Specifies OpenTelemetry [LogResource](https://github.com/open-telemetry/opentelemetry-proto/blob/main/opentelemetry/proto/logs/v1/logs.proto) fields, following [Envoy access logging formatting](https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage). Refer to `body` in the LogResource proto for more details. - -#### Values - -- Default: None -- Data type: String - -### `Arguments.Config.Attributes` - -Specifies `attributes` in the [LogResource](https://github.com/open-telemetry/opentelemetry-proto/blob/main/opentelemetry/proto/logs/v1/logs.proto). Refer to `attributes` in the LogResource proto for more details. - -#### Values - -- Default: None -- Data type: Map - -### `Arguments.Config.ResourceAttributes` - -Specifies OpenTelemetry [Resource](https://github.com/open-telemetry/opentelemetry-proto/blob/main/opentelemetry/proto/logs/v1/logs.proto#L51) attributes that are filled with Envoy node information. - -#### Values - -- Default: None -- Data type: Map - -## Examples - -The following examples demonstrate common configuration patterns for specific use cases. - -### OpenTelemetry Access Logging requests to URI - -In the following example, a service defaults configuration entry contains an `otel-access-logging` configuration. The configuration allows the `api` service to make gRPC OpenTelemetry Access Logging requests to a service at `127.0.0.1:9191`: - -```hcl -Kind = "service-defaults" -Name = "api" -EnvoyExtensions = [ - { - Name = "builtin/otel-access-logging" - Arguments = { - ProxyType = "connect-proxy" - Config = { - GrpcService = { - Target = { - URI = "127.0.0.1:9191" - } - } - } - } - } -] -``` - -### Upstream OpenTelemetry Access Logging - -In the following example, a service defaults configuration entry contains an `otel-access-logging` configuration. The configuration allows the `api` service to make gRPC OpenTelemetry Access Logging requests to a service named `otel-collector`: - -```hcl -Kind = "service-defaults" -Name = "api" -EnvoyExtensions = [ - { - Name = "builtin/otel-access-logging" - Arguments = { - ProxyType = "connect-proxy" - Config = { - GrpcService = { - Target = { - Service = { - Name = "otel-collector" - } - } - } - } - } - } -] -``` diff --git a/website/content/docs/connect/proxies/envoy-extensions/configuration/property-override.mdx b/website/content/docs/connect/proxies/envoy-extensions/configuration/property-override.mdx deleted file mode 100644 index 610371b303da..000000000000 --- a/website/content/docs/connect/proxies/envoy-extensions/configuration/property-override.mdx +++ /dev/null @@ -1,280 +0,0 @@ ---- -layout: docs -page_title: Property override configuration reference -description: Learn how to configure the property-override plugin, which is a builtin Consul plugin that allows you to set and remove Envoy proxy properties. ---- - -# Property override configuration reference - -This topic describes how to configure the `property-override` extension so that you can set and remove individual properties on the Envoy resources Consul generates. 
Refer to [Configure Envoy proxy properties](/consul/docs/connect/proxies/envoy-extensions/usage/property-override) for usage information. - -## Configuration model - -The following list outlines the field hierarchy, data types, and requirements for the `property-override` configuration. Place the configuration inside the `EnvoyExtension.Arguments` field in the proxy defaults or service defaults configuration entry. Refer the following documentation for additional information: - -- [`EnvoyExtensions` in proxy defaults](/consul/docs/connect/config-entries/proxy-defaults#envoyextensions) -- [`EnvoyExtensions` in service defaults](/consul/docs/connect/config-entries/service-defaults#envoyextensions) - -Click on a property name to view additional details, including default values. - -- [`ProxyType`](#proxytype): string | `connect-proxy` -- [`Debug`](#debug): bool | `false` -- [`Patches`](#patches): list | required - - [`ResourceFilter`](#patches-resourcefilter): map - - [`ResourceType`](#patches-resourcefilter-resourcetype): string | required - - [`TrafficDirection`](#patches-resourcefilter-trafficdirection): string | required - - [`Services`](#patches-resourcefilter-services): list - - [`Name`](#patches-resourcefilter-services-name): string - - [`Namespace`](#patches-resourcefilter-services-namespace): string | `default` | - - [`Partition`](#patches-resourcefilter-services-partition): string | `default` | - - [`Op`](#patches-op): string | required - - [`Path`](#patches-path): string | required - - [`Value`](#patches-value): map, number, boolean, or string - -## Complete configuration - -When each field is defined, a `property-override` configuration has the following form: - - -```hcl -ProxyType = "connect-proxy" -Debug = false -Patches = [ - { - ResourceFilter = { - ResourceType = "" - TrafficDirection = "" - Services = [ - { - Name = "" - Namespace = "" - Partition = "" - } - ] - } - Op = "" - Path = "" - Value = "" - } -] -``` - -## Specification - -This section provides details about the fields you can configure for the `property-override` extension. - -### `ProxyType` - -Specifies the type of Envoy proxy that the extension applies to. The only supported value is `connect-proxy`. - -#### Values - -- Default: `connect-proxy` -- Data type: String - -### `Debug` - -Enables full debug mode. When `Debug` is set to `true`, all possible fields for the given `ResourceType` and first unmatched segment of `Path` are returned on error. When set to `false`, the error message only includes the first ten possible fields. - -#### Values - -- Default: `false` -- Data type: Boolean - -### `Patches[]` - -Specifies a list of one or more JSON Patches that map to the Envoy proxy configurations you want to modify. Refer to [IETF RFC 6902](https://datatracker.ietf.org/doc/html/rfc6902/) for information about the JSON Patch specification. - -#### Values - -- Default: None -- The `Patches` parameter is a list of configurations in JSON Patch format. Each patch can contain the following fields: - - [`ResourceFilter`](#patches-resourcefilter) - - [`Op`](#patches-op) - - [`Path`](#patches-path) - - [`Value`](#patches-value) - - -### `Patches[].ResourceFilter{}` - -Specifies the filter for targeting specific Envoy resources. The `ResourceFilter` configuration is not part of the JSON Patch specification. - -#### Values - -- Default: None -- This field is required. 
-- Data type: Map - -The following table describes how to configure a `ResourceFilter`: - -| Parameter | Description | Type | -| --- | --- | --- | -| `ProxyType` | Specifies the proxy type that the extension applies to. The only supported value is `connect-proxy`. | String | -| `ResourceType` | Specifies the Envoy resource type that the extension applies to. You can specify one of the following values for each `ResourceFilter`: <br/>• `cluster` <br/>• `cluster-load-assignment` <br/>• `route` <br/>• `listener` | String | -| `TrafficDirection` | Specifies the type of traffic that the extension applies to relative to the current proxy. You can specify one of the following values for each `ResourceFilter`: <br/>• `inbound`: Targets resources for the proxy's inbound traffic. <br/>• `outbound`: Targets resources for the proxy's upstream services. | String | -| `Services` | Specifies a list of services to target. Each member of the list has the following fields: <br/>• `Name`: Specifies the service associated with the traffic. <br/>• `Namespace`: Specifies the Consul Enterprise namespace the service is in. <br/>• `Partition`: Specifies the Consul Enterprise admin partition the service is in. <br/>If `TrafficDirection` is set to `outbound`, upstream services in this field correspond to local Envoy resources that Consul patches at runtime. <br/><br/>Do not configure the `Services` field if `TrafficDirection` is set to `inbound`. <br/><br/>If this field is not set, Envoy targets all applicable resources. When patching outbound listeners, the patch includes the outbound transparent proxy listener only if `Services` is unset and if the local service is in transparent proxy mode. | List of maps | - -### `Patches[].Op` - -Specifies the JSON Patch operation to perform when the `ResourceFilter` matches a local Envoy proxy configuration. You can specify one of the following values for each patch: - -- `add`: Replaces a property or message specified by [`Path`](#patches-path) with the given value. The JSON Patch `add` operation does not merge objects. To emulate merges, you must configure discrete `add` operations for each changed field. Consul returns an error if the target field does not exist in the corresponding schema. -- `remove`: Unsets the value of the field specified by [`Path`](#patches-path). If the field is not set, no changes are made. Consul returns an error if the target field does not exist in the corresponding schema. - -#### Values - -- Default: None -- This field is required. -- Data type is one of the following string values: - - `add` - - `remove` - -### `Patches[].Path` - -Specifies where the extension performs the associated operation on the specified resource type. Refer to [`ResourceType`](#patches-resourcefilter) for information about specifying a resource type to target. Refer to [`Op`](#patches-op) for information about setting an operation to perform on the resources. - -The `Path` field does not support addressing array elements or protobuf map field entries. Refer to [Constructing paths](/consul/docs/connect/proxies/envoy-extensions/usage/property-override#constructing-paths) for information about how to construct paths. - -When setting fields, the extension sets any unset intermediate fields to their default values. A single operation on a nested field can set multiple intermediate fields. Because Consul sets the intermediate fields to their default values, you may need to configure subsequent patches to satisfy Envoy or Consul validation. - -#### Values - -- Default: None -- This field is required. -- Data type: String - -### `Patches[].Value{}` - -Defines a value to set at the specified [path](#patches-path) if the [operation](#patches-op) is set to `add`. You can specify either a scalar or enum value, an array of scalar or enum values (for repeated fields), or define a map that contains string keys and values corresponding to scalar or enum child fields. Single and repeated scalar and enum values are supported. Refer to the [example configurations](#examples) for additional guidance and to the [Envoy API documentation](https://www.envoyproxy.io/docs/envoy/latest/api-v3/api) for additional information about Envoy proxy interfaces. - -If Envoy specifies a wrapper as the target field type, the extension automatically coerces simple values to the wrapped type when patching. For example, the value `32768` is allowed when targeting a cluster's `per_connection_buffer_limit_bytes`, which is a `UInt32Value` field. Refer to the [protobuf documentation](https://github.com/protocolbuffers/protobuf/blob/main/src/google/protobuf/wrappers.proto) for additional information about wrappers. - -#### Values - -- Default: None - -- This field is required if [`Op`](#patches-op) is set to `add`, otherwise you must omit the field. -- This field takes one of the following data types: - - scalar - - enum - - map - -## Examples - -The following examples demonstrate patterns that you may be able to model your configurations on. 
- -### Enable `respect_dns_ttl` in a cluster - -In the following example, the `add` operation patches the outbound cluster corresponding to the `other-svc` upstream service to enable `respect_dns_ttl`. The `Path` specifies the [Cluster `/respect_dns_ttl`](https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/cluster/v3/cluster.proto#envoy-v3-api-field-config-cluster-v3-cluster-respect-dns-ttl) top-level field and `Value` specifies a value of `true`: - -```hcl -Kind = "service-defaults" -Name = "my-svc" -Protocol = "http" -EnvoyExtensions = [ - { - Name = "builtin/property-override", - Arguments = { - ProxyType = "connect-proxy", - Patches = [ - { - ResourceFilter = { - ResourceType = "cluster" - TrafficDirection = "outbound" - Service = { - Name = "other-svc" - } - } - Op = "add" - Path = "/respect_dns_ttl" - Value = true - } - ] - } - } -] -``` - -### Update multiple values in a message field - -In the following example, both `ResourceFilter` blocks target the cluster corresponding to the `other-svc` upstream service and modify [Cluster `/outlier_detection`](https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/cluster/v3/outlier_detection.proto) properties: - -```hcl -Kind = "service-defaults" -Name = "my-svc" -Protocol = "http" -EnvoyExtensions = [ - { - Name = "builtin/property-override", - Arguments = { - ProxyType = "connect-proxy", - Patches = [ - { - ResourceFilter = { - ResourceType = "cluster" - TrafficDirection = "outbound" - Services = [{ - Name = "other-svc" - }] - } - Op = "add" - Path = "/outlier_detection/max_ejection_time/seconds" - Value = 120 - }, - { - ResourceFilter = { - ResourceType = "cluster" - TrafficDirection = "outbound" - Services = [{ - Name = "other-svc" - }] - } - Op = "add" - Path = "/outlier_detection/max_ejection_time_jitter/seconds" - Value = 1 - } - ] - } - } -] -``` - -The use of `/seconds` in these examples corresponds to the same field in the [google.protobuf.Duration](https://github.com/protocolbuffers/protobuf/blob/main/src/google/protobuf/duration.proto) proto definition, since the extension does not support JSON serialized string forms of common protobuf types (e.g. `120s`). - --> **Note:** Using separate patches per field preserves any existing configuration of other fields in `outlier_detection` that may be directly set by Consul, such as [`enforcing_consecutive_5xx`](https://developer.hashicorp.com/consul/docs/connect/proxies/envoy#enforcing_consecutive_5xx). - -### Replace a message field - -In the following example, a `ResourceFilter` targets the cluster corresponding to the `other-svc` upstream service and _replaces_ the entire map of properties located at `/outlier_detection`, including explicitly set `enforcing_success_rate` and `success_rate_minimum_hosts` properties: - -```hcl -Kind = "service-defaults" -Name = "my-svc" -Protocol = "http" -EnvoyExtensions = [ - { - Name = "builtin/property-override" - Arguments = { - ProxyType = "connect-proxy" - Patches = [ - { - ResourceFilter = { - ResourceType = "cluster" - TrafficDirection = "outbound" - Services = [{ - Name = "other-svc" - }] - } - Op = "add" - Path = "/outlier_detection" - Value = { - "enforcing_success_rate" = 80 - "success_rate_minimum_hosts" = 2 - } - } - ] - } - } -] -``` - -Unlike the previous example, other `/outlier_detection` values set by Consul will _not_ be retained unless they match Envoy's defaults, because the entire value of `/outlier_detection` will be replaced. 
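### Combine `add` and `remove` operations

The reference above also describes the `remove` operation, but the previous examples only use `add`. The following configuration is a minimal sketch rather than a tested configuration: it reuses the `my-svc` and `other-svc` service names from the previous examples, sets the wrapped `per_connection_buffer_limit_bytes` field mentioned in [`Patches[].Value{}`](#patches-value), and unsets `respect_dns_ttl` on the same outbound cluster. Because the second patch uses the `remove` operation, it omits the `Value` field.

```hcl
Kind = "service-defaults"
Name = "my-svc"
Protocol = "http"
EnvoyExtensions = [
  {
    Name = "builtin/property-override",
    Arguments = {
      ProxyType = "connect-proxy",
      Patches = [
        {
          ResourceFilter = {
            ResourceType     = "cluster"
            TrafficDirection = "outbound"
            Services = [{
              Name = "other-svc"
            }]
          }
          # The plain integer is coerced into Envoy's UInt32Value wrapper type.
          Op    = "add"
          Path  = "/per_connection_buffer_limit_bytes"
          Value = 32768
        },
        {
          ResourceFilter = {
            ResourceType     = "cluster"
            TrafficDirection = "outbound"
            Services = [{
              Name = "other-svc"
            }]
          }
          # `remove` unsets the field, so no `Value` is provided.
          Op   = "remove"
          Path = "/respect_dns_ttl"
        }
      ]
    }
  }
]
```

If `respect_dns_ttl` is not already set on the target cluster, the `remove` patch makes no changes.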
diff --git a/website/content/docs/connect/proxies/envoy-extensions/configuration/wasm.mdx b/website/content/docs/connect/proxies/envoy-extensions/configuration/wasm.mdx deleted file mode 100644 index 6884112aba31..000000000000 --- a/website/content/docs/connect/proxies/envoy-extensions/configuration/wasm.mdx +++ /dev/null @@ -1,484 +0,0 @@ ---- -layout: docs -page_title: WebAssembly extension configuration reference -description: Learn how to configure the wasm Envoy extension, which is a builtin Consul extension that allows you to run WebAssembly plugins in Envoy proxies. ---- - -# WebAssembly extension configuration reference - -This topic describes how to configure the `wasm` extension, which directs Consul to run WebAssembly (Wasm) plugins in Envoy proxies. Refer to [Run WebAssembly plug-ins in Envoy proxy](/consul/docs/connect/proxies/envoy-extensions/usage/wasm) for usage information. - -## Configuration model - -The following list outlines the field hierarchy, data types, and requirements for the `wasm` configuration. Place the configuration inside the `EnvoyExtension.Arguments` field in the proxy defaults or service defaults configuration entry. Refer the following documentation for additional information: - -- [`EnvoyExtensions` in proxy defaults](/consul/docs/connect/config-entries/proxy-defaults#envoyextensions) -- [`EnvoyExtensions` in service defaults](/consul/docs/connect/config-entries/service-defaults#envoyextensions) - -Click on a property name to view additional details, including default values. - -- [`Protocol`](#protocol): string -- [`ListenerType`](#listenertype): string | required -- [`ProxyType`](#proxytype): string | `connect-proxy` -- [`PluginConfig`](#pluginconfig): map | required - - [`Name`](#pluginconfig-name): string - - [`RootID`](#pluginconfig-rootid): string | required - - [`VmConfig`](#pluginconfig-vmconfig): map - - [`VmID`](#pluginconfig-vmconfig-vmid): string - - [`Runtime`](#pluginconfig-vmconfig): string | `v8` - - [`Code`](#pluginconfig-vmconfig-code): map - - [`Local`](#pluginconfig-vmconfig-code-local): map - - [`Filename`](#pluginconfig-vmconfig-code-local): string - - [`Remote`](#pluginconfig-vmconfig-code-remote): map - - [`HttpURI`](#pluginconfig-vmconfig-code-remote-httpuri): map - - [`Service`](#pluginconfig-vmconfig-code-remote-httpuri-service): map - - [`Name`](#pluginconfig-vmconfig-code-remote-httpuri-service): string - - [`Namespace`](#pluginconfig-vmconfig-code-remote-httpuri-service): string - - [`Partition`](#pluginconfig-vmconfig-code-remote-httpuri-service): string - - [`URI`](#pluginconfig-vmconfig-code-remote-httpuri-uri): string - - [`Timeout`](#pluginconfig-vmconfig-code-remote-httpuri-timeout): string - - [`SHA256`](#pluginconfig-vmconfig-code-remote-sha256): string - - [`RetryPolicy`](#pluginconfig-vmconfig-code-remote-retrypolicy): map - - [`RetryBackOff`](#pluginconfig-vmconfig-code-remote-retrypolicy-retrybackoff): map - - [`BaseInterval`](#pluginconfig-vmconfig-code-remote-retrypolicy-retrybackoff): string - - [`MaxInterval`](#pluginconfig-vmconfig-code-remote-retrypolicy-retrybackoff): string - - [`NumRetries`](#pluginconfig-vmconfig-code-remote-retrypolicy-numretries): number | `-1` - - [`Configuration`](#pluginconfig-vmconfig-configuration): string - - [`EnvironmentVariables`](#pluginconfig-vmconfig-environmentvariables): map - - [`HostEnvKeys`](#pluginconfig-vmconfig-environmentvariables-hostenvkeys): list of strings - - [`KeyValues`](#pluginconfig-vmconfig-environmentvariables-keyvalues): map - - 
[`Configuration`](#pluginconfig-configuration): string - - [`CapabilityRestrictionConfiguration`](#pluginconfig-vmconfig-capabilityrestrictionconfiguration): map - - [`AllowedCapabilities`](#pluginconfig-vmconfig-capabilityrestrictionconfiguration): map of strings - -## Complete configuration - -When all parameters are set for the extension, the configuration has the following form: - -```hcl -Protocol = "" -ListenerType = "" -ProxyType = "connect-proxy" -PluginConfig = { - Name = "" - RootID = "" - VmConfig = { - VmID = "" - Runtime = "v8" - Code = { - Local = { # Set either `Local` or `Remote', not both - Filename = "" - } - Remote = { # Set either `Local` or `Remote', not both - HttpURI = { - Service = { - Name = "" - Namespace = "" - Partition = "" - } - URI = "" - Timeout = "1s" - SHA256 = "" - RetryPolicy = { - RetryBackOff = { - BaseInterval = "1s" - MaxInterval = "10s" - } - NumRetries = -1 - } - } - Configuration = "" - EnvironmentVariables = { - HostEnvKeys = [ - <"keys"> - ] - KeyValues = { - [ - <"key = value"> - ] - } - } - Configuration = "" - CapabilityRestrictionConfiguration = { - AllowedCapabilities = { - "fd_read" = {} - "fd_seek" = {} - "environ_get" = {} - "clock_get_time" = {} - } - } -} -``` - -## Specification - -This section provides details about the fields you can configure for the `wasm` extension. - -### `Protocol` - -Specifies the type of Wasm filter to apply. You can set either `tcp` or `http`. Set the `Protocol` to the protocol that the Wasm plugin implements when loaded by the filter. For Consul to apply the filter, the protocol must match the service's protocol. - -#### Values - -- Default: None -- This field is required. -- Data type is one of the following string values: - - `tcp` - - `http` - -### `ListenerType` - -Specifies the type of listener the extension applies to. The listener type is either `inbound` or `outbound`. If the listener type is set to `inbound`, Consul applies the extension so the Wasm plugin is run when other services in the mesh send messages to the service attached to the proxy. If the listener type is set to `outbound`, Consul applies the extension so the Wasm plugin is run when the attached proxy sends messages to other services in the mesh. - -#### Values - -- Default: None -- This field is required. -- Data type is one of the following string values: - - `inbound` - - `outbound` - -### `ProxyType` - -Specifies the type of Envoy proxy that the extension applies to. The only supported value is `connect-proxy`. - -#### Values - -- Default: `connect-proxy` -- This field is required. -- Data type: String - -### `PluginConfig{}` - -Map containing the following configuration parameters for your Wasm plugin: - -- [`Name`](#pluginconfig-name) -- [`RootID`](#pluginconfig-rootid) -- [`VmConfig`](#pluginconfig-vmconfig) -- [`Configuration`](#pluginconfig-configuration) -- [`CapabilitiesRestrictionConfiguration`](#pluginconfig-capabilitiesrestrictionconfiguration) - -#### Values - -- Default: None -- This field is required. -- Data type: Map - -### `PluginConfig{}.Name` - -Specifies a unique name for a filter in a VM. Envoy uses the name to identify specific filters if multiple filters are processed on a VM with the same `VmID` and `RootID`. The name also appears in logs for debugging purposes. - -#### Values - -- Default: None -- Data type: String - -### `PluginConfig{}.RootID` - -Specifies a unique ID for a set of filters in a VM that share a `RootContext` and `Contexts`, such as a Wasm `HttpFilter` and a Wasm `AccessLog`, if applicable. 
All filters with the same `RootID` and `VmID` share `Context`s. - -#### Values - -- Default: None -- Data type: String - -### `PluginConfig{}.VmConfig{}` - -Map containing the following configuration parameters for the VM that runs your Wasm plugin: - -- [`VmID`](#pluginconfig-vmconfig-vmid) -- [`Runtime`](#pluginconfig-vmconfig-runtime) -- [`Code`](#pluginconfig-vmconfig-code) -- [`Configuration`](#pluginconfig-vmconfig-configuration) -- [`EnvironmentVariables`](#pluginconfig-vmconfig-environmentvariables) - -#### Values - -- Default: None -- Data type: Map - -### `PluginConfig{}.VmConfig{}.VmID` - -Specifies an ID that Envoy uses with a hash of the Wasm code to determine which VM runs the plugin. All plugins with the same `VmID` and `Code` use the same VM. If unspecified, all plugins with the same code run in the same VM. Sharing a VM between plugins may have security implications, but can reduce memory utilization and can make data sharing easier. - -#### Values - -- Default: None -- Data type: String - -### `PluginConfig{}.VmConfig{}.Runtime` - -Specifies the type of Wasm runtime. -#### Values - -- Default: `v8` -- Data type is one of the following string values: - - `v8` - - `wastime` - - `wamr` - - `wavm` - -### `PluginConfig{}.VmConfig{}.Code{}` - -Map containing one of the following configuration parameters: - -- [`Local`](#pluginconfig-vmconfig-code-local) -- [`Remote`](#pluginconfig-vmconfig-code-local) - -You can configure either `Local` or `Remote`, but not both. The `Code` block instructs Consul how to find the Wasm plugin code for Envoy to execute. - -#### Values - -- Default: None -- This field is required. -- Data type is a map containing one of the following configurations: - - [`Local`](#pluginconfig-vmconfig-code-local) - - [`Remote`](#pluginconfig-vmconfig-code-local) - -### `PluginConfig{}.VmConfig{}.Code{}.Local{}` - -Instructs Envoy to load the plugin code from a local volume. Do not configure the `Local` parameter if the plugin code is on a remote server. - -The `Local` field is a map that contains a `Filename` parameter. The `Filename` parameter takes a string value that specifies the path to the plugin on the local file system. - -Local plug-ins are not supported in Kubernetes-orchestrated environments. - -#### Values - -- Default: None -- Data type is a map containing the `Filename` parameter. The `Filename` parameter takes a string value that specifies the path to the plugin on the local file system. - -### `PluginConfig{}.VmConfig{}.Code{}.Remote{}` - -Instructs Envoy to load the plugin code from a remote server. Do not configure the `Remote` parameter if the plugin code is on the local VM. - -The `Remote` field is a map containing the following parameters: - -- [`HttpURI`](#pluginconfig-vmconfig-code-remote-httpuri) -- [`SHA256`](#pluginconfig-vmconfig-code-remote-sha256) -- [`RetryPolicy`](#pluginconfig-vmconfig-code-remote-retrypolicy) - -#### Values - -- Default: None -- Data type: Map - -### `PluginConfig{}.VmConfig{}.Code{}.Remote{}.HttpURI{}` - -Specifies the configuration for fetching the remote data. The `HttpURI` field is a map containing the following parameters: - -- [`Service`](#pluginconfig-vmconfig-code-remote-httpuri-service) -- [`URI`](#pluginconfig-vmconfig-code-remote-httpuri-uri) -- [`Timeout`](#pluginconfig-vmconfig-code-remote-httpuri-uri) - -#### Values - -- Default: None -- Data type: Map - -### `PluginConfig{}.VmConfig{}.Code{}.Remote{}.HttpURI{}.Service` - -Specifies the upstream service to fetch the remote plugin from. 
- -#### Values - -- Default: None -- Data type: Map - -The following table describes the fields you can specify in the `Service` map: - -| Parameter | Description | Data type | Default | -| --- | --- | --- | --- | -| `Name` | Specifies the name of the upstream service. | String | None | -| `Namespace` | Specifies the Consul namespace that the upstream service belongs to. | String | `default` | -| `Partition` | Specifies the Consul admin partition that the upstream service belongs to. | String | `default` | - -### `PluginConfig{}.VmConfig{}.Code{}.Remote{}.HttpURI{}.URI` - -Specifies the URI Envoy uses to fetch the plugin file from the upstream. This field is required for Envoy to retrieve plugin code from a remote location. You must specify the fully-qualified domain name (FQDN) of the remote URI, which includes the protocol, host, and path. - -#### Values - -- Default: None -- This field is required. -- Data type: String value that specifies a FQDN - -### `PluginConfig{}.VmConfig{}.Code{}.Remote{}.HttpURI{}.Timeout` - -Specifies the maximum duration that a response can take to complete the request for the plugin data. - -#### Values - -- Default: `1s` -- Data type: String - -### `PluginConfig{}.VmConfig{}.Code{}.Remote{}.SHA256` - -Specifies the required SHA256 string for verifying the remote data. - -#### Values - -- Default: None -- This field is required. -- Data type: String - -### `PluginConfig{}.VmConfig{}.Code{}.Remote{}.RetryPolicy{}` - -Defines a policy for retrying requests to the upstream service when fetching the plugin data. The `RetryPolicy` field is a map containing the following parameters: - -- [`RetryBackoff`](#pluginconfig-vmconfig-code-remote-retrypolicy) -- [`NumRetries`](#pluginconfig-vmconfig-code-remote-numretries) - -#### Values - -- Default: None -- Data type: Map -### `PluginConfig{}.VmConfig{}.Code{}.Remote{}.RetryPolicy{}.RetryBackOff{}` - -Specifies parameters that control retry backoff strategy. - -#### Values - -- Default: None -- Data type: Map - -The following table describes the fields you can specify in the `RetryBackOff` map: - -| Parameter | Description | Data type | Default | -| --- | --- | --- | --- | -| `BaseInterval` | Specifies the base interval for determining the next backoff computation. Set a value greater than `0` and less than or equal to the `MaxInterval` value. | String | `1s` | -| `MaxInterval` | Specifies the maximum interval between retries. Set the value greater than or equal to the `BaseInterval` value. | String | `10s` | - -### `PluginConfig{}.VmConfig{}.Code{}.Remote{}.RetryPolicy{}.NumRetries` - -Specifies the number of times Envoy retries to fetch plugin data if the initial attempt is unsuccessful. - -#### Values - -- Default: `1` -- Data type: Integer - -### `PluginConfig{}.VmConfig{}.Configuration` - -Specifies the configuration Envoy encodes as bytes and passes to the plugin during VM startup. Refer to [`proxy_on_vm_start` in the Proxy Wasm ABI documentation](https://github.com/proxy-wasm/spec/tree/cefc2cbab70eaba2c187523dff0b38fce2f90771/abi-versions/vNEXT#proxy_on_vm_start) for additional information. - -#### Values - -- Default: None -- This field is required. -- Data type: String - -### `PluginConfig{}.VmConfig{}.EnvironmentVariables{}` - -Specifies environment variables for Envoy to inject into this VM so that they are available through WASI's `environ_get` and `environ_get_sizes` system calls. - -In most cases, WASI calls the functions implicitly in your language's standard library. 
As a result, you do not need to call them directly. You can also access environment variables as you would on native platforms. - -Envoy rejects the configuration if there is a key space conflict. - -The `EnvironmentVariables` field is a map containing parameters for setting the keys and values. - -#### Values - -- Default: None -- Data type: Map - -The following table describes the parameters contained in the `EnvironmentVariables` map: - -| Parameter | Description | Data type | Default | -| --- | --- | --- | --- | -| `HostEnvKeys` | Specifies a list of Envoy environment variable keys to expose to the VM. If a key exists in Envoy's environment variables, then the key-value pair is injected. Envoy ignores `HostEnvKeys` that do not exist in its environment variables. | List | None | -| `KeyValues` | Specifies a map of explicit key-value pairs to inject into the VM. | Map of string keys and values | None | - -### `PluginConfig{}.Configuration` - -Specifies the configuration Consul encodes as bytes and passes to the plugin during plugin startup. Refer to [`proxy_on_configure` in the Envoy documentation](https://github.com/proxy-wasm/spec/tree/cefc2cbab70eaba2c187523dff0b38fce2f90771/abi-versions/vNEXT#proxy_on_configure) for additional information. - -#### Values - -- Default: None -- Data type: String - -### `PluginConfig{}.CapabilityRestrictionConfiguration{}` - -Specifies a configuration for restricting the proxy-Wasm capabilities that are available to the module. - -The `CapabilityRestrictionConfiguration` field is a map that contains a `AllowedCapabilities` parameter. The `AllowedCapabilities` parameter takes a map of string values that correspond to Envoy capability names. Refer to the [Envoy documentation](https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/wasm/v3/wasm.proto#extensions-wasm-v3-capabilityrestrictionconfig) for additional information. - -!> **Security warning**: Consul ignores the value that each capability maps to. You can leave the `AllowedCapabilities` empty to allow all capabilities, but doing so gives the configured plugin full unrestricted access to the runtime API provided by the Wasm VM. You must set this to a non-empty map if you want to restrict access to specific capabilities provided by the Wasm runtime API. - -#### Values - -- Default: `""` -- Data type is a map containing the `AllowedCapabilities` parameter. The `AllowedCapabilities` parameter takes a map of string values that correspond to Envoy capability names. Refer to the [Envoy documentation](https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/wasm/v3/wasm.proto#extensions-wasm-v3-capabilityrestrictionconfig) for additional information. - -## Examples - -The following examples demonstrate patterns that you may be able to model your configurations on. - -### Run a Wasm plugin from a local file - -In the following example, Consul figures the Envoy proxy for the `db` service with an inbound TCP Wasm filter that uses the plugin code from the local `/consul/extensions/sqli.wasm` file. - -```hcl - -Kind = "service-defaults" -Name = "db" -Protocol = "tcp" -EnvoyExtensions = [ - { - Name = "builtin/wasm" - Required = true - Arguments = { - Protocol = "tcp" - ListenerType = "inbound" - PluginConfig = { - VmConfig = { - Code = { - Local = { - Filename = "file:///consul/extensions/sqli.wasm" - } - } - } - Configuration = <- - Learn how Envoy extensions enables you to add support for additional Envoy features without modifying the Consul codebase. 
---- - -# Envoy extensions overview - -This topic provides an overview of Envoy extensions in Consul service mesh deployments. You can modify Consul-generated Envoy resources to add additional functionality without modifying the Consul codebase. - -## Introduction - -Consul supports two methods for modifying Envoy behavior. You can either modify the Envoy resources Consul generates through [escape hatches](/consul/docs/connect/proxies/envoy#escape-hatch-overrides) or configure your services to use Envoy extensions using the `EnvoyExtension` parameter. Implementing escape hatches requires rewriting the Envoy resources so that they are compatible with Consul, a task that also requires understanding how Consul names Envoy resources and enforces intentions. - -Instead of modifying Consul code, you can configure your services to use Envoy extensions through the `EnvoyExtensions` field. This field is definable in [`proxy-defaults`](/consul/docs/connect/config-entries/proxy-defaults#envoyextensions) and [`service-defaults`](/consul/docs/connect/config-entries/service-defaults#envoyextensions) configuration entries. - - -## Supported extensions - -Envoy extensions enable additional service mesh functionality in Consul by changing how the sidecar proxies behave. Extensions dynamically modify the configuration of Envoy proxies based on Consul configuration entries, enabling a wider set of use cases for the service mesh traffic that passes through an Envoy proxy. Consul supports the following extensions: - -- External authorization -- Fault injection -- Lua -- Lambda -- OpenTelemetry Access Logging -- Property override -- WebAssembly (Wasm) - -### External authorization - -The `ext-authz` extension lets you configure external authorization filters for Envoy proxy so that you can route requests to external authorization systems. Refer to the [external authorization documentation](/consul/docs/connect/proxies/envoy-extensions/usage/ext-authz) for more information. - -### Fault injection - -The `fault-injection` extension lets you alter responses from an upstream service so that users can test the resilience of their system to different unexpected issues. Refer to the [fault injection documentation](/consul/docs/connect/manage-traffic/fault-injection) for more information. - -### Lambda - -The `lambda` Envoy extension enables services to make requests to AWS Lambda functions through the mesh as if they are a normal part of the Consul catalog. Refer to the [Lambda extension documentation](/consul/docs/connect/proxies/envoy-extensions/usage/lambda) for more information. - -### Lua - -The `lua` Envoy extension enables HTTP Lua filters in your Consul Envoy proxies. It allows you to run Lua scripts during Envoy requests and responses from Consul-generated Envoy resources. Refer to the [Lua extension documentation](/consul/docs/connect/proxies/envoy-extensions/usage/lua) for more information. - -### OpenTelemetry Access Logging - -The `otel-access-logging` Envoy extension lets you configure Envoy proxies to send access logs to OpenTelemetry collector service. Refer to the [OpenTelemetry Access Logging extension documentation](/consul/docs/connect/proxies/envoy-extensions/usage/otel-access-logging) for more information. - -### Property override - -The `property-override` extension lets you set and unset individual properties on the Envoy resources that Consul generates. 
Use the extension instead of [escape-hatch overrides](/consul/docs/connect/proxies/envoy#escape-hatch-overrides) to enable advanced Envoy configuration. Refer to the [property override documentation](/consul/docs/connect/proxies/envoy-extensions/usage/property-override) for more information. - -### WebAssembly - -The `wasm` extension enables you to configure TCP and HTTP filters that invoke custom WebAssembly (Wasm) plugins. Refer to the [WebAssembly extension documentation](/consul/docs/connect/proxies/envoy-extensions/usage/wasm) for more information. diff --git a/website/content/docs/connect/proxies/envoy-extensions/usage/apigee-ext-authz.mdx b/website/content/docs/connect/proxies/envoy-extensions/usage/apigee-ext-authz.mdx deleted file mode 100644 index a0492de0b246..000000000000 --- a/website/content/docs/connect/proxies/envoy-extensions/usage/apigee-ext-authz.mdx +++ /dev/null @@ -1,199 +0,0 @@ ---- -layout: docs -page_title: Delegate authorization to Apigee -description: Learn how to use the `ext-authz` Envoy extension to delegate data plane authorization requests to Apigee. ---- - -# Delegate authorization to Apigee - -This topic describes how to use the external authorization Envoy extension to delegate data plane authorization requests to Apigee. - -For more detailed guidance, refer to the [`learn-consul-apigee-external-authz` repo](https://github.com/hashicorp-education/learn-consul-apigee-external-authz) on GitHub. - -## Workflow - -Complete the following steps to use the external authorization extension with Apigee: - -1. Deploy the Apigee Adapter for Envoy and register the service in Consul. -1. Configure the `EnvoyExtensions` block in a service defaults or proxy defaults configuration entry. -1. Apply the configuration entry. - -## Deploy the Apigee Adapter for Envoy - -The [Apigee Adapter for Envoy](https://cloud.google.com/apigee/docs/api-platform/envoy-adapter/v2.0.x/concepts) is an Apigee-managed API gateway that uses Envoy to proxy API traffic. - -To download and install Apigee Adapter for Envoy, refer to the [getting started documentation](https://cloud.google.com/apigee/docs/api-platform/envoy-adapter/v2.0.x/getting-started) or follow along with the [`learn-consul-apigee-external-authz` repo](https://github.com/hashicorp-education/learn-consul-apigee-external-authz) on GitHub. - -After you deploy the service in your desired runtime, create a service defaults configuration entry for the service's gRPC protocol. - - - - - -```hcl -Kind = "service-defaults" -Name = "apigee-remote-service-envoy" -Protocol = "grpc" -``` - - - - - -```json -{ - "kind": "service-defaults", - "name": "apigee-remote-service-envoy", - "protocol": "grpc" -} -``` - - - - - - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: ServiceDefaults -metadata: - name: apigee-remote-service-envoy - namespace: apigee -spec: - protocol: grpc -``` - - - - -## Configure the `EnvoyExtensions` - -Add Envoy extension configurations to a proxy defaults or service defaults configuration entry. Place the extension configuration in an `EnvoyExtensions` block in the configuration entry. - -- When you configure Envoy extensions on proxy defaults, they apply to every service. -- When you configure Envoy extensions on service defaults, they apply to all instances of a service with that name. - - - Adding Envoy extensions default proxy configurations may have unintended consequences. We recommend configuring `EnvoyExtensions` in service defaults configuration entries in most cases. 
- - -Consul applies Envoy extensions configured in proxy defaults before it applies extensions in service defaults. As a result, the Envoy extension configuration in service defaults may override configurations in proxy defaults. - -The following example configures the default behavior for all services named `api` so that the Envoy proxies running as sidecars for those service instances target the apigee-remote-service-envoy service for gRPC authorization requests: - - - - - -```hcl -Kind = "service-defaults" -Name = "api" -EnvoyExtensions = [ - { - Name = "builtin/ext-authz" - Arguments = { - ProxyType = "connect-proxy" - Config = { - GrpcService = { - Target = { - Service = { - Name = "apigee-remote-service-envoy" - } - } - } - } - } - } -] -``` - - - - - - -```json -{ - "Kind": "service-defaults", - "Name": "api", - "EnvoyExtensions": [{ - "Name": "builtin/ext-authz", - "Arguments": { - "ProxyType": "connect-proxy", - "Config": { - "GrpcService": { - "Target": { - "Service": { - "Name": "apigee-remote-service-envoy" - } - } - } - } - } - } - ] -} -``` - - - - - - - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: ServiceDefaults -metadata: - name: api - namespace: default -spec: - envoyExtensions: - - name: builtin/ext-authz - arguments: - proxyType: connect-proxy - config: - grpcService: - target: - service: - name: apigee-remote-service-envoy - namespace: apigee -``` - - - - -Refer to the [external authorization extension configuration reference](/consul/docs/connect/proxies/envoy-extensions/configuration/ext-authz) for details on how to configure the extension. - -Refer to the [proxy defaults configuration entry reference](/consul/docs/connect/config-entries/proxy-defaults) and [service defaults configuration entry reference](/consul/docs/connect/config-entries/service-defaults) for details on how to define the configuration entries. - -## Apply the configuration entry - -On the CLI, you can use the `consul config write` command and specify the names of the configuration entries to apply them to Consul. For Kubernetes-orchestrated networks, use the `kubectl apply` command to update the relevant CRD. - - - - -```shell-session -$ consul config write apigee-remote-service-envoy.hcl -$ consul config write api-auth-service-defaults.hcl -``` - - - - -```shell-session -$ consul config write apigee-remote-service-envoy.json -$ consul config write api-auth-service-defaults.json -``` - - - - -```shell-session -$ kubectl apply -f apigee-remote-service-envoy.yaml -$ kubectl apply -f api-auth-service-defaults.yaml -``` - - - diff --git a/website/content/docs/connect/proxies/envoy-extensions/usage/ext-authz.mdx b/website/content/docs/connect/proxies/envoy-extensions/usage/ext-authz.mdx deleted file mode 100644 index 51a004c17b32..000000000000 --- a/website/content/docs/connect/proxies/envoy-extensions/usage/ext-authz.mdx +++ /dev/null @@ -1,153 +0,0 @@ ---- -layout: docs -page_title: Delegate authorization to an external service -description: Learn how to use the `ext-authz` Envoy extension to delegate data plane authorization requests to external systems. ---- - -# Delegate authorization to an external service - -This topic describes how to use the external authorization Envoy extension to delegate data plane authorization requests to external systems. - -## Workflow - -Complete the following steps to use the external authorization extension: - -1. Configure an `EnvoyExtensions` block in a service defaults or proxy defaults configuration entry. -1. Apply the configuration entry. 
- -## Add the `EnvoyExtensions` - -Add Envoy extension configurations to a proxy defaults or service defaults configuration entry. Place the extension configuration in an `EnvoyExtensions` block in the configuration entry. - -- When you configure Envoy extensions on proxy defaults, they apply to every service. -- When you configure Envoy extensions on service defaults, they apply to a specific service. - -Consul applies Envoy extensions configured in proxy defaults before it applies extensions in service defaults. As a result, the Envoy extension configuration in service defaults may override configurations in proxy defaults. - -The following example shows a service defaults configuration entry for the `api` service that directs the Envoy proxy to make gRPC authorization requests to the `authz` service: - - - - - -```hcl -Kind = "service-defaults" -Name = "api" -EnvoyExtensions = [ - { - Name = "builtin/ext-authz" - Arguments = { - ProxyType = "connect-proxy" - Config = { - GrpcService = { - Target = { - Service = { - Name = "authz" - } - } - } - } - } - } -] -``` - - - - - -```json -{ - "Kind": "service-defaults", - "Name": "api", - "EnvoyExtensions": [{ - "Name": "builtin/ext-authz", - "Arguments": { - "ProxyType": "connect-proxy", - "Config": { - "GrpcService": { - "Target": { - "Service": { - "Name": "authz" - } - } - } - } - } - } - ] -} -``` - - - - - - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: ServiceDefaults -metadata: - name: api - namespace: default -spec: - envoyExtensions: - - name: builtin/ext-authz - arguments: - proxyType: connect-proxy - config: - grpcService: - target: - service: - name: authz - namespace: authz -``` - - - - -Refer to the [external authorization extension configuration reference](/consul/docs/connect/proxies/envoy-extensions/configuration/ext-authz) for details on how to configure the extension. - -Refer to the [proxy defaults configuration entry reference](/consul/docs/connect/config-entries/proxy-defaults) and [service defaults configuration entry reference](/consul/docs/connect/config-entries/service-defaults) for details on how to define the configuration entries. - -!> **Warning:** Adding Envoy extensions default proxy configurations may have unintended consequences. We recommend configuring `EnvoyExtensions` in service defaults configuration entries in most cases. - -### Unsupported Envoy configuration fields - -The following Envoy configurations are not supported: - -| Configuration | Workaround | -| --- | --- | -| `deny_at_disable` | Disable filter by removing it from the service’s configuration in the configuration entry. | -| `failure_mode_allow` | Set the `EnvoyExtension.Required` field to `true` in the [service defaults configuration entry](/consul/docs/connect/config-entries/service-defaults#envoyextensions) or [proxy defaults configuration entry](/consul/docs/connect/config-entries/proxy-defaults#envoyextensions). | -| `filter_enabled` | Set the `EnvoyExtension.Required` field to `true` in the [service defaults configuration entry](/consul/docs/connect/config-entries/service-defaults#envoyextensions) or [proxy defaults configuration entry](/consul/docs/connect/config-entries/proxy-defaults#envoyextensions). | -| `filter_enabled_metadata` | Set the `EnvoyExtension.Required` field to `true` in the [service defaults configuration entry](/consul/docs/connect/config-entries/service-defaults#envoyextensions) or [proxy defaults configuration entry](/consul/docs/connect/config-entries/proxy-defaults#envoyextensions). 
| -| `transport_api_version` | Consul only supports v3 of the transport API. As a result, there is no workaround for implementing the behavior of this field. | - -## Apply the configuration entry - -If your network is deployed to virtual machines, use the `consul config write` command and specify the proxy defaults or service defaults configuration entry to apply the configuration. For Kubernetes-orchestrated networks, use the `kubectl apply` command. The following example applies the extension in a proxy defaults configuration entry. - - - - -```shell-session -$ consul config write api-auth-service-defaults.hcl -``` - - - - -```shell-session -$ consul config write api-auth-service-defaults.json -``` - - - - -```shell-session -$ kubectl apply -f api-auth-service-defaults.yaml -``` - - - diff --git a/website/content/docs/connect/proxies/envoy-extensions/usage/lambda.mdx b/website/content/docs/connect/proxies/envoy-extensions/usage/lambda.mdx deleted file mode 100644 index c06bf3bd220c..000000000000 --- a/website/content/docs/connect/proxies/envoy-extensions/usage/lambda.mdx +++ /dev/null @@ -1,161 +0,0 @@ ---- -layout: docs -page_title: Lambda Envoy Extension -description: >- - Learn how the `lambda` Envoy extension enables Consul to join AWS Lambda functions to its service mesh. ---- - -# Invoke Lambda functions in Envoy proxy - -The Lambda Envoy extension configures outbound traffic on upstream dependencies allowing mesh services to properly invoke AWS Lambda functions. Lambda functions appear in the catalog as any other Consul service. - -You can only enable the Lambda extension through `service-defaults`. This is because the Consul uses the `service-defaults` configuration entry name as the catalog name for the Lambda functions. - -## Specification - -The Lambda Envoy extension has the following arguments: - -| Arguments | Description | -| -------------------- | ------------------------------------------------------------------------------------------------ | -| `ARN` | Specifies the [AWS ARN](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) for the service's Lambda. | -| `InvocationMode` | Determines if Consul configures the Lambda to be invoked using the synchronous or asynchronous [invocation mode](https://docs.aws.amazon.com/lambda/latest/operatorguide/invocation-modes.html). | -| `PayloadPassthrough` | Determines if the body Envoy receives is converted to JSON or directly passed to Lambda. | - -Be aware that unlike [manual lambda registration](/consul/docs/lambda/registration/manual#supported-meta-fields), region is inferred from the ARN when specified through an Envoy extension. - -## Workflow - -There are two steps to configure the Lambda Envoy extension: - -1. Configure EnvoyExtensions through `service-defaults`. -1. Apply the configuration entry. - -### Configure `EnvoyExtensions` - -To use the Lambda Envoy extension, you must configure and apply a `service-defaults` configuration entry. Consul uses the name of the entry as the Consul service name for the Lambdas in the catalog. Downstream services also use the name to invoke the Lambda. - -The following example configures the Lambda Envoy extension to create a service named `lambda` in the mesh that can invoke the associated Lambda function. 
- - - - - -```hcl -Kind = "service-defaults" -Name = "lambdaInvokingApp" -Protocol = "http" -EnvoyExtensions { - Name = "builtin/aws/lambda" - Arguments = { - ARN = "arn:aws:lambda:us-west-2:111111111111:function:lambda-1234" - } -} -``` - - - - - - -```hcl -{ - "kind": "service-defaults", - "name": "lambdaInvokingApp", - "protocol": "http", - "envoy_extensions": [{ - "name": "builtin/aws/lambda", - "arguments": { - "arn": "arn:aws:lambda:us-west-2:111111111111:function:lambda-1234" - } - }] -} - -``` - - - - - - - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: ServiceDefaults -metadata: - name: lambda -spec: - protocol: http - envoyExtensions: - name = "builtin/aws/lambda" - arguments: - arn: arn:aws:lambda:us-west-2:111111111111:function:lambda-1234 -``` - - - - - - -For a full list of parameters for `EnvoyExtensions`, refer to the [`service-defaults`](/consul/docs/connect/config-entries/service-defaults#envoyextensions) and [`proxy-defaults`](/consul/docs/connect/config-entries/proxy-defaults#envoyextensions) configuration entries reference documentation. - -~> **Note:** You can only enable the Lambda extension through `service-defaults`. - -Refer to [Configuration specification](#configuration-specification) section to find a full list of arguments for the Lambda Envoy extension. - -### Apply the configuration entry - -Apply the `service-defaults` configuration entry. - - - - -```shell-session -$ consul config write lambda-envoy-extension.hcl -``` - - - - -```shell-session -$ consul config write lambda-envoy-extension.json -``` - - - - -```shell-session -$ kubectl apply lambda-envoy-extension.yaml -``` - - - - -## Examples - -In the following example, the Lambda Envoy extension adds a single Lambda function running in two regions into the mesh. Then, you can use the `lambda` service name to invoke it, as if it was any other service in the mesh. - - - -```hcl -Kind = "service-defaults" -Name = "lambda" -Protocol = "http" -EnvoyExtensions { - Name = "builtin/aws/lambda" - - Arguments = { - payloadPassthrough: false - arn: arn:aws:lambda:us-west-2:111111111111:function:lambda-1234 - } -} -EnvoyExtensions { - Name = "builtin/aws/lambda" - - Arguments = { - payloadPassthrough: false - arn: arn:aws:lambda:us-east-1:111111111111:function:lambda-1234 - } -} -``` - - \ No newline at end of file diff --git a/website/content/docs/connect/proxies/envoy-extensions/usage/lua.mdx b/website/content/docs/connect/proxies/envoy-extensions/usage/lua.mdx deleted file mode 100644 index 5bac9081360b..000000000000 --- a/website/content/docs/connect/proxies/envoy-extensions/usage/lua.mdx +++ /dev/null @@ -1,227 +0,0 @@ ---- -layout: docs -page_title: Lua Envoy Extension -description: >- - Learn how the `lua` Envoy extension enables Consul to run Lua scripts during Envoy requests and responses from Consul-generated Envoy resources. ---- - -# Run Lua scripts in Envoy proxy - -The Lua Envoy extension enables the [HTTP Lua filter](https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_filters/lua_filter) in your Consul Envoy proxies, letting you run Lua scripts when requests and responses pass through Consul-generated Envoy resources. - -Envoy filters support setting and getting dynamic metadata, allowing a filter to share state information with subsequent filters. To set dynamic metadata, configure the HTTP Lua filter. Users can call `streamInfo:dynamicMetadata()` from Lua scripts to get the request's dynamic metadata. 
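As a rough sketch of how a script might both read Consul's metadata and share state with later filters, the following hypothetical service defaults entry reads the `consul` metadata namespace and then stores one of its values under a custom namespace. The `myservice` name and the `Listener` and `Script` arguments mirror the examples later in this topic, while the `envoy.lua` namespace and the `source-service` key are arbitrary placeholder names chosen for illustration.

```hcl
Kind = "service-defaults"
Name = "myservice"
EnvoyExtensions = [
  {
    Name = "builtin/lua"
    Arguments = {
      ProxyType = "connect-proxy"
      Listener  = "inbound"
      Script    = <<-EOF
function envoy_on_request(request_handle)
  -- Read the metadata that Consul attaches to the request.
  local consul_meta = request_handle:streamInfo():dynamicMetadata():get("consul")
  -- Store a value under a placeholder namespace so later filters in the chain can read it.
  request_handle:streamInfo():dynamicMetadata():set("envoy.lua", "source-service", consul_meta["service"])
end
      EOF
    }
  }
]
```

A later filter in the same filter chain can then call `streamInfo:dynamicMetadata()` and read the `envoy.lua` namespace to retrieve the stored value.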
- -## Configuration specifications - -To use the Lua Envoy extension, configure the following arguments in the `EnvoyExtensions` block: - -- `ProxyType`: string | `connect-proxy` - Determines the proxy type the extension applies to. The only supported value is `connect-proxy`. -- `ListenerType`: string | required - Specifies if the extension is applied to the `inbound` or `outbound` listener. -- `Script`: string | required - The Lua script that is configured to run by the HTTP Lua filter. - -## Workflow - -There are two steps to configure the Lua Envoy extension: - -1. Configure EnvoyExtensions through `service-defaults` or `proxy-defaults`. -1. Apply the configuration entry. - -### Configure `EnvoyExtensions` - -To use Envoy extensions, you must configure and apply a `proxy-defaults` or `service-defaults` configuration entry with the Envoy extension. - -- When you configure Envoy extensions on `proxy-defaults`, they apply to every service. -- When you configure Envoy extensions on `service-defaults`, they apply to a specific service. - -Consul applies Envoy extensions configured in `proxy-defaults` before it applies extensions in `service-defaults`. As a result, the Envoy extension configuration in `service-defaults` may override configurations in `proxy-defaults`. - -The following example configures the Lua Envoy extension on every service by using the `proxy-defaults`. - - - - - -```hcl -Kind = "proxy-defaults" -Name = "global" -Config { - protocol = "http" -} -EnvoyExtensions { - Name = "builtin/lua" - Arguments = { - ProxyType = "connect-proxy" - Listener = "inbound" - Script = <<-EOF -function envoy_on_request(request_handle) - meta = request_handle:streamInfo():dynamicMetadata() - m = meta:get("consul") - request_handle:headers():add("x-consul-service", m["service"]) - request_handle:headers():add("x-consul-namespace", m["namespace"]) - request_handle:headers():add("x-consul-datacenter", m["datacenter"]) - request_handle:headers():add("x-consul-trust-domain", m["trust-domain"]) -end - EOF - } -} -``` - - - - - - -```hcl -{ - "kind": "proxy-defaults", - "name": "global", - "protocol": "http", - "envoy_extensions": [{ - "name": "builtin/lua", - "arguments": { - "proxy_type": "connect-proxy", - "listener": "inbound", - "script": "function envoy_on_request(request_handle)\nmeta = request_handle:streamInfo():dynamicMetadata()\nm = \nmeta:get("consul")\nrequest_handle:headers():add("x-consul-service", m["service"])\nrequest_handle:headers():add("x-consul-namespace", m["namespace"])\nrequest_handle:headers():add("x-consul-datacenter", m["datacenter"])\nrequest_handle:headers():add("x-consul-trust-domain", m["trust-domain"])\nend" - } - }] -} -``` - - - - - - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: ProxyDefaults -metadata: - name: global -spec: - protocol: http - envoyExtensions: - name = "builtin/lua" - arguments: - proxyType: "connect-proxy" - listener: "inbound" - script: |- -function envoy_on_request(request_handle) - meta = request_handle:streamInfo():dynamicMetadata() - m = meta:get("consul") - request_handle:headers():add("x-consul-service", m["service"]) - request_handle:headers():add("x-consul-namespace", m["namespace"]) - request_handle:headers():add("x-consul-datacenter", m["datacenter"]) - request_handle:headers():add("x-consul-trust-domain", m["trust-domain"]) -end -``` - - - - - -For a full list of parameters for `EnvoyExtensions`, refer to the [`service-defaults`](/consul/docs/connect/config-entries/service-defaults#envoyextensions) and 
[`proxy-defaults`](/consul/docs/connect/config-entries/proxy-defaults#envoyextensions) configuration entries reference documentation. - -!> **Warning:** Applying `EnvoyExtensions` to `ProxyDefaults` may produce unintended consequences. We recommend enabling `EnvoyExtensions` with `ServiceDefaults` in most cases. - -Refer to [Configuration specification](#configuration-specification) section to find a full list of arguments for the Lua Envoy extension. - -### Apply the configuration entry - -Apply the `proxy-defaults` or `service-defaults` configuration entry. - - - - -```shell-session -$ consul config write lua-envoy-extension-proxy-defaults.hcl -``` - - - - -```shell-session -$ consul config write lua-envoy-extension-proxy-defaults.json - -``` - - - - -```shell-session -$ kubectl apply lua-envoy-extension-proxy-defaults.yaml -``` - - - - -## Examples - -In the following example, the `service-defaults` configure the Lua Envoy extension to insert the HTTP Lua filter for service `myservice` and add the Consul service name to the`x-consul-service` header for all inbound requests. The `ListenerType` makes it so that the extension applies only on the inbound listener of the service's connect proxy. - - - -```hcl -Kind = "service-defaults" -Name = "myservice" -EnvoyExtensions = [ - { - Name = "builtin/lua" - - Arguments = { - ProxyType = "connect-proxy" - Listener = "inbound" - Script = < - -Alternatively, you can apply the same extension configuration to [`proxy-defaults`](/consul/docs/connect/config-entries/proxy-defaults#envoyextensions) configuration entries. - -You can also specify multiple Lua filters through the Envoy extensions. They will not override each other. - - - -```hcl -Kind = "service-defaults" -Name = "myservice" -EnvoyExtensions = [ - { - Name = "builtin/lua", - Arguments = { - ProxyType = "connect-proxy" - Listener = "inbound" - Script = <<-EOF -function envoy_on_request(request_handle) - meta = request_handle:streamInfo():dynamicMetadata() - m = meta:get("consul") - request_handle:headers():add("x-consul-datacenter", m["datacenter1"]) -end - EOF - } - }, - { - Name = "builtin/lua", - Arguments = { - ProxyType = "connect-proxy" - Listener = "inbound" - Script = <<-EOF -function envoy_on_request(request_handle) - meta = request_handle:streamInfo():dynamicMetadata() - m = meta:get("consul") - request_handle:headers():add("x-consul-datacenter", m["datacenter2"]) -end - EOF - } - } -] -``` - - diff --git a/website/content/docs/connect/proxies/envoy-extensions/usage/otel-access-logging.mdx b/website/content/docs/connect/proxies/envoy-extensions/usage/otel-access-logging.mdx deleted file mode 100644 index 34b4d844acbd..000000000000 --- a/website/content/docs/connect/proxies/envoy-extensions/usage/otel-access-logging.mdx +++ /dev/null @@ -1,148 +0,0 @@ ---- -layout: docs -page_title: Send access logs to OpenTelemetry collector service -description: Learn how to use the `otel-access-logging` Envoy extension to send access logs to OpenTelemetry collector service. ---- - -# Send access logs to OpenTelemetry collector service - -This topic describes how to use the OpenTelemetry Access Logging Envoy extension to send access logs to OpenTelemetry collector service. - -## Workflow - -Complete the following steps to use the OpenTelemetry Access Logging extension: - -1. Configure an `EnvoyExtensions` block in a service defaults or proxy defaults configuration entry. -1. Apply the configuration entry. 
- -## Add the `EnvoyExtensions` - -Add Envoy extension configurations to a proxy defaults or service defaults configuration entry. Place the extension configuration in an `EnvoyExtensions` block in the configuration entry. - -- When you configure Envoy extensions on proxy defaults, they apply to every service. -- When you configure Envoy extensions on service defaults, they apply to a specific service. - -Consul applies Envoy extensions configured in proxy defaults before it applies extensions in service defaults. As a result, the Envoy extension configuration in service defaults may override configurations in proxy defaults. - -The following example shows a service defaults configuration entry for the `api` service that directs the Envoy proxy to make gRPC OpenTelemetry Access Logging requests to the `otel-collector` service: - - - - - -```hcl -Kind = "service-defaults" -Name = "api" -EnvoyExtensions = [ - { - Name = "builtin/otel-access-logging" - Arguments = { - ProxyType = "connect-proxy" - Config = { - GrpcService = { - Target = { - Service = { - Name = "otel-collector" - } - } - } - } - } - } -] -``` - - - - - -```json -{ - "Kind": "service-defaults", - "Name": "api", - "EnvoyExtensions": [{ - "Name": "builtin/otel-access-logging", - "Arguments": { - "ProxyType": "connect-proxy", - "Config": { - "GrpcService": { - "Target": { - "Service": { - "Name": "otel-collector" - } - } - } - } - } - }] -} -``` - - - - - - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: ServiceDefaults -metadata: - name: api - namespace: default -spec: - envoyExtensions: - - name: builtin/otel-access-logging - arguments: - proxyType: connect-proxy - config: - grpcService: - target: - service: - name: otel-collector - namespace: otel-collector -``` - - - - -Refer to the [OpenTelemetry Access Logging extension configuration reference](/consul/docs/connect/proxies/envoy-extensions/configuration/otel-access-logging) for details on how to configure the extension. - -Refer to the [proxy defaults configuration entry reference](/consul/docs/connect/config-entries/proxy-defaults) and [service defaults configuration entry reference](/consul/docs/connect/config-entries/service-defaults) for details on how to define the configuration entries. - -!> **Warning:** Adding Envoy extensions default proxy configurations may have unintended consequences. We recommend configuring `EnvoyExtensions` in service defaults configuration entries in most cases. - -### Unsupported Envoy configuration fields - -The following Envoy configurations are not supported: - -| Configuration | Workaround | -| ----------------------- | ------------------------------------------------------------------------------------------------------------------------------ | -| `transport_api_version` | Consul only supports v3 of the transport API. As a result, there is no workaround for implementing the behavior of this field. | - -## Apply the configuration entry - -If your network is deployed to virtual machines, use the `consul config write` command and specify the proxy defaults or service defaults configuration entry to apply the configuration. For Kubernetes-orchestrated networks, use the `kubectl apply` command. The following example applies the extension in a proxy defaults configuration entry. 
- - - - -```shell-session -$ consul config write api-otel-collector-service-defaults.hcl -``` - - - - -```shell-session -$ consul config write api-otel-collector-service-defaults.json -``` - - - - -```shell-session -$ kubectl apply -f api-otel-collector-service-defaults.yaml -``` - - - diff --git a/website/content/docs/connect/proxies/envoy-extensions/usage/property-override.mdx b/website/content/docs/connect/proxies/envoy-extensions/usage/property-override.mdx deleted file mode 100644 index 3c82d3c7c2ca..000000000000 --- a/website/content/docs/connect/proxies/envoy-extensions/usage/property-override.mdx +++ /dev/null @@ -1,219 +0,0 @@ ---- -layout: docs -page_title: Configure Envoy proxy properties -description: Learn how to use the property-override extension for Envoy proxies to set and remove individual properties for the Envoy resources Consul generates. ---- - -# Configure Envoy proxy properties - -This topic describes how to use the `property-override` extension to set and remove individual properties for the Envoy resources Consul generates. The extension uses the [protoreflect](https://pkg.go.dev/google.golang.org/protobuf/reflect/protoreflect), which enables Consul to dynamically manipulate messages. - -The extension currently supports setting scalar and enum fields, removing individual fields addressable by `Path`, and initializing unset intermediate message fields indicated in `Path`. - -It currently does _not_ support the following use cases: -- Adding, updating, or removing repeated field members -- Adding or updating [protobuf `map`](https://protobuf.dev/programming-guides/proto3/#maps) fields -- Adding or updating [protobuf `Any`](https://protobuf.dev/programming-guides/proto3/#any) fields - -## Workflow - -- Complete the following steps to use the `property-override` extension: -- Configure an `EnvoyExtensions` block in a service defaults or proxy defaults configuration entry. -- Apply the configuration entry. - -!> **Security warning**: The property override extension is an advanced feature capable of introducing unintended consequences or reducing cluster security if used incorrectly. Consul does not enforce TLS retention, intentions, or other security-critical components of the Envoy configuration. Additionally, Consul does not verify that the configuration does not contain errors that affect service traffic. - -## Add the `EnvoyExtensions` - -Add Envoy extension configurations to a proxy defaults or service defaults configuration entry. Place the extension configuration in an `EnvoyExtensions` block in the configuration entry. - -- When you configure Envoy extensions on proxy defaults, they apply to every service. -- When you configure Envoy extensions on service defaults, they apply to a specific service. - -Consul applies Envoy extensions configured in proxy defaults before it applies extensions in service defaults. As a result, the Envoy extension configuration in service defaults may override configurations in proxy defaults. 
- -In the following proxy defaults configuration entry example, Consul sets the `/respect_dns_ttl` field on the `api` service proxy's cluster configuration for the `other-svc` upstream service: - - - - - -```hcl -Kind = "service-defaults" -Name = "api" -Protocol = "http" -EnvoyExtensions = [ - { - Name = "builtin/property-override" - Arguments = { - ProxyType = "connect-proxy" - Patches = [ - { - ResourceFilter = { - ResourceType = "cluster" - TrafficDirection = "outbound" - Services = [{ - Name = "other-svc" - }] - } - Op = "add" - Path = "/respect_dns_ttl" - Value = true - } - ] - } - } -] -``` - - - - - - -```json -{ - "kind": "service-defaults", - "name": "api", - "protocol": "http", - "envoyExtensions": [{ - "name": "builtin/property-override", - "arguments": { - "proxyType": "connect-proxy", - "patches": [{ - "resourceFilter": { - "resourceType": "cluster", - "trafficDirection": "outbound", - "services": [{ "name": "other-svc" }] - }, - "op": "add", - "path": "/respect_dns_ttl", - "value": true - }] - } - }] -} -``` - - - - - -```yaml -apiversion: consul.hashicorp.com/v1alpha1 -kind: ServiceDefaults -metadata: - name: api -spec: - protocol: http - envoyExtensions: - name = "builtin/property-override" - arguments: - proxyType: "connect-proxy", - patches: - - resourceFilter: - resourceType: "cluster" - trafficDirection: "outbound" - services: - - name: "other-svc" - op: "add" - path: "/respect_dns_ttl", - value: true -``` - - - - - - -Refer to the [property override configuration reference](/consul/docs/connect/proxies/envoy-extensions/configuration/property-override) for details on how to configure the extension. - -Refer to the [proxy defaults configuration entry reference](/consul/docs/connect/config-entries/proxy-defaults) and [service defaults configuration entry reference](/consul/docs/connect/config-entries/service-defaults) for details on how to define the configuration entries. - -!> **Warning:** Adding Envoy extensions default proxy configurations may have unintended consequences. We recommend configuring `EnvoyExtensions` in service defaults configuration entries in most cases. - -### Constructing paths - -To target the properties for an Envoy resource type, you must specify the path where the properties exist in the [`Path` field](/consul/docs/connect/proxies/envoy-extensions/configuration/property-override#patches-path) of the property override extension configuration. Set the `Path` field to an empty or partially invalid string when saving the configuration entry and Consul returns an error with a list of supported fields for the first unrecognized segment of the path. By default, Consul only returns the first ten fields, but you can set the [`Debug` field](/consul/docs/connect/proxies/envoy-extensions/configuration/property-override#debug) to `true` to direct Consul to output all possible fields. 
- -In the following example, Consul outputs the top-level fields available for the Envoy cluster resource: - -```hcl -Kind = "service-defaults" -Name = "api" -EnvoyExtensions = [ - { - Name = "builtin/property-override" - Arguments = { - Debug = true - ProxyType = "connect-proxy" - Patches = [ - { - ResourceFilter = { - ResourceType = "cluster" - TrafficDirection = "outbound" - } - Op = "add" - Path = "" - Value = 5 - } - ] - } - } -] -``` - -After applying the configuration entry, Consul prints a message that includes the possible fields for the resource: - -```shell-session -$ consul config write api.hcl -non-empty, non-root Path is required; -available envoy.config.cluster.v3.Cluster fields: -transport_socket_matches -name -alt_stat_name -type -cluster_type -eds_cluster_config -connect_timeout -... -``` - -You can use the output to help you construct the appropriate value for the `Path` field. For example: - -```shell-session -$ consul config write api.hcl 2>&1 | grep round_robin -round_robin_lb_config -``` - - - - -## Apply the configuration entry - -If your network is deployed to virtual machines, use the `consul config write` command and specify the proxy defaults or service defaults configuration entry to apply the configuration. For Kubernetes-orchestrated networks, use the `kubectl apply` command. The following example applies the extension in a proxy defaults configuration entry. - - - - -```shell-session -$ consul config write property-override-extension-service-defaults.hcl -``` - - - - -```shell-session -$ consul config write property-override-extension-service-defaults.json - -``` - - - - -```shell-session -$ kubectl apply property-override-extension-service-defaults.yaml -``` - - - diff --git a/website/content/docs/connect/proxies/envoy-extensions/usage/wasm.mdx b/website/content/docs/connect/proxies/envoy-extensions/usage/wasm.mdx deleted file mode 100644 index 5f5b371e7362..000000000000 --- a/website/content/docs/connect/proxies/envoy-extensions/usage/wasm.mdx +++ /dev/null @@ -1,194 +0,0 @@ ---- -layout: docs -page_title: Run WebAssembly plug-ins in Envoy proxy -description: Learn how to use the Consul wasm extension for Envoy, which directs Consul to run your WebAssembly (Wasm) plugins for Envoy proxies in your service mesh. ---- - - -# Run WebAssembly plug-ins in Envoy proxy - -This topic describes how to use the `wasm` extension, which directs Consul to run your WebAssembly (Wasm) plug-ins for Envoy proxies. - -## Workflow - -You can create Wasm plugins for Envoy and integrate them using the `wasm` extension. Wasm is a binary instruction format for stack-based virtual machines that has the potential to run anywhere after it has been compiled. Wasm plug-ins run as filters in a service mesh application's sidecar proxy. - -The following steps describe the process of integrating Wasm plugins: - -- Create your Wasm plugin. You must ensure that your plugin functions as expected. Refer to the [WebAssembly website](https://webassembly.org/) for information and links to documentation. -- Configure an `EnvoyExtensions` block in a service defaults or proxy defaults configuration entry. -- Apply the configuration entry. - -## Add the `EnvoyExtensions` - -Add Envoy extension configuration to a proxy defaults or service defaults configuration entry. Place the extension configuration in an `EnvoyExtensions` block in the configuration entry. - -- When you configure Envoy extensions on proxy defaults, they apply to every service. 
-- When you configure Envoy extensions on service defaults, they apply to a specific service. - -Consul applies Envoy extensions configured in proxy defaults before it applies extensions in service defaults. As a result, the Envoy extension configuration in service defaults may override configurations in proxy defaults. - -In the following example, the extension uses an upstream service named `file-server` to serve a Wasm-based web application firewall (WAF). - - - - - -```hcl -Kind = "service-defaults" -Name = "api" -Protocol = "http" -EnvoyExtensions = [ - { - Name = "builtin/wasm" - Arguments = { - Protocol = "http" - ListenerType = "inbound" - PluginConfig = { - VmConfig = { - Code = { - Remote = { - HttpURI = { - Service = { - Name = "file-server" - } - URI = "https://file-server/waf.wasm" - } - SHA256 = "c9ef17f48dcf0738b912111646de6d30575718ce16c0cbde3e38b21bb1771807" - } - } - } - Configuration = < - - - - -```json -{ - "kind": "service-defaults", - "name": "api", - "protocol": "http", - "envoyExtensions": [{ - "name": "builtin/wasm", - "arguments": { - "protocol": "http", - "listenerType": "inbound", - "pluginConfig": { - "VmConfig": { - "Code": { - "Remote": { - "HttpURI": { - "Service": { - "Name": "file-server" - }, - "URI": "https://file-server/waf.wasm" - } - } - } - }, - "Configuration": { - "rules": [ - "Include @demo-conf", - "Include @crs-setup-demo-conf", - "SecDebugLogLevel 9", - "SecRuleEngine On", - "Include @owasp_crs/*.conf" - ] - } - - } - } - }] -} -``` - - - - - - - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: ServiceDefaults -metadata: - name: api -spec: - protocol: http - envoyExtensions: - - name: builtin/wasm - required: true - arguments: - protocol: http - listenerType: inbound - pluginConfig: - VmConfig: - Code: - Remote: - HttpURI: - Service: - Name: file-server - URI: https://file-server/waf.wasm - Configuration: - rules: - - Include @demo-conf - - Include @crs-setup-demo-conf - - SecDebugLogLevel 9 - - SecRuleEngine On - - Include @owasp_crs/*.conf -``` - - - - - - -Refer to the [Wasm extension configuration reference](/consul/docs/connect/proxies/envoy-extensions/configuration/wasm) for details on how to configure the extension. - -Refer to the [proxy defaults configuration entry reference](/consul/docs/connect/config-entries/proxy-defaults) and [service defaults configuration entry reference](/consul/docs/connect/config-entries/service-defaults) for details on how to define the configuration entries. - -!> **Warning:** Adding Envoy extensions default proxy configurations may have unintended consequences. We recommend configuring `EnvoyExtensions` in service defaults configuration entries in most cases. - -## Apply the configuration entry - -If your network is deployed to virtual machines, use the `consul config write` command and specify the proxy defaults or service defaults configuration entry to apply the configuration. For Kubernetes-orchestrated networks, use the `kubectl apply` command. The following example applies the extension in a proxy defaults configuration entry. 
- - - - -```shell-session -$ consul config write wasm-extension-serve-waf.hcl -``` - - - - -```shell-session -$ consul config write wasm-extension-serve-waf.json -``` - - - - -```shell-session -$ kubectl apply wasm-extension-serve-waf.yaml -``` - - - diff --git a/website/content/docs/connect/proxies/envoy.mdx b/website/content/docs/connect/proxies/envoy.mdx deleted file mode 100644 index 927bac2fcdf8..000000000000 --- a/website/content/docs/connect/proxies/envoy.mdx +++ /dev/null @@ -1,1216 +0,0 @@ ---- -layout: docs -page_title: Envoy Proxy Configuration | Service Mesh -description: >- - Consul supports Envoy proxies to direct traffic throughout the service mesh. Learn about Consul versions and their Envoy support, and use the reference guide to review options for bootstrap configuration, dynamic configuration, and advanced topics like escape hatch overrides. ---- - -# Envoy Proxy Configuration for Service Mesh - -Consul service mesh has first class support for using -[Envoy](https://www.envoyproxy.io) as a proxy. Consul configures Envoy by -optionally exposing a gRPC service on the local agent that serves [Envoy's xDS -configuration -API](https://www.envoyproxy.io/docs/envoy/v1.17.2/api-docs/xds_protocol). - -Consul can configure Envoy sidecars to proxy traffic over the following protocols: - -| Protocol | Service network support | -| ----------------------- | ----------------------- | -| HTTP/1.1 | L7 | -| HTTP2 | L7 | -| gRPC | L7 | -| All TCP-based protocols | L4 | - -On Consul 1.5.0 and older, Envoy proxies can only proxy TCP traffic at L4. - -You can configure some [L7 features](/consul/docs/connect/manage-traffic) in [configuration entries](/consul/docs/agent/config-entries). You can add [custom Envoy configurations](#advanced-configuration) to the [proxy service definition](/consul/docs/connect/proxies/proxy-config-reference), which enables you to leverage Envoy features that are not exposed through configuration entries. You can also use the [Consul Envoy extensions](/consul/docs/connect/proxies/envoy-extensions) to implement Envoy features. - -~> **Note:** When using Envoy with Consul and not using the [`consul connect envoy` command](/consul/commands/connect/envoy) -Envoy must be run with the `--max-obj-name-len` option set to `256` or greater for Envoy versions prior to 1.11.0. - -## Supported Versions - -The following matrix describes Envoy compatibility for the -currently supported major Consul releases: -- The latest (N) release of Consul community edition (CE) and Enterprise -- The 2 preceding major releases (N-1 and N-2) of Consul Enterprise -- The 2 latest [Consul Enterprise LTS](/consul/docs/enterprise/long-term-support) major releases - -For previous Consul version compatibility, -refer to the previous release's version of this page. - -### Envoy and Consul Client Agent - -Every major Consul release initially supports **four major Envoy releases**. -However, [Consul Enterprise Long Term Support (LTS)](/consul/docs/enterprise/long-term-support) -releases expand their Envoy version compatibility window in minor releases to -ensure compatibility with a maintained Envoy version. Standard (non-LTS) Consul -Enterprise releases may also expand support to a new major version of Envoy in -order to receive important security fixes, if the previous major Envoy version -has reached end-of-life. - -Every major Consul release maintains and tests compatibility with specific Envoy -patch releases to ensure users can benefit from bug and security fixes in Envoy. 
- -#### Standard releases - -Unless otherwise noted, rows in the following compatibility table -apply to both Consul Enterprise and Consul community edition (CE). - -| Consul Version | Compatible Envoy Versions | -| -------------- | -------------------------------------- | -| 1.20.x CE | 1.31.x, 1.30.x, 1.29.x, 1.28.x | -| 1.19.x CE | 1.29.x, 1.28.x, 1.27.x, 1.26.x | -| 1.18.x CE | 1.28.x, 1.27.x, 1.26.x, 1.25.x | -| 1.17.x | 1.27.x, 1.26.x, 1.25.x, 1.24.x | -| 1.16.x | 1.26.x, 1.25.x, 1.24.x, 1.23.x | - -#### Enterprise Long Term Support releases - -Active Consul Enterprise -[Long Term Support (LTS)](/consul/docs/enterprise/long-term-support) -releases expand their Envoy version compatibility window -until the LTS release reaches its end of maintenance. - -| Consul Version | Compatible Envoy Versions | -| -------------- | -----------------------------------------------------------------------------------| -| 1.18.x Ent | 1.29.x, 1.28.x, 1.27.x, 1.26.x, 1.25.x | -| 1.15.x Ent | 1.29.x, 1.28.x, 1.27.x, 1.26.x, 1.25.x, 1.24.x, 1.23.x, 1.22.x | - -### Envoy and Consul Dataplane - -The Consul dataplane component was introduced in Consul v1.14 -as a way to manage Envoy proxies without the use of Consul clients. - -Each major version of Consul is released with a new major version of Consul dataplane, -which packages both Envoy and the `consul-dataplane` binary in a single container image. -To enable seamless upgrades, each major version of Consul also supports -the previous and next Consul dataplane versions. - -Compared to community edition releases, Consul Enterprise releases have -the following differences with Consul dataplane compatibility: -- [LTS-Only: Expanded compatibility window](#enterprise-long-term-support-releases): - Active Consul Enterprise LTS releases expand their Consul dataplane - version compatibility window to include the version of Consul dataplane - aligned with the next Consul LTS release. -- [Maintained Envoy version](#consul-dataplane-releases-that-span-envoy-major-versions): - Major versions of Consul dataplane aligned with a maintained Consul - Enterprise version may contain minor version updates that use a new - major version of Envoy. These minor version updates are necessary to - ensure that maintained versions of Consul dataplane use a maintained - version of Envoy. - -#### Standard releases - -Unless otherwise noted, rows in the following compatibility table -apply to both Consul Enterprise and Consul community edition (CE). - -| Consul Version | Default `consul-dataplane` Version | Other compatible `consul-dataplane` Versions | -| -------------- | -------------------------------------|----------------------------------------------| -| 1.20.x CE | 1.6.x (Envoy 1.31.x) | 1.5.x (Envoy 1.29.x) | -| 1.19.x CE | 1.5.x (Envoy 1.29.x) | 1.4.x (Envoy 1.28.x) | -| 1.18.x CE | 1.4.x (Envoy 1.28.x) | 1.3.x (Envoy 1.27.x) | -| 1.17.x | 1.3.x (Envoy 1.27.x) | 1.4.x (Envoy 1.28.x), 1.2.x (Envoy 1.26.x) | -| 1.16.x | 1.2.x (Envoy 1.26.x) | 1.3.x (Envoy 1.27.x), 1.1.x (Envoy 1.25.x) | - -#### Enterprise Long Term Support releases - -Active Consul Enterprise -[Long Term Support (LTS)](/consul/docs/enterprise/long-term-support) -releases expand their Envoy version compatibility window -until the LTS release reaches its end of maintenance. 
- -| Consul Version | Default `consul-dataplane` Version | Other compatible `consul-dataplane` Versions | -| -------------- | -------------------------------------|----------------------------------------------| -| 1.18.x Ent | 1.4.x (Envoy 1.28.x) | 1.3.x (Envoy 1.27.x) | -| 1.15.x Ent | 1.1.x (Envoy 1.26.x) | 1.4.x (Envoy 1.28.x) - 1.0.x (Envoy 1.24.x) | - -#### Consul dataplane releases that span Envoy major versions - -Major versions of Consul dataplane aligned with active versions of Consul -may contain minor version updates that use a new major version of Envoy. -These minor version updates are necessary to ensure maintained versions -of Consul dataplane use a maintained version of Envoy including important -security fixes. - -| `consul-dataplane` Version Range | Associated Consul Enterprise version | Contained Envoy Binary Version | -| -------------------------------- | ---------------------------------------- | ------------------------------ | -| 1.1.11 - 1.1.latest | 1.15.x Ent | Envoy 1.27.x | -| 1.1.9 - 1.1.10 | 1.15.x Ent | Envoy 1.26.x | -| 1.1.0 - 1.1.8 | 1.15.x Ent | Envoy 1.25.x | - -## Getting Started - -To get started with Envoy and see a working example you can follow the [Using -Envoy with Consul service mesh](/consul/tutorials/developer-mesh/service-mesh-with-envoy-proxy?utm_source=docs) tutorial. - -## Configuration - -Envoy proxies require two types of configuration: an initial _bootstrap -configuration_ and a _dynamic configuration_ that is discovered from a "management -server", in this case Consul. - -The bootstrap configuration at a minimum needs to configure the proxy with an -identity (node id) and the location of its local Consul agent from which it -discovers all of its dynamic configuration. See [Bootstrap -Configuration](#bootstrap-configuration) for more details. - -The dynamic configuration Consul service mesh provides to each Envoy instance includes: - -- TLS certificates and keys to enable mutual authentication and keep certificates - rotating. -- [Intentions] to enforce service-to-service authorization rules. -- Service-discovery results for upstreams to enable each sidecar proxy to load-balance - outgoing connections. -- L7 configuration including timeouts and protocol-specific options. -- Configuration to [expose specific HTTP paths](/consul/docs/connect/proxies/proxy-config-reference#expose-paths-configuration-reference). - -For more information on the parts of the Envoy proxy runtime configuration -that are currently controllable via Consul service mesh, refer to [Dynamic -Configuration](#dynamic-configuration). - -We plan to enable more and more of Envoy's features through -Consul service mesh's first-class configuration over time, however some advanced users will -need additional control to configure Envoy in specific ways. To enable this, we -provide several ["escape hatch"](#advanced-configuration) options that allow -users to provide low-level raw Envoy config syntax for some sub-components in each -Envoy instance. This allows operators to have full control over and -responsibility for correctly configuring Envoy and ensuring version support etc. - -## Intention Enforcement - -[Intentions](/consul/docs/connect/intentions) are enforced using Envoy's RBAC filters. Depending on the -configured [protocol](/consul/docs/connect/config-entries/service-defaults#protocol) of the proxied service, intentions are either enforced -per-connection (L4) using a network filter, or per-request (L7) using an HTTP -filter. 
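-
-As a sketch of the kind of rule Consul translates into these filters, the following `service-intentions` configuration entry allows connections from `web` to `api`. The service names are illustrative only.
-
-```hcl
-Kind = "service-intentions"
-Name = "api"
-Sources = [
-  {
-    # Consul renders this rule into an Envoy RBAC filter on the destination proxy
-    Name   = "web"
-    Action = "allow"
-  }
-]
-```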
- --> **Note:** Prior to Consul 1.9.0 intentions were exclusively enforced -per-connection (L4) using an `ext_authz` network filter. - -## Fetching Certificates - -Envoy will use the [`CONSUL_HTTP_TOKEN`](/consul/commands#consul_http_token) and [`CONSUL_HTTP_ADDR`](/consul/commands#consul_http_addr) environment variables to contact Consul to fetch certificates if the following conditions are met: - -- The `CONSUL_HTTP_TOKEN` environment variable contains a Consul ACL token. -- The Consul ACL token has the necessary permissions to read configuration for that service. - -If TLS is enabled on Consul, you will also need to add the following environment variables _prior_ to starting Envoy: - -- [`CONSUL_CACERT`](/consul/commands#consul_cacert) -- [`CONSUL_CLIENT_CERT`](/consul/commands#consul_client_cert) -- [`CONSUL_CLIENT_KEY`](/consul/commands#consul_client_key) -- [`CONSUL_HTTP_SSL`](/consul/commands#consul_http_ssl) - -## Bootstrap Configuration - -Envoy requires an initial bootstrap configuration file. You can either create the file manually using the Consul command line or configure Consul Dataplane to generate the file. - -### Generate the bootstrap file on the Consul CLI - -Connect to a local Consul client agent and run the [`consul connect envoy` command](/consul/commands/connect/envoy) to create the Envoy bootstrap configuration. The command either outputs the bootstrap configuration directly to stdout or generates the configuration and issues an `exec` command to the Envoy binary as a convenience wrapper. For more information about using `exec` to bootstrap Envoy, refer to [Exec Security Details](/consul/commands/connect/envoy#exec-security-details). - -If you experience issues when bootstrapping Envoy proxies from the CLI, use the -`-enable-config-gen-logging` flag to enable debug message logging. These logs can -help you troubleshoot issues that occur during the bootstrapping process. -For more information about available flags and parameters, refer to the -[`consul connect envoy CLI` reference](/consul/commands/connect/envoy). - -### Generate the bootstrap file from Consul Dataplane - -Consul Dataplane automatically configures and manages an Envoy process. Consul Dataplane generates the Envoy bootstrap configuration file prior to starting Envoy. To configure how Consul Dataplane starts Envoy, refer to the [Consul Dataplane CLI reference](/consul/docs/connect/dataplane/consul-dataplane). - -### Control bootstrap configuration from proxy configuration - -Consul service mesh can control some parts of the bootstrap configuration by specifying Envoy proxy configuration options. - -Add the following configuration items to the [global `proxy-defaults` configuration -entry](/consul/docs/connect/config-entries/proxy-defaults) or override them directly in the `proxy.config` -field of a [proxy service definition](/consul/docs/proxies/proxy-config-reference). When -connected to a Consul client agent, you can place the configuration in the `proxy.config` field of -the [`sidecar_service`](/consul/docs/connect/proxies/deploy-sidecar-services) block. - -- `envoy_statsd_url` - A URL in the form `udp://ip:port` identifying a UDP - StatsD listener that Envoy should deliver metrics to. For example, this may be - `udp://127.0.0.1:8125` if every host has a local StatsD listener. In this case - users can configure this property once in the [global `proxy-defaults` - configuration entry](/consul/docs/connect/config-entries/proxy-defaults) for convenience. Currently, TCP is not supported. 
-
-  ~> **Note:** currently the url **must use an ip address** not a dns name due
-  to the way Envoy is set up for StatsD.
-
-  Expansion of the environment variable `HOST_IP` is supported, e.g.
-  `udp://${HOST_IP}:8125`.
-
-  Users can also specify the whole parameter in the form `$ENV_VAR_NAME`, which
-  will cause the `consul connect envoy` command to resolve the actual URL from
-  the named environment variable when it runs. This, for example, allows each
-  pod in a Kubernetes cluster to learn of a pod-specific IP address for StatsD
-  when the Envoy instance is bootstrapped while still allowing global
-  configuration of all proxies to use StatsD in the [global `proxy-defaults`
-  configuration entry](/consul/docs/connect/config-entries/proxy-defaults). The env
-  variable must contain a full valid URL value as specified above and nothing else.
-
-- `envoy_dogstatsd_url` - The same as `envoy_statsd_url` with the following
-  differences in behavior:
-
-  - Envoy will use dogstatsd tags instead of statsd dot-separated metric names.
-  - As well as `udp://`, a `unix://` URL may be specified if your agent can
-    listen on a unix socket (e.g. the dogstatsd agent).
-
-- `envoy_prometheus_bind_addr` - Specifies that the proxy should expose a Prometheus
-  metrics endpoint to the _public_ network. It must be supplied in the form
-  `ip:port`, and the ip/port combination must be free within the network
-  namespace the proxy runs in. Typically the IP would be `0.0.0.0` to bind to all
-  available interfaces or a pod IP address.
-
-  -> **Note:** Envoy versions prior to 1.10 do not export timing histograms
-  using the internal Prometheus endpoint.
-
-- `envoy_stats_bind_addr` - Specifies that the proxy should expose the /stats prefix
-  to the _public_ network. It must be supplied in the form `ip:port`, and
-  the ip/port combination must be free within the network namespace the proxy runs in.
-  Typically the IP would be `0.0.0.0` to bind to all available interfaces or a pod IP address.
-
-- `envoy_stats_tags` - Specifies one or more static tags that will be added to
-  all metrics produced by the proxy.
-
-- `envoy_stats_flush_interval` - Configures Envoy's
-  [`stats_flush_interval`](https://www.envoyproxy.io/docs/envoy/v1.17.2/api-v3/config/bootstrap/v3/bootstrap.proto#envoy-v3-api-field-config-bootstrap-v3-bootstrap-stats-flush-interval).
-
-- `envoy_telemetry_collector_bind_socket_dir` - Specifies the directory where Envoy creates a Unix socket.
-  Envoy sends metrics to the socket where a Consul telemetry collector can collect them.
-  The socket is not configured by default.
-  Enabling this sets Envoy's [`stats_flush_interval`](https://www.envoyproxy.io/docs/envoy/v1.17.2/api-v3/config/bootstrap/v3/bootstrap.proto#envoy-v3-api-field-config-bootstrap-v3-bootstrap-stats-flush-interval) to one minute if `envoy_stats_flush_interval` is unset and no other stats sinks, such as `envoy_dogstatsd_url`, are configured.
-
-The [Advanced Configuration](#advanced-configuration) section describes additional configurations that allow incremental or complete control over the bootstrap configuration generated.
-
-### Bootstrap Envoy on Windows VMs
-
-> Complete the [Connect Services on Windows Workloads to Consul Service Mesh tutorial](/consul/tutorials/developer-mesh/consul-windows-workloads) to learn how to deploy Consul and use its service mesh on Windows VMs.
- -If you are running Consul on a Windows VM, attempting to bootstrap Envoy with the `consul connect envoy` command returns the following output: - -```shell-session hideClipboard -Directly running Envoy is only supported on linux and macOS since envoy itself doesn't build on other platforms currently. -Use the -bootstrap option to generate the JSON to use when running envoy on a supported OS or via a container or VM. -``` - -To bootstrap Envoy on Windows VMs, you must generate the bootstrap configuration as a .json file and then manually edit it to add both your ACL token and a valid access log path. - -To generate the bootstrap configuration file, add the `-bootstrap` option to the command and then save the output to a file: - -```shell-session -$ consul connect envoy -bootstrap > bootstrap.json -``` - -Then, open `bootstrap.json` and update the following sections with your ACL token and log path. - - - -```json -{ - "admin": { - "access_log": [ - { - "name": "envoy.access_loggers.file", - "typed_config": { - "@type": "type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog", - "path": "/dev/null" - } - } - ], - "address": { - "socket_address": { - "address": "127.0.0.1", - "port_value": 19000 - } - } - }, - ## ... - "dynamic_resources": { - ## ... - "ads_config": { - ## ... - "grpc_services": { - "initial_metadata": [ - { - "key": "x-consul-token", - "value": "" - } - ], - ## ... - } - } - } -} -``` - - -To complete the bootstrap process, start Envoy and include the path to `bootstrap.json`: - -```shell-session -$ envoy -c bootstrap.json -``` - -~> **Security Note**: The bootstrap JSON contains the ACL token and should be handled as a secret. Because this token authorizes the identity of any service it has `service:write` permissions for, it can be used to access upstream services. - -## Dynamic Configuration - -Consul automatically generates Envoy's dynamic configuration based on its -knowledge of the cluster. Users may specify default configuration options for -a service through the available fields in the [`service-defaults` configuration -entry](/consul/docs/connect/config-entries/service-defaults). Consul will use this -information to configure appropriate proxy settings for that service's proxies -and also for the upstream listeners used by the service. - -One example is how users can define a service's protocol in the `Protocol` field of [`service-defaults` configuration -entry](/consul/docs/connect/config-entries/service-defaults). Agents with -[`enable_central_service_config`](/consul/docs/agent/config/config-files#enable_central_service_config) -set to true will automatically discover the protocol when configuring a proxy -for a service. The proxy will discover the main protocol of the service it -represents and use this to configure its main public listener. It will also -discover the protocols defined for any of its upstream services and -automatically configure its upstream listeners appropriately too as below. - -This automated discovery results in Consul auto-populating the `proxy.config` -and `proxy.upstreams[*].config` fields of the [proxy service -definition](/consul/docs/connect/proxies/proxy-config-reference) that is -actually registered. - -To learn about other options that can be configured centrally see the -[Configuration Entries](/consul/docs/agent/config-entries) docs. 
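-
-As a minimal sketch of this centralized configuration, the following `service-defaults` entry (with an example service name) declares that the `billing` service speaks HTTP. Agents with central service config enabled discover this protocol and use it to configure the service's public listener and any upstream listeners that point at it.
-
-```hcl
-Kind     = "service-defaults"
-Name     = "billing"
-# Consul uses this value when generating the proxy's Envoy listener configuration
-Protocol = "http"
-```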
- -### Proxy Config Options - -These fields may also be overridden explicitly in `proxy.config` of the [proxy service -definition](/consul/docs/connect/proxies/proxy-config-reference), or defined in -the [global `proxy-defaults` configuration -entry](/consul/docs/connect/config-entries/proxy-defaults) to act as -defaults that are inherited by all services. - -- `protocol` - The protocol the service speaks. Consul service mesh's Envoy integration - currently supports the following `protocol` values: - - - `tcp` - Unless otherwise specified this is the default, which causes Envoy - to proxy at L4. This provides all the security benefits of the service mesh's mTLS - and works for any TCP-based protocol. Load-balancing and metrics are - available at the connection level. - - `http` - This specifies that the service speaks HTTP/1.x. Envoy will setup an - `http_connection_manager` and will be able to load-balance requests - individually to available upstream services. Envoy will also emit L7 metrics - such as request rates broken down by HTTP response code family (2xx, 4xx, 5xx, - etc). - - `http2` - This specifies that the service speaks http2 (specifically h2c since - Envoy will still only connect to the local service instance via plain TCP not - TLS). This behaves much like `http` with L7 load-balancing and metrics but has - additional settings that correctly enable end-to-end http2. - - `grpc` - gRPC is a common RPC protocol based on http2. In addition to the - http2 support above, Envoy listeners will be configured with a - [gRPC bridge - filter](https://www.envoyproxy.io/docs/envoy/v1.17.2/configuration/http/http_filters/grpc_http1_bridge_filter) - that translates HTTP/1.1 calls into gRPC, and instruments - metrics with `gRPC-status` trailer codes. - - ~> **Note:** The protocol of a service should ideally be configured via the - [`protocol`](/consul/docs/connect/config-entries/service-defaults#protocol) - field of a - [`service-defaults`](/consul/docs/connect/config-entries/service-defaults) - config entry for the service. Configuring it in a - proxy config will not fully enable some [L7 - features](/consul/docs/connect/manage-traffic). - It is supported here for backwards compatibility with Consul versions prior to 1.6.0. - -- `bind_address` - Override the address Envoy's public listener binds to. By - default Envoy will bind to the service address or 0.0.0.0 if there is not explicit address on the service registration. - -- `bind_port` - Override the port Envoy's public listener binds to. By default - Envoy will bind to the service port. - -- `local_connect_timeout_ms` - The number of milliseconds allowed to make - connections to the local application instance before timing out. Defaults to 5000 - (5 seconds). - -- `local_request_timeout_ms` - In milliseconds, the request timeout for HTTP requests - to the local application instance. Applies to HTTP based protocols only. If not - specified, inherits the Envoy default for route timeouts (15s). A value of 0 will - disable request timeouts. - -- `local_idle_timeout_ms` - In milliseconds, the idle timeout for HTTP requests - to the local application instance. Applies to HTTP based protocols only. If not - specified, inherits the Envoy default for route idle timeouts (15s). A value of 0 - disables request timeouts. - -- `max_inbound_connections` - The maximum number of concurrent inbound connections - to the local application instance. If not specified, inherits the Envoy default (1024). 
- -- `balance_inbound_connections` - The strategy used for balancing inbound connections - across Envoy worker threads. Consul service mesh Envoy integration supports the - following `balance_inbound_connections` values: - - - `""` - Empty string (default). No connection balancing strategy is used. Consul does not balance inbound connections. - - `exact_balance` - Inbound connections to the service use the - [Envoy Exact Balance Strategy.](https://cloudnative.to/envoy/api-v3/config/listener/v3/listener.proto.html#config-listener-v3-listener-connectionbalanceconfig-exactbalance) - -- `xds_fetch_timeout_ms` - In milliseconds, the amount of time for Envoy to wait for EDS and RDS configuration before timing out. If not specified, this field uses Envoy's default value of `15000`, or 15 seconds. When an Envoy instance is configured with a large number of upstreams that take a significant amount of time to populate with data, setting this field to a higher value may prevent temporary disruption caused by unexpected timeouts. - -### Proxy Upstream Config Options - -The following configuration items may be overridden directly in the -`proxy.upstreams[].config` field of a [proxy service -definition](/consul/docs/connect/proxies/proxy-config-reference) or -[`sidecar_service`](/consul/docs/connect/proxies/deploy-sidecar-services) block. - -- `protocol` - Same as above in main config but affects the listener setup for - the upstream. - - ~> **Note:** The protocol of a service should ideally be configured via the - [`protocol`](/consul/docs/connect/config-entries/service-defaults#protocol) - field of a - [`service-defaults`](/consul/docs/connect/config-entries/service-defaults) - config entry for the upstream destination service. Configuring it in a - proxy upstream config will not fully enable some [L7 - features](/consul/docs/connect/manage-traffic). - It is supported here for backwards compatibility with Consul versions prior to 1.6.0. - -- `connect_timeout_ms` - The number of milliseconds to allow when making upstream - connections before timing out. Defaults to 5000 - (5 seconds). - - ~> **Note:** The connection timeout for a service should ideally be - configured via the - [`connect_timeout`](/consul/docs/connect/config-entries/service-resolver#connecttimeout) - field of a - [`service-resolver`](/consul/docs/connect/config-entries/service-resolver) - config entry for the upstream destination service. Configuring it in a - proxy upstream config will override any values defined in config entries. - It is supported here for backwards compatibility with Consul versions prior to 1.6.0. - -- `limits` - A set of limits to apply when connecting to the upstream service. - These limits are applied on a per-service-instance basis. The following - limits are respected: - - - `max_connections` - The maximum number of connections a service instance - will be allowed to establish against the given upstream. Use this to limit - HTTP/1.1 traffic, since HTTP/1.1 has a request per connection. - - `max_pending_requests` - The maximum number of requests that will be queued - while waiting for a connection to be established. For this configuration to - be respected, a L7 protocol must be defined in the `protocol` field. - - `max_concurrent_requests` - The maximum number of concurrent requests that - will be allowed at a single point in time. Use this to limit HTTP/2 traffic, - since HTTP/2 has many requests per connection. For this configuration to be - respected, a L7 protocol must be defined in the `protocol` field. 
-
-- `passive_health_check` - Passive health checks remove hosts from the upstream
-  cluster that are unreachable or that return errors.
-
-  - `interval` - The time in nanoseconds between checks. Each check will cause
-    hosts that have exceeded `max_failures` to be removed from the load
-    balancer, and any hosts that have passed their ejection time to be
-    returned to the load balancer. If not specified, it uses the proxy's default value.
-    For example, 10s for Envoy proxy.
-  - `max_failures` - The number of consecutive failures that cause a host to be
-    removed from the upstream cluster. If not specified, Consul uses the proxy's
-    default value. For example, `5` for Envoy proxy.
-  - `enforcing_consecutive_5xx` - A percentage representing the chance that a
-    host will actually be ejected when the proxy detects an outlier status
-    through consecutive errors in the 500 code range. If not specified, Consul
-    uses the proxy's default value. For example, `100` for Envoy proxy.
-  - `max_ejection_percent` - The maximum percentage of hosts that can be ejected
-    from an upstream cluster due to passive health check failures. If not specified,
-    inherits Envoy's default of 10% or at least one host.
-  - `base_ejection_time` - The base time that a host is ejected for. The real
-    time is equal to the base time multiplied by the number of times the host has
-    been ejected and is capped by max_ejection_time (default 300s). If not
-    specified, inherits Envoy's default value of 30s.
-
-- `balance_outbound_connections` - Specifies the strategy for balancing outbound connections
-  across Envoy worker threads. Consul service mesh Envoy integration supports the
-  following `balance_outbound_connections` values:
-
-  - `""` - Empty string (default). No connection balancing strategy is used. Consul does not balance outbound connections.
-  - `exact_balance` - Outbound connections from the upstream use the
-    [Envoy Exact Balance Strategy](https://cloudnative.to/envoy/api-v3/config/listener/v3/listener.proto.html#config-listener-v3-listener-connectionbalanceconfig-exactbalance).
-
-### Gateway Options
-
-These fields may also be overridden explicitly in the [proxy service
-definition](/consul/docs/connect/proxies/proxy-config-reference), or defined in
-the [global `proxy-defaults` configuration
-entry](/consul/docs/connect/config-entries/proxy-defaults) to act as
-defaults that are inherited by all services.
-
-Prior to 1.8.0, these settings were specific to mesh gateways. The deprecated
-names such as `envoy_mesh_gateway_bind_addresses` and `envoy_mesh_gateway_no_default_bind`
-will continue to be supported.
-
-- `connect_timeout_ms` - The number of milliseconds to allow when making upstream
-  connections before timing out. Defaults to 5000 (5 seconds). If the upstream
-  service has the configuration option
-  [`connect_timeout_ms`](/consul/docs/connect/config-entries/service-resolver#connecttimeout)
-  set for the `service-resolver`, that timeout value takes precedence over
-  this gateway option.
-
-- `envoy_gateway_bind_tagged_addresses` - Indicates that the gateway
-  service's tagged addresses should be bound to listeners in addition to the
-  default listener address.
-
-- `envoy_gateway_bind_addresses` - A map of additional addresses to be bound.
-  This map's keys are the names of the listeners to be created and the values are
-  a map with two keys, address and port, that combined make the address to bind the
-  listener to. These are bound in addition to the default address.
- -- `envoy_gateway_no_default_bind` - Prevents binding to the default address - of the gateway service. This should be used with one of the other options - to configure the gateway's bind addresses. - -- `envoy_dns_discovery_type` - Determines how Envoy will resolve hostnames. Defaults to `LOGICAL_DNS`. - Must be one of `STRICT_DNS` or `LOGICAL_DNS`. Details for each type are available in - the [Envoy documentation](https://www.envoyproxy.io/docs/envoy/v1.17.2/intro/arch_overview/upstream/service_discovery). - This option applies to terminating gateways that route to services - addressed by a hostname, such as a managed database. It also applies to mesh gateways, - such as when gateways in other Consul datacenters are behind a load balancer that is addressed by a hostname. - -- `envoy_gateway_remote_tcp_enable_keepalive` - Enables TCP keepalive settings on remote - upstream connections for mesh and terminating gateways. Defaults to `false`. Must be one - of `true` or `false`. Details for this feature are available in the - [Envoy documentation](https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/core/v3/address.proto#envoy-v3-api-msg-config-core-v3-tcpkeepalive). - -- `envoy_gateway_remote_tcp_keepalive_time` - The number of seconds a connection needs to - be idle before keep-alive probes start being sent. For more information, see the - [Envoy documentation](https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/core/v3/address.proto#envoy-v3-api-msg-config-core-v3-tcpkeepalive). - This option only applies to remote upstream connections for mesh and terminating gateways. - -- `envoy_gateway_remote_tcp_keepalive_interval` - The number of seconds between keep-alive probes. - For more information, see the - [Envoy documentation](https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/core/v3/address.proto#envoy-v3-api-msg-config-core-v3-tcpkeepalive). - This option only applies to remote upstream connections for mesh and terminating gateways. - -- `envoy_gateway_remote_tcp_keepalive_probes` - Maximum number of keepalive probes to send without - response before deciding the connection is dead. For more information, see the - [Envoy documentation](https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/core/v3/address.proto#envoy-v3-api-msg-config-core-v3-tcpkeepalive). - This option only applies to remote upstream connections for mesh and terminating gateways. - -## Advanced Configuration - -To support more flexibility when configuring Envoy, several "lower-level" options exist -that require knowledge of Envoy's configuration format. -Many options allow configuring a subsection of either the bootstrap or -dynamic configuration using your own custom protobuf config. - -We separate these into two sets, [Advanced Bootstrap -Options](#advanced-bootstrap-options) and [Escape Hatch -Overrides](#escape-hatch-overrides). Both require writing Envoy config in the -protobuf JSON encoding. Advanced options cover smaller chunks that might -commonly need to be set for tasks like configuring tracing. In contrast, escape hatches -give almost complete control over the proxy setup, but require operators to -manually code the entire configuration in protobuf JSON. - -~> **Advanced Topic!** This section covers options that allow users to take almost -complete control of Envoy's configuration. We provide these options so users can -experiment or take advantage of features not yet fully supported in Consul service mesh. 
We -plan to retain this ability in the future, but it should still be considered -experimental because it requires in-depth knowledge of Envoy's configuration format. -Users should consider Envoy version compatibility when using these features because they can configure Envoy in ways that -are outside of Consul's control. Incorrect configuration could prevent all -proxies in your mesh from functioning correctly, or bypass the security -guarantees Consul service mesh is designed to enforce. - -### Configuration Formatting - -All configurations are specified as strings containing the serialized proto3 JSON encoding -of the specified Envoy configuration type. They are full JSON types except where -noted. - -The JSON supplied may describe a protobuf `types.Any` message with an `@type` -field set to the appropriate type (for example -`type.googleapis.com/envoy.config.listener.v3.Listener`). - -For example, given a tracing config: - - - -```json -{ - "http": { - "name": "envoy.tracers.zipkin", - "typedConfig": { - "@type": "type.googleapis.com/envoy.config.trace.v3.ZipkinConfig", - "collector_cluster": "zipkin", - "collector_endpoint_version": "HTTP_JSON", - "collector_endpoint": "/api/v1/spans", - "shared_span_context": false - } - } -} -``` - - - -JSON escape the value of `tracing` into a string, for example using [https://codebeautify.org/json-escape-unescape](https://codebeautify.org/json-escape-unescape), -or using [jq](https://stedolan.github.io/jq/). - -```shell -$ cat < - - ```json - { - "name": "local-service-cluster", - "load_assignment": { - "cluster_name": "local-service-cluster", - "endpoints": [ - { - "lb_endpoints": [ - { - "endpoint": { - "address": { - "socket_address": { - "address": "127.0.0.1", - "port_value": 32769 - } - } - } - } - ] - } - ] - } - } - ``` - - - -- `envoy_extra_static_listeners_json` - Similar to - `envoy_extra_static_clusters_json` but appends one or more [Envoy listeners](https://www.envoyproxy.io/docs/envoy/v1.17.2/api-v3/config/listener/v3/listener.proto) to the array of [static - listener](https://www.envoyproxy.io/docs/envoy/v1.17.2/api-v3/config/bootstrap/v3/bootstrap.proto#envoy-v3-api-field-config-bootstrap-v3-bootstrap-staticresources-listeners) definitions. - Can be used to setup limited access that bypasses the service mesh's mTLS or - authorization for health checks or metrics. - - - - ```json - { - "name": "test_envoy_mtls_bypass_listener", - "address": { - "socket_address": { - "address": "0.0.0.0", - "port_value": 20201 - } - }, - "filter_chains": [ - { - "filters": [ - { - "name": "envoy.filters.network.http_connection_manager", - "typedConfig": { - "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager", - "http_filters": [ - { - "name": "envoy.filters.http.router" - } - ], - "route_config": { - "name": "self_admin_route", - "virtual_hosts": [ - { - "name": "self_admin", - "domains": [ - "*" - ], - "routes": [ - { - "match": { - "path": "/" - }, - "route": { - "cluster": "local-service-cluster" - } - } - ] - } - ] - }, - "stat_prefix": "envoy_mtls_bypass", - "tracing": { - "random_sampling": {} - } - } - } - ] - } - ] - } - ``` - - -- `envoy_extra_stats_sinks_json` - Similar to `envoy_extra_static_clusters_json` - but for [stats sinks](https://www.envoyproxy.io/docs/envoy/v1.17.2/api-v3/config/bootstrap/v3/bootstrap.proto#envoy-v3-api-field-config-bootstrap-v3-bootstrap-stats-sinks). 
- These are appended to any sinks defined by use of the - higher-level [`envoy_statsd_url`](#envoy_statsd_url) or - [`envoy_dogstatsd_url`](#envoy_dogstatsd_url) config options. - - - - ```json - { - "name": "envoy.stat_sinks.dog_statsd", - "typed_config": { - "@type": "type.googleapis.com/envoy.config.metrics.v3.DogStatsdSink", - "address": { - "socket_address": { - "protocol": "UDP", - "port_value": 8125, - "address": "172.31.20.6" - } - } - } - } - ``` - - -- `envoy_stats_config_json` - The entire [stats - config](https://www.envoyproxy.io/docs/envoy/v1.17.2/api-v3/config/bootstrap/v3/bootstrap.proto#envoy-v3-api-field-config-bootstrap-v3-bootstrap-stats-config). - If provided this will override the higher-level - [`envoy_stats_tags`](#envoy_stats_tags). It allows full control over dynamic - tag replacements etc. - - - - ```json - { - "stats_matcher": { - "reject_all": true - }, - "stats_tags": [ - { - "tag_name": "envoy.http_user_agent", - "regex": "^http(?=\\.).*?\\.user_agent\\.((.+?)\\.)\\w+?$" - } - ], - "use_all_default_tags": false - } - ``` - - -- `envoy_tracing_json` - The entire [tracing - config](https://www.envoyproxy.io/docs/envoy/v1.17.2/api-v3/config/bootstrap/v3/bootstrap.proto#envoy-v3-api-field-config-bootstrap-v3-bootstrap-tracing). - Most tracing providers will also require adding static clusters to define the - endpoints to send tracing data to. - - - - ```json - { - "http": { - "name": "envoy.tracers.zipkin", - "typedConfig": { - "@type": "type.googleapis.com/envoy.config.trace.v3.ZipkinConfig", - "collector_cluster": "zipkin", - "collector_endpoint_version": "HTTP_JSON", - "collector_endpoint": "/api/v1/spans", - "shared_span_context": false - } - } - } - ``` - - -### Escape-Hatch Overrides - -Users may add the following configuration items to the [global `proxy-defaults` -configuration -entry](/consul/docs/connect/config-entries/proxy-defaults) or -override them directly in the `proxy.config` field of a [proxy service -definition](/consul/docs/connect/proxies/proxy-config-reference) or -[`sidecar_service`](/consul/docs/connect/proxies/deploy-sidecar-services) block. - -- `envoy_bootstrap_json_tpl` - Specifies a template in Go template syntax that - is used in place of [the default - template](https://github.com/hashicorp/consul/blob/71d45a34601423abdfc0a64d44c6a55cf88fa2fc/command/connect/envoy/bootstrap_tpl.go#L129) - when generating bootstrap via [`consul connect envoy` - command](/consul/commands/connect/envoy). The variables that are available - to be interpolated are [documented - here](https://github.com/hashicorp/consul/blob/71d45a34601423abdfc0a64d44c6a55cf88fa2fc/command/connect/envoy/bootstrap_tpl.go#L5). - This offers complete control of the proxy's bootstrap although major - deviations from the default template may break Consul's ability to correctly - manage the proxy or enforce its security model. - -- `envoy_public_listener_json` - Specifies a complete [Envoy listener](https://www.envoyproxy.io/docs/envoy/v1.17.2/api-v3/config/listener/v3/listener.proto) - to be delivered in place of the main public listener that the proxy used to - accept inbound connections. This will be used verbatim with the following - exceptions: - - - Every `FilterChain` added to the listener will have its `TlsContext` - overridden by the Connect TLS certificates and validation context. This - means there is no way to override the service mesh's mutual TLS for the public - listener. 
- - Every `FilterChain` will have the `envoy.filters.{network|http}.rbac` filter - prepended to the filters array to ensure that all inbound connections are - authorized by the service mesh. Before Consul 1.9.0 `envoy.ext_authz` was inserted instead. - - - - ```json - { - "@type": "type.googleapis.com/envoy.config.listener.v3.Listener", - "name": "public_listener", - "address": { - "socket_address": { - "address": "127.0.0.1", - "port_value": 21002 - } - }, - "filter_chains": [ - { - "filters": [ - { - "name": "envoy.filters.network.http_connection_manager", - "typed_config": { - "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager", - "stat_prefix": "ingress_http", - "http_filters": [ - { - "name": "envoy.filters.http.router" - } - ], - "route_config": { - "name": "local_route", - "virtual_hosts": [ - { - "name": "local_service", - "domains": ["*"], - "routes": [ - { - "match": { - "prefix": "/" - }, - "route": { - "cluster": "local-service-cluster", - } - } - ] - } - ] - } - } - } - ] - } - ], - "traffic_direction": "INBOUND" - } - ``` - - ```json - { - "@type": "type.googleapis.com/envoy.config.listener.v3.Listener", - "name": "public_listener", - "address": { - "socket_address": { - "address": "127.0.0.1", - "port_value": 21002 - } - }, - "filter_chains": [ - { - "filters": [ - { - "name": "envoy.filters.network.tcp_proxy", - "typed_config": { - "@type": "type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy", - "stat_prefix": "ingress_tcp", - "cluster": "local-service-cluster" - } - } - ] - } - ], - "traffic_direction": "INBOUND" - } - ``` - - - - -- `envoy_listener_tracing_json` - Specifies a [tracing - configuration](https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/filters/network/http_connection_manager/v3/http_connection_manager.proto#envoy-v3-api-msg-extensions-filters-network-http-connection-manager-v3-httpconnectionmanager-tracing) - to be inserted in the proxy's public and upstreams listeners. - - - - ```json - { - "@type" : "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager.Tracing", - "provider" : { - "name" : "envoy.tracers.zipkin", - "typed_config" : { - "@type" : "type.googleapis.com/envoy.config.trace.v3.ZipkinConfig", - "collector_cluster" : "otelcolector", - "collector_endpoint" : "/api/v2/spans", - "collector_endpoint_version" : "HTTP_JSON", - "shared_span_context" : false - } - }, - "custom_tags" : [ - { - "tag" : "custom_header", - "request_header" : { - "name" : "x-custom-traceid", - "default_value" : "" - } - }, - { - "tag" : "alloc_id", - "environment" : { - "name" : "NOMAD_ALLOC_ID" - } - } - ] - } - ``` - - - -- `envoy_local_cluster_json` - Specifies a complete [Envoy cluster](https://www.envoyproxy.io/docs/envoy/v1.17.2/api-v3/config/cluster/v3/cluster.proto) - to be delivered in place of the local application cluster. This allows - customization of timeouts, rate limits, load balancing strategy etc. 
- - - - ```json - { - "@type": "type.googleapis.com/envoy.config.cluster.v3.Cluster", - "name": "local_app", - "type": "STATIC", - "connect_timeout": "5s", - "circuit_breakers": { - "thresholds": [ - { - "priority": "DEFAULT", - "max_connections": 2048 - } - ] - }, - "load_assignment": { - "cluster_name": "local_app", - "endpoints": [ - { - "lb_endpoints": [ - { - "endpoint": { - "address": { - "socket_address": { - "address": "127.0.0.1", - "port_value": 8080 - } - } - } - } - ] - } - ] - } - } - ``` - - - - -The following configuration items may be overridden directly in the -`proxy.upstreams[].config` field of a [proxy service -definition](/consul/docs/connect/proxies/proxy-config-reference) or -[`sidecar_service`](/consul/docs/connect/deploy/deploy-sidecar-services) block. - -~> **Note:** - When a -[`service-router`](/consul/docs/connect/config-entries/service-router), -[`service-splitter`](/consul/docs/connect/config-entries/service-splitter), or -[`service-resolver`](/consul/docs/connect/config-entries/service-resolver) config -entry exists for a service the below escape hatches are ignored and will log a -warning. - -- `envoy_listener_json` - Specifies a complete [Listener](https://www.envoyproxy.io/docs/envoy/v1.17.2/api-v3/config/listener/v3/listener.proto) - to be delivered in place of the upstream listener that the proxy exposes to - the application for outbound connections. This will be used verbatim with the - following exceptions: - - - Every `FilterChain` added to the listener will have its `TlsContext` - overridden by the service mesh TLS certificates and validation context. This - means there is no way to override the service mesh's mutual TLS for the public - listener. - - - - ```json - { - "@type": "type.googleapis.com/envoy.config.listener.v3.Listener", - "name": "example-service", - "address": { - "socket_address": { - "address": "0.0.0.0", - "port_value": 14000 - } - }, - "filter_chains": [ - { - "filters": [ - { - "name": "envoy.filters.network.http_connection_manager", - "typedConfig": { - "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager", - "access_log": [ - { - "name": "envoy.access_loggers.file", - "typedConfig": { - "@type": "type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog", - "path": "/var/log/envoy-access/example-service.log" - } - } - ], - "http_filters": [ - { - "name": "envoy.filters.http.router" - } - ], - "route_config": { - "name": "example-service", - "virtual_hosts": [ - { - "name": "example-service", - "domains": [ - "*" - ], - "routes": [ - { - "match": { - "prefix": "/" - }, - "route": { - "cluster": "example-service", - "timeout": "90s", - "retry_policy": { - "retry_on": "5xx,connect-failure", - "num_retries": 2, - "per_try_timeout": "60s" - } - } - } - ] - } - ] - }, - "stat_prefix": "example-service", - "tracing": { - "random_sampling": {} - } - } - } - ] - } - ], - "traffic_direction": "OUTBOUND" - } - ``` - - -- `envoy_cluster_json` - Specifies a complete [Envoy cluster](https://www.envoyproxy.io/docs/envoy/v1.17.2/api-v3/config/cluster/v3/cluster.proto) - to be delivered in place of the discovered upstream cluster. This allows - customization of timeouts, circuit breaking, rate limits, load balancing - strategy etc. 
- - - - ```json - { - "@type": "type.googleapis.com/envoy.config.cluster.v3.Cluster", - "name": "example-service", - "type": "EDS", - "eds_cluster_config": { - "eds_config": { - "ads": {} - } - }, - "connect_timeout": "90s", - "lb_policy": "ROUND_ROBIN", - "circuit_breakers": { - "thresholds": [ - { - "priority": "DEFAULT", - "max_connections": 1024, - "max_pending_requests": 1024, - "max_requests": 1024, - "max_retries": 3 - } - ] - } - } - ``` - diff --git a/website/content/docs/connect/proxies/index.mdx b/website/content/docs/connect/proxies/index.mdx deleted file mode 100644 index 839a24a49f4f..000000000000 --- a/website/content/docs/connect/proxies/index.mdx +++ /dev/null @@ -1,87 +0,0 @@ ---- -layout: docs -page_title: Service mesh proxy overview -description: >- - In Consul service mesh, each service has a sidecar proxy that secures connections with other services in the mesh without modifying the underlying application code. You can use the built-in proxy, Envoy, or a custom proxy to handle communication and verify TLS connections. ---- - -# Service mesh proxy overview - -This topic provides an overview of how Consul uses proxies in your service mesh. A proxy is a type of service that enables unmodified applications to connect to other services in the service mesh. Consul ships with a built-in L4 proxy and has first class support for Envoy. You can plug other proxies into your environment as well, and apply configurations in Consul to define proxy behavior. - -## Proxy use cases - -You can configure proxies to perform several different types of functions in Consul. - -### Service mesh proxies - -A _service mesh proxy_ is any type of proxy service that you use for communication in a service mesh. Specialized proxy implementations, such as sidecar proxies and gateways, are subsets of service mesh proxies. Refer to [Deploy service mesh proxy services](/consul/docs/connect/proxies/deploy-service-mesh-proxies) for instructions on how to deploy a service mesh proxy. - -### Sidecars - -Sidecar proxies are service mesh proxy implementations that transparently handles inbound and outbound service connections. Sidecars automatically wrap and verify TLS connections. In a non-containerized network, each service in your mesh should have its own sidecar proxy. - -Refer to [Deploy sidecar services](/consul/docs/connect/proxies/deploy-sidecar-services) for additional information. - -### Gateways - -You can configure service mesh proxies to operate as gateway services, which allow service-to-service traffic across different network areas, including peered clusters, WAN-federated datacenters, and nodes outside the mesh. Consul ships with several types of gateway capabilities, but gateways deliver the underlying functionality. - -Refer to [Gateways overview](/consul/docs/connect/gateways) for additional information. - -### Dynamic traffic control - -You can configure proxies to dynamically redirect or split traffic to implement a failover strategy, test new features, and roll out new versions of your applications. You can apply a combination of configurations to enable complex scenarios, such as failing over to the geographically nearest service or to services in different network areas that you have designated as being functionally identical. 
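As an illustration of splitting traffic to roll out a new version, the following sketch shows a `service-splitter` configuration entry for a hypothetical `web` service. The service name and the `v1`/`v2` subsets are placeholders and assume a matching `service-resolver` entry defines those subsets.

```hcl
Kind = "service-splitter"
Name = "web"

Splits = [
  {
    # Keep most traffic on the current version.
    Weight        = 90
    ServiceSubset = "v1"
  },
  {
    # Canary a small share of requests to the new version.
    Weight        = 10
    ServiceSubset = "v2"
  },
]
```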
- -Refer to the following topics for additional information: - -- [Service mesh traffic management overview](/consul/docs/connect/manage-traffic) -- [Failover overview](/consul/docs/connect/manage-traffic/failover) - - -## Supported proxies - -Consul has first-class support for Envoy proxies, which is a highly configurable open source edge service proxy. Consul configures Envoy by optionally exposing a gRPC service on the local agent that serves Envoy's xDS configuration API. Refer to the following documentation for additional information: - -- [Envoy proxy reference](/consul/docs/connect/proxies/envoy) -- [Envoy API documentation](https://www.envoyproxy.io/docs/envoy/v1.17.2/api-docs/xds_protocol) - -You can use Consul's built-in proxy service that supports L4 network traffic, which is suitable for testing and development but not recommended for production environments. Refer to the [built-in proxy reference](/consul/docs/connect/proxies/built-in) for additional information. - -## Workflow - -The following procedure describes how to implement proxies: - -1. **Configure global proxy settings**. You can configure global passthrough settings for all proxies deployed to your service mesh in the proxy defaults configuration entry. This step is not required, but it enables you to define common behaviors in a central configuration. -1. **Deploy your service mesh proxy**. Configure proxy behavior in a service definition and register the proxy with Consul. -1. **Start the proxy service**. Proxies appear in the list of services registered to Consul and must be started before they begin to route traffic in your service mesh. - -### Dynamic upstreams require native integration - -Service mesh proxies do not support dynamic upstreams. If an application requires dynamic dependencies that are only available at runtime, you must [natively integrate](/consul/docs/connect/native) the application with Consul service mesh. After integration, the application can use the HTTP API or [DNS interface](/consul/docs/services/discovery/dns-static-lookups#service-mesh-enabled-service-lookups) to connect to other services in the mesh. - -## Proxies in Kubernetes-orchestrated networks - -For Kubernetes-orchestrated environments, Consul deploys _dataplanes_ by default to manage sidecar proxies. Consul dataplanes are light-weight processes that leverage existing Kubernetes sidecar orchestration capabilities. Refer to the [dataplanes documentation](/consul/docs/connect/dataplane) for additional information. - -## Guidance - -Refer to the following resources for help using service mesh proxies: - -### Tutorial - -- [Using Envoy with Consul service mesh](/consul/tutorials/developer-mesh/service-mesh-with-envoy-proxy) - -### Usage documentation - -- [Deploy service mesh proxies](/consul/docs/connect/proxies/deploy-service-mesh-proxies) -- [Deploy sidecar proxies](/consul/docs/connect/proxies/deploy-sidecar-services) -- [Extend Envoy proxies](/consul/docs/connect/proxies/envoy-extensions) -- [Integrate custom proxies](/consul/docs/connect/proxies/integrate) - -### Reference documentation - -- [Proxy defaults configuration entry reference](/consul/docs/connect/config-entries/proxy-defaults) for additional information. 
-- [Envoy proxies reference](/consul/docs/connect/proxies/envoy) -- [Service mesh proxy configuration reference](/consul/docs/connect/proxies/proxy-config-reference) -- [`consul connect envoy` command](/consul/commands/connect/envoy) diff --git a/website/content/docs/connect/proxies/integrate.mdx b/website/content/docs/connect/proxies/integrate.mdx deleted file mode 100644 index 80708badff0e..000000000000 --- a/website/content/docs/connect/proxies/integrate.mdx +++ /dev/null @@ -1,232 +0,0 @@ ---- -layout: docs -page_title: Custom Proxy Configuration | Service Mesh -description: >- - Consul supports custom proxy integrations for service discovery and sidecar instantiation. Learn about proxy requirements for service mesh operations, as well as how to authorize inbound and outbound connections for your custom proxy. ---- - -# Custom Proxy Configuration for Service Mesh - - - - The Connect Native Golang SDK and `v1/agent/connect/authorize`, `v1/agent/connect/ca/leaf`, - and `v1/agent/connect/ca/roots` APIs are deprecated and will be removed in a future release. Although Connect Native - will still operate as designed, we do not recommend leveraging this feature because it is deprecated and will be removed when the long term replacement to native application integration (such as a proxyless gRPC service mesh integration) is delivered. Refer to [GH-10339](https://github.com/hashicorp/consul/issues/10339) for additional information and to track progress toward one potential solution that is tracked as replacement functionality. - - The Native App Integration does not support many of the Consul's service mesh features, and is not under active development. - The [Envoy proxy](/consul/docs/connect/proxies/envoy) should be used for most production environments. - - - -This topic describes the process and API endpoints you can use to extend proxies for integration with Consul. - -## Overview - -You can extend any proxy to support Consul service mesh. Consul ships with a built-in -proxy suitable for an out-of-the-box development experience, but you may require a more robust proxy solution for production environments. - -The proxy you integrate must be able to accept inbound connections and/or establish outbound connections identified as a particular service. In some cases, either ability may be acceptable, but both are generally required to support for full sidecar functionality. - -Sidecar proxies may support L4 or L7 network functionality. L4 integration is simpler and adequate for securing all traffic. L4 treats all traffic as TCP, however, so advanced routing or metrics features are not supported. - -Full L7 support is built on top of L4 support. An L7 proxy integration supports most or all of the L7 traffic routing features in Consul service mesh by dynamically configuring routing, retries, and other L7 features. The built-in proxy only supports L4, while [Envoy](/consul/docs/connect/proxies/envoy) supports the full L7 feature set. - -Areas where the integration approach differs between L4 and L7 are identified in this topic. - -The noun _connect_ is used throughout this documentation to refer to the connect -subsystem that provides Consul's service mesh capabilities. - -## Accepting Inbound Connections - -The proxy must accept TLS connections on some port to accept inbound connections. 
- -### Obtaining and validating client certificates - -Call the [`/v1/agent/connect/ca/leaf/`] API endpoint to obtain the client certificate, e.g.: - - - -```shell -$ curl http://:8500/v1/agent/connect/ca/leaf/ -``` - - - -The client certificate from the inbound connection must be validated against the service mesh CA root certificates. Call the [`/v1/agent/connect/ca/roots`] endpoint to obtain the root certificates from the service mesh CA, e.g.: - - - -```shell -$ curl http://:8500/v1/agent/connect/ca/roots -``` - - - -### Authorizing the connection - -After validating the client certificate from the caller, the proxy can authorize the entire connection (L4) or each request (L7). Depending upon the [protocol] of the proxied service, authorization is performed either on a per-connection (L4) or per-request (L7) basis. Authentication is based on "service identity" (TLS), and is implemented at the -transport layer. - --> **Note:** Some features, such as (local) rate limiting or max connections, are expected to be proxy-level configurations enforced separately when authorization calls are made. Proxies can enforce the configurations based on information about request rates and other states that should already be available. - -The proxy can authorize the connection by either calling the [`/v1/agent/connect/authorize`](/consul/api-docs/agent/connect) API endpoint or by querying the [intention match API](/consul/api-docs/connect/intentions#list-matching-intentions) endpoint. - -The [`/v1/agent/connect/authorize`](/consul/api-docs/agent/connect) endpoint should be called in the connection path for each received connection. -If the local Consul agent is down or unresponsive, the success rate of new connections will be compromised. -The agent uses locally-cached data to authorize the connection and typically responds in microseconds. As a result, the TLS handshake typically spans microseconds. - -~> **Note:** This endpoint is only suitable for L4 (e.g., TCP) integration. The endpoint always treats intentions with `Permissions` defined (i.e., L7 criteria) as `deny` intentions during evaluation. - -The proxy can query the [intention match API](/consul/api-docs/connect/intentions#list-matching-intentions) endpoint on startup to retrieve a list of intentions that match the proxy destination. The matches should be stored in the native filter configuration of the proxy, such as RBAC for Envoy. - -For performance and reliability reasons, querying the intention match API endpoint is the recommended method for implementing intention enforcement. The cached intentions should be consulted for each incoming connection (L4) or request (L7) to determine if the connection or request should be accepted or rejected. - -#### Persistent TCP connections and intentions - -For a proxied service configured with the TCP [protocol], potentially long-lived TCP connections will only be authorized when the connections are initially established. But because many services, such as databases, typically use persistent connection pools, changing intentions to deny access does not terminate existing connections. This behavior violates the updated intention. In these cases, it may appear as if the intention is not being enforced. - -Implement one of the following strategies to close connections: - -1. **Configure connections to terminate after a maximum lifetime**, e.g., several hours. This balances the overhead of establishing new connections with determining how long existing connections remain open after an intention changes. 
- -1. **Periodically re-authorize every open connection**. The authorization call is inexpensive and should be a local, in-memory operation on the Consul agent. Periodically authorizing thousands of open connections (e.g., once every minute) is likely to be negligible overhead, but doing so enforces a tighter upper boundary on how long it takes to enforce intention changes without affecting the protocol efficiency of persistent connections. - -#### Certificate serial in authorization - -Intentions currently use TLS URI Subject Alternative Name (SAN) for enforcement. The `AuthZ` API in the Go SDK contains a field for passing the serial number ([`consul/connect/tls.go`]). Proxies may provide this value during authorization. - -### Updating data - -The API endpoints described in this section operate on agent-local data that is updated in the -background. The leaf, roots, and intentions should be updated in the background -by the proxy. - -The leaf cert, root cert, and intentions endpoints support [blocking -queries](/consul/api-docs/features/blocking), which should be used to get near-immediate -updates for root key rotations, new leaf certs before expiry, and intention -changes. - -### SPIFFE certificates - -Although Consul follows the SPIFFE spec for certificates, some CA providers do not allow strict adherence. For example, CA certificates may not have the correct trust-domain SPIFFE URI SAN for the -cluster. If SPIFFE validation is performed in the proxy, be aware that it -should be possible to opt out, otherwise certain CA providers supported by -Consul will not be compatible with the use of that proxy. Neither -Envoy nor the built-in proxy currently validate the SPIFFE URI of the chain beyond the -leaf certificate. - -## Establishing Outbound Connections - -For outbound connections, the proxy should communicate with a mesh-capable -endpoint for a service and provide a client certificate from the -[`/v1/agent/connect/ca/leaf/`] API endpoint. The proxy must use the root certificate obtained from the [`/v1/agent/connect/ca/roots`] endpoint to verify the certificate served from the destination endpoint. - -## Configuration Discovery - -The [`/v1/agent/service/:service_id`](/consul/api-docs/agent/service#get-service-configuration) -API endpoint enables any proxy to discover proxy configurations registered with a local service. This endpoint supports hash-based blocking, which enables long-polling for changes -to the registration/configuration. Any changes to the registration/config will -result in the new config being returned immediately. - -Refer to the [built-in proxy](/consul/docs/connect/proxies/built-in) for an example implementation. Using the Go SDK, the proxy calls the HTTP "pull" API via the `watch` package: [`consul/connect/proxy/config.go`]. - -The [discovery chain] for each upstream service should be fetched from the -[`/v1/discovery-chain/:service_id`](/consul/api-docs/discovery-chain#read-compiled-discovery-chain) -API endpoint. This will return a compiled graph of configurations a sidecar needs for a particular upstream service. - -If you are only implementing L4 support in your proxy, set the -[`OverrideProtocol`](/consul/api-docs/discovery-chain#overrideprotocol) value to `tcp` when -fetching the discovery chain so that L7 features, such as HTTP routing rules, are -not returned. 
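For example, a proxy that only implements L4 support might fetch the compiled chain for a hypothetical `web` upstream as follows; the agent address and service name are placeholders.

```shell
$ curl --request POST \
    --data '{"OverrideProtocol": "tcp"}' \
    http://localhost:8500/v1/discovery-chain/web
```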
- -For each [target](/consul/docs/connect/manage-traffic/discovery-chain#targets) in the resulting -discovery chain, a list of healthy, mesh-capable endpoints may be fetched -from the [`/v1/health/connect/:service_id`] API endpoint as described in the [Service -Discovery](#service-discovery) section. - -The remaining nodes in the chain include configurations that should be -translated into the nearest equivalent for features, such as HTTP routing, connection -timeouts, connection pool settings, rate limits, etc. See the full [discovery -chain] documentation and relevant [config entry](/consul/docs/agent/config-entries) -documentation for details about supported configuration parameters. - -### Service Discovery - -Proxies can use Consul's [service discovery API](/consul/api-docs/health#list-service-instances-for-mesh-enabled-service) to return all available, mesh-capable endpoints for a given service. This endpoint supports a `cached` query parameter, which uses [agent caching](/consul/api-docs/features/caching) to improve -performance. The API package provides a [`UseCache`] query option to leverage caching. -In addition to performance improvements, using the cache makes the mesh more resilient to Consul server outages. This is because the mesh "fails static" with the last known set of service instances still used, rather than errors on new connections. - -Proxies can decide whether to perform just-in-time queries to the API when a -new connection needs to be routed, or to use blocking queries to load the -current set of endpoints for a service and keep that list updated. The SDK and -built-in proxy currently use just-in-time resolution however many existing -proxies are likely to find it easier to integrate by pulling the set of -endpoints and maintaining it in local memory using blocking queries. - -Upstreams may be defined with the Prepared Query target type. These upstreams -should use Consul's [prepared query](/consul/api-docs/query) API to determine a list of upstream endpoints for the service. Note that the `PreparedQuery` API does not support blocking, so proxies choosing to populate endpoints in memory will need to poll the endpoint at a suitable and, ideally, configurable frequency. - --> **Long-term support for [`service-resolver`](/consul/docs/connect/config-entries/service-resolver) configuration -entries**. The `service-resolver` configuration will completely replace prepared queries in future versions of Consul. In some instances, however, prepared queries are still used. - -## Sidecar Instantiation - -Consul does not start or manage sidecar proxy processes. Proxies running on a -physical host or VM are designed to be started and run by process supervisor -systems, such as init, systemd, supervisord, etc. If deployed within a -cluster scheduler (Kubernetes, Nomad), proxies should run as a sidecar container in the -same namespace. - -Proxies will use the [`CONSUL_HTTP_TOKEN`](/consul/commands#consul_http_token) and -[`CONSUL_HTTP_ADDR`](/consul/commands#consul_http_addr) environment variables to -contact Consul and fetch certificates. This occurs if the `CONSUL_HTTP_TOKEN` -environment variable contains a Consul ACL token that has the necessary permissions -to read configurations for that service. If you use the Go [`api` package], then -the environment variables will be read and the client configured for you -automatically. - -Alternatively, you may also use the flags `-token` or `-token-file` to provide the Consul ACL token. 
- - - - - -```shell -$ consul connect envoy -sidecar-for "web" -token-file=/etc/consul.d/consul.token -``` - - - - - -```shell -$ consul connect proxy -sidecar-for "web" -token-file=/etc/consul.d/consul.token -``` - - - - - -If TLS is enabled on Consul, you will also need to add the following environment variables _prior_ to starting the proxy: - -- [`CONSUL_CACERT`](/consul/commands#consul_cacert) -- [`CONSUL_CLIENT_CERT`](/consul/commands#consul_client_cert) -- [`CONSUL_CLIENT_KEY`](/consul/commands#consul_client_key) - -The `CONSUL_CACERT`, `CONSUL_CLIENT_CERT` and `CONSUL_CLIENT_KEY` can also be provided as CLI flags. Refer to the [`consul connect proxy` documentation](/consul/commands/connect/proxy) for details. - -The proxy service ID comes from the user. See [`consul connect envoy`](/consul/commands/connect/envoy#examples) for an example. You can use the `-proxy-id` flag to specify the ID of the proxy service you have already registered with the local agent. - -Alternatively, you can start the service using the `-sidecar-for=` option. This option queries Consul for a proxy that is registered as a sidecar for the specified ``. If exactly one service associated with the proxy is returned, the ID will be used to start the proxy. Your controller only needs to accept `-proxy-id` as an argument; the Consul CLI will resolve the -ID for the name specified in `-sidecar-for` flag. - - - -[`/v1/agent/connect/ca/leaf/`]: /consul/api-docs/agent/connect#service-leaf-certificate -[`/v1/agent/connect/ca/roots`]: /consul/api-docs/agent/connect#certificate-authority-ca-roots -[`api` package]: https://github.com/hashicorp/consul/tree/main/api -[`consul/connect/proxy/config.go`]: https://github.com/hashicorp/consul/blob/v1.8.3/connect/proxy/config.go#L187 -[`consul/connect/tls.go`]: https://github.com/hashicorp/consul/blob/v1.8.3/connect/tls.go#L232-L237 -[discovery chain]: /consul/docs/connect/manage-traffic/discovery-chain -[`usecache`]: https://github.com/hashicorp/consul/blob/v1.8.3/api/api.go#L99-L102 -[protocol]: /consul/docs/connect/config-entries/service-defaults#protocol diff --git a/website/content/docs/connect/proxies/proxy-config-reference.mdx b/website/content/docs/connect/proxies/proxy-config-reference.mdx deleted file mode 100644 index 2d64f0e225b8..000000000000 --- a/website/content/docs/connect/proxies/proxy-config-reference.mdx +++ /dev/null @@ -1,556 +0,0 @@ ---- -layout: docs -page_title: Service mesh proxy configuration reference -description: >- - You can register a service mesh sidecar proxy separately from the registration of the service instance it fronts. Learn about proxy configuration options and how to format them with examples. ---- - -# Service mesh proxy configuration - -This topic describes how to declare a service mesh proxy in a service definition. The `kind` must be declared and information about the service they represent must be provided to function as a Consul service mesh proxy. - -## Configuration - -Configure a service mesh proxy using the following syntax: - - - -```hcl -name = -kind = "connect-proxy" -proxy = { - destination_service_name = "" - = "" -} -port = -``` - -```json -{ - "name": "", - "kind": "connect-proxy", - "proxy": { - "destination_service_name": "", - "" : "" - }, - "port": -} -``` - - - -The following table describes the parameters that must be added to the service definition to declare the service as a proxy. 
- -| Parameter | Description | Required | Default | -| --- | --- | --- | --- | -| `kind` | String value that declares the type for the service. This should always be set to `connect-proxy` to declare the services as a service mesh proxy. | Required | None | -| `proxy` | Object that contains the [proxies parameters](#proxy-parameters).
The `destination_service_name` parameter must be included in the `proxy` configuration; it specifies the name of the service that the proxy represents.
This parameter replaces `proxy_destination` used in Consul 1.2.0 to 1.3.0. The `proxy_destination` parameter was deprecated in 1.5.0. | Required | None | -| `port` | Integer value that specifies the port where other services in the mesh can discover and connect to proxied services. | Required | None | -| `address` | Specifies the IP address of the proxy. | Optional
The address will be inherited from the node configuration. | `address` specified in the node configuration. | - -You can specify several additional parameters to configure the proxy to meet your requirements. See [Proxy Parameters](#proxy-parameters) for additional information. - -### Example - -In the following example, a proxy named `redis-proxy` is registered as a service mesh proxy. It proxies to the `redis` service and is available at port `8181`. As a result, any service mesh clients searching for a mesh-capable endpoint for `redis` will find this proxy. - - - -```hcl -kind = "connect-proxy" -name = "redis-proxy" -port = 8181 -proxy = { - destination_service_name = "redis" -} -``` - -```json -{ - "name": "redis-proxy", - "kind": "connect-proxy", - "proxy": { - "destination_service_name": "redis" - }, - "port": 8181 -} -``` - - - -### Sidecar proxy configuration - -Many service mesh proxies are deployed as sidecars. -Sidecar proxies are co-located with the single service instance they represent and proxy all inbound traffic to. - -Specify the following parameters in the `proxy` code block to configure a sidecar proxy in its own service registration: - -* `destination_service_id`: String value that specifies the ID of the service being proxied. Refer to the [proxy parameters reference](#destination-service-id) for details. -* `local_service_port`: Integer value that specifies the port that the proxy should use to connect to the _local_ service instance. Refer to the [proxy parameters reference](#local-service-port) for details. -* `local_service_address`: String value that specifies the IP address or hostname that the proxy should use to connect to the _local_ service. Refer to the [proxy parameters reference](#local-service-address) for details. - -Refer to [Deploy sidecar services](/consul/docs/connect/proxies/deploy-sidecar-services) for additional information about configuring service mesh proxies as sidecars. - -### Complete configuration example - -The following example includes values for all available options when registering a proxy instance. - - - -```hcl -kind = "connect-proxy" -name = "redis-proxy" -port = 8181 -proxy = { - config = {} - destination_service_id = "redis1" - destination_service_name = "redis" - expose = {} - local_service_address = "127.0.0.1" - local_service_port = 9090 - local_service_socket_path = "/tmp/redis.sock" - mesh_gateway = {} - mode = "transparent" - transparent_proxy = {} - upstreams = [] -} -``` - -```json -{ - "name": "redis-proxy", - "kind": "connect-proxy", - "proxy": { - "destination_service_name": "redis", - "destination_service_id": "redis1", - "local_service_address": "127.0.0.1", - "local_service_port": 9090, - "local_service_socket_path": "/tmp/redis.sock", - "mode": "transparent", - "transparent_proxy": {}, - "config": {}, - "upstreams": [], - "mesh_gateway": {}, - "expose": {} - }, - "port": 8181 -} -``` - - - -### Proxy parameters - -The following table describes all parameters that can be defined in the `proxy` block. - -| Parameter | Description | Required | Default | -| --- | --- | --- | --- | -| `destination_service_id`
| String value that specifies the ID of a single service instance represented by the proxy.
This parameter is only applicable for sidecar proxies that run on the same node as the service.
Consul checks for the proxied service on the same agent.
The ID is unique and may differ from its `name` value.
Specifying this parameter helps tools identify which sidecar proxy instances are associated with which application instance, and enables fine-grained analysis of the metrics coming from the proxy. | Required when registering proxy as a sidecar | None | -| `local_service_port`
| Integer value that specifies the port that a sidecar proxy should use to connect to the _local_ service instance. | Required when registering proxy as a sidecar | Port advertised by the service instance configured in `destination_service_id` | -| `local_service_address` | String value that specifies the IP address or hostname that a sidecar proxy should use to connect to the _local_ service. | Optional | `127.0.0.1` | -| `destination_service_name` | String value that specifies the _name_ of the service the instance is proxying. The name is used during service discovery to route to the correct proxy instances for a given service name. | Required | None | -| `local_service_socket_path` | String value that specifies the path of a Unix domain socket for connecting to the local application instance.
The application creates the socket at this path. This parameter conflicts with `local_service_address` and `local_service_port`.
Supported when using Envoy for the proxy. | Optional | None | -| `mode` | String value that specifies the proxy mode. See [Proxy Modes](#proxy-modes) for additional information. | Optional | `direct` | -| `transparent_proxy` | Object value that specifies the configuration specific to proxies in `transparent` mode.
See [Proxy Modes](#proxy-modes) and [Transparent Proxy Configuration Reference](#transparent-proxy-configuration-reference) for additional information.
This parameter was added in Consul 1.10.0. | Optional | None | -| `config` | Object value that specifies an opaque JSON configuration. The JSON is stored and returned along with the service instance when called from the API. | Optional | None | -| `upstreams` | An array of objects that specify the upstream services that the proxy should create listeners for. Refer to [Upstream Configuration Reference](#upstream-configuration-reference) for details. | Optional | None | -| `mesh_gateway` | Object value that specifies the mesh gateway configuration for the proxy. Refer to [Mesh Gateway Configuration Reference](#mesh-gateway-configuration-reference) for details. | Optional | None | -| `expose` | Object value that specifies a configuration for exposing HTTP paths through the proxy.
This parameter is only compatible with Envoy proxies.
Refer to [Expose Paths Configuration Reference](#expose-paths-configuration-reference) for details. | Optional | None | - -### Upstream configuration reference - -You can configure the service mesh proxy to create listeners for upstream services. The listeners enable the upstream service to accept requests. You can specify the following parameters to configure upstream service listeners. - -| Parameter | Description | Required | Default | -| --- | --- | --- | --- | -|`destination_name` | String value that specifies the name of the service or prepared query to route the service mesh to. The prepared query should be the name or the ID of the prepared query. | Required | None | -| `destination_namespace` | String value that specifies the namespace containing the upstream service. | Optional | Defaults to the local namespace | -| `destination_peer` | String value that specifies the name of the peer cluster containing the upstream service. | Optional | None | -| `destination_partition` | String value that specifies the name of the admin partition containing the upstream service. If `destination_peer` is set, `destination_partition` refers to the local admin partition in which the peering was established. | Optional | Defaults to the local partition | -| `local_bind_port` | Integer value that specifies the port to bind a local listener to. The application will make outbound connections to the upstream from the local port. | Required | None | -| `local_bind_address` | String value that specifies the address to bind a local listener to. The application will make outbound connections to the upstream service from the local bind address. | Optional | `127.0.0.1` | -| `local_bind_socket_path` | String value that specifies the path at which to bind a Unix domain socket listener. The application will make outbound connections to the upstream from the local bind socket path.
This parameter conflicts with the `local_bind_port` or `local_bind_address` parameters.
Supported when using Envoy as a proxy. | Optional | None| -| `local_bind_socket_mode` | String value that specifies a Unix octal that configures file permissions for the socket. | Optional | None | -| `destination_type` | String value that specifies the type of discovery query the proxy should use for finding service mesh instances. The following values are supported:
  • `service`: Queries for upstream `service` types.
  • `prepared_query`: Queries for upstream prepared queries.
| Optional | `service` | -| `datacenter` | String value that specifies the datacenter to issue the discovery query to. | Optional | Defaults to the local datacenter. | -| `config` | Object value that specifies opaque configuration options that will be provided to the proxy instance for the upstream.
    Valid JSON objects are also supported.
    The `config` parameter can specify timeouts, retries, and other proxy-specific features for the given upstream.
    See the [built-in proxy configuration reference](/consul/docs/connect/proxies/built-in#proxy-upstream-config-key-reference) for configuration options when using the built-in proxy.
    If using Envoy as a proxy, see [Envoy configuration reference](/consul/docs/connect/proxies/envoy#proxy-upstream-config-options) | Optional | None | -| `mesh_gateway` | Object that defines the mesh gateway configuration for the proxy. Refer to the [Mesh Gateway Configuration Reference](#mesh-gateway-configuration-reference) for configuration details. | Optional | None | - -### Upstream configuration examples - -Upstreams support multiple destination types. The following examples include information about each implementation. Note that the examples in this topic use snake case, which is a convention that separates words with underscores, because the format is supported in configuration files and API registrations. - - - -```hcl -destination_type = "service" -destination_name = "redis" -datacenter = "dc1" -local_bind_address = "127.0.0.1" -local_bind_port = 1234 -local_bind_socket_path = "/tmp/redis_5678.sock" -local_bind_socket_mode = "0700" -mesh_gateway = { - mode = "local" -} -``` - -```json -{ - "destination_type": "service", - "destination_name": "redis", - "datacenter": "dc1", - "local_bind_address": "127.0.0.1", - "local_bind_port": 1234, - "local_bind_socket_path": "/tmp/redis_5678.sock", - "local_bind_socket_mode": "0700", - "mesh_gateway": { - "mode": "local" - } -} -``` - - - - - -```hcl -destination_type = "prepared_query" -destination_name = "database" -local_bind_address = "127.0.0.1" -local_bind_port = 1234 -config = {} -``` - -```json -{ - "destination_type": "prepared_query", - "destination_name": "database", - "local_bind_address": "127.0.0.1", - "local_bind_port": 1234, - "config": {} -} -``` - - - - - -```hcl -destination_partition = "finance" -destination_namespace = "default" -destination_type = "service" -destination_name = "billing" -local_bind_port = 9090 -``` - -```json -{ - "destination_partition": "finance", - "destination_namespace": "default", - "destination_type": "service", - "destination_name": "billing", - "local_bind_port": 9090 -} -``` - - - - -```hcl -destination_peer = "cloud-services" -destination_partition = "finance" -destination_namespace = "default" -destination_type = "service" -destination_name = "api" -local_bind_port = 9090 -``` - -```json -{ - "destination_peer": "cloud-services", - "destination_partition": "finance", - "destination_namespace": "default", - "destination_type": "service", - "destination_name": "api", - "local_bind_port": 9090 -} -``` - - - -## Proxy modes - -You can configure which mode a proxy operates in by specifying `"direct"` or `"transparent"` in the `mode` parameter. The proxy mode determines the how proxies direct traffic. This feature was added in Consul 1.10.0. - -* `transparent`: In this mode, inbound and outbound application traffic is captured and redirected through the proxy. This mode does not enable the traffic redirection. It directs Consul to configure Envoy as if traffic is already being redirected. -* `direct`: In this mode, the proxy's listeners must be dialed directly by the local application and other proxies. - -You can also specify an empty string (`""`), which configures the proxy to operate in the default mode. The default mode is inherited from parent parameters in the following order of precedence: - -1. Proxy service's `Proxy` configuration -1. The `service-defaults` configuration for the service. -1. The `global` `proxy-defaults`. - -The proxy will default to `direct` mode if a mode cannot be determined from the parent parameters. 
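For example, one way to opt every sidecar into transparent mode by default is through the global `proxy-defaults` configuration entry, which a more specific `service-defaults` entry or an individual proxy registration can still override. This is a minimal sketch:

```hcl
Kind = "proxy-defaults"
Name = "global"

# Proxies without a more specific mode setting inherit transparent mode.
Mode = "transparent"
```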
- -### Transparent proxy configuration reference - -The following examples show additional configuration for transparent proxies. - -Note that the examples in this topic use snake case, which is a convention that separates words with underscores, because the format is supported in configuration files and API registrations. - -#### Configure a proxy listener for outbound traffic on port 22500 - -```json -{ - "outbound_listener_port": 22500, - "dialed_directly": true -} -``` - -- `outbound_listener_port` `(int: 15001)` - The port the proxy should listen on for outbound traffic. - This must be the port where outbound application traffic is captured and redirected to. -- `dialed_directly` `(bool: false)` - Determines whether this proxy instance's IP address can be dialed - directly by transparent proxies. Transparent proxies typically dial upstreams using the "virtual" - tagged address, which load balances across instances. A database cluster with a leader is an example - where dialing individual instances can be helpful. Cannot be used with upstreams which define a `destination_peer`. - - ~> **Note:** Dynamic routing rules such as failovers and redirects do not apply to services dialed directly. - Additionally, the connection is proxied using a TCP proxy with a connection timeout of 5 seconds. - -### Mesh gateway configuration reference - -The following examples show all possible mesh gateway configurations. - - Note that the examples in this topic use snake case, which is a convention that separates words with underscores, because the format is supported in configuration files and API registrations. - -#### Using local and egress gateways in the local datacenter - -```json -{ - "mode": "local" -} -``` - -#### Direct to remote and ingress services in a remote datacenter - -```json -{ - "mode": "remote" -} -``` - -#### Disable a mesh gateway - -```json -{ - "mode": "none" -} -``` - -#### Specify the default mesh gateway mode - -```json -{ - "mode": "" -} -``` - -- `mode` `(string: "")` - This defines the mode of operation for how - upstreams with a remote destination datacenter get resolved. - - `"local"` - Mesh gateway services in the local datacenter will be used - as the next-hop destination for the upstream connection. - - `"remote"` - Mesh gateway services in the remote/target datacenter will - be used as the next-hop destination for the upstream connection. - - `"none"` - No mesh gateway services will be used and the next-hop destination - for the connection will be directly to the final service(s). - - `""` - Default mode. The default mode will be `"none"` if no other configuration - enables them. The order of precedence for setting the mode is - 1. Upstream - 2. Proxy Service's `Proxy` configuration - 3. The `service-defaults` configuration for the service. - 4. The `global` `proxy-defaults`. - -### Expose paths configuration reference - -The following examples show possible configurations to expose HTTP paths through Envoy. - -Exposing paths through Envoy enables a service to protect itself by only listening on localhost, while still allowing -non-mesh-enabled applications to contact an HTTP endpoint. -Some examples include: exposing a `/metrics` path for Prometheus or `/healthz` for kubelet liveness checks. - -Note that the examples in this topic use snake case, which is a convention that separates words with underscores, because the format is supported in configuration files and API registrations. 
- -#### Expose listeners in Envoy for health checks - -The following example exposes Envoy listeners to HTTP and GRPC checks registered with the local Consul agent: - -```json -{ - "expose": { - "checks": true - } -} -``` - -#### Expose an HTTP listener - -The following example exposes and HTTP listener in Envoy at port `21500` that routes to an HTTP server listening at port `8080`: - -```json -{ - "expose": { - "paths": [ - { - "path": "/healthz", - "local_path_port": 8080, - "listener_port": 21500 - } - ] - } -} -``` - -#### Expose an HTTP2 listener - -The following example expose an HTTP2 listener in Envoy at port `21501` that routes to a gRPC server listening at port `9090`: - -```json -{ - "expose": { - "paths": [ - { - "path": "/grpc.health.v1.Health/Check", - "protocol": "http2", - "local_path_port": 9090, - "listener_port": 21501 - } - ] - } -} -``` - -- `checks` `(bool: false)` - If enabled, all HTTP and gRPC checks registered with the agent are exposed through Envoy. - Envoy will expose listeners for these checks and will only accept connections originating from localhost or Consul's - [advertise address](/consul/docs/agent/config/config-files#advertise). The port for these listeners are dynamically allocated from - [expose_min_port](/consul/docs/agent/config/config-files#expose_min_port) to [expose_max_port](/consul/docs/agent/config/config-files#expose_max_port). - This flag is useful when a Consul client cannot reach registered services over localhost. One example is when running - Consul on Kubernetes, and Consul agents run in their own pods. -- `paths` `array: []` - A list of paths to expose through Envoy. - - `path` `(string: "")` - The HTTP path to expose. The path must be prefixed by a slash. ie: `/metrics`. - - `local_path_port` `(int: 0)` - The port where the local service is listening for connections to the path. - - `listener_port` `(int: 0)` - The port where the proxy will listen for connections. This port must be available for - the listener to be set up. If the port is not free then Envoy will not expose a listener for the path, - but the proxy registration will not fail. - - `protocol` `(string: "http")` - Sets the protocol of the listener. One of `http` or `http2`. For gRPC use `http2`. - -### Unix domain sockets - -To connect to a service using a local Unix domain socket instead of a port, add `local_bind_socket_path` and optionally `local_bind_socket_mode` to the upstream config for a service. The following examples show additional configurations for Unix domain sockets: - - - -```hcl -upstreams = [ - { - destination_name = "service-1" - local_bind_socket_path = "/tmp/socket_service_1" - local_bind_socket_mode = "0700" - } -] -``` - -```json -{ - "upstreams": [ - { - "destination_name": "service-1", - "local_bind_socket_path": "/tmp/socket_service_1", - "local_bind_socket_mode": "0700" - } - ] -} -``` - - - -Envoy creates a socket with the specified path and mode and connects to `service-1`. - -The `mode` field is optional. When omitted, Envoy uses the default mode. This is not applicable for abstract sockets. Refer to the -[Envoy documentation](https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/core/v3/address.proto#envoy-v3-api-msg-config-core-v3-pipe) -for details. - --> These options conflict with the `local_bind_socket_port` and -`local_bind_socket_address` options. For a given upstream the proxy can bind either to an IP port or -a Unix socket, but not both. 
- -Similarly to expose a service listening on a Unix Domain socket to the service -mesh, use either the `socket_path` field in the service definition or the -`local_service_socket_path` field in the proxy definition. These -fields are analogous to the `port` and `service_port` fields in their -respective locations. - - - -```hcl -services { - name = "service-2" - socket_path = "/tmp/socket_service_2" -} -``` - -```json -{ - "services": { - "name": "service-2", - "socket_path": "/tmp/socket_service_2" - } -} -``` - - - -Or in the proxy definition: - - - -```hcl -services { - name = "socket_service_2" - connect { - sidecar_service { - proxy { - name = "service-2" - local_service_socket_path = "/tmp/socket_service_2" - } - } - } -} -``` - -```json -{ - "services": [ - { - "name": "socket_service_2", - "connect": { - "sidecar_service": { - "proxy": { - "name": "service-2", - "local_service_socket_path": "/tmp/socket_service_2" - } - } - } - } - ] -} -``` - - - -There is no mode field since the service is expected to create the -socket it is listening on, not the Envoy proxy. -Again, the `socket_path` and `local_service_socket_path` fields conflict -with `address`/`port` and `local_service_address`/`local_service_port` -configuration options. diff --git a/website/content/docs/connect/proxy/custom.mdx b/website/content/docs/connect/proxy/custom.mdx new file mode 100644 index 000000000000..60b8ed692ca0 --- /dev/null +++ b/website/content/docs/connect/proxy/custom.mdx @@ -0,0 +1,179 @@ +--- +layout: docs +page_title: Use a custom proxy integration for service mesh +description: >- + Consul supports custom proxy integrations for service discovery and sidecar instantiation. Learn about proxy requirements for service mesh operations, as well as how to authorize inbound and outbound connections for your custom proxy. +--- + +# Use a custom proxy integration for service mesh + + + + The Connect Native Golang SDK and `v1/agent/connect/authorize`, `v1/agent/connect/ca/leaf`, + and `v1/agent/connect/ca/roots` APIs are deprecated and will be removed in a future release. Although Connect Native + will still operate as designed, we do not recommend leveraging this feature because it is deprecated and will be removed when the long term replacement to native application integration (such as a proxyless gRPC service mesh integration) is delivered. Refer to [GH-10339](https://github.com/hashicorp/consul/issues/10339) for additional information and to track progress toward one potential solution that is tracked as replacement functionality. + + The Native App Integration does not support many of the Consul's service mesh features, and is not under active development. + The [Envoy proxy](/consul/docs/reference/proxy/envoy) should be used for most production environments. + + + +This topic describes the process and API endpoints you can use to extend proxies for integration with Consul. + +## Overview + +You can extend any proxy to support Consul service mesh. Consul ships with a built-in proxy suitable for an out-of-the-box development experience, but you may require a more robust proxy solution for production environments. + +The proxy you integrate must be able to accept inbound connections and/or establish outbound connections identified as a particular service. In some cases, either ability may be acceptable, but both are generally required to support for full sidecar functionality. + +Sidecar proxies may support L4 or L7 network functionality. L4 integration is simpler and adequate for securing all traffic. 
L4 treats all traffic as TCP, however, so advanced routing or metrics features are not supported. + +Full L7 support is built on top of L4 support. An L7 proxy integration supports most or all of the L7 traffic routing features in Consul service mesh by dynamically configuring routing, retries, and other L7 features. The built-in proxy only supports L4, while [Envoy](/consul/docs/reference/proxy/envoy) supports the full L7 feature set. + +Areas where the integration approach differs between L4 and L7 are identified in this topic. + +The noun _connect_ is used throughout this documentation to refer to the connect subsystem that provides Consul's service mesh capabilities. + +## Accepting Inbound Connections + +The proxy must accept TLS connections on some port to accept inbound connections. + +### Obtaining and validating client certificates + +Call the [`/v1/agent/connect/ca/leaf/`] API endpoint to obtain the client certificate, e.g.: + + + +```shell +$ curl http://:8500/v1/agent/connect/ca/leaf/ +``` + + + +The client certificate from the inbound connection must be validated against the service mesh CA root certificates. Call the [`/v1/agent/connect/ca/roots`] endpoint to obtain the root certificates from the service mesh CA, e.g.: + + + +```shell +$ curl http://:8500/v1/agent/connect/ca/roots +``` + + + +### Authorizing the connection + +After validating the client certificate from the caller, the proxy can authorize the entire connection (L4) or each request (L7). Depending upon the [protocol] of the proxied service, authorization is performed either on a per-connection (L4) or per-request (L7) basis. Authentication is based on "service identity" (TLS), and is implemented at the +transport layer. + +-> **Note:** Some features, such as (local) rate limiting or max connections, are expected to be proxy-level configurations enforced separately when authorization calls are made. Proxies can enforce the configurations based on information about request rates and other states that should already be available. + +The proxy can authorize the connection by either calling the [`/v1/agent/connect/authorize`](/consul/api-docs/agent/connect) API endpoint or by querying the [intention match API](/consul/api-docs/connect/intentions#list-matching-intentions) endpoint. + +The [`/v1/agent/connect/authorize`](/consul/api-docs/agent/connect) endpoint should be called in the connection path for each received connection. +If the local Consul agent is down or unresponsive, the success rate of new connections will be compromised. +The agent uses locally-cached data to authorize the connection and typically responds in microseconds. As a result, the TLS handshake typically spans microseconds. + +~> **Note:** This endpoint is only suitable for L4 (e.g., TCP) integration. The endpoint always treats intentions with `Permissions` defined (i.e., L7 criteria) as `deny` intentions during evaluation. + +The proxy can query the [intention match API](/consul/api-docs/connect/intentions#list-matching-intentions) endpoint on startup to retrieve a list of intentions that match the proxy destination. The matches should be stored in the native filter configuration of the proxy, such as RBAC for Envoy. + +For performance and reliability reasons, querying the intention match API endpoint is the recommended method for implementing intention enforcement. The cached intentions should be consulted for each incoming connection (L4) or request (L7) to determine if the connection or request should be accepted or rejected. 
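For illustration, the following request lists the intentions that match a destination service named `web`; the agent address and service name are assumptions for this example:

```shell-session
$ curl "http://localhost:8500/v1/connect/intentions/match?by=destination&name=web"
```

The proxy can cache the returned intentions and evaluate them locally for each new connection or request.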
+ +#### Persistent TCP connections and intentions + +For a proxied service configured with the TCP [protocol], potentially long-lived TCP connections will only be authorized when the connections are initially established. But because many services, such as databases, typically use persistent connection pools, changing intentions to deny access does not terminate existing connections. This behavior violates the updated intention. In these cases, it may appear as if the intention is not being enforced. + +Implement one of the following strategies to close connections: + +1. **Configure connections to terminate after a maximum lifetime**, e.g., several hours. This balances the overhead of establishing new connections with determining how long existing connections remain open after an intention changes. + +1. **Periodically re-authorize every open connection**. The authorization call is inexpensive and should be a local, in-memory operation on the Consul agent. Periodically authorizing thousands of open connections (e.g., once every minute) is likely to be negligible overhead, but doing so enforces a tighter upper boundary on how long it takes to enforce intention changes without affecting the protocol efficiency of persistent connections. + +#### Certificate serial in authorization + +Intentions currently use TLS URI Subject Alternative Name (SAN) for enforcement. The `AuthZ` API in the Go SDK contains a field for passing the serial number ([`consul/connect/tls.go`]). Proxies may provide this value during authorization. + +### Updating data + +The API endpoints described in this section operate on agent-local data that is updated in the background. The leaf, roots, and intentions should be updated in the background by the proxy. + +The leaf cert, root cert, and intentions endpoints support [blocking queries](/consul/api-docs/features/blocking), which should be used to get near-immediate updates for root key rotations, new leaf certs before expiry, and intention changes. + +### SPIFFE certificates + +Although Consul follows the SPIFFE spec for certificates, some CA providers do not allow strict adherence. For example, CA certificates may not have the correct trust-domain SPIFFE URI SAN for the cluster. If SPIFFE validation is performed in the proxy, be aware that it should be possible to opt out, otherwise certain CA providers supported by Consul will not be compatible with the use of that proxy. Neither Envoy nor the built-in proxy currently validate the SPIFFE URI of the chain beyond the leaf certificate. + +## Establishing Outbound Connections + +For outbound connections, the proxy should communicate with a mesh-capable endpoint for a service and provide a client certificate from the [`/v1/agent/connect/ca/leaf/`] API endpoint. The proxy must use the root certificate obtained from the [`/v1/agent/connect/ca/roots`] endpoint to verify the certificate served from the destination endpoint. + +## Configuration Discovery + +The [`/v1/agent/service/:service_id`](/consul/api-docs/agent/service#get-service-configuration) API endpoint enables any proxy to discover proxy configurations registered with a local service. This endpoint supports hash-based blocking, which enables long-polling for changes to the registration/configuration. Any changes to the registration/config will result in the new config being returned immediately. + +Refer to the [built-in proxy](/consul/docs/reference/proxy/built-in) for an example implementation. 
Using the Go SDK, the proxy calls the HTTP "pull" API via the `watch` package: [`consul/connect/proxy/config.go`]. + +The [discovery chain] for each upstream service should be fetched from the [`/v1/discovery-chain/:service_id`](/consul/api-docs/discovery-chain#read-compiled-discovery-chain) API endpoint. This will return a compiled graph of configurations a sidecar needs for a particular upstream service. + +If you are only implementing L4 support in your proxy, set the [`OverrideProtocol`](/consul/api-docs/discovery-chain#overrideprotocol) value to `tcp` when fetching the discovery chain so that L7 features, such as HTTP routing rules, are not returned. + +For each [target](/consul/docs/connect/manage-traffic/discovery-chain#targets) in the resulting discovery chain, a list of healthy, mesh-capable endpoints may be fetched from the [`/v1/health/connect/:service_id`] API endpoint as described in the [Service Discovery](#service-discovery) section. + +The remaining nodes in the chain include configurations that should be translated into the nearest equivalent for features, such as HTTP routing, connection timeouts, connection pool settings, rate limits, etc. See the full [discovery chain] documentation and relevant [config entry](/consul/docs/fundamentals/config-entry) documentation for details about supported configuration parameters. + +### Service Discovery + +Proxies can use Consul's [service discovery API](/consul/api-docs/health#list-service-instances-for-mesh-enabled-service) to return all available, mesh-capable endpoints for a given service. This endpoint supports a `cached` query parameter, which uses [agent caching](/consul/api-docs/features/caching) to improve performance. The API package provides a [`UseCache`] query option to leverage caching. In addition to performance improvements, using the cache makes the mesh more resilient to Consul server outages. This is because the mesh "fails static" with the last known set of service instances still used, rather than errors on new connections. + +Proxies can decide whether to perform just-in-time queries to the API when a new connection needs to be routed, or to use blocking queries to load the current set of endpoints for a service and keep that list updated. The SDK and built-in proxy currently use just-in-time resolution however many existing proxies are likely to find it easier to integrate by pulling the set of endpoints and maintaining it in local memory using blocking queries. + +Upstreams may be defined with the Prepared Query target type. These upstreams should use Consul's [prepared query](/consul/api-docs/query) API to determine a list of upstream endpoints for the service. Note that the `PreparedQuery` API does not support blocking, so proxies choosing to populate endpoints in memory will need to poll the endpoint at a suitable and, ideally, configurable frequency. + +-> **Long-term support for [`service-resolver`](/consul/docs/reference/config-entry/service-resolver) configuration entries**. The `service-resolver` configuration will completely replace prepared queries in future versions of Consul. In some instances, however, prepared queries are still used. + +## Sidecar Instantiation + +Consul does not start or manage sidecar proxy processes. Proxies running on a physical host or VM are designed to be started and run by process supervisor systems, such as init, systemd, supervisord, etc. If deployed within a cluster scheduler (Kubernetes, Nomad), proxies should run as a sidecar container in the same namespace. 
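As a rough illustration of supervising a sidecar on a VM, the following systemd unit runs Consul's built-in proxy for a service named `web`; the unit name, file paths, and service name are assumptions, and a custom proxy binary could be substituted for the `consul connect proxy` command:

```ini
# /etc/systemd/system/web-sidecar-proxy.service (illustrative sketch only)
[Unit]
Description=Sidecar proxy for the web service
After=network-online.target consul.service

[Service]
# Address of the local Consul agent; see the environment variables described below.
Environment=CONSUL_HTTP_ADDR=127.0.0.1:8500
ExecStart=/usr/bin/consul connect proxy -sidecar-for web -token-file=/etc/consul.d/consul.token
Restart=on-failure

[Install]
WantedBy=multi-user.target
```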
+ +Proxies will use the [`CONSUL_HTTP_TOKEN`](/consul/commands#consul_http_token) and [`CONSUL_HTTP_ADDR`](/consul/commands#consul_http_addr) environment variables to contact Consul and fetch certificates. This occurs if the `CONSUL_HTTP_TOKEN` environment variable contains a Consul ACL token that has the necessary permissions to read configurations for that service. If you use the Go [`api` package], then the environment variables will be read and the client configured for you automatically. + +Alternatively, you may also use the flags `-token` or `-token-file` to provide the Consul ACL token. + + + + +```shell +$ consul connect envoy -sidecar-for "web" -token-file=/etc/consul.d/consul.token +``` + + + + + +```shell +$ consul connect proxy -sidecar-for "web" -token-file=/etc/consul.d/consul.token +``` + + + + +If TLS is enabled on Consul, you will also need to add the following environment variables _prior_ to starting the proxy: + +- [`CONSUL_CACERT`](/consul/commands#consul_cacert) +- [`CONSUL_CLIENT_CERT`](/consul/commands#consul_client_cert) +- [`CONSUL_CLIENT_KEY`](/consul/commands#consul_client_key) + +The `CONSUL_CACERT`, `CONSUL_CLIENT_CERT` and `CONSUL_CLIENT_KEY` can also be provided as CLI flags. Refer to the [`consul connect proxy` documentation](/consul/commands/connect/proxy) for details. + +The proxy service ID comes from the user. See [`consul connect envoy`](/consul/commands/connect/envoy#examples) for an example. You can use the `-proxy-id` flag to specify the ID of the proxy service you have already registered with the local agent. + +Alternatively, you can start the service using the `-sidecar-for=` option. This option queries Consul for a proxy that is registered as a sidecar for the specified ``. If exactly one service associated with the proxy is returned, the ID will be used to start the proxy. Your controller only needs to accept `-proxy-id` as an argument; the Consul CLI will resolve the ID for the name specified in `-sidecar-for` flag. + +[`/v1/agent/connect/ca/leaf/`]: /consul/api-docs/agent/connect#service-leaf-certificate +[`/v1/agent/connect/ca/roots`]: /consul/api-docs/agent/connect#certificate-authority-ca-roots +[`api` package]: https://github.com/hashicorp/consul/tree/main/api +[`consul/connect/proxy/config.go`]: https://github.com/hashicorp/consul/blob/v1.8.3/connect/proxy/config.go#L187 +[`consul/connect/tls.go`]: https://github.com/hashicorp/consul/blob/v1.8.3/connect/tls.go#L232-L237 +[discovery chain]: /consul/docs/connect/manage-traffic/discovery-chain +[`usecache`]: https://github.com/hashicorp/consul/blob/v1.8.3/api/api.go#L99-L102 +[protocol]: /consul/docs/reference/config-entry/service-defaults#protocol diff --git a/website/content/docs/connect/proxy/index.mdx b/website/content/docs/connect/proxy/index.mdx new file mode 100644 index 000000000000..d528c678791f --- /dev/null +++ b/website/content/docs/connect/proxy/index.mdx @@ -0,0 +1,86 @@ +--- +layout: docs +page_title: Service mesh proxy overview +description: >- + In Consul service mesh, each service has a sidecar proxy that secures connections with other services in the mesh without modifying the underlying application code. You can use the built-in proxy, Envoy, or a custom proxy to handle communication and verify TLS connections. +--- + +# Service mesh proxy overview + +This topic provides an overview of how Consul uses proxies in your service mesh. A proxy is a type of service that enables unmodified applications to connect to other services in the service mesh. 
Consul ships with a built-in L4 proxy and has first class support for Envoy. You can plug other proxies into your environment as well, and apply configurations in Consul to define proxy behavior. + +## Proxy use cases + +You can configure proxies to perform several different types of functions in Consul. + +### Service mesh proxies + +A _service mesh proxy_ is any type of proxy service that you use for communication in a service mesh. Specialized proxy implementations, such as sidecar proxies and gateways, are subsets of service mesh proxies. Refer to [Deploy service mesh proxy services](/consul/docs/connect/proxy/mesh) for instructions on how to deploy a service mesh proxy. + +### Sidecars + +Sidecar proxies are service mesh proxy implementations that transparently handles inbound and outbound service connections. Sidecars automatically wrap and verify TLS connections. In a non-containerized network, each service in your mesh should have its own sidecar proxy. + +Refer to [Deploy sidecar services](/consul/docs/reference/proxy/sidecar) for additional information. + +### Gateways + +You can configure service mesh proxies to operate as gateway services, which allow service-to-service traffic across different network areas, including peered clusters, WAN-federated datacenters, and nodes outside the mesh. Consul ships with several types of gateway capabilities, but gateways deliver the underlying functionality. + +Refer to [Gateways overview](/consul/docs/architecture/data-plane/gateway) for additional information. + +### Dynamic traffic control + +You can configure proxies to dynamically redirect or split traffic to implement a failover strategy, test new features, and roll out new versions of your applications. You can apply a combination of configurations to enable complex scenarios, such as failing over to the geographically nearest service or to services in different network areas that you have designated as being functionally identical. + +Refer to the following topics for additional information: + +- [Service mesh traffic management overview](/consul/docs/manage-traffic) +- [Failover overview](/consul/docs/manage-traffic/failover) + + +## Supported proxies + +Consul has first-class support for Envoy proxies, which is a highly configurable open source edge service proxy. Consul configures Envoy by optionally exposing a gRPC service on the local agent that serves Envoy's xDS configuration API. Refer to the following documentation for additional information: + +- [Envoy proxy reference](/consul/docs/reference/proxy/envoy) +- [Envoy API documentation](https://www.envoyproxy.io/docs/envoy/v1.17.2/api-docs/xds_protocol) + +You can use Consul's built-in proxy service that supports L4 network traffic, which is suitable for testing and development but not recommended for production environments. Refer to the [built-in proxy reference](/consul/docs/reference/proxy/built-in) for additional information. + +## Workflow + +The following procedure describes how to implement proxies: + +1. **Configure global proxy settings**. You can configure global passthrough settings for all proxies deployed to your service mesh in the proxy defaults configuration entry. This step is not required, but it enables you to define common behaviors in a central configuration. +1. **Deploy your service mesh proxy**. Configure proxy behavior in a service definition and register the proxy with Consul. +1. **Start the proxy service**. 
Proxies appear in the list of services registered to Consul and must be started before they begin to route traffic in your service mesh. + +### Dynamic upstreams require native integration + +Service mesh proxies do not support dynamic upstreams. If an application requires dynamic dependencies that are only available at runtime, you must [natively integrate](/consul/docs/automate/native) the application with Consul service mesh. After integration, the application can use the HTTP API or [DNS interface](/consul/docs/services/discovery/dns-static-lookups#service-mesh-enabled-service-lookups) to connect to other services in the mesh. + +## Proxies in Kubernetes-orchestrated networks + +For Kubernetes-orchestrated environments, Consul deploys _dataplanes_ by default to manage sidecar proxies. Consul dataplanes are lightweight processes that leverage existing Kubernetes sidecar orchestration capabilities. Refer to the [dataplanes documentation](/consul/docs/architecture/control-plane/dataplane) for additional information. + +## Guidance + +Refer to the following resources for help using service mesh proxies: + +### Tutorial + +- [Using Envoy with Consul service mesh](/consul/tutorials/developer-mesh/service-mesh-with-envoy-proxy) + +### Usage documentation + +- [Deploy service mesh proxies](/consul/docs/connect/proxy/mesh) +- [Deploy sidecar proxies](/consul/docs/reference/proxy/sidecar) +- [Integrate custom proxies](/consul/docs/connect/proxy/custom) + +### Reference documentation + +- [Proxy defaults configuration entry reference](/consul/docs/reference/config-entry/proxy-defaults) +- [Envoy proxies reference](/consul/docs/reference/proxy/envoy) +- [Service mesh proxy configuration reference](/consul/docs/reference/proxy/connect-proxy) +- [`consul connect envoy` command](/consul/commands/connect/envoy) diff --git a/website/content/docs/connect/proxy/mesh.mdx b/website/content/docs/connect/proxy/mesh.mdx new file mode 100644 index 000000000000..ae7c2061f4f5 --- /dev/null +++ b/website/content/docs/connect/proxy/mesh.mdx @@ -0,0 +1,79 @@ +--- +layout: docs +page_title: Deploy service mesh proxies +description: >- + Envoy and other proxies in Consul service mesh enable service-to-service communication across your network. Learn how to deploy service mesh proxies in this topic. +--- + +# Deploy service mesh proxy services + +This topic describes how to create, register, and start service mesh proxies in Consul. Refer to [Service mesh proxies overview](/consul/docs/connect/proxy) for additional information about how proxies enable Consul functionality. + +For information about deploying proxies as sidecars for service instances, refer to [Deploy sidecar proxy services](/consul/docs/reference/proxy/sidecar). + +## Overview + +Complete the following steps to deploy a service mesh proxy: + +1. It is not required, but you can create a proxy defaults configuration entry that contains global passthrough settings for all Envoy proxies. +1. Create a service definition file and specify the proxy configurations in the `proxy` block. +1. Register the service using the API or CLI. +1. Start the proxy service. Proxies appear in the list of services registered to Consul, but they must be started before they begin to route traffic in your service mesh.
+ +## Requirements + +If ACLs are enabled and you want to configure global Envoy settings using the [proxy defaults configuration entry](/consul/docs/reference/config-entry/proxy-defaults), you must present a token with `operator:write` permissions. Refer to [Create a service token](/consul/docs/secure/acl/token/service) for additional information. + +## Configure global Envoy passthrough settings + +If you want to define global passthrough settings for all Envoy proxies, create a proxy defaults configuration entry and specify default settings, such as access log configuration. Note that [service defaults configuration entries](/consul/docs/reference/config-entry/service-defaults) override proxy defaults and individual service configurations override both configuration entries. + +1. Create a proxy defaults configuration entry and specify the following parameters: + - `Kind`: Must be set to `proxy-defaults` + - `Name`: Must be set to `global` +1. Configure any additional settings you want to apply to all proxies. Refer to [Proxy defaults configuration entry reference](/consul/docs/reference/config-entry/proxy-defaults) for details about all settings available in the configuration entry. +1. Apply the configuration by either calling the [`/config` HTTP API endpoint](/consul/api-docs/config) or running the [`consul config write` CLI command](/consul/commands/config/write). The following example writes a proxy defaults configuration entry from a local HCL file using the CLI: + +```shell-session +$ consul config write proxy-defaults.hcl +``` + +## Define service mesh proxy + +Create a service definition file and configure the following fields to define a service mesh proxy: + +1. Set the `kind` field to `connect-proxy`. Refer to the [services configuration reference](/consul/docs/reference/service#kind) for information about other kinds of proxies you can declare. +1. Specify a name for the proxy service in the `name` field. Consul applies the configurations to any proxies you bootstrap with the same name. +1. In the `proxy.destination_service_name` field, specify the name of the service that the proxy represents. +1. Configure any additional proxy behaviors that you want to implement in the `proxy` block. Refer to the [Service mesh proxy configuration reference](/consul/docs/reference/proxy/connect-proxy) for information about all parameters. +1. Specify a port number where other services registered with Consul can discover and connect to the proxy service in the `port` field. To ensure that services only allow external connections established through the service mesh protocol, you should configure all services to only accept connections on a loopback address. + +Refer to the [Service mesh proxy configuration reference](/consul/docs/reference/proxy/connect-proxy) for example configurations. + +## Register the service + +Provide the service definition to the Consul agent to register your proxy service. You can use the same methods for registering proxy services as you do for registering application services: + +- Place the service definition in a Consul agent's configuration directory and start, restart, or reload the agent. Use this method when implementing changes to an existing proxy service. +- Use the `consul services register` command to register the proxy service with a running Consul agent. +- Call the `/agent/service/register` HTTP API endpoint to register the proxy service with a running Consul agent.
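Before registering the proxy, save the service definition to a file. The following is a minimal sketch of what such a file, referred to as `proxy.hcl` in the example below, might contain; the service names and port numbers are assumptions:

```hcl
service {
  name = "web-proxy"
  kind = "connect-proxy"
  port = 21000

  proxy {
    # Name of the service this proxy represents.
    destination_service_name = "web"

    # Address and port where the local service instance accepts traffic from the proxy.
    local_service_address = "127.0.0.1"
    local_service_port    = 8080
  }
}
```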
+ +Refer to [Register services and health checks](/consul/docs/register/service/vm) for instructions. + +In the following example, the `consul services register` command registers a proxy service stored in `proxy.hcl`: + +```shell-session +$ consul services register proxy.hcl +``` + +## Start the proxy + +Envoy requires a bootstrap configuration file before it can start. Use the [`consul connect envoy` command](/consul/commands/connect/envoy) to create the Envoy bootstrap configuration and start the proxy service. Specify the ID of the proxy you want to start with the `-proxy-id` option. + +The following example command starts an Envoy proxy for the `web-proxy` service: + +```shell-session +$ consul connect envoy -proxy-id=web-proxy +``` + +For details about operating an Envoy proxy in Consul, refer to the [Envoy proxy reference](/consul/docs/reference/proxy/envoy). diff --git a/website/content/docs/connect/proxy/sidecar.mdx b/website/content/docs/connect/proxy/sidecar.mdx new file mode 100644 index 000000000000..543f579cf947 --- /dev/null +++ b/website/content/docs/connect/proxy/sidecar.mdx @@ -0,0 +1,283 @@ +--- +layout: docs +page_title: Deploy proxies as sidecar services +description: >- + You can register a service instance and its sidecar proxy at the same time. Learn about default settings, customizable parameters, limitations, and lifecycle behaviors of the sidecar proxy. +--- + +# Deploy sidecar services + +This topic describes how to create, register, and start sidecar proxy services in Consul. Refer to [Service mesh proxies overview](/consul/docs/connect/proxy) for additional information about how proxies enable Consul's functions and operations. For information about deploying service mesh proxies, refer to [Deploy service mesh proxies](/consul/docs/connect/proxy/mesh). + +## Overview + +Sidecar proxies run on the same node as the single service instance that they handle traffic for. +They may be on the same VM or running as a separate container in the same network namespace. + +You can attach a sidecar proxy to a service you want to deploy to your mesh: + +1. It is not required, but you can create a proxy defaults configuration entry that contains global passthrough settings for all Envoy proxies. +1. Create the service definition and include the `connect` block. The `connect` block contains the sidecar proxy configurations that allow the service to interact with other services in the mesh. +1. Register the service using either the API or CLI. +1. Start the sidecar proxy service. + +## Requirements + +If ACLs are enabled and you want to configure global Envoy settings in the [proxy defaults configuration entry](/consul/docs/reference/config-entry/proxy-defaults), you must present a token with `operator:write` permissions. Refer to [Create a service token](/consul/docs/secure/acl/token/service) for additional information. + +## Configure global Envoy passthrough settings + +If you want to define global passthrough settings for all Envoy proxies, create a proxy defaults configuration entry and specify default settings, such as access log configuration. [Service defaults configuration entries](/consul/docs/reference/config-entry/service-defaults) override proxy defaults and individual service configurations override both configuration entries. + +1. Create a proxy defaults configuration entry and specify the following parameters: + - `Kind`: Must be set to `proxy-defaults` + - `Name`: Must be set to `global` +1. Configure any additional settings you want to apply to all proxies. 
Refer to [Proxy defaults configuration entry reference](/consul/docs/reference/config-entry/proxy-defaults) for details about all settings available in the configuraiton entry. +1. Apply the configuration by either calling the [`/config` API endpoint](/consul/api-docs/config) or running the [`consul config write` CLI command](/consul/commands/config/write). The following example writes a proxy defaults configuration entry from a local HCL file using the CLI: + +```shell-session +$ consul config write proxy-defaults.hcl +``` + +## Define service mesh proxy + +Create a service definition and configure the following fields: + +1. `name`: Specify a name for the service you want to attach a sidecar proxy to in the `name` field. This field is required for all services you want to register in Consul. +1. `port`: Specify a port number where other services registered with Consul can discover and connect to the service in the `port` field. This field is required for all services you want to register in Consul. +1. `connect`: Set the `connect` field to `{ sidecar_service: {} }`. The `{ sidecar_service: {} }` value is a macro that applies a set of default configurations that enable you to quickly implement a sidecar. Refer to [Sidecar service defaults](#sidecar-service-defaults) for additional information. +1. Configure any additional options for your service. Refer to [Services configuration reference](/consul/docs/reference/service) for details. + +In the following example, a service named `web` is configured with a sidecar proxy: + + + + + +```hcl +service = { + name = "web" + port = 8080 + connect = { sidecar_service = {} } +} +``` + + + + + +```json + +{ + "service": { + "name": "web", + "port": 8080, + "connect": { "sidecar_service": {} } + } +} + +``` + + + + + +When Consul processes the service definition, it generates the following configuration in place of the `sidecar_service` macro. Note that sidecar proxies services are based on the `connect-proxy` type: + + + + + +```hcl +services = [ + { + name = "web" + port = 8080 + } + checks = { + Interval = "10s" + Name = "Connect Sidecar Listening" + TCP = "127.0.0.1:20000" + } + checks = { + alias_service = "web" + name = "Connect Sidecar Aliasing web" + } + kind = "connect-proxy" + name = "web-sidecar-proxy" + port = 20000 + proxy = { + destination_service_id = "web" + destination_service_name = "web" + local_service_address = "127.0.0.1" + local_service_port = 8080 + } +] + +``` + + + + + +```json +{ + "services": [ + { + "name": "web", + "port": 8080 + }, + { + "name": "web-sidecar-proxy", + "port": 20000, + "kind": "connect-proxy", + "checks": [ + { + "Name": "Connect Sidecar Listening", + "TCP": "127.0.0.1:20000", + "Interval": "10s" + }, + { + "name": "Connect Sidecar Aliasing web", + "alias_service": "web" + } + ], + "proxy": { + "destination_service_name": "web", + "destination_service_id": "web", + "local_service_address": "127.0.0.1", + "local_service_port": 8080 + } + } + ] +} + +``` + + + + + +## Register the service + +Provide the service definition to the Consul agent to register your proxy service. You can use the same methods for registering proxy services as you do for registering application services: + +- Place the service definition in a Consul agent's configuration directory and start, restart, or reload the agent. Use this method when implementing changes to an existing proxy service. +- Use the `consul services register` command to register the proxy service with a running Consul agent. 
+ +- Call the `/agent/service/register` HTTP API endpoint to register the proxy service with a running Consul agent. + +Refer to [Register services and health checks](/consul/docs/register/service/vm) for instructions. + +In the following example, the `consul services register` command registers a proxy service stored in `proxy.hcl`: + +```shell-session +$ consul services register proxy.hcl +``` + +## Start the proxy + +Envoy requires a bootstrap configuration file before it can start. Use the [`consul connect envoy` command](/consul/commands/connect/envoy) to create the Envoy bootstrap configuration and start the proxy service. Specify the name of the service with the attached proxy with the `-sidecar-for` option. + +The following example command starts an Envoy sidecar proxy for the `web` service: + +```shell-session +$ consul connect envoy -sidecar-for=web +``` + +For details about operating an Envoy proxy in Consul, refer to the [Envoy proxy reference](/consul/docs/reference/proxy/envoy). + +## Configuration reference + +The `sidecar_service` block is a service definition that can contain most regular service definition fields. Refer to [Limitations](#limitations) for information about unsupported service definition fields for sidecar proxies. + +Consul treats sidecar proxy service definitions as a root-level service definition. All fields are optional in nested definitions, which default to opinionated settings that are intended to reduce the burden of setting up a sidecar proxy. + +## Sidecar service defaults + +The following fields are set by default on a sidecar service registration. With +[the exceptions noted](#limitations), any field may be overridden explicitly in +the `connect.sidecar_service` definition to customize the proxy registration. +The "parent" service refers to the service definition that embeds the sidecar +proxy. + +- `id` - ID defaults to `<parent-service-id>-sidecar-proxy`. This value cannot + be overridden as it is used to [manage the lifecycle](#lifecycle) of the + registration. +- `name` - Defaults to `<parent-service-name>-sidecar-proxy`. +- `tags` - Defaults to the tags of the parent service. +- `meta` - Defaults to the service metadata of the parent service. +- `port` - Defaults to being auto-assigned from a configurable + range specified by [`sidecar_min_port`](/consul/docs/reference/agent/configuration-file/general#sidecar_min_port) + and [`sidecar_max_port`](/consul/docs/reference/agent/configuration-file/general#sidecar_max_port). +- `kind` - Defaults to `connect-proxy`. This value cannot be overridden. +- `check`, `checks` - By default we add a TCP check on the local address and + port for the proxy, and a [service alias check](/consul/docs/register/health-check/vm#alias-checks) for the parent service. If either + `check` or `checks` fields are set, only the provided checks are registered. +- `proxy.destination_service_name` - Defaults to the parent service name. +- `proxy.destination_service_id` - Defaults to the parent service ID. +- `proxy.local_service_address` - Defaults to `127.0.0.1`. +- `proxy.local_service_port` - Defaults to the parent service port.
+ +### Example with overwritten configurations + +In the following example, the `sidecar_service` macro sets baseline configurations for the proxy, but the [proxy +upstreams](/consul/docs/reference/proxy/connect-proxy#upstream-configuration-reference) +and [built-in proxy +configuration](/consul/docs/reference/proxy/built-in) fields contain custom values: + +```json +{ + "name": "web", + "port": 8080, + "connect": { + "sidecar_service": { + "proxy": { + "upstreams": [ + { + "destination_name": "db", + "local_bind_port": 9191 + } + ], + "config": { + "handshake_timeout_ms": 1000 + } + } + } + } +} +``` + +## Limitations + +The following fields are not supported in the `connect.sidecar_service` block: + +- `id` - Sidecar services get an ID assigned and it is an error to override + this value. This ID is required to ensure that the agent can correctly deregister the sidecar service + later when the parent service is removed. +- `kind` - Kind defaults to `connect-proxy` and there is no way to + unset this behavior. +- `connect.sidecar_service` - Service definitions cannot be nested recursively. +- `connect.native` - The `kind` is fixed to `connect-proxy` and it is + an error to register a `connect-proxy` that is also service mesh-native. + +## Lifecycle + +Sidecar service registration is mostly a configuration syntax helper to avoid +adding lots of boilerplate for basic sidecar options; however, the agent does +have some specific behavior around their lifecycle that makes them easier to +work with. + +The agent fixes the ID of the sidecar service to be based on the parent +service's ID, which enables the following behavior. + +- A service instance can only ever have one sidecar service registered. +- When re-registering through the HTTP API or reloading from a configuration file: + - If something changes in the nested sidecar service definition, the update is applied to the current sidecar registration instead of creating a new + one. + - If a service registration removes the nested `sidecar_service` then the + previously registered sidecar for that service is deregistered + automatically. +- When reloading the configuration files, if a service definition changes its + ID, then a new service instance and a new sidecar instance are + registered. The old instance and proxy are removed because they are no longer found in + the configuration files. diff --git a/website/content/docs/connect/proxy/transparent-proxy/ecs.mdx b/website/content/docs/connect/proxy/transparent-proxy/ecs.mdx new file mode 100644 index 000000000000..2e796cd584ee --- /dev/null +++ b/website/content/docs/connect/proxy/transparent-proxy/ecs.mdx @@ -0,0 +1,10 @@ +--- +layout: docs +page_title: Transparent proxy on ECS +description: >- + This topic provides +--- + +# Transparent proxy on ECS + +Editor's Note: This empty page represents a known content gap between our existing documentation and the refreshed documentation.
\ No newline at end of file diff --git a/website/content/docs/connect/proxy/transparent-proxy/enable.mdx b/website/content/docs/connect/proxy/transparent-proxy/enable.mdx new file mode 100644 index 000000000000..75affa86df65 --- /dev/null +++ b/website/content/docs/connect/proxy/transparent-proxy/enable.mdx @@ -0,0 +1,256 @@ +--- +layout: docs +page_title: Enable transparent proxy mode +description: >- + Learn how to enable transparent proxy mode, which enables Consul on Kubernetes to direct inbound and outbound traffic through the service mesh and increase application security without configuring individual upstream services. +--- + +# Enable transparent proxy mode + +This topic describes how to use transparent proxy mode in your service mesh. Transparent proxy allows applications to communicate through the service mesh without modifying their configurations. Transparent proxy also hardens application security by preventing direct inbound connections that bypass the mesh. Refer to [Transparent proxy overview](/consul/docs/connect/proxy/transparent-proxy) for additional information. + +## Requirements + +Your network must meet the following environment and software requirements to use transparent proxy. + +* Transparent proxy is available for Kubernetes environments. +* Consul 1.10.0+ +* Consul Helm chart 0.32.0+. If you want to use the Consul CNI plugin to redirect traffic, Helm chart 0.48.0+ is required. Refer to [Enable the Consul CNI plugin](#enable-the-consul-cni-plugin) for additional information. +* You must create [service intentions](/consul/docs/secure-mesh/intention) that explicitly allow communication between intended services so that Consul can infer upstream connections and use sidecar proxies to route messages appropriately. +* The `ip_tables` kernel module must be running on all worker nodes within a Kubernetes cluster. If you are using the `modprobe` Linux utility, for example, issue the following command: + + `$ modprobe ip_tables` + +~> **Upgrading to a supported version**: Always follow the [proper upgrade path](/consul/docs/upgrade/version-specific/#transparent-proxy-on-kubernetes) when upgrading to a supported version of Consul, Consul on Kubernetes (`consul-k8s`), and the Consul Helm chart. + +## Enable transparent proxy + +Transparent proxy mode is enabled for the entire cluster by default when you install Consul on Kubernetes using the Consul Helm chart. Refer to the [Consul Helm chart reference](/consul/docs/reference/k8s/helm) for information about all default configurations. + +You can explicitly enable transparent proxy for the entire cluster, individual namespaces, and individual services. + +### Entire cluster + +Use the `connectInject.transparentProxy.defaultEnabled` Helm value to enable or disable transparent proxy for the entire cluster: + +```yaml +connectInject: + transparentProxy: + defaultEnabled: true +``` + +### Kubernetes namespace + +Apply the `consul.hashicorp.com/transparent-proxy=true` label to enable transparent proxy for a Kubernetes namespace. The label overrides the `connectInject.transparentProxy.defaultEnabled` Helm value and defines the default behavior of Pods in the namespace. The following example enables transparent proxy for Pods in the `my-app` namespace: + +```shell-session +$ kubectl label namespaces my-app "consul.hashicorp.com/transparent-proxy=true" +``` +### Individual service + +Apply the `consul.hashicorp.com/transparent-proxy=true` annotation to enable transparent proxy on the Pod for each service. 
The annotation overrides the Helm value and the namespace label. The following example enables transparent proxy for the `static-server` service: + +```yaml +apiVersion: v1 +kind: Service +metadata: + name: static-server +spec: + selector: + app: static-server + ports: + - protocol: TCP + port: 80 + targetPort: 8080 +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: static-server +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: static-server +spec: + replicas: 1 + selector: + matchLabels: + app: static-server + template: + metadata: + name: static-server + labels: + app: static-server + annotations: + 'consul.hashicorp.com/connect-inject': 'true' + 'consul.hashicorp.com/transparent-proxy': 'true' + spec: + containers: + - name: static-server + image: hashicorp/http-echo:latest + args: + - -text="hello world" + - -listen=:8080 + ports: + - containerPort: 8080 + name: http + serviceAccountName: static-server +``` + +## Enable the Consul CNI plugin + +By default, Consul generates a `connect-inject init` container as part of the Kubernetes Pod startup process. The container configures traffic redirection in the service mesh through the sidecar proxy. To configure redirection, the container requires elevated CAP_NET_ADMIN privileges, which may not be compatible with security policies in your organization. + +Instead, you can enable the Consul container network interface (CNI) plugin to perform traffic redirection. Because the plugin is executed by the Kubernetes kubelet, it already has the elevated privileges necessary to configure the network. Additionally, you do not need to specify annotations that automatically overwrite Kubernetes HTTP health probes when the plugin is enabled (see [Overwrite Kubernetes HTTP health probes](#overwrite-kubernetes-http-health-probes)). + +The Consul Helm chart installs the CNI plugin, but it is disabled by default. Refer to the [instructions for enabling the CNI plugin](/consul/docs/k8s/installation/install#enable-the-consul-cni-plugin) in the Consul on Kubernetes installation documentation for additional information. + +## Traffic redirection + +There are two mechanisms for redirecting traffic through the sidecar proxies. By default, Consul injects an init container that redirects all inbound and outbound traffic. The default mechanism requires elevated permissions (CAP_NET_ADMIN) in order to redirect traffic to the service mesh. + +Alternatively, you can enable the Consul CNI plugin to handle traffic redirection. Because the Kubernetes kubelet runs CNI plugins, the Consul CNI plugin has the necessary privileges to apply routing tables in the network. + +Both mechanisms redirect all inbound and outbound traffic, but you can configure exceptions for specific Pods or groups of Pods. The following annotations enable you to exclude certain traffic from being redirected to sidecar proxies. + +### Exclude inbound ports + +The [`consul.hashicorp.com/transparent-proxy-exclude-inbound-ports`](/consul/docs/reference/k8s/annotation-label#consul-hashicorp-com-transparent-proxy-exclude-inbound-ports) annotation defines a comma-separated list of inbound ports to exclude from traffic redirection when running in transparent proxy mode. The port numbers are string data values. 
In the following example, services in the pod at ports `8200` and `8201` are not redirected through the transparent proxy: + + + +```yaml +metadata: + annotations: + consul.hashicorp.com/transparent-proxy-exclude-inbound-ports: "8200, 8201" +``` + + + +### Exclude outbound ports + +The [`consul.hashicorp.com/transparent-proxy-exclude-outbound-ports`](/consul/docs/reference/k8s/annotation-label#consul-hashicorp-com-transparent-proxy-exclude-outbound-ports) annotation defines a comma-separated list of outbound ports to exclude from traffic redirection when running in transparent proxy mode. The port numbers are string data values. In the following example, services in the pod at ports `8200` and `8201` are not redirected through the transparent proxy: + + + +```yaml +metadata: + annotations: + consul.hashicorp.com/transparent-proxy-exclude-outbound-ports: "8200, 8201" +``` + + + +### Exclude outbound CIDR blocks + +The [`consul.hashicorp.com/transparent-proxy-exclude-outbound-cidrs`](/consul/docs/reference/k8s/annotation-label#consul-hashicorp-com-transparent-proxy-exclude-outbound-cidrs) annotation +defines a comma-separated list of outbound CIDR blocks to exclude from traffic redirection when running in transparent proxy mode. The CIDR blocks are string data values. +In the following example, services in the `3.3.3.3/24` IP range are not redirected through the transparent proxy: + + + +```yaml +metadata: + annotations: + consul.hashicorp.com/transparent-proxy-exclude-outbound-cidrs: "3.3.3.3,3.3.3.3/24" +``` + + +### Exclude user IDs + +The [`consul.hashicorp.com/transparent-proxy-exclude-uids`](/consul/docs/reference/k8s/annotation-label#consul-hashicorp-com-transparent-proxy-exclude-uids) annotation +defines a comma-separated list of additional user IDs to exclude from traffic redirection when running in transparent proxy mode. The user IDs are string data values. +In the following example, services with the IDs `4444` and `44444` are not redirected through the transparent proxy: + + + +```yaml +metadata: + annotations: + consul.hashicorp.com/transparent-proxy-exclude-uids: "4444,44444" +``` + + + +## Kubernetes HTTP health probes configuration + +By default, `connect-inject` is disabled. As a result, Consul on Kubernetes uses a mechanism for traffic redirection that interferes with [Kubernetes HTTP health +probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/). This is because probes expect the kubelet to reach the application container on the probe's endpoint. Instead, traffic is redirected through the sidecar proxy. As a result, health probes return errors because the kubelet does not encrypt that traffic using a mesh proxy. + +There are two methods for solving this issue. The first method is to set the `connectInject.transparentProxy.defaultOverwriteProbes` Helm value to overwrite the Kubernetes HTTP health probes so that they point to the proxy. The second method is to [enable the Consul container network interface (CNI) plugin](#enable-the-consul-cni-plugin) to perform traffic redirection. Refer to the [Consul on Kubernetes installation instructions](/consul/docs/deploy/server/k8s/helm) for additional information. + +### Overwrite Kubernetes HTTP health probes + +You can either include the `connectInject.transparentProxy.defaultOverwriteProbes` Helm value in your installation or add the `consul.hashicorp.com/transparent-proxy-overwrite-probes` Kubernetes annotation to your pod configuration to overwrite health probes.
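For reference, the following sketches show both approaches; the values are illustrative assumptions:

```yaml
# Helm values: overwrite probes by default for all injected Pods.
connectInject:
  transparentProxy:
    defaultOverwriteProbes: true
```

To opt in for a single Pod instead, set the annotation on the Pod template:

```yaml
metadata:
  annotations:
    consul.hashicorp.com/transparent-proxy-overwrite-probes: 'true'
```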
+ +Refer to [Kubernetes Health Checks in Consul on Kubernetes](/consul/docs/register/health-check/k8s) for additional information. + +## Dial services across Kubernetes cluster + +If your [Consul servers are federated between Kubernetes clusters](/consul/docs/east-west/wan-federation/k8s), +then you must configure services in one Kubernetes cluster to explicitly dial a service in the datacenter of another Kubernetes cluster using the +[consul.hashicorp.com/connect-service-upstreams](/consul/docs/reference/k8s/annotation-label#consul-hashicorp-com-connect-service-upstreams) annotation. +The following example configures the service to dial an upstream service called `my-service` in datacenter `dc2` on port `1234`: + +```yaml +consul.hashicorp.com/connect-service-upstreams: "my-service:1234:dc2" +``` + +If your Consul cluster is deployed to a [single datacenter spanning multiple Kubernetes clusters](/consul/docs/deploy/server/k8s/multi-cluster), +then you must configure services in one Kubernetes cluster to explicitly dial a service in another Kubernetes cluster using the +[consul.hashicorp.com/connect-service-upstreams](/consul/docs/reference/k8s/annotation-label#consul-hashicorp-com-connect-service-upstreams) annotation. +The following example configures the service to dial an upstream service called `my-service` in another Kubernetes cluster on port `1234`: + +```yaml +consul.hashicorp.com/connect-service-upstreams: "my-service:1234" +``` + +You do not need to configure services to explicitly dial upstream services if your Consul clusters are connected with a [peering connection](/consul/docs/east-west/cluster-peering). + +## Configure service selectors + +When transparent proxy is enabled, traffic sent to [KubeDNS](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/) +or Pod IP addresses is redirected through the proxy. You must use a selector to bind Kubernetes Services to Pods as you define Kubernetes Services in the mesh. +The Kubernetes Service name must match the Consul service name to use KubeDNS. This is the default behavior unless you have applied the `consul.hashicorp.com/connect-service` +Kubernetes annotation to the service pods. The annotation overrides the Consul service name. + +Consul configures redirection for each Pod bound to the Kubernetes Service using `iptables` rules. The rules redirect all inbound and outbound traffic through an inbound and outbound listener on the sidecar proxy. Consul configures the proxy to route traffic to the appropriate upstream services based on [service +intentions](/consul/docs/reference/config-entry/service-intentions), which address the upstream services using KubeDNS. + +In the following example, the Kubernetes service selects `sample-app` application Pods so that they can be reached within the mesh. + + + +```yaml +apiVersion: v1 +kind: Service +metadata: + name: sample-app + namespace: default +spec: + selector: + app: sample-app + ports: + - protocol: TCP + port: 80 +``` + + + +Additional services can query the [KubeDNS](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/) at `sample-app.default.svc.cluster.local` to reach `sample-app`. If ACLs are enabled and configured with default `deny` policies, the configuration also requires a [`ServiceIntention`](/consul/docs/reference/config-entry/service-intentions) to allow it to talk to `sample-app`. + +You can query the KubeDNS for a service that belongs to a sameness group at `sample-app.virtual.group-name.sg.consul`. 
This syntax is required when failover is desired and `spec.defaultForFailover` is set to `false` in the sameness group CRD. Refer to [sameness group configuration entry reference](/consul/docs/reference/config-entry/sameness-group) for more information. + +### Headless services +For services that are not addressed using a virtual cluster IP, you must configure the upstream service using the [DialedDirectly](/consul/docs/reference/config-entry/service-defaults#dialeddirectly) option. Then, use DNS to discover individual instance addresses and dial them through the transparent proxy. When this mode is enabled on the upstream, services present service mesh certificates for mTLS and intentions are enforced at the destination. + +Note that when dialing individual instances, Consul ignores the HTTP routing rules configured with configuration entries. The transparent proxy acts as a TCP proxy to the original destination IP address. + +## Known limitations + +- Deployment configurations with federation across or a single datacenter spanning multiple clusters must explicitly dial a service in another datacenter or cluster using annotations. + +- When dialing headless services, the request is proxied using a plain TCP proxy. Consul does not take into consideration the upstream's protocol. \ No newline at end of file diff --git a/website/content/docs/connect/proxy/transparent-proxy/index.mdx b/website/content/docs/connect/proxy/transparent-proxy/index.mdx new file mode 100644 index 000000000000..12c92b8dd576 --- /dev/null +++ b/website/content/docs/connect/proxy/transparent-proxy/index.mdx @@ -0,0 +1,47 @@ +--- +layout: docs +page_title: Transparent proxy overview +description: >- + Transparent proxy enables Consul on Kubernetes to direct inbound and outbound traffic through the service mesh. Learn how transparently proxying increases application security without configuring individual upstream services. +--- + +# Transparent proxy overview + +This topic provides overview information about transparent proxy mode, which allows applications to communicate through the service mesh without modifying their configurations. Transparent proxy also hardens application security by preventing direct inbound connections that bypass the mesh. + +## Introduction + +When service mesh proxies are in transparent mode, Consul service mesh uses IPtables to direct all inbound and outbound traffic to the sidecar. Consul also uses information configured in service intentions to infer routes, which eliminates the need to explicitly configure upstreams. + +### Transparent proxy enabled + +The following diagram shows how Consul routes traffic when proxies are in transparent mode: + +![Diagram demonstrating that with transparent proxy, connections are automatically routed through the mesh](/img/consul-connect/with-transparent-proxy.png) + +### Transparent proxy disabled + +When transparent proxy mode is disabled, you must manually configure explicit upstreams, configure your applications to query for services at `localhost:`, and configure applications to only listen on the loopback interface to prevent services from bypassing the mesh. + +The following diagram shows how Consul routes traffic when transparent proxy mode is disabled: + +![Diagram demonstrating that without transparent proxy, applications must "opt in" to connecting to their dependencies through the mesh](/img/consul-connect/without-transparent-proxy.png) + +Transparent proxy is available for Kubernetes environments. 
As part of the integration with Kubernetes, Consul registers Kubernetes Services, injects sidecar proxies, and enables traffic redirection. + +## Supported networking architectures + +Transparent proxy mode enables several networking architectures and workflows. You can query Consul DNS to discover upstreams for single services, virtual services, and failover service instances that are in peered clusters. + +Consul supports the following intra-datacenter connection types for discovering upstreams when transparent proxy mode is enabled: + +- KubeDNS lookups across WAN-federated datacenters +- Consul DNS lookups across WAN-federated datacenters +- KubeDNS lookups in peered clusters and admin partitions +- Consul DNS lookups in peered clusters and admin partitions + +## Mutual TLS for transparent proxy mode + +Transparent proxy mode is enabled by default when you install Consul on Kubernetes using the Consul Helm chart. As a result, all services in the mesh must communicate through sidecar proxies, which enforce service intentions and mTLS encryption for the service mesh. While onboarding new services to service mesh, your network may have mixed mTLS and non-mTLS traffic, which can result in broken service-to-service communication. + +You can temporarily enable permissive mTLS mode during the onboarding process so that existing mesh services can accept traffic from services that are not yet fully onboarded. Permissive mTLS enables sidecar proxies to access both mTLS and non-mTLS traffic. Refer to [Onboard mesh services in transparent proxy mode](/consul/docs/register/external/permissive-mtls) for additional information. diff --git a/website/content/docs/connect/proxy/transparent-proxy/k8s.mdx b/website/content/docs/connect/proxy/transparent-proxy/k8s.mdx new file mode 100644 index 000000000000..218e66d39b13 --- /dev/null +++ b/website/content/docs/connect/proxy/transparent-proxy/k8s.mdx @@ -0,0 +1,256 @@ +--- +layout: docs +page_title: Enable transparent proxy mode on Kubernetes +description: >- + Learn how to enable transparent proxy mode, which enables Consul on Kubernetes to direct inbound and outbound traffic through the service mesh and increase application security without configuring individual upstream services. +--- + +# Enable transparent proxy mode on Kubernetes + +This topic describes how to use transparent proxy mode in your service mesh. Transparent proxy allows applications to communicate through the service mesh without modifying their configurations. Transparent proxy also hardens application security by preventing direct inbound connections that bypass the mesh. Refer to [Transparent proxy overview](/consul/docs/connect/proxy/transparent-proxy) for additional information. + +## Requirements + +Your network must meet the following environment and software requirements to use transparent proxy. + +* Transparent proxy is available for Kubernetes environments. +* Consul 1.10.0+ +* Consul Helm chart 0.32.0+. If you want to use the Consul CNI plugin to redirect traffic, Helm chart 0.48.0+ is required. Refer to [Enable the Consul CNI plugin](#enable-the-consul-cni-plugin) for additional information. +* You must create [service intentions](/consul/docs/secure-mesh/intention) that explicitly allow communication between intended services so that Consul can infer upstream connections and use sidecar proxies to route messages appropriately. +* The `ip_tables` kernel module must be running on all worker nodes within a Kubernetes cluster. 
If you are using the `modprobe` Linux utility, for example, issue the following command: + + `$ modprobe ip_tables` + +~> **Upgrading to a supported version**: Always follow the [proper upgrade path](/consul/docs/upgrade/version-specific/#transparent-proxy-on-kubernetes) when upgrading to a supported version of Consul, Consul on Kubernetes (`consul-k8s`), and the Consul Helm chart. + +## Enable transparent proxy + +Transparent proxy mode is enabled for the entire cluster by default when you install Consul on Kubernetes using the Consul Helm chart. Refer to the [Consul Helm chart reference](/consul/docs/reference/k8s/helm) for information about all default configurations. + +You can explicitly enable transparent proxy for the entire cluster, individual namespaces, and individual services. + +### Entire cluster + +Use the `connectInject.transparentProxy.defaultEnabled` Helm value to enable or disable transparent proxy for the entire cluster: + +```yaml +connectInject: + transparentProxy: + defaultEnabled: true +``` + +### Kubernetes namespace + +Apply the `consul.hashicorp.com/transparent-proxy=true` label to enable transparent proxy for a Kubernetes namespace. The label overrides the `connectInject.transparentProxy.defaultEnabled` Helm value and defines the default behavior of Pods in the namespace. The following example enables transparent proxy for Pods in the `my-app` namespace: + +```shell-session +$ kubectl label namespaces my-app "consul.hashicorp.com/transparent-proxy=true" +``` +### Individual service + +Apply the `consul.hashicorp.com/transparent-proxy=true` annotation to enable transparent proxy on the Pod for each service. The annotation overrides the Helm value and the namespace label. The following example enables transparent proxy for the `static-server` service: + +```yaml +apiVersion: v1 +kind: Service +metadata: + name: static-server +spec: + selector: + app: static-server + ports: + - protocol: TCP + port: 80 + targetPort: 8080 +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: static-server +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: static-server +spec: + replicas: 1 + selector: + matchLabels: + app: static-server + template: + metadata: + name: static-server + labels: + app: static-server + annotations: + 'consul.hashicorp.com/connect-inject': 'true' + 'consul.hashicorp.com/transparent-proxy': 'true' + spec: + containers: + - name: static-server + image: hashicorp/http-echo:latest + args: + - -text="hello world" + - -listen=:8080 + ports: + - containerPort: 8080 + name: http + serviceAccountName: static-server +``` + +## Enable the Consul CNI plugin + +By default, Consul generates a `connect-inject init` container as part of the Kubernetes Pod startup process. The container configures traffic redirection in the service mesh through the sidecar proxy. To configure redirection, the container requires elevated CAP_NET_ADMIN privileges, which may not be compatible with security policies in your organization. + +Instead, you can enable the Consul container network interface (CNI) plugin to perform traffic redirection. Because the plugin is executed by the Kubernetes kubelet, it already has the elevated privileges necessary to configure the network. Additionally, you do not need to specify annotations that automatically overwrite Kubernetes HTTP health probes when the plugin is enabled (see [Overwrite Kubernetes HTTP health probes](#overwrite-kubernetes-http-health-probes)). 
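+For reference, the Helm values for enabling the plugin typically look like the following minimal sketch. The exact values can vary by chart version, so confirm them against the installation documentation linked below:
+
+```yaml
+# Minimal sketch only: enables the Consul CNI plugin so that the kubelet,
+# rather than a privileged init container, configures traffic redirection.
+connectInject:
+  enabled: true
+  cni:
+    enabled: true
+```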
+ +The Consul Helm chart installs the CNI plugin, but it is disabled by default. Refer to the [instructions for enabling the CNI plugin](/consul/docs/k8s/installation/install#enable-the-consul-cni-plugin) in the Consul on Kubernetes installation documentation for additional information. + +## Traffic redirection + +There are two mechanisms for redirecting traffic through the sidecar proxies. By default, Consul injects an init container that redirects all inbound and outbound traffic. The default mechanism requires elevated permissions (CAP_NET_ADMIN) in order to redirect traffic to the service mesh. + +Alternatively, you can enable the Consul CNI plugin to handle traffic redirection. Because the Kubernetes kubelet runs CNI plugins, the Consul CNI plugin has the necessary privileges to apply routing tables in the network. + +Both mechanisms redirect all inbound and outbound traffic, but you can configure exceptions for specific Pods or groups of Pods. The following annotations enable you to exclude certain traffic from being redirected to sidecar proxies. + +### Exclude inbound ports + +The [`consul.hashicorp.com/transparent-proxy-exclude-inbound-ports`](/consul/docs/reference/k8s/annotation-label#consul-hashicorp-com-transparent-proxy-exclude-inbound-ports) annotation defines a comma-separated list of inbound ports to exclude from traffic redirection when running in transparent proxy mode. The port numbers are string data values. In the following example, services in the pod at ports `8200` and `8201` are not redirected through the transparent proxy: + + + +```yaml +metadata: + annotations: + consul.hashicorp.com/transparent-proxy-exclude-inbound-ports: "8200, 8201" +``` + + + +### Exclude outbound ports + +The [`consul.hashicorp.com/transparent-proxy-exclude-outbound-ports`](/consul/docs/reference/k8s/annotation-label#consul-hashicorp-com-transparent-proxy-exclude-outbound-ports) annotation defines a comma-separated list of outbound ports to exclude from traffic redirection when running in transparent proxy mode. The port numbers are string data values. In the following example, services in the pod at ports `8200` and `8201` are not redirected through the transparent proxy: + + + +```yaml +metadata: + annotations: + consul.hashicorp.com/transparent-proxy-exclude-outbound-ports: "8200, 8201" +``` + + + +### Exclude outbound CIDR blocks + +The [`consul.hashicorp.com/transparent-proxy-exclude-outbound-cidrs`](/consul/docs/reference/k8s/annotation-label#consul-hashicorp-com-transparent-proxy-exclude-outbound-cidrs) annotation +defines a comma-separated list of outbound CIDR blocks to exclude from traffic redirection when running in transparent proxy mode. The CIDR blocks are string data values. +In the following example, services in the `3.3.3.3/24` IP range are not redirected through the transparent proxy: + + + +```yaml +metadata: + annotations: + consul.hashicorp.com/transparent-proxy-exclude-outbound-cidrs: "3.3.3.3,3.3.3.3/24" +``` + + +### Exclude user IDs + +The [`consul.hashicorp.com/transparent-proxy-exclude-uids`](/consul/docs/reference/k8s/annotation-label#consul-hashicorp-com-transparent-proxy-exclude-uids) annotation +defines a comma-separated list of additional user IDs to exclude from traffic redirection when running in transparent proxy mode. The user IDs are string data values.
+ +In the following example, services with the IDs `4444` and `44444` are not redirected through the transparent proxy: + + + +```yaml +metadata: + annotations: + consul.hashicorp.com/transparent-proxy-exclude-uids: "4444,44444" +``` + + + +## Kubernetes HTTP health probes configuration + +By default, `connect-inject` is disabled. As a result, Consul on Kubernetes uses a mechanism for traffic redirection that interferes with [Kubernetes HTTP health +probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/). This is because probes expect the kubelet to reach the application container on the probe's endpoint. Instead, traffic is redirected through the sidecar proxy. As a result, health probes return errors because the kubelet does not encrypt that traffic using a mesh proxy. + +There are two methods for solving this issue. The first method is to set the `connectInject.transparentProxy.defaultOverwriteProbes` Helm value to overwrite the Kubernetes HTTP health probes so that they point to the proxy. The second method is to [enable the Consul container network interface (CNI) plugin](#enable-the-consul-cni-plugin) to perform traffic redirection. Refer to the [Consul on Kubernetes installation instructions](/consul/docs/deploy/server/k8s/helm) for additional information. + +### Overwrite Kubernetes HTTP health probes + +You can either include the `connectInject.transparentProxy.defaultOverwriteProbes` Helm value in your installation or add the `consul.hashicorp.com/transparent-proxy-overwrite-probes` Kubernetes annotation to your pod configuration to overwrite health probes. + +Refer to [Kubernetes Health Checks in Consul on Kubernetes](/consul/docs/register/health-check/k8s) for additional information. + +## Dial services across Kubernetes cluster + +If your [Consul servers are federated between Kubernetes clusters](/consul/docs/east-west/wan-federation/k8s), +then you must configure services in one Kubernetes cluster to explicitly dial a service in the datacenter of another Kubernetes cluster using the +[consul.hashicorp.com/connect-service-upstreams](/consul/docs/reference/k8s/annotation-label#consul-hashicorp-com-connect-service-upstreams) annotation. +The following example configures the service to dial an upstream service called `my-service` in datacenter `dc2` on port `1234`: + +```yaml +consul.hashicorp.com/connect-service-upstreams: "my-service:1234:dc2" +``` + +If your Consul cluster is deployed to a [single datacenter spanning multiple Kubernetes clusters](/consul/docs/deploy/server/k8s/multi-cluster), +then you must configure services in one Kubernetes cluster to explicitly dial a service in another Kubernetes cluster using the +[consul.hashicorp.com/connect-service-upstreams](/consul/docs/reference/k8s/annotation-label#consul-hashicorp-com-connect-service-upstreams) annotation. +The following example configures the service to dial an upstream service called `my-service` in another Kubernetes cluster on port `1234`: + +```yaml +consul.hashicorp.com/connect-service-upstreams: "my-service:1234" +``` + +You do not need to configure services to explicitly dial upstream services if your Consul clusters are connected with a [peering connection](/consul/docs/east-west/cluster-peering). + +## Configure service selectors + +When transparent proxy is enabled, traffic sent to [KubeDNS](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/) +or Pod IP addresses is redirected through the proxy.
When you define Kubernetes Services in the mesh, you must use a selector to bind the Services to Pods. +The Kubernetes Service name must match the Consul service name to use KubeDNS. This is the default behavior unless you have applied the `consul.hashicorp.com/connect-service` +Kubernetes annotation to the service pods. The annotation overrides the Consul service name. + +Consul configures redirection for each Pod bound to the Kubernetes Service using `iptables` rules. The rules redirect all inbound and outbound traffic through an inbound and outbound listener on the sidecar proxy. Consul configures the proxy to route traffic to the appropriate upstream services based on [service +intentions](/consul/docs/reference/config-entry/service-intentions), which address the upstream services using KubeDNS. + +In the following example, the Kubernetes service selects `sample-app` application Pods so that they can be reached within the mesh. + + + +```yaml +apiVersion: v1 +kind: Service +metadata: + name: sample-app + namespace: default +spec: + selector: + app: sample-app + ports: + - protocol: TCP + port: 80 +``` + + + +Additional services can query the [KubeDNS](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/) at `sample-app.default.svc.cluster.local` to reach `sample-app`. If ACLs are enabled and configured with default `deny` policies, the configuration also requires a [`ServiceIntentions`](/consul/docs/reference/config-entry/service-intentions) resource that allows the additional services to communicate with `sample-app`. + +You can query the KubeDNS for a service that belongs to a sameness group at `sample-app.virtual.group-name.sg.consul`. This syntax is required when failover is desired and `spec.defaultForFailover` is set to `false` in the sameness group CRD. Refer to [sameness group configuration entry reference](/consul/docs/reference/config-entry/sameness-group) for more information. + +### Headless services +For services that are not addressed using a virtual cluster IP, you must configure the upstream service using the [DialedDirectly](/consul/docs/reference/config-entry/service-defaults#dialeddirectly) option. Then, use DNS to discover individual instance addresses and dial them through the transparent proxy. When this mode is enabled on the upstream, services present service mesh certificates for mTLS and intentions are enforced at the destination. + +Note that when dialing individual instances, Consul ignores the HTTP routing rules configured with configuration entries. The transparent proxy acts as a TCP proxy to the original destination IP address. + +## Known limitations + +- Deployment configurations with federation across datacenters or a single datacenter spanning multiple clusters must explicitly dial a service in another datacenter or cluster using annotations. + +- When dialing headless services, the request is proxied using a plain TCP proxy. Consul does not take into consideration the upstream's protocol. \ No newline at end of file diff --git a/website/content/docs/connect/security.mdx b/website/content/docs/connect/security.mdx deleted file mode 100644 index cefcaa3be21d..000000000000 --- a/website/content/docs/connect/security.mdx +++ /dev/null @@ -1,158 +0,0 @@ ---- -layout: docs -page_title: Service Mesh Security - Best Practices -description: >- -  Consul provides secure service mesh communication by default. Additional configuration can improve network security by preventing unauthorized access and traffic sniffing. Review security considerations, our recommendations, and best practices.
---- - -# Best Practices for Service Mesh Security - -Consul service mesh enables secure service-to-service communication over mutual TLS. This -provides both in-transit data encryption as well as authorization. This page -will document how to secure the service mesh. To try service mesh locally, complete the -[Getting Started guide](/consul/tutorials/kubernetes-deploy/service-mesh?utm_source=docs) or for a full security model reference, -see the dedicated [Consul security model](/consul/docs/security) page. When -setting up service mesh in production, review this [tutorial](/consul/tutorials/developer-mesh/service-mesh-production-checklist?utm_source=consul.io&utm_medium=docs). - -Consul service mesh will function in any Consul configuration. However, unless the checklist -below is satisfied, the service mesh is not providing the security guarantees it was -built for. The checklist below can be incrementally adopted towards full -security if you prefer to operate in less secure models initially. - -~> **Warning**: The checklist below should not be considered exhaustive. Please -read and understand the [Consul security model](/consul/docs/security) -in depth to assess whether your deployment satisfies the security requirements -of Consul. - -## Checklist - -### Default Intention Policy Set - -Consul should be configured with a default deny intention policy. This forces -all service-to-service communication to be explicitly -allowed via an allow [intention](/consul/docs/connect/intentions). - -One advantage of using a default deny policy in combination with specific "allow" rules -is that a failure of intentions due to misconfiguration always results in -_denied_ traffic, rather than unwanted _allowed_ traffic. - -In the absence of `default_intention_policy` Consul will fall back to the ACL -default policy when determining whether to allow or deny communications without -an explicit intention. - -### Request Normalization Configured for L7 Intentions - -~> **Compatibility warning**: This feature is available as of Consul CE 1.20.1 and Consul Enterprise 1.20.1, 1.19.2, 1.18.3, and 1.15.15. We recommend upgrading to the latest version of Consul to take advantage of the latest features and improvements. - -Atypical traffic patterns may interfere with the enforcement of L7 intentions. For -example, if a service makes request to a non-normalized URI path and Consul is not -configured to force path normalization, it becomes possible to circumvent path match rules. While a -default deny policy can limit the impact of this issue, we still recommend -that you review your current request normalization configuration. Normalization is critical to avoid unwanted -traffic, especially when using unrecommended security options such as a default allow intentions policy. - -Consul adopts a default normalization mode that adheres to [RFC 3986]( -https://tools.ietf.org/html/rfc3986#section-6), but additional options to enable stricter -normalization are available in the cluster-wide [Mesh configuration entry]( -/consul/docs/connect/config-entries/mesh). We recommend reviewing these options and -enabling the strictest set that does not interfere with application traffic. - -We also recommend that you review L7 intention header match rules for potential -issues with multiple header values. Refer to the [service intentions -configuration entry reference](/consul/docs/connect/config-entries/service-intentions#spec-sources-permissions-http-header) -for more information. 
- -You do not need to enable request normalization if you are not using L7 intentions. -However, normalization may also benefit the use of other service mesh features that -rely on L7 attribute matching, such as [service routers](/consul/docs/connect/manage-traffic#routing). - -### ACLs Enabled with Default Deny - -Consul must be configured to use ACLs with a default deny policy. This forces -all requests to have explicit anonymous access or provide an ACL token. - -To learn how to enable ACLs, please see the -[tutorial on ACLs](/consul/tutorials/security/access-control-setup-production). - -**If ACLs are enabled but are in default allow mode**, then services will be -able to communicate by default. Additionally, if a proper anonymous token -is not configured, this may allow anyone to edit intentions. We do not recommend -this. **If ACLs are not enabled**, deny intentions will still be enforced, but anyone -may edit intentions. This renders the security of the created intentions -effectively useless. - -The advantage of a default deny policy in combination with specific "allow" rules -is that at worst, a failure of intentions due to misconfiguration will result in -_denied_ traffic, rather than unwanted _allowed_ traffic. - -### TCP and UDP Encryption Enabled - -TCP and UDP encryption must be enabled to prevent plaintext communication -between Consul agents. At a minimum, `verify_outgoing` should be enabled -to verify server authenticity with each server having a unique TLS certificate. -`verify_incoming` provides additional agent verification, but doesn't directly -affect service mesh since requests must also always contain a valid ACL token. -Clients calling Consul APIs should be forced over encrypted connections. - -See the [Consul agent encryption page](/consul/docs/security/encryption) to -learn more about configuring agent encryption. - -**If encryption is not enabled**, a malicious actor can sniff network -traffic or perform a man-in-the-middle attack to steal ACL tokens, always -authorize connections, etc. - -### Prevent Unauthorized Access to the Config and Data Directories - -The configuration and data directories of the Consul agent on both -clients and servers should be protected from unauthorized access. This -protection must be done outside of Consul via access control systems provided -by your target operating system. - -The [full Consul security model](/consul/docs/security) explains the -risk of unauthorized access for both client agents and server agents. In -general, the blast radius of unauthorized access for client agent directories -is much smaller than servers. However, both must be protected against -unauthorized access. - -### Prevent Non-Mesh Traffic to Services - -For services that are using -[proxies](/consul/docs/connect/proxies) -(are not [natively integrated](/consul/docs/connect/native)), -network access via their unencrypted listeners must be restricted -to only the proxy. This requires at a minimum restricting the listener -to bind to loopback only. More complex solutions may involve using -network namespacing techniques provided by the underlying operating system. - -For scenarios where multiple services are running on the same machine -without isolation, these services must all be trusted. We call this the -**trusted multi-tenancy** deployment model. Any service could theoretically -connect to any other service via the loopback listener, bypassing the service mesh -completely. 
In this scenario, all services must be trusted _or_ isolation -mechanisms must be used. - -For developer or operator access to a service, we recommend -using a local service mesh proxy. This is documented in the -[development and debugging guide](/consul/docs/connect/dev). - -**If non-proxy traffic can communicate with the service**, this traffic -will not be encrypted or authorized via service mesh. - -### Restrict Access to Envoy's Administration Interface - -Envoy exposes an **unauthenticated** -[administration interface](https://www.envoyproxy.io/docs/envoy/latest/operations/admin) -that can be used to query and modify the proxy. This interface -allows potentially sensitive information to be retrieved, such as: - -* Envoy configuration -* TLS certificates -* List of upstream services and endpoints - -We **strongly advise** only exposing the administration interface on a loopback -address (default configuration) and restricting access to a subset of users. - -**If the administration interface is exposed externally**, for -example by specifying a routable [`-admin-bind`](/consul/commands/connect/envoy#admin-bind) -address, it may be possible for a malicious actor to gain access to Envoy's -configuration, or impact the service's availability within the cluster. diff --git a/website/content/docs/connect/troubleshoot/debug.mdx b/website/content/docs/connect/troubleshoot/debug.mdx new file mode 100644 index 000000000000..2566a2bdc038 --- /dev/null +++ b/website/content/docs/connect/troubleshoot/debug.mdx @@ -0,0 +1,45 @@ +--- +layout: docs +page_title: Debug Consul service mesh +description: >- + Use the `consul connect proxy` command to connect to services or masquerade as other services for development and debugging purposes. Example code demonstrates connecting to services that are part of the service mesh as listeners only. +--- + +# Debug Consul service mesh + +It is often necessary to connect to a service for development or debugging. If a service only exposes a service mesh listener, then we need a way to establish a mutual TLS connection to the service. The [`consul connect proxy` command](/consul/commands/connect/proxy) can be used for this task on any machine with access to a Consul agent (local or remote). + +Restricting access to services only via service mesh ensures that the only way to connect to a service is through valid authorization of the [intentions](/consul/docs/secure-mesh/intention). This can extend to developers and operators too. + +## Connecting to mesh-only services + +As an example, let's assume that we have a PostgreSQL database running that we want to connect to via `psql`, but the only non-loopback listener is via Connect. Let's also assume that we have an ACL token to identify as `operator-mitchellh`. We can start a local proxy: + +```shell-session +$ consul connect proxy \ + -service operator-mitchellh \ + -upstream postgresql:8181 +``` + +This works because the source `-service` does not need to be registered in the local Consul catalog. However, to retrieve a valid identifying certificate, the ACL token must have `service:write` permissions. This can be used as a sort of "debug service" to represent people, too. In the example above, the proxy is identifying as `operator-mitchellh`. 
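+As a rough sketch, a token for this example could be created from an ACL policy that grants `service:write` on the chosen name, similar to the following (the service name is only illustrative):
+
+```hcl
+# Sketch of an ACL policy for the debug proxy example above. Grants the
+# token permission to obtain a certificate identifying as "operator-mitchellh".
+service "operator-mitchellh" {
+  policy = "write"
+}
+```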
+ +With the proxy running, we can now use `psql` like normal: + +```shell-session +$ psql --host=127.0.0.1 --port=8181 --username=mitchellh mydb +> +``` + +This `psql` session is now happening through our local proxy via an authorized mutual TLS connection to the PostgreSQL service in our Consul catalog. + +### Masquerading as a service + +You can also easily masquerade as any source service by setting the `-service` value to any service. Note that the proper ACL permissions are required to perform this task. + +For example, if you have an ACL token that allows `service:write` for `web` and you want to connect to the `postgresql` service as "web", you can start a proxy like so: + +```shell-session +$ consul connect proxy \ + -service web \ + -upstream postgresql:8181 +``` diff --git a/website/content/docs/connect/troubleshoot/index.mdx b/website/content/docs/connect/troubleshoot/index.mdx new file mode 100644 index 000000000000..274260875912 --- /dev/null +++ b/website/content/docs/connect/troubleshoot/index.mdx @@ -0,0 +1,42 @@ +--- +layout: docs +page_title: Service mesh troubleshoot overview +description: >- + Consul includes a built-in tool for troubleshooting communication between services in a service mesh. +--- + +# Service mesh troubleshoot overview + +This topic provides an overview of Consul’s built-in service-to-service troubleshooting capabilities. When communication between an upstream service and a downstream service in a service mesh fails, you can run the `consul troubleshoot` command to initiate a series of automated validation tests. + +For more information, refer to the [`consul troubleshoot` CLI documentation](/consul/commands/troubleshoot) or the [`consul-k8s troubleshoot` CLI reference](/consul/docs/reference/cli/consul-k8s#troubleshoot). + +## Introduction + +When communication between upstream and downstream services in a service mesh fails, you can diagnose the cause manually with one or more of Consul’s built-in features, including [health check queries](/consul/docs/register/health-check/vm), [the UI topology view](/consul/docs/observe/telemetry/vm), and [agent telemetry metrics](/consul/docs/reference/agent/telemetry). + +The `consul troubleshoot` command performs several checks in sequence that enable you to discover issues that impede service-to-service communication. The process systematically queries the [Envoy administration interface API](https://www.envoyproxy.io/docs/envoy/latest/operations/admin) and the Consul API to determine the cause of the communication failure. + +The troubleshooting command validates service-to-service communication by checking for the following common issues: + +- Upstream service does not exist +- One or both hosts are unhealthy +- A filter affects the upstream service +- The CA has expired mTLS certificates +- The services have expired mTLS certificates + +Consul outputs the results of these validation checks to the terminal along with suggested actions to resolve the service communication failure. When it detects rejected configurations or connection failures, Consul also outputs Envoy metrics for services. + +### Envoy proxies in a service mesh + +Consul validates communication in a service mesh by checking the Envoy proxies that are deployed as sidecars for the upstream and downstream services. As a result, troubleshooting requires that [Consul’s service mesh features are enabled](/consul/docs/connect). 
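+On VMs, for example, mesh features are typically enabled through the server agent configuration. The following is a minimal sketch, assuming an HCL agent configuration file:
+
+```hcl
+# Minimal sketch: enables Consul service mesh (Connect) features,
+# which the troubleshooting checks depend on.
+connect {
+  enabled = true
+}
+```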
+ +For more information about using Envoy proxies with Consul, refer to [Envoy proxy configuration for service mesh](/consul/docs/reference/proxy/envoy). + +### Technical constraints + +When troubleshooting service-to-service communication issues, be aware of the following constraints: + +- The troubleshooting tool does not check service intentions. For more information about intentions, including precedence and match order, refer to [service mesh intentions](/consul/docs/secure-mesh/intention). +- The troubleshooting tool validates one direct connection between a downstream service and an upstream service. You must run the `consul troubleshoot` command with the Envoy ID for an individual upstream service. It does not support validating multiple connections simultaneously. +- The troubleshooting tool only validates Envoy configurations for sidecar proxies. As a result, the troubleshooting tool does not validate Envoy configurations on upstream proxies such as mesh gateways and terminating gateways. \ No newline at end of file diff --git a/website/content/docs/connect/troubleshoot/service-to-service.mdx b/website/content/docs/connect/troubleshoot/service-to-service.mdx new file mode 100644 index 000000000000..4ad92dbe7ac7 --- /dev/null +++ b/website/content/docs/connect/troubleshoot/service-to-service.mdx @@ -0,0 +1,155 @@ +--- +layout: docs +page_title: Service-to-service troubleshooting overview +description: >- + Consul includes a built-in tool for troubleshooting communication between services in a service mesh. Learn how to use the `consul troubleshoot` command to validate communication between upstream and downstream Envoy proxies on VM and Kubernetes deployments. +--- + +# Troubleshoot service-to-service communication + +This page describes the process for troubleshooting service-to-service communication in a service mesh using the `consul troubleshoot` CLI command. + +For more information, refer to the [`consul troubleshoot` CLI documentation](/consul/commands/troubleshoot) or the [`consul-k8s troubleshoot` CLI reference](/consul/docs/reference/cli/consul-k8s#troubleshoot). + +## Introduction + +When communication between upstream and downstream services in a service mesh fails, you can diagnose the cause manually with one or more of Consul’s built-in features, including [health check queries](/consul/docs/register/health-check/vm), [the UI topology view](/consul/docs/observe/telemetry/vm), and [agent telemetry metrics](/consul/docs/reference/agent/telemetry). + +The `consul troubleshoot` command performs several checks in sequence that enable you to discover issues that impede service-to-service communication. The process systematically queries the [Envoy administration interface API](https://www.envoyproxy.io/docs/envoy/latest/operations/admin) and the Consul API to determine the cause of the communication failure. + +The troubleshooting command validates service-to-service communication by checking for the following common issues: + +- Upstream service does not exist +- One or both hosts are unhealthy +- A filter affects the upstream service +- The CA has expired mTLS certificates +- The services have expired mTLS certificates + +Consul outputs the results of these validation checks to the terminal along with suggested actions to resolve the service communication failure. When it detects rejected configurations or connection failures, Consul also outputs Envoy metrics for services.
+ +### Envoy proxies in a service mesh + +Consul validates communication in a service mesh by checking the Envoy proxies that are deployed as sidecars for the upstream and downstream services. As a result, troubleshooting requires that [Consul’s service mesh features are enabled](/consul/docs/fundamentals/config-entry). + +For more information about using Envoy proxies with Consul, refer to [Envoy proxy configuration for service mesh](/consul/docs/reference/proxy/envoy). + +## Requirements + +- Consul v1.15 or later. +- For Kubernetes, the `consul-k8s` CLI must be installed. + +### Technical constraints + +When troubleshooting service-to-service communication issues, be aware of the following constraints: + +- The troubleshooting tool does not check service intentions. For more information about intentions, including precedence and match order, refer to [service mesh intentions](/consul/docs/secure-mesh/intention). +- The troubleshooting tool validates one direct connection between a downstream service and an upstream service. You must run the `consul troubleshoot` command with the Envoy ID for an individual upstream service. It does not support validating multiple connections simultaneously. +- The troubleshooting tool only validates Envoy configurations for sidecar proxies. As a result, the troubleshooting tool does not validate Envoy configurations on upstream proxies such as mesh gateways and terminating gateways. + +## Usage + +Using the service-to-service troubleshooting tool is a two-step process: + +1. Find the identifier for the upstream service. +1. Use the upstream’s identifier to validate communication. + +In deployments without transparent proxies, the identifier is the _Envoy ID for the upstream service’s sidecar proxy_. If you use transparent proxies, the identifier is the _upstream service’s IP address_. For more information about using transparent proxies, refer to [Enable transparent proxy mode](/consul/docs/connect/transparent-proxy). + +## Troubleshoot on VMs + +To troubleshoot service-to-service communication issues in deployments that use VMs or bare-metal servers: + +1. Run the `consul troubleshoot upstreams` command to retrieve the upstream information for the service that is experiencing communication failures. Depending on your network’s configuration, the upstream information is either an Envoy ID or an IP address. + + ```shell-session + $ consul troubleshoot upstreams + ==> Upstreams (explicit upstreams only) (0) + ==> Upstreams IPs (transparent proxy only) (1) + [10.4.6.160 240.0.0.3] true map[backend.default.dc1.internal.e08fa6d6-e91e-dfe0-f6e1-ba097a828e31.consul backend2.default.dc1.internal.e08fa6d6-e91e-dfe0-f6e1-ba097a828e31.consul] + If you cannot find the upstream address or cluster for a transparent proxy upstream: + - Check intentions: Tproxy upstreams are configured based on intentions. Make sure you have configured intentions to allow traffic to your upstream. + - To check that the right cluster is being dialed, run a DNS lookup for the upstream you are dialing. For example, run `dig backend.svc.consul` to return the IP address for the `backend` service. If the address you get from that is missing from the upstream IPs, it means that your proxy may be misconfigured. + ``` + +1.
Run the `consul troubleshoot proxy` command and specify the Envoy ID or IP address with the `-upstream-ip` flag to identify the proxy you want to perform the troubleshooting process on. The following example uses the upstream IP to validate communication with the upstream service `backend`: + + ```shell-session + $ consul troubleshoot proxy -upstream-ip 10.4.6.160 + ==> Validation + ✓ Certificates are valid + ✓ Envoy has 0 rejected configurations + ✓ Envoy has detected 0 connection failure(s) + ✓ Listener for upstream "backend" found + ✓ Route for upstream "backend" found + ✓ Cluster "backend.default.dc1.internal.e08fa6d6-e91e-dfe0-f6e1-ba097a828e31.consul" for upstream "backend" found + ✓ Healthy endpoints for cluster "backend.default.dc1.internal.e08fa6d6-e91e-dfe0-f6e1-ba097a828e31.consul" for upstream "backend" found + ✓ Cluster "backend2.default.dc1.internal.e08fa6d6-e91e-dfe0-f6e1-ba097a828e31.consul" for upstream "backend" found + ! No healthy endpoints for cluster "backend2.default.dc1.internal.e08fa6d6-e91e-dfe0-f6e1-ba097a828e31.consul" for upstream "backend" found + -> Check that your upstream service is healthy and running + -> Check that your upstream service is registered with Consul + -> Check that the upstream proxy is healthy and running + -> If you are explicitly configuring upstreams, ensure the name of the upstream is correct + ``` + +In the example output, troubleshooting upstream communication reveals that the `backend` service has two service instances running in datacenter `dc1`. One of the services is healthy, but Consul cannot detect healthy endpoints for the second service instance. This information appears in the following lines of the example: + +```text hideClipboard + ✓ Cluster "backend.default.dc1.internal.e08fa6d6-e91e-dfe0-f6e1-ba097a828e31.consul" for upstream "backend" found + ✓ Healthy endpoints for cluster "backend.default.dc1.internal.e08fa6d6-e91e-dfe0-f6e1-ba097a828e31.consul" for upstream "backend" found + ✓ Cluster "backend2.default.dc1.internal.e08fa6d6-e91e-dfe0-f6e1-ba097a828e31.consul" for upstream "backend" found + ! No healthy endpoints for cluster "backend2.default.dc1.internal.e08fa6d6-e91e-dfe0-f6e1-ba097a828e31.consul" for upstream "backend" found +``` + +The output from the troubleshooting process identifies service instances according to their [Consul DNS address](/consul/docs/services/discovery/dns-static-lookups#standard-lookup). Use the DNS information for failing services to diagnose the specific issues affecting the service instance. + +For more information, refer to the [`consul troubleshoot` CLI documentation](/consul/commands/troubleshoot). + +## Troubleshoot on Kubernetes + +To troubleshoot service-to-service communication issues in deployments that use Kubernetes, retrieve the upstream information for the pod that is experiencing communication failures and use the upstream information to identify the proxy you want to perform the troubleshooting process on. + +1. Run the `consul-k8s troubleshoot upstreams` command and specify the pod ID with the `-pod` flag to retrieve upstream information. Depending on your network’s configuration, the upstream information is either an Envoy ID or an IP address. The following example displays all transparent proxy upstreams in Consul service mesh from the given pod. 
+ + ```shell-session + $ consul-k8s troubleshoot upstreams -pod frontend-767ccfc8f9-6f6gx + ==> Upstreams (explicit upstreams only) (0) + ==> Upstreams IPs (transparent proxy only) (1) + [10.4.6.160 240.0.0.3] true map[backend.default.dc1.internal.e08fa6d6-e91e-dfe0-f6e1-ba097a828e31.consul backend2.default.dc1.internal.e08fa6d6-e91e-dfe0-f6e1-ba097a828e31.consul] + If you cannot find the upstream address or cluster for a transparent proxy upstream: + - Check intentions: Tproxy upstreams are configured based on intentions. Make sure you have configured intentions to allow traffic to your upstream. + - To check that the right cluster is being dialed, run a DNS lookup for the upstream you are dialing. For example, run `dig backend.svc.consul` to return the IP address for the `backend` service. If the address you get from that is missing from the upstream IPs, it means that your proxy may be misconfigured. + ``` + +1. Run the `consul-k8s troubleshoot proxy` command and specify the pod ID and upstream IP address to identify the proxy you want to troubleshoot. The following example uses the upstream IP to validate communication with the upstream service `backend`: + + ```shell-session + $ consul-k8s troubleshoot proxy -pod frontend-767ccfc8f9-6f6gx -upstream-ip 10.4.6.160 + ==> Validation + ✓ certificates are valid + ✓ Envoy has 0 rejected configurations + ✓ Envoy has detected 0 connection failure(s) + ✓ listener for upstream "backend" found + ✓ route for upstream "backend" found + ✓ cluster "backend.default.dc1.internal..consul" for upstream "backend" found + ✓ healthy endpoints for cluster "backend.default.dc1.internal.e08fa6d6-e91e-dfe0-f6e1-ba097a828e31.consul" for upstream "backend" found + ✓ cluster "backend2.default.dc1.internal..consul" for upstream "backend" found + ! no healthy endpoints for cluster "backend2.default.dc1.internal.e08fa6d6-e91e-dfe0-f6e1-ba097a828e31.consul" for upstream "backend" found + ``` + +In the example output, troubleshooting upstream communication reveals that the `backend` service has two clusters in datacenter `dc1`. One of the clusters returns healthy endpoints, but Consul cannot detect healthy endpoints for the second cluster. This information appears in the following lines of the example: + + ```text hideClipboard + ✓ cluster "backend.default.dc1.internal..consul" for upstream "backend" found + ✓ healthy endpoints for cluster "backend.default.dc1.internal.e08fa6d6-e91e-dfe0-f6e1-ba097a828e31.consul" for upstream "backend" found + ✓ cluster "backend2.default.dc1.internal..consul" for upstream "backend" found + ! no healthy endpoints for cluster "backend2.default.dc1.internal.e08fa6d6-e91e-dfe0-f6e1-ba097a828e31.consul" for upstream "backend" found +``` + +The output from the troubleshooting process identifies service instances according to their [Consul DNS address](/consul/docs/k8s/dns). Use the DNS information for failing services to diagnose the specific issues affecting the service instance. + +For more information, refer to the [`consul-k8s troubleshoot` CLI reference](/consul/docs/reference/cli/consul-k8s#troubleshoot). 
\ No newline at end of file diff --git a/website/content/docs/connect/vm.mdx b/website/content/docs/connect/vm.mdx new file mode 100644 index 000000000000..044adcabce9c --- /dev/null +++ b/website/content/docs/connect/vm.mdx @@ -0,0 +1,42 @@ +--- +layout: docs +page_title: Connect services with Consul on virtual machines (VMs) +description: >- + Learn how to deploy sidecar proxies on VMs so that your services can connect to Consul's service mesh. +--- + +# Connect services with Consul on virtual machines (VMs) + +This page describes the process to deploy sidecar proxies on VMs so that your services can connect to Consul's service mesh. + +## Workflow + +For each service you want to connect to the service mesh, complete the following steps: + +1. Configure global proxy settings. + + Configure global passthrough settings for all proxies deployed to your service mesh in the [proxy defaults configuration entry](/consul/docs/reference/config-entry/proxy-defaults). +1. Deploy your service mesh proxy. + + [Configure proxy behavior in a service definition](/consul/docs/reference/service) to register the proxy with Consul. +1. Start the proxy service. + + Proxies appear in the list of services registered to Consul. You must start them before they can route traffic in your service mesh. +1. Restart services to listen on `localhost` so that they no longer accept traffic from any source but the Envoy proxy. + +For additional guidance, refer to the [Securely connect your services with Consul service mesh tutorial](/consul/tutorials/get-started-vms/virtual-machine-gs-service-mesh). This tutorial includes scripts that automate configuration steps in order to demonstrate the practical workflows involved to start each proxy for each service. + +## Next steps + +After you start the sidecar proxies, Consul makes available the rest of its service mesh features. You can now use Consul to [manage traffic between services](/consul/docs/manage-traffic) and [observe service mesh telemetry](/consul/docs/observe/telemetry/vm). + +Your current service mesh is not ready for production environments. Follow these +steps to secure +north/south access from external sources into the service mesh: + +1. Deploy the Consul API gateway. Refer to the [API gateways overview + documentation](/consul/docs/north-south/api-gateway) for configuration + information and links to VM-specific deployment guides. +1. Secure service-to-service communication with mTLS certificates and service + intentions. Refer to the [secure Consul overview](/consul/docs/secure) for + more information. diff --git a/website/content/docs/consul-vs-other/api-gateway-compare.mdx b/website/content/docs/consul-vs-other/api-gateway-compare.mdx deleted file mode 100644 index 6fae646e1286..000000000000 --- a/website/content/docs/consul-vs-other/api-gateway-compare.mdx +++ /dev/null @@ -1,16 +0,0 @@ ---- -layout: docs -page_title: Consul Compared to Other API Gateways -description: >- - The Consul API Gateway is an implementation of the Kubernetes Gateway API that provides a single entry point that routes public requests to services within the service mesh. ---- - -# Consul Compared to Other API Gateways - -**Examples**: Kong Gateway, Apigee, Mulesoft, Gravitee - -The [Consul API Gateway](/consul/docs/api-gateway) is an implementation of the [Kubernetes Gateway API](https://gateway-api.sigs.k8s.io/). Traditionally, API gateways are used for two things: _client traffic management_ and _API lifecycle management_. 
- -Client traffic management refers to an API gateway's role in controlling the point of entry for public traffic into a given environment, also known as _managing north-south traffic_. The Consul API Gateway is deployed alongside Consul service mesh and is responsible for routing inbound client requests to the mesh based on defined routes. For a full list of supported traffic management features, refer to the [Consul API Gateway documentation](/consul/docs/api-gateway). - -API lifecycle management refers to how application developers use an API gateway to deploy, iterate, and manage versions of an API. At this time, the Consul API Gateway does not support API lifecycle management. diff --git a/website/content/docs/consul-vs-other/config-management-compare.mdx b/website/content/docs/consul-vs-other/config-management-compare.mdx deleted file mode 100644 index 14de4933c49b..000000000000 --- a/website/content/docs/consul-vs-other/config-management-compare.mdx +++ /dev/null @@ -1,23 +0,0 @@ ---- -layout: docs -page_title: Consul Compared to Other Configuration Management Tools -description: >- - Chef, Puppet, and other configuration management tools build service discovery mechanisms by querying global state and constructing configuration files on each node during a periodic convergence run. ---- - -# Consul Compared to Other Configuration Management Tools - -**Examples**: Chef, Puppet - -There are many configuration management tools, however, they typically focus on static provisioning. Consul enables you to dynamically configure your services based on service and node state. Both static and dynamic configuration are important and work well together. Since Consul offers a number of different capabilities, there are times when its functionality overlaps with other configuration management tools. - -For example, Chef and Puppet are configuration management tools that can build service discovery mechanisms. However, they only support configuration information that is static. As a result, the time it takes to implement updates depends on the frequency of conversion runs (several minutes to hours). Additionally, these tools do not let you incorporate the system state in the configuration. This could lead to load balancers sending traffic to unhealthy nodes, further exacerbating issues. Supporting multiple datacenters is also challenging with these tools, since a central group of servers must manage all datacenters. - -Consul's service discovery layer is specifically designed to dynamically track and respond to your service's state. By using the integrated health checking, Consul can route traffic away from unhealthy nodes, allowing systems and services to gracefully recover. In addition, Consul’s service discovery layer works with Terraform. Consul-Terraform-Sync (CTS) automates updates to network infrastructure based on dynamic changes to each service. For example, as services scale up or down, CTS can trigger Terraform to update firewalls or load balancers to reflect the latest changes. Also, since each datacenter runs independently, supporting multiple datacenters is no different than supporting a single datacenter. - -Consul is not a replacement for other configuration management tools. These tools are still critical for setting up applications, including Consul. Static provisioning is best managed by existing tools, while Consul enables you to leverage dynamic configuration and service discovery. 
- -By separating configuration management and cluster management tools, you can take advantage of simpler workflows: -- Periodic runs are no longer required for service or configuration changes. -- Chef recipes and Puppet manifests are simpler because they do not require a global state. -- Infrastructure can become immutable because configuration management runs do not require global state. diff --git a/website/content/docs/consul-vs-other/dns-tools-compare.mdx b/website/content/docs/consul-vs-other/dns-tools-compare.mdx deleted file mode 100644 index f4d27de8ab45..000000000000 --- a/website/content/docs/consul-vs-other/dns-tools-compare.mdx +++ /dev/null @@ -1,16 +0,0 @@ ---- -layout: docs -page_title: Consul Compared to Other DNS Tools -description: >- - Service discovery is one of Consul's foundational capabilities. Consul is platform agnostic, which allows it to discover services across multiple runtimes and cloud providers including VMs, bare-metal, Kubernetes, Nomad, EKS, AKS, ECS, and Lambda. ---- - - -# Consul Compared to Other DNS Tools - -**Examples**: NS1, AWS Route53, AzureDNS, Cloudflare DNS - -Consul was originally designed as a centralized service registry for any cloud environment that dynamically tracks services as they are added, changed, or removed within a compute infrastructure. Consul maintains a catalog of these registered services and their attributes, such as IP addresses or service name. For more information, refer to [What is Service Discovery?](/consul/docs/concepts/service-discovery). - -As a result, Consul can also provide basic DNS functionality, including [lookups, alternate domains, and access controls](/consul/docs/services/discovery/dns-overview). Since Consul is platform agnostic, you can retrieve service information across both cloud and on-premises data centers. Consul does not natively support some advanced DNS capabilities, such as filters or advanced routing logic. However, you can integrate Consul with existing DNS solutions, such as [NS1](https://help.ns1.com/hc/en-us/articles/360039417093-NS1-Consul-Integration-Overview) and [DNSimple](https://blog.dnsimple.com/2022/05/consul-integration/), to support these advanced capabilities. - diff --git a/website/content/docs/consul-vs-other/index.mdx b/website/content/docs/consul-vs-other/index.mdx deleted file mode 100644 index 26901a3db434..000000000000 --- a/website/content/docs/consul-vs-other/index.mdx +++ /dev/null @@ -1,15 +0,0 @@ ---- -layout: docs -page_title: Why Choose Consul? -description: >- - Consul is a service networking platform that centralizes service discovery, enables zero trust networking with service mesh, automates network infrastructure, and controls access to mesh services with the Consul API Gateway. Compare Consul with other software that provide similar capabilities with one or more of the core use cases. ---- - -# Why Choose Consul? - -HashiCorp Consul is a service networking platform that encompasses multiple capabilities to secure and simplify network service management. These capabilities include service mesh, service discovery, configuration management, and API gateway functionality. While competing products offer a few of these core capabilities, Consul is developed to address all four. The topics in this section provide a general overview of how Consul’s capabilities compare to some other tools on the market. 
Visit the following pages to read more about how: - -- [Consul compares with other service meshes](/consul/docs/consul-vs-other/service-mesh-compare) -- [Consul compares with other DNS tools](/consul/docs/consul-vs-other/dns-tools-compare) -- [Consul compares with other configuration management tools](/consul/docs/consul-vs-other/config-management-compare) -- [Consul compares with other API Gateways](/consul/docs/consul-vs-other/api-gateway-compare) diff --git a/website/content/docs/consul-vs-other/service-mesh-compare.mdx b/website/content/docs/consul-vs-other/service-mesh-compare.mdx deleted file mode 100644 index 1164336781ae..000000000000 --- a/website/content/docs/consul-vs-other/service-mesh-compare.mdx +++ /dev/null @@ -1,18 +0,0 @@ ---- -layout: docs -page_title: Consul compared to other service meshes -description: >- - Consul's service mesh provides zero trust networking based on service identities to authorize, authenticate, and encrypt network services. Consul's service mesh can also provide advanced traffic management capabilities. Although there are many similar capabilities between Consul and other providers like Istio, Solo, Linkerd, Kong, Tetrate, and AWS App Mesh, we highlight the main differentiating factors for help customers compare. ---- - -# Consul compared to other service mesh software - -**Examples**: Istio, Solo Gloo Mesh, Linkerd, Kong/Kuma, AWS App Mesh - -Consul’s service mesh allows organizations to securely connect and manage their network services across multiple different environments. Using Envoy as the sidecar proxy attached to every service, Consul ensures that all service-to-service communication is authorized, authenticated, and encrypted. Consul includes traffic management capabilities like load balancing and traffic splitting, which help developers perform canary testing, A/B tests, and blue/green deployments. Consul also includes health check and observability features. - -Consul is platform agnostic — it supports any runtime (Kubernetes, EKS, AKS, GKE, VMs, ECS, Lambda, Nomad) and any cloud provider (AWS, Microsoft Azure, GCP, private clouds). This makes it one of the most flexible service discovery and service mesh platforms. While other service mesh software provides support for multiple runtimes for the data plane, they require you to run the control plane solely on Kubernetes. With Consul, you can run both the control plane and data plane in different runtimes. - -Consul also has several unique integrations with Vault, an industry standard for secrets management. Operators have the option to use Consul’s built-in certificate authority, or leverage Vault’s PKI engine to generate and store TLS certificates for both the data plane and control plane. In addition, Consul can automatically rotate the TLS certificates on both the data plane and control plane without requiring any type of restarts. This lets you rotate the certificates more frequently without incurring additional management burden on operators. -When deploying Consul on Kubernetes, you can store sensitive data including licenses, ACL tokens, and TLS certificates centrally in Vault instead of Kubernetes secrets. Vault is much more secure than Kubernetes secrets because it automatically encrypts all data, provides advanced access controls to secrets, and provides centralized governance for all secrets. 
- diff --git a/website/content/docs/deploy/index.mdx b/website/content/docs/deploy/index.mdx new file mode 100644 index 000000000000..8e0a58acdb53 --- /dev/null +++ b/website/content/docs/deploy/index.mdx @@ -0,0 +1,65 @@ +--- +layout: docs +page_title: Deploy Consul +description: >- + Learn about the editions of Consul you can deploy as Consul server agents, client agents, and dataplanes. You can find additional guidance for the deployment process in our tutorials. +--- + +# Deploy Consul + +This topic provides an overview of the configurations and processes to deploy a Consul agent in your network. + +A node must run Consul to connect to a Consul datacenter. It can run either as a server agent, or as a client agent or Consul dataplane that supports an application workload. + +## Consul editions + +You can configure agents to run a specific edition of Consul. + +| Edition | How to enable | +| :------------------- | :---------------------------------------------------- | +| Community edition | Enabled by default. | +| Enterprise | Configure `license` block in the agent configuration. | +| [FIPS 140-2 compliant](/consul/docs/deploy/server/fips) | Specify an alternate source binary or image. | + +To find the package for a specific release, refer to [releases.hashicorp.com](https://releases.hashicorp.com). To learn more about the differences in Consul editions and their available features, refer to [Consul edition fundamentals](/consul/docs/fundamentals/editions). + +## Documentation + +The following resources are available to help you deploy Consul on nodes in your network. + +To deploy a Consul server, refer to the instructions for your runtime: + +- [Deploy Consul servers on VMs](/consul/docs/deploy/server/vm) +- [Deploy Consul servers on Kubernetes](/consul/docs/deploy/server/k8s) +- [Deploy Consul servers on AWS Elastic Container Service (ECS)](/consul/docs/deploy/server/ecs) +- [Deploy Consul servers on Docker](/consul/docs/deploy/server/docker) + +By default, the servers you deploy use the WAL backend for the Raft index. To learn more, refer to [WAL LogStore backend overview](/consul/docs/deploy/server/wal). + +To deploy a Consul process alongside you application workloads, follow the instructions to deploy a client agent or Consul dataplanes: + +- Deploy Consul client agents: + - [Deploy client agents on VMs](/consul/docs/deploy/workload/client/vm) + - [Deploy client agents on Kubernetes](/consul/docs/deploy/workload/client/k8s) + - [Deploy client agents on Docker](/consul/docs/deploy/workload/client/docker) +- Deploy Consul dataplanes: + - [Deploy dataplanes on Kubernetes](/consul/docs/deploy/workload/dataplane/k8s) + - [Deploy dataplanes on AWS ECS](/consul/docs/deploy/workload/dataplane/ecs) + +You can also [configure cloud auto-join](/consul/docs/deploy/server/cloud-auto-join) so that nodes automatically join the Consul cluster when they start. + +## Tutorials + +The following tutorials provide additional instruction in the process to deploy Consul agents. 
+ +### Virtual Machines + +- [Getting Started: Deploy Consul on VMs](/consul/tutorials/get-started-vms/virtual-machine-gs-deploy) +- [Consul reference architecture](/consul/tutorials/production-vms/reference-architecture) +- [Production: Deployment Guide](/consul/tutorials/production-vms/deployment-guide) + +### Kubernetes + +- [Getting Started: Deploy Consul on Kubernetes](/consul/tutorials/get-started-kubernetes/kubernetes-gs-deploy) +- [Consul on Kubernetes reference architecture](/consul/tutorials/production-kubernetes/kubernetes-reference-architecture) +- [Production: Consul and Kubernetes deployment guide](/consul/tutorials/production-kubernetes/kubernetes-deployment-guide) \ No newline at end of file diff --git a/website/content/docs/install/cloud-auto-join.mdx b/website/content/docs/deploy/server/cloud-auto-join.mdx similarity index 79% rename from website/content/docs/install/cloud-auto-join.mdx rename to website/content/docs/deploy/server/cloud-auto-join.mdx index 82cf6b6d5929..8a2a0845ae18 100644 --- a/website/content/docs/install/cloud-auto-join.mdx +++ b/website/content/docs/deploy/server/cloud-auto-join.mdx @@ -1,24 +1,21 @@ --- layout: docs -page_title: Auto-join a Cloud Provider +page_title: Automatically join clusters to a cloud provider description: >- Auto-join enables agents to automatically join other agents running in the cloud. To configure auto-join, specify agent addresses with compute node metadata for the cloud provider instead of an IP address. Use the CLI or an agent configuration file to configure cloud auto-join. --- -# Cloud Auto-join +# Automatically join clusters to a cloud provider -As of Consul 0.9.1, `retry-join` accepts a unified interface using the -[go-discover](https://github.com/hashicorp/go-discover) library for -automatically joining a Consul datacenter using cloud metadata. To use `retry-join` with a -supported cloud provider, specify the configuration on the command line or -configuration file as a `key=value key=value ...` string. +This page describes how to configure `retry_join` for an agent so that servers can join the datacenter automatically. This configuration can be combined with static IP or DNS addresses or even multiple configurations for different providers. -In Consul 0.9.1-0.9.3 the values need to be URL encoded but for most -practical purposes you need to replace spaces with `+` signs. +## Introduction -As of Consul 1.0 the values are taken literally and must not be URL -encoded. If the values contain spaces, equals, backslashes or double quotes then -they need to be double quoted and the usual escaping rules apply. +As of Consul 0.9.1, `retry-join` accepts a unified interface using the [go-discover](https://github.com/hashicorp/go-discover) library for automatically joining a Consul datacenter using cloud metadata. To use `retry-join` with a supported cloud provider, specify the configuration on the command line or configuration file as a `key=value key=value ...` string. + +In Consul 0.9.1-0.9.3, the values need to be URL encoded but for most practical purposes you need to replace spaces with `+` signs. + +As of Consul 1.0 the values are taken literally and must not be URL encoded. If the values contain spaces, equals, backslashes or double quotes then they need to be double quoted and the usual escaping rules apply. ```shell-session $ consul agent -retry-join 'provider=my-cloud config=val config2="some other val" ...' 
@@ -32,11 +29,11 @@ or via a configuration file: } ``` +In order to use discovery behind a proxy, you will need to set `HTTP_PROXY`, `HTTPS_PROXY` and `NO_PROXY` environment variables per [Golang `net/http` library](https://golang.org/pkg/net/http/#ProxyFromEnvironment). + ## Auto-join with Network Segments -In order to use cloud auto-join with [Network Segments](/consul/docs/enterprise/network-segments/network-segments-overview), -you must reconfigure the Consul agent's Serf LAN port to match that of the -segment you wish to join. +In order to use cloud auto-join with [Network Segments](/consul/docs/multi-tenant/network-segment), you must reconfigure the Consul agent's Serf LAN port to match that of the segment you wish to join. For example, given the following segment configuration on the server agents: @@ -61,14 +58,12 @@ segments = [ -A Consul client agent wishing to join the "alpha" segment would need to be configured -to use port `8303` as its Serf LAN port prior to attempting to join the cluster. +A Consul client agent wishing to join the "alpha" segment would need to be configured to use port `8303` as its Serf LAN port prior to attempting to join the cluster. -The following example configuration overrides the default Serf LAN port using the -[`ports.serf_lan`](/consul/docs/agent/config/config-files#serf_lan_port) configuration option. +The following example configuration overrides the default Serf LAN port using the [`ports.serf_lan`](/consul/docs/reference/agent/configuration-file/general#serf_lan_port) configuration option. @@ -83,8 +78,7 @@ ports { -The following example overrides the default Serf LAN port using the -[`-serf-lan-port`](/consul/docs/agent/config/cli-flags#_serf_lan_port) command line flag. +The following example overrides the default Serf LAN port using the [`-serf-lan-port`](/consul/commands/agent#_serf_lan_port) command line flag. ```shell $ consul agent -serf-lan-port=8303 -retry-join "provider=..." @@ -93,23 +87,38 @@ $ consul agent -serf-lan-port=8303 -retry-join "provider=..." -## Provider-specific configurations +## Kubernetes + +The Kubernetes provider finds the IP addresses of pods with the matching [label or field selector](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/). This is useful for non-Kubernetes agents that are joining a server cluster running within Kubernetes. -The cloud provider-specific configurations are detailed below. This can be -combined with static IP or DNS addresses or even multiple configurations -for different providers. +The pod IP is used by default, which requires that the agent connecting can network to the pod IP. The `host_network` boolean can be set to true to use the host IP instead, but this requires the agent ports (Gossip, RPC, etc.) to be exported to the host as well. + +By default, no port is specified. This causes Consul to use the default gossip port (default behavior with all join requests). The pod may specify the `consul.hashicorp.com/auto-join-port` annotation to set the port. The value may be an integer or a named port. + +```shell-session +$ consul agent -retry-join "provider=k8s label_selector=\"app=consul,component=server\"" +``` + +```json +{ + "retry-join": ["provider=k8s label_selector=..."] +} +``` -In order to use discovery behind a proxy, you will need to set -`HTTP_PROXY`, `HTTPS_PROXY` and `NO_PROXY` environment variables per -[Golang `net/http` library](https://golang.org/pkg/net/http/#ProxyFromEnvironment). 
+- `provider` (required) - the name of the provider ("k8s" is the provider here) +- `kubeconfig` (optional) - path to the kubeconfig file. If this isn't + set, then in-cluster auth will be attempted. If that fails, the default + kubeconfig paths are tried (`$HOME/.kube/config`). +- `namespace` (optional) - the namespace to search for pods. If this isn't + set, it defaults to all namespaces. +- `label_selector` (optional) - the label selector for matching pods. +- `field_selector` (optional) - the field selector for matching pods. -The following sections give the options specific to each supported cloud -provider. +The Kubernetes token used by the provider needs to have permissions to list pods in the desired namespace. -### Amazon EC2 and ECS +## AWS EC2 and ECS -This returns the first private IP address of all servers in the given -region which have the given `tag_key` and `tag_value`. +This returns the first private IP address of all servers in the given region which have the given `tag_key` and `tag_value`. ```shell-session $ consul agent -retry-join "provider=aws tag_key=... tag_value=..." @@ -133,7 +142,7 @@ $ consul agent -retry-join "provider=aws tag_key=... tag_value=..." - `ecs_family` (optional) - String value limits searches to a AWS ECS task definition family. By default, Consul searches all task definition families with the specified tags. - `endpoint` (optional) - String value that specifies the endpoint URL of the AWS service to use. If not set, the AWS client sets the value, which defaults to the public DNS name for the service in the specified region. -#### Authentication & Precedence +### Authentication & precedence - Static credentials `access_key_id=... secret_access_key=...` - Environment variables (`AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`) @@ -154,11 +163,9 @@ The AWS ECS task role associated with the service attempting to discover the `co - `ecs:DescribeTasks` If the region is omitted from the configuration, Consul obtains it from the local instance's [ECS V4 metadata endpoint](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-metadata-endpoint-v4.html). -### Microsoft Azure +## Microsoft Azure -This returns the first private IP address of all servers in the given region -which have the given `tag_key` and `tag_value` applied to their virtual NIC in the tenant and subscription, or in -the given `resource_group` of a `vm_scale_set` for Virtual Machine Scale Sets. +This returns the first private IP address of all servers in the given region which have the given `tag_key` and `tag_value` applied to their virtual NIC in the tenant and subscription, or in the given `resource_group` of a `vm_scale_set` for Virtual Machine Scale Sets. ```shell-session $ consul agent -retry-join "provider=azure tag_key=... tag_value=... tenant_id=... client_id=... subscription_id=... secret_access_key=..." @@ -199,16 +206,15 @@ that contains the virtual NICs for the Virtual Machines. When using Virtual Machine Scale Sets the only role action needed is `Microsoft.Compute/virtualMachineScaleSets/*/read`. -~> **Note:** If the Consul datacenter is hosted on Azure, Consul can use Managed Service Identities (MSI) to access Azure instead of an environment -variable, shared client id and secret. MSI must be enabled on the VMs or Virtual Machine Scale Sets hosting Consul. It is the preferred configuration -since MSI prevents your Azure credentials from being stored in Consul configuration. This feature is supported in Consul 1.7 and above. 
When using -MSI, the `tag_key`, `tag_value` and `subscription_id` need to be supplied for Virtual machines. -Be aware that the amount of time that Azure takes for the VMs to detect the MSI permissions can be between a minute to an hour. + -### Google Compute Engine +If the Consul datacenter is hosted on Azure, Consul can use Managed Service Identities (MSI) to access Azure instead of an environment variable, shared client id and secret. MSI must be enabled on the VMs or Virtual Machine Scale Sets hosting Consul. It is the preferred configuration since MSI prevents your Azure credentials from being stored in Consul configuration. This feature is supported in Consul 1.7 and above. When using MSI, the `tag_key`, `tag_value` and `subscription_id` need to be supplied for Virtual machines. Be aware that the amount of time that Azure takes for the VMs to detect the MSI permissions can be between a minute to an hour. -This returns the first private IP address of all servers in the given -project which have the given `tag_value`. + + +## Google Compute Engine + +This returns the first private IP address of all servers in the given project which have the given `tag_value`. ```shell-session $ consul agent -retry-join "provider=gce project_name=... tag_value=..." @@ -226,11 +232,9 @@ $ consul agent -retry-join "provider=gce project_name=... tag_value=..." - `zone_pattern` (optional) - the list of zones can be restricted through an RE2 compatible regular expression. If omitted, servers in all zones are returned. - `credentials_file` (optional) - the credentials file for authentication. Note, if you set `-config-dir` do not store the credentials.json file in the configuration directory as it will be parsed as a config file and Consul will fail to start. See below for more information. -#### Authentication & Precedence +### Authentication & precedence -Discovery requires a [GCE Service -Account](https://cloud.google.com/compute/docs/access/service-accounts). -Credentials are searched using the following paths, in order of precedence. +Discovery requires a [GCE Service Account](https://cloud.google.com/compute/docs/north-south/service-accounts). Credentials are searched using the following paths, in order of precedence. - Use credentials from `credentials_file`, if provided. - Use JSON file from `GOOGLE_APPLICATION_CREDENTIALS` environment variable. @@ -240,10 +244,9 @@ Credentials are searched using the following paths, in order of precedence. - On Google Compute Engine, use credentials from the metadata server. In this final case any provided scopes are ignored. -### IBM SoftLayer +## IBM SoftLayer -This returns the first private IP address of all servers for the given -datacenter with the given `tag_value`. +This returns the first private IP address of all servers for the given datacenter with the given `tag_value`. ```shell-session $ consul agent -retry-join "provider=softlayer datacenter=... tag_value=... username=... api_key=..." @@ -263,10 +266,9 @@ $ consul agent -retry-join "provider=softlayer datacenter=... tag_value=... user - `username` (required) - the username to use for auth. - `api_key` (required) - the api key to use for auth. -### Aliyun (Alibaba Cloud) +## Aliyun (Alibaba Cloud) -This returns the first private IP address of all servers for the given -`region` with the given `tag_key` and `tag_value`. +This returns the first private IP address of all servers for the given `region` with the given `tag_key` and `tag_value`. 
```shell-session $ consul agent -retry-join "provider=aliyun region=... tag_key=consul tag_value=... access_key_id=... access_key_secret=..." @@ -287,13 +289,11 @@ $ consul agent -retry-join "provider=aliyun region=... tag_key=consul tag_value= - `access_key_id` (required) -the access key to use for auth. - `access_key_secret` (required) - the secret key to use for auth. -The required RAM permission is `ecs:DescribeInstances`. -It is recommended you make a dedicated key used to auto-join. +The required RAM permission is `ecs:DescribeInstances`. It is recommended you make a dedicated key used to auto-join. -### Digital Ocean +## Digital Ocean -This returns the first private IP address of all servers for the given -`region` with the given `tag_name`. +This returns the first private IP address of all servers for the given `region` with the given `tag_name`. ```shell-session $ consul agent -retry-join "provider=digitalocean region=... tag_name=... api_token=..." @@ -310,10 +310,9 @@ $ consul agent -retry-join "provider=digitalocean region=... tag_name=... api_to - `tag_name` (required) - the value of the tag to auto-join on. - `api_token` (required) -the token to use for auth. -### Openstack +## Openstack -This returns the first private IP address of all servers for the given -`region` with the given `tag_key` and `tag_value`. +This returns the first private IP address of all servers for the given `region` with the given `tag_key` and `tag_value`. ```shell-session $ consul agent -retry-join "provider=os tag_key=consul tag_value=server user_name=... password=... auth_url=..." @@ -342,10 +341,9 @@ $ consul agent -retry-join "provider=os tag_key=consul tag_value=server user_nam The configuration can also be provided by environment variables. -### Scaleway +## Scaleway -This returns the first private IP address of all servers for the given -`region` with the given `tag_name`. +This returns the first private IP address of all servers for the given `region` with the given `tag_name`. ```shell-session $ consul agent -retry-join "provider=scaleway organization=my-org tag_name=consul-server token=... region=..." @@ -365,7 +363,7 @@ $ consul agent -retry-join "provider=scaleway organization=my-org tag_name=consu - `organization` (required) - the organization access key to use for auth (equal to access key). - `token` (required) - the token to use for auth. -### TencentCloud +## TencentCloud This returns the first IP address of all servers for the given `region` with the given `tag_key` and `tag_value`. @@ -389,10 +387,9 @@ $ consul agent -retry-join "provider=tencentcloud region=... tag_key=consul tag_ - `access_key_id` (required) - The secret id of TencentCloud. - `access_key_secret` (required) - The secret key of TencentCloud. -This required permission to 'cvm:DescribeInstances'. -It is recommended you make a dedicated key used to auto-join the Consul datacenter. +This required permission to 'cvm:DescribeInstances'. It is recommended you make a dedicated key used to auto-join the Consul datacenter. -### Joyent Triton +## Joyent Triton This returns the first PrimaryIP addresses for all servers with the given `tag_key` and `tag_value`. @@ -415,7 +412,7 @@ $ consul agent -retry-join "provider=triton account=testaccount url=https://us-s - `tag_key` (optional) - the instance tag key to use. - `tag_value` (optional) - the tag value to use. -### vSphere +## vSphere This returns the first private IP address of all servers for the given region with the given `tag_name` and `category_name`. 
@@ -440,7 +437,7 @@ $ consul agent -retry-join "provider=vsphere category_name=consul-role tag_name= - `insecure_ssl` (optional) - Whether or not to skip SSL certificate validation. - `timeout` (optional) - Discovery context timeout (default: 10m) -### Packet +## Packet This returns the first private IP address (or the IP address of `address type`) of all servers with the given `project` and `auth_token`. @@ -462,7 +459,7 @@ $ consul agent -retry-join "provider=packet auth_token=token project=uuid url=.. - `url` (optional) - a REST URL for packet - `address_type` (optional) - the type of address to check for in this provider ("private_v4", "public_v4" or "public_v6". Defaults to "private_v4") -### Linode +## Linode This returns the first private IP address of all servers for the given `region` with the given `tag_name`. @@ -485,42 +482,3 @@ $ consul agent -retry-join "provider=linode region=us-east tag_name=consul-serve Variables can also be provided by environment variables: - `LINODE_TOKEN` for `api_token` - -### Kubernetes (k8s) - -The Kubernetes provider finds the IP addresses of pods with the matching -[label or field selector](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/). -This is useful for non-Kubernetes agents that are joining a server cluster -running within Kubernetes. - -The pod IP is used by default, which requires that the agent connecting can -network to the pod IP. The `host_network` boolean can be set to true to use -the host IP instead, but this requires the agent ports (Gossip, RPC, etc.) -to be exported to the host as well. - -By default, no port is specified. This causes Consul to use the default -gossip port (default behavior with all join requests). The pod may specify -the `consul.hashicorp.com/auto-join-port` annotation to set the port. The value -may be an integer or a named port. - -```shell-session -$ consul agent -retry-join "provider=k8s label_selector=\"app=consul,component=server\"" -``` - -```json -{ - "retry-join": ["provider=k8s label_selector=..."] -} -``` - -- `provider` (required) - the name of the provider ("k8s" is the provider here) -- `kubeconfig` (optional) - path to the kubeconfig file. If this isn't - set, then in-cluster auth will be attempted. If that fails, the default - kubeconfig paths are tried (`$HOME/.kube/config`). -- `namespace` (optional) - the namespace to search for pods. If this isn't - set, it defaults to all namespaces. -- `label_selector` (optional) - the label selector for matching pods. -- `field_selector` (optional) - the field selector for matching pods. - -The Kubernetes token used by the provider needs to have permissions to list pods -in the desired namespace. diff --git a/website/content/docs/deploy/server/docker.mdx b/website/content/docs/deploy/server/docker.mdx new file mode 100644 index 000000000000..5489ded801df --- /dev/null +++ b/website/content/docs/deploy/server/docker.mdx @@ -0,0 +1,104 @@ +--- +layout: docs +page_title: Deploy Consul server agent on Docker +description: >- + Learn how to deploy a Consul server agent on a Docker container. +--- + +# Deploy Consul server agent on Docker + +This topic provides an overview for deploying a Consul server when running Consul on Docker containers. + +## Deploy and run a Consul server + +You can start a Consul server container and configure the Consul agent from the command line. + +The following example starts the Docker container and includes a `consul agent -server` sub-command to configure the Consul server agent and enable the UI. 
+ +```shell-session +$ docker run --name=consul-server -d -p 8500:8500 -p 8600:8600/udp hashicorp/consul consul agent -server -ui -node=server-1 -bootstrap-expect=1 -client=0.0.0.0 -data-dir=/consul/data +``` + +Since you started the container in detached mode, `-d`, the process will run in the background. You also set port mapping to your local machine as well as binding the client interface of your agent to `0.0.0.0`. This configuration allows you to work directly with the Consul datacenter from your local machine and access Consul's UI and DNS over `localhost`. Finally, you are using Docker's default bridge network. + +Note that the Consul Docker image sets up the Consul configuration directory at `/consul/config` by default. The agent will load any configuration files placed in that directory. + +The configuration directory is not exposed as a volume and does not persist data. Consul uses it only during startup and does not store any state there. + + You can also submit a Consul configuration in JSON format with the environment variable `CONSUL_LOCAL_CONFIG`. This approach does not require mounting volumes or copying files to the container. For more information, refer to the [Consul on Docker documentation](/consul/docs/docker#consul-agent). + +### Discover the server IP address + +To find the IP address of the Consul server, execute the `consul members` command inside the `consul-server` container. + +```shell-session +$ docker exec consul-server consul members +Node Address Status Type Build Protocol DC Partition Segment +server-1 172.17.0.2:8301 alive server 1.21.2 2 dc1 default +``` + +## Multi-node Consul cluster + +You can start a Consul cluster with multiple server containers that more closely mimic a production environment. The following example uses a Docker compose file to start three Consul server containers. + + + +```yaml +version: '3.7' +services: + consul-server1: + image: hashicorp/consul:1.21.3 + container_name: consul-server1 + restart: always + networks: + - consul + ports: + - "8500:8500" + - "8600:8600/tcp" + - "8600:8600/udp" + command: "agent -server -ui -data-dir='/consul/data' -node=consul-server1 -retry-join=consul-server2 -bootstrap-expect=3" + consul-server2: + image: hashicorp/consul:1.21.3 + container_name: consul-server2 + restart: always + networks: + - consul + command: "agent -server -ui -data-dir='/consul/data' -node=consul-server2 -retry-join=consul-server1 -bootstrap-expect=3" + consul-server3: + image: hashicorp/consul:1.21.3 + container_name: consul-server3 + restart: always + networks: + - consul + command: "agent -server -ui -data-dir='/consul/data' -node=consul-server3 -retry-join=consul-server1 -bootstrap-expect=3" +networks: + consul: + driver: bridge +``` + + + +You can start the cluster with the following command: + +```shell-session +$ docker-compose -f consul-cluster.yml up -d +[+] Running 4/4 + ✔ Network docker_consul Created 0.0s + ✔ Container consul-server3 Started 0.2s + ✔ Container consul-server1 Started 0.2s + ✔ Container consul-server2 Started 0.2s +``` + +This command starts the three Consul server containers in detached mode. Each server is configured to expect three servers in the cluster, which is specified by the `-bootstrap-expect=3` flag.
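+ +Before you query the cluster, you can optionally check the container logs to confirm that the three servers formed a quorum and elected a leader. The following command is a minimal sketch that assumes the container names from the compose file above; the exact wording of the leadership log message can vary between Consul versions. + +```shell-session +$ docker logs consul-server1 2>&1 | grep -i leader +```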
+ +You can verify the status of the cluster by executing the `consul members` command inside any of the server containers: + +```shell-session +$ docker exec consul-server1 consul members +Node Address Status Type Build Protocol DC Partition Segment +consul-server1 172.19.0.2:8301 alive server 1.21.3 2 dc1 default +consul-server2 172.19.0.4:8301 alive server 1.21.3 2 dc1 default +consul-server3 172.19.0.3:8301 alive server 1.21.3 2 dc1 default +``` + +You can also access the Consul UI by navigating to [http://localhost:8500](http://localhost:8500) in your web browser, because the Docker compose file forwards port `8500` on your local machine to the `consul-server1` container. diff --git a/website/content/docs/deploy/server/ecs/index.mdx b/website/content/docs/deploy/server/ecs/index.mdx new file mode 100644 index 000000000000..93a9f2058de1 --- /dev/null +++ b/website/content/docs/deploy/server/ecs/index.mdx @@ -0,0 +1,479 @@ +--- +layout: docs +page_title: Deploy Consul to ECS using the Terraform module +description: >- + Terraform modules simplify the process to install Consul on Amazon Web Services ECS. Learn how to create task definitions, schedule tasks for your service mesh, and configure routes with example configurations so that you can deploy Consul to ECS using Terraform. +--- + +# Deploy Consul to ECS using the Terraform module + +This topic describes how to create a Terraform configuration that deploys Consul service mesh to your ECS cluster workloads. Consul server agents do not run on ECS and must be deployed to another runtime, such as EKS, and connected to your ECS workloads. Refer to [Consul on AWS Elastic Container Service overview](/consul/docs/ecs) for additional information. + +## Overview + +Create a Terraform configuration file that includes the ECS task definition and Terraform modules that build the Consul service mesh components. The task definition is the ECS blueprint for your software services on AWS. Refer to the [ECS task definitions documentation](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definitions.html) for additional information. + +You can add the following modules and resources to your Terraform configuration: + +- `mesh-task` module: Adds the Consul ECS control-plane and Consul dataplane containers to the task definition along with your application container. Envoy runs as a subprocess within the Consul dataplane container. +- `aws_ecs_service` resource: Adds an ECS service to run and maintain your task instance. +- `gateway-task` module: Adds mesh gateway containers to the cluster. Mesh gateways enable service-to-service communication across different types of network areas. + +To enable Consul security features for your production workloads, you must also deploy the `controller` module, which provisions ACL tokens for service mesh tasks. + +After defining your Terraform configuration, use `terraform apply` to deploy Consul to your ECS cluster. + +## Requirements + +- You should be familiar with creating Terraform configuration files. Refer to the [Terraform documentation](/terraform/docs) for information about how to get started with Terraform. +- You should be familiar with AWS ECS. Refer to [What is Amazon Elastic Container Service](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html) in the Amazon AWS documentation for additional information. +- If you intend to use the `gateway-task` module to deploy mesh gateways, you must enable TLS.
Refer to [Configure the ECS controller](#configure-the-ecs-controller) for additional information. + +### Secure configuration requirements + +You must meet the following requirements and prerequisites to enable security features in Consul service mesh: + +- Enable [TLS encryption](/consul/docs/security/encryption#rpc-encryption-with-tls) on your Consul servers so that they can communicate securely with Consul containers over gRPC. +- Enable [access control lists (ACLs)](/consul/docs/secure/acl) on your Consul servers. ACLs provide authentication and authorization for access to Consul servers on the mesh. +- You should be familiar with specifying sensitive data on ECS. Refer to [Passing sensitive data to a container](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data.html) in the AWS documentation for additional information. + +Additionally, Consul requires a unique IAM role for each ECS task family. Task IAM roles cannot be shared by different task families because the task family is unique to each Consul service. + +You should be familiar with configuring Consul's secure features, including how to create ACL tokens and policies. Refer to the following resources for additional information: + +- [Create a service token](/consul/docs/secure/acl/token/service) +- [Day 1: Security tutorial](https://developer.hashicorp.com/consul/tutorials/security) + +## Create the task definition + +Create a Terraform configuration file and add your ECS task definition. The task definition includes your application containers, Consul control-plane container, dataplane container, and controller container. If you intend to peer the service mesh to multiple Consul datacenters or partitions, [add the gateway-task module](#configure-the-gateway-task-module), which deploys gateway containers that enable connectivity between network areas in your network. + +## Configure the mesh task module + +Add a `module` block to your Terraform configuration and specify the following fields: + +- `source`: Specifies the location of the `mesh-task` module. This field must be set to `hashicorp/consul-ecs/aws//modules/mesh-task`. The `mesh-task` module automatically adds the Consul service mesh infrastructure when you apply the Terraform configuration. +- `version`: Specifies the version of the `mesh-task` module to use. +- `family`: Specifies the [ECS task definition family](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#family). Consul also uses the `family` value as the Consul service name by default. +- `container_definitions`: Specifies a list of [container definitions](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#container_definitions) for the task definition. This field is where you include your application containers. + +Refer to the [`mesh-task` module reference documentation in the Terraform registry](https://registry.terraform.io/modules/hashicorp/consul-ecs/aws/latest/submodules/mesh-task?tab=inputs) for information about all options you can configure. + +You should also configure the ACL and encryption settings if they are enabled on your Consul servers. Refer to [Enable secure deployment](#enable-secure-deployment) for additional information. 
+ +In the following example, the Terraform configuration file `mesh-task.tf` creates a task definition with an application container called `example-client-app`: + + + +```hcl +module "my_task" { + source = "hashicorp/consul-ecs/aws//modules/mesh-task" + version = "" + + family = "my_task" + container_definitions = [ + { + name = "example-client-app" + image = "docker.io/org/my_task:v0.0.1" + essential = true + portMappings = [ + { + containerPort = 9090 + hostPort = 9090 + protocol = "tcp" + } + ] + cpu = 0 + mountPoints = [] + volumesFrom = [] + } + ] + + port = 9090 +} +``` + + + +The following fields are required. Refer to the [module reference documentation](https://registry.terraform.io/modules/hashicorp/consul-ecs/aws/latest/submodules/mesh-task?tab=inputs) for a complete reference. + +## Configure Consul server settings + +Provide Consul server connection settings to the mesh task module so that the module can configure the control-plane and ECS controller containers to connect to the servers. + +1. In your `variables.tf` file, define variables for the host URL and the TLS settings for gRPC and HTTP traffic. Refer to the [mesh task module reference](https://registry.terraform.io/modules/hashicorp/consul-ecs/aws/latest/submodules/gateway-task?tab=inputs) for information about the variables you can define. In the following example, the Consul server address is defined in the `consul_server_hosts` variable: + + ```hcl + variable "consul_server_hosts" { + description = "Address of Consul servers." + type = string + } + ``` +1. Add an `environment` block to the control-plane and ECS controller containers definition. +1. Set the `environment.name` field to the `CONSUL_ECS_CONFIG_JSON` environment variable and the value to `local.encoded_config`. + + ```hcl + environment = [ + { + name = "CONSUL_ECS_CONFIG_JSON", + value = local.encoded_config + } + ] + ``` + + When you apply the configuration, the mesh task module interpolates the server configuration variables, builds a `config.tf` file, and injects the settings into the appropriate containers. For additional information about the `config.tf` file, refer to the [JSON schema reference documentation](/consul/docs/reference/ecs/server-json). + +## Configure an ECS service to run your task instances + +To start a task using the task definition, add the `aws_ecs_service` resource to your configuration to create an ECS service. [ECS services](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_services.html) are one of the most common ways to start tasks using a task definition. + +Reference the `mesh-task` module's `task_definition_arn` output value in your `aws_ecs_service` resource. The following example adds an ECS service for a task definition referenced in as `module.my_task.task_definition_arn`: + + + +```hcl +module "my_task" { + source = "hashicorp/consul-ecs/aws//modules/mesh-task" + ... +} + +resource "aws_ecs_service" "my_task" { + name = "my_task_service" + task_definition = module.my_task.task_definition_arn + launch_type = "FARGATE" + propagate_tags = "TASK_DEFINITION" + ... +} +``` + + + +Refer to [`aws_ecs_service` in the Terraform registry](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ecs_service) for a complete configuration reference. + +If you are deploying a test instance of your ECS application, you can apply your configuration in Terraform. Refer to [Run your configuration](#run-your-configuration) for instructions. 
To configure your deployment for a production environment, you must also deploy the ECS controller module. Refer to [Configure the ECS controller](#configure-the-ecs-controller) for instructions. + +If you intend to leverage multi-datacenter Consul features, such as WAN federation and cluster peering, then you must add the `gateway-task` module for each Consul datacenter in your network. Refer to [Configure the gateway task module](#configure-the-gateway-task-module) for instructions. + +## Configure the gateway task module + +The `gateway-task` module deploys a mesh gateway, which enables service-to-service communication across network areas. Mesh gateways detect the server name indication (SNI) header from the service mesh session and route the connection to the appropriate destination. + +Refer to the following documentation for additional information: + +- [WAN Federation via Mesh Gateways](/consul/docs/east-west/mesh-gateway/enable) +- [Service-to-service Traffic Across Datacenters](/consul/docs/east-west/mesh-gateway/federation) + +To use mesh gateways, TLS must be enabled in your cluster. Refer to the [requirements section](#requirements) for additional information. + +1. Add a `module` block to your Terraform configuration file and specify a label. The label is a unique identifier for the gateway. +1. Add a `source` to the `module` and specify the location of the `gateway-task`. The value must be `hashicorp/consul-ecs/aws//modules/gateway-task`. +1. Specify the following required inputs: + - `ecs_cluster_arn`: The ARN of the ECS cluster for the gateway. + - `family`: Specifies a name for multiple versions of the task. Refer to the [AWS documentation](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#family) for details. + - `kind`: Set to `mesh-gateway` + - `subnets`: Specifies a list of subnet IDs where the gateway task should be deployed. +1. If you are deploying to a production environment, you must also add the `acl` and `tls` configurations. Refer to [Configure the ECS controller](#configure-the-ecs-controller) for details. +1. Configure any additional parameters necessary for your environment. Refer to the [module reference documentation](https://registry.terraform.io/modules/hashicorp/consul-ecs/aws/latest/submodules/gateway-task?tab=inputs) for information about all parameters. + +The following example defines a mesh gateway task called `my-gateway`: + + + +```hcl +module "my_mesh_gateway" { + source = "hashicorp/consul-ecs/aws//modules/gateway-task" + version = "" + kind = "mesh-gateway" + + family = "my-gateway" + ecs_cluster_arn = "" + subnets = [""] + consul_server_hosts = "
    " + tls = true + consul_ca_cert_arn = "" +} +``` + + + +Refer to [gateway-task module in the Terraform registry](https://registry.terraform.io/modules/hashicorp/consul-ecs/aws/latest/submodules/gateway-task?tab=inputs) for a complete reference. + +Refer to the [gateway task configuration examples](#gateway-task-configuration-examples) for additional example configurations. + +## Configure the ECS controller + +Deploy the ECS controller container to its own ECS task in the cluster. Refer to [ECS controller container](/consul/docs/reference/architecture/ecs#ecs-controller) for details about the container. + +Verify that you have completed the prerequisites described in [Secure configuration requirements](#secure-configuration-requirements) and complete the following steps to configure the controller container. + +### Create a an ACL token for the controller + +1. On the Consul server, create a policy that grants the following access for the controller: + + - `acl:write` + - `operator:write` + - `node:write` + - `service:write` + + The policy allows Consul to generate a token linked to the policy. Refer to [Create a service token](/consul/docs/secure/acl/token/service) for instructions. +1. Create a token and link it to the ACL controller policy. Refer to the [ACL tokens documentation](/consul/docs/secure/acl/token) for instructions. + +### Configure an AWS secrets manager secret + +Add the `aws_secretsmanager_secret` resource to your Terraform configuration and specify values for retrieving the CA and TLS certificates. The resource enables services to communicate over TLS and present ACL tokens. The ECS controller also uses the secret manager to retrieve the value of the bootstrap token. + +In the following example, Terraform creates the CA certificates for gRPC and HTTPS in the secrets manager. Consul retrieves the CA certificate PEM file from the secret manager so that the mesh task can use TLS for HTTP and gRPC traffic: + + + +```hcl +resource "tls_private_key" "ca" { + algorithm = "ECDSA" + ecdsa_curve = "P384" +} + +resource "tls_self_signed_cert" "ca" { + private_key_pem = tls_private_key.ca.private_key_pem + + subject { + common_name = "Consul Agent CA" + organization = "HashiCorp Inc." + } + + // 5 years. + validity_period_hours = 43800 + + is_ca_certificate = true + set_subject_key_id = true + + allowed_uses = [ + "digital_signature", + "cert_signing", + "crl_signing", + ] +} + +resource "aws_secretsmanager_secret" "ca_key" { + name = "${var.name}-${var.datacenter}-ca-key" + recovery_window_in_days = 0 +} + +resource "aws_secretsmanager_secret_version" "ca_key" { + secret_id = aws_secretsmanager_secret.ca_key.id + secret_string = tls_private_key.ca.private_key_pem +} + +resource "aws_secretsmanager_secret" "ca_cert" { + name = "${var.name}-${var.datacenter}-ca-cert" + recovery_window_in_days = 0 +} + +resource "aws_secretsmanager_secret_version" "ca_cert" { + secret_id = aws_secretsmanager_secret.ca_cert.id + secret_string = tls_self_signed_cert.ca.cert_pem +} +``` + + + +Note that you could use a single `CERT PEM` for both variables. The `consul_ca_cert_arn` is the default ARN applicable to both the protocols. You can also use protocol-specific certificate PEMs with the `consul_https_ca_cert_arn` and `consul_grpc_ca_cert_arn` variables. + +The following Terraform configuration passes the generated CA certificate ARN to the `mesh-task` module and ensures that the CA certificate and PEM variable are set for both HTTPS and gRPC communication. 
+ + + +```hcl +module "my_task" { + source = "hashicorp/consul-ecs/aws//modules/mesh-task" + version = "" + + ... + + tls = true + consul_ca_cert_arn = aws_secretsmanager_secret.ca_cert.arn +} +``` + + + +### Enable secure deployment + +To enable secure deployment, add the following configuration to the task module. + + + +```hcl +module "my_task" { + source = "hashicorp/consul-ecs/aws//modules/mesh-task" + version = "" + + ... + + tls = true + consul_grpc_ca_cert_arn = aws_secretsmanager_secret.ca_cert.arn + + acls = true + consul_server_hosts = "https://consul-server.example.com" + consul_https_ca_cert_arn = aws_secretsmanager_secret.ca_cert.arn +} + +``` + + + +### Complete configuration examples + +The [terraform-aws-consul-ecs GitHub repository](https://github.com/hashicorp/terraform-aws-consul-ecs/tree/main/examples) contains examples that you can reference to help you deploy Consul service mesh to your ECS workloads. + +## Apply your Terraform configuration + +Run Terraform to create the task definition. + +Save the Terraform configuration for the task definition to a file, such as `mesh-task.tf`. +You should place this file in a directory alongside other Terraform configuration files for your project. + +The `mesh-task` module requires the AWS Terraform provider. The following example shows how to include +and configure the AWS provider in a file called `provider.tf`. Refer to [AWS provider in the Terraform registry](https://registry.terraform.io/providers/hashicorp/aws/latest/docs) +for additional documentation and specifications. + + + +```hcl +terraform { + required_providers { + aws = { + source = "hashicorp/aws" + version = "" + } + } +} + +provider "aws" { + region = "" + ... +} +``` + + + +Specify any additional AWS resources for your project in Terraform configuration files +in the same directory. The following example shows a basic project directory: + +```shell-session +$ ls +mesh-task.tf +provider.tf +... +``` + +Issue the following commands to run the configuration: + +1. `terraform init`: This command downloads dependencies, such as Terraform providers. +1. `terraform apply`: This command directs Terraform to create the AWS resources, such as the task definition from the `mesh-task` module. + +Terraform reads all files in the current directory that have a `.tf` file extension. +Refer to the [Terraform documentation](/terraform/docs) for more information and Terraform best practices. + +## Next steps + +After deploying the Consul service mesh infrastructure, you must still define routes between service instances as well as configure the bind address for your applications so that they only receive traffic through the mesh. Refer to the following topics: + +- [Configure routes between ECS tasks](/consul/docs/connect/ecs) +- [Configure the ECS task bind address](/consul/docs/register/service/ecs/task-bind-address) + +## Gateway task configuration examples + +The following examples illustrate how to configure the `gateway-task` for different use cases. + +### Ingress + +Mesh gateways need to be reachable over the WAN to route traffic between datacenters. Configure the following options in the `gateway-task` module to enable ingress through the mesh gateway. + +| Input variable | Type | Description | +| --- | --- | --- | +| `lb_enabled` | Boolean | Set to `true` to automatically deploy and configure a network load balancer for ingress to the mesh gateway. | +| `lb_vpc_id` | string | Specifies the VPC to launch the load balancer in. 
| +| `lb_subnets` | list of strings | Specifies one or more public subnets to associate with the load balancer. | + + + +```hcl +module "my_mesh_gateway" { + ... + + lb_enabled = true + lb_vpc_id = "" + lb_subnets = [""] +} +``` + + + +Alternatively, you can manually configure ingress to the mesh gateway and provide the `wan_address` and `wan_port` inputs to the `gateway-task` module. The `wan_port` field is optional. Port `8443` is used by default. + + + +```hcl +module "my_mesh_gateway" { + ... + + wan_address = "" + wan_port = +} +``` + + + +Mesh gateways route L4 TCP connections and do not terminate mTLS sessions. If you manually configure [AWS Elastic Load Balancing](https://aws.amazon.com/elasticloadbalancing/) for ingress to a mesh gateway, you must use an [AWS Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html) or a [Classic Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/introduction.html). + +### ACLs + +When ACLs are enabled, configure the following options in the `gateway-task` module. + +| Option | Type | Description | +| --- | --- | --- | +| `acl` | Boolean | Set to `true` when ACLs are enabled. | +| `consul_server_hosts` | string | Specifies the HTTP `address` of the Consul server. Required for the mesh gateway task to log into Consul using the IAM auth method so that it can obtain its client and service tokens. | +| `consul_https_ca_cert_arn` | string | Specifies ARN of the Secrets Manager secret that contains the certificate for the Consul HTTPS API. | + + + +```hcl +module "my_mesh_gateway" { + ... + + acls = true + consul_server_hosts = "" + tls = true + consul_https_ca_cert_arn = "" +} +``` + + + +### WAN federation + +Configure the following options in the `gateway-task` to enable [WAN federation through mesh gateways](/consul/docs/east-west/mesh-gateway/enable). + +| Option | Type | Description | +| --- | --- | --- | +| `consul_datacenter` | string | Specifies the name of the local Consul datacenter. | +| `consul_primary_datacenter` | string | Specifies the name of the primary Consul datacenter. | +| `enable_mesh_gateway_wan_federation` | Boolean | Set to `true` to enable WAN federation. | +| `enable_acl_token_replication` | Boolean | Set to `true` to enable ACL token replication and allow the creation of local tokens secondary datacenters. | + +The following example shows how to configure the `gateway-task` module. + + + +```hcl +module "my_mesh_gateway" { + ... + + enable_mesh_gateway_wan_federation = true +} +``` + + + +When federating Consul datacenters over the WAN with ACLs enabled, [ACL Token replication](/consul/docs/secure/acl/token/federation) must be enabled on all server and client agents in all datacenters. \ No newline at end of file diff --git a/website/content/docs/deploy/server/ecs/manual.mdx b/website/content/docs/deploy/server/ecs/manual.mdx new file mode 100644 index 000000000000..20bf50f45ff9 --- /dev/null +++ b/website/content/docs/deploy/server/ecs/manual.mdx @@ -0,0 +1,343 @@ +--- +layout: docs +page_title: Deploy Consul to ECS manually +description: >- + Manually install Consul on Amazon Web Services ECS by using the Docker `consul-ecs` image to create task definitions that include required containers. Learn how to configure task definitions with example configurations. +--- + +# Deploy Consul to ECS manually + +The following instructions describe how to use the `consul-ecs` Docker image to manually create the ECS task definition without Terraform. 
If you intend to peer the service mesh to multiple Consul datacenters or partitions, you must use the Consul ECS Terraform module to install your service mesh on ECS. There is no manual process for deploying a mesh gateway to an ECS cluster. + +## Requirements + +You should have some familiarity with AWS ECS. Refer to [What is Amazon Elastic Container Service](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html) for details. + +### Secure configuration requirements + +You must meet the following requirements and prerequisites to enable security features in Consul service mesh: + +- Enable [TLS encryption](https://developer.hashicorp.com/consul/docs/security/encryption#rpc-encryption-with-tls) on your Consul servers so that they can communicate securely with Consul dataplane containers over gRPC. +- Enable [access control lists (ACLs)](/consul/docs/secure/acl) on your Consul servers. ACLs provide authentication and authorization for access to Consul servers on the mesh. +- You should be familiar with specifying sensitive data on ECS. Refer to [Passing sensitive data to a container](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data.html) in the AWS documentation for additional information. + +You should be familiar with configuring Consul's secure features, including how to create ACL tokens and policies. Refer to the following resources for additional information: +- [Create a service token](/consul/docs/secure/acl/token/service) +- [Day 1: Security tutorial](https://developer.hashicorp.com/consul/tutorials/security) + +Consul requires a unique IAM role for each ECS task family. Task IAM roles cannot be shared by different task families because the task family is unique to each Consul service. + +## Configure ECS task definition file + +Create a JSON file for the task definition. The task definition is the ECS blueprint for your software services on AWS. Refer to the [ECS task definitions in the AWS documentation](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definitions.html) for additional information. + +In addition to your application container, add configurations to your task definition that create the following Consul containers: + +- Dataplane container +- Control-plane container +- ECS controller container + +## Top-level fields + +The following table describes the top-level fields you must include in the task definition: + +| Field name | Description | Type | +| --- | --- | --- | +| `family` | The task family name. This is used as the Consul service name by default. | string | +| `networkMode` | Must be `awsvpc`, which is the only network mode supported by Consul on ECS. | string | +| `volumes` | Volumes on the host for sharing configuration between containers for initial task setup. You must define a `consul_data` and `consul_binary` bind mount. Bind mounts can be mounted into one or more containers in order to share files among containers. For Consul on ECS, certain binaries and configuration are shared among containers during task startup. | list | +| `containerDefinitions` | Defines the application container that runs in the task. Refer to [Define your application container](#define-your-application-container).
| list | + +The following example shows the top-level fields: + +```json +{ + "family": "my-example-client-app", + "networkMode": "awsvpc", + "volumes": [ + { + "name": "consul_data" + }, + { + "name": "consul_binary" + } + ], + "containerDefinitions": [...], + "tags": [ + { + "key": "consul.hashicorp.com/mesh", + "value": "true" + }, + { + "key": "consul.hashicorp.com/service-name", + "value": "example-client-app" + } + ] +} +``` + +## Configure task tags + +The `tags` list must include the following tags if you are using the ECS controller in a [secure configuration](/consul/docs/ecs/deploy/manual#secure-configuration-requirements). +Without these tags, the ACL controller is unable to provision a service token for the task. + +| Tag | Description | Type | Default | +| --- | --- | --- | --- | +| `consul.hashicorp.com/mesh` | Enables the ECS controller. Set to `false` to disable the ECS controller. | String | `true` | +| `consul.hashicorp.com/service-name` | Specifies the name of the Consul service associated with this task. Required if the service name is different than the task `family`. | String | None | +| `consul.hashicorp.com/partition` | Specifies the Consul admin partition associated with this task. | String | `default` | +| `consul.hashicorp.com/namespace` | Specifies the name of the Consul namespace associated with this task. | String | `default` | + +## Define your application container + +Specify your application container configurations in the `containerDefinitions` field. The following table describes all `containerDefinitions` fields: + +| Field name | Description | Type | +| --- | --- | --- | +| `name` | The name of your application container. | string | +| `image` | The container image used to run your application. | string | +| `essential` | Must be `true` to ensure the health of your application container affects the health status of the task. | boolean | +| `dependsOn` | Specifies container dependencies that ensure your application container starts after service mesh setup is complete. Refer to [Application container dependency configuration](#application-container-dependency-configuration) for details. | list | + +Refer to the [ECS Task Definition](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html) documentation for a complete reference. + +### Application container dependency configuration + +The control-plane and the dataplane containers are dependencies that enforce a specific startup order. The settings ensure that your application container starts after the control-plane container finishes setting up the task and after the dataplane is ready to proxy traffic between this task and the service mesh. + +The `dependsOn` list must include the following maps: + +```json +[ + { + "containerName": "consul-ecs-control-plane", + "condition": "SUCCESS" + }, + { + "containerName": "consul-dataplane", + "condition": "HEALTHY" + } +] +``` + +## Configure the dataplane container + +The dataplane container runs Envoy proxy for Consul service mesh. Specify the fields described in the following table to declare a dataplane container: + +| Field name | Description | Type | +| --- | --- | --- | +| `name` | Specifies the name of the container. This must be `consul-dataplane`. | string | +| `image` | Specifies the Envoy image. This must be a [supported version of Envoy](/consul/docs/connect/proxies/envoy#supported-versions).
| string | +| `dependsOn` | Specifies container dependencies that ensure the dataplane container starts after the control-plane container has written the Envoy bootstrap configuration file. Refer to [Dataplane container dependency configuration](#dataplane-container-dependency-configuration) for details. | list | +| `healthCheck` | Must be set as shown above to monitor the health of Envoy's primary listener port, which ties into container dependencies and startup ordering. | map | +| `mountPoints` | Specifies a shared volume so that the dataplane container can access and consume the Envoy configuration file that the control-plane container generates. The keys and values in this configuration must be defined as described in [Dataplane container dependency configuration](#dataplane-container-dependency-configuration). | list | +| `ulimits` | The nofile ulimit must be raised to a sufficiently high value so that Envoy does not fail to open sockets. | list | +| `entrypoint` | Must be set to the custom Envoy entrypoint `consul-ecs envoy-entrypoint` to facilitate graceful shutdown. | list | +| `command` | Specifies the startup command to pass the bootstrap configuration to Envoy. | list | + +### Dataplane container dependency configuration + +The `dependsOn` configuration ensures that the dataplane container starts after the control-plane container successfully writes the Envoy bootstrap configuration file to the shared volume. The `dependsOn` list must include the following map: + +```json +[ + { + "containerName": "consul-ecs-control-plane", + "condition": "SUCCESS" + } +] +``` + +### Dataplane container volume mount configuration + +The `mountPoints` configuration defines a volume and path where the dataplane container can consume the Envoy bootstrap configuration file generated by the control-plane container. You must specify the following keys and values: + +```json +{ + "mountPoints": [ + { + "readOnly": true, + "containerPath": "/consul", + "sourceVolume": "consul_data" + } + ] +} +``` + +## Configure the control-plane container + +The control-plane container is the first Consul container to start and set up the instance for Consul service mesh. It registers the service and proxy for this task with Consul and writes the Envoy bootstrap configuration to a shared volume. + +Specify the fields described in the following table to declare the control-plane container: + +| Field name | Description | Type | +| --- | --- | --- | +| `name` | Specifies the name of the container. This must be `consul-ecs-control-plane` to match the `dependsOn` configuration in the other containers. | string | +| `image` | Specifies the `consul-ecs` image. Specify the following public AWS registry to avoid rate limits: `public.ecr.aws/hashicorp/consul-ecs` | string | +| `mountPoints` | Specifies a shared volume to store the Envoy bootstrap configuration file that the dataplane container can access and consume. The keys and values in this configuration must be defined as described in [Control-plane shared volume configuration](#control-plane-shared-volume-configuration). | list | +| `command` | Set to `["control-plane"]` so that the container runs the `control-plane` command. | list | +| `environment` | Specifies the `CONSUL_ECS_CONFIG_JSON` environment variable, which configures the container to connect to the Consul servers. Refer to [Control-plane to Consul servers configuration](#control-plane-to-Consul-servers-configuration) for details.
| list | + +### Control-plane shared volume configuration + +The `mountPoints` configuration defines a volume and path where the control-plane container stores the Envoy bootstrap configuration file required to start Envoy. You must specify the following keys and values: + +```json +"mountPoints": [ + { + "readOnly": false, + "containerPath": "/consul", + "sourceVolume": "consul_data" + }, + { + "readOnly": true, + "containerPath": "/bin/consul-inject", + "sourceVolume": "consul_binary" + } +], +``` + +### Control-plane to Consul servers configuration + +Provide Consul server connection settings to the mesh task module so that the module can configure the control-plane and ECS controller containers to connect to the servers. + +1. In your `variables.tf` file, define variables for the host URL and TLS settings for gRPC and HTTP traffic. Refer to the [mesh task module reference](https://registry.terraform.io/modules/hashicorp/consul-ecs/aws/latest/submodules/mesh-task?tab=inputs) for information about the variables you can define. In the following example, the Consul server address is defined in the `consul_server_hosts` variable: + + ```hcl + variable "consul_server_hosts" { + description = "Address of Consul servers." + type = string + } + + ``` +1. Add an `environment` block to the control-plane and ECS controller containers definition. +1. Set the `environment.name` field to the `CONSUL_ECS_CONFIG_JSON` environment variable and the value to `local.encoded_config`. + + ```hcl + environment = [ + { + name = "CONSUL_ECS_CONFIG_JSON", + value = local.encoded_config + } + ] + ``` + When you apply the configuration, the mesh task module interpolates the server configuration variables, builds a `config.tf` file, and injects the settings into the appropriate containers. + +For additional information about the `config.tf` file, refer to the [JSON schema reference documentation](/consul/docs/reference/ecs/server-json). + +## Register the task definition configuration + +Register the task definition with your ECS cluster using the AWS Console, AWS CLI, or another method supported by AWS. You must also create an ECS Service to start tasks using the task definition. Refer to the following ECS documentation for information on how to register the task definition: + +- [Creating a task definition using the classic console](https://docs.aws.amazon.com/AmazonECS/latest/userguide/create-task-definition-classic.html) +- [Register task definition CLI](https://docs.aws.amazon.com/cli/latest/reference/ecs/register-task-definition.html) + +## Deploy the controller container + +The controller container runs in a separate ECS task and is responsible for Consul security features. The controller uses the AWS IAM auth method to enable ECS tasks to automatically obtain Consul ACL tokens when the task starts up. Refer to [Consul security components](/consul/docs/ecs/architecture#consul-security-components) for additional information. + +Verify that you have completed the prerequisites described in [Secure configuration requirements](#secure-configuration-requirements) and configure the following components to enable Consul security features. + +- ACL policy +- ECS task role +- Auth method for service tokens +### Create an ACL policy + +On the Consul server, create a policy that grants the following access for the controller: + +- `acl:write` +- `operator:write` +- `node:write` +- `service:write` + +The policy allows Consul to generate a token linked to the policy. 
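The following is a minimal sketch of creating such a policy with the Consul CLI. The policy name, description, and the broad `node_prefix`/`service_prefix` rules are illustrative assumptions; scope the rules to your environment as needed.

```shell-session
$ consul acl policy create \
    -name "ecs-controller" \
    -description "Policy for the Consul ECS controller" \
    -rules - <<'EOF'
# acl:write and operator:write let the controller manage tokens and operator endpoints
acl      = "write"
operator = "write"
# node:write and service:write across all node and service names
node_prefix "" {
  policy = "write"
}
service_prefix "" {
  policy = "write"
}
EOF
```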
Refer to [Create a service token](/consul/docs/secure/acl/token/service) for instructions. + +### ECS task role + +1. Create an ECS task role and configure an `iam:GetRole` permission so that it can fetch itself. Refer to [IAM Policies](/consul/docs/security/acl/auth-methods/aws-iam#iam-policies) for instructions. +1. Add an `consul.hashicorp.com.service-name` tag to the task role that contains the Consul service name for the application in the task. +1. When using Consul Enterprise, you must also include the `consul.hashicorp.com.namespace` tag to specify the namespace to register the service in. + +### Configure the auth method for service tokens + +Run the `consul acl auth-method create` command on a Consul server to create an instance of the auth method for service tokens. + +The following example command configures the auth method to associate a service identity +to each token created during login to this auth method instance. + +```shell-session +$ consul acl auth-method create \ + -type aws-iam \ + -name iam-ecs-service-token \ + -description="AWS IAM auth method for ECS service tokens" \ + -config '{ + "BoundIAMPrincipalArns": ["arn:aws:iam:::role/consul-ecs/*"], + "EnableIAMEntityDetails": true, + "IAMEntityTags": [ + "consul.hashicorp.com.service-name" + ] +}' +``` + +If you want to create these resources in a particular partition, include the `-partition ` option when creating Consul ACL roles, policies, auth methods, and binding rules with the Consul CLI. + +You must specify the following flags in the command: + +| Flag | Description | Type | +| --- | --- | --- | +| `-type` | Must be `aws-iam`. | String | +| `-name` | Specify a name for the auth method. Must be unique among all auth methods. | String | +| `-description` | Specify a description for the auth method. | String | +| `-config` | A JSON string containing the configuration for the auth method. Refer to [Auth method `-config` parameter](#auth-method–config-parameter) for details. | String | +| `-partition` | Specifies an admin partition that the auth method is valid for. | String | + +#### Auth method `-config` parameter + +You must specify the following configuration in the `-config` flag: + +| Flag | Description | Type | +| --- | --- | --- | +| `BoundIAMPrincipalArns` | Specifies a list of trusted IAM roles. We recommend using a wildcard to trust IAM roles at a particular path. | List | +| `EnableIAMEntityDetails` | Must be `true` so that the auth method can retrieve IAM role details, such as the role path and role tags. | Boolean | +| `IAMEntityTags` | Specifies a list of IAM role tags to make available to binding rules. Must include the service name tag. | List | + +Refer to the [auth method configuration parameters documentation](/consul/docs/security/acl/auth-methods/aws-iam#config-parameters) for additional information. + +### Create the binding rule + +Run the `consul acl binding-rule create` command on a Consul server to create a binding rule. The rule associates a service identity with each token created on successful login to this instance of the auth method. + +In the following example, Consul takes the service identity name from the `consul.hashicorp.com.service-name` tag specified for authenticating IAM role identity. 
+ +```shell-session +$ consul acl binding-rule create \ + -method iam-ecs-service-token \ + -description 'Bind a service identity from IAM role tags for ECS service tokens' \ + -bind-type service \ + -bind-name '${entity_tags.consul.hashicorp.com.service-name}' +``` + +Note that you must include the `-partition ` option to the Consul CLI when creating Consul ACL roles, policies, auth methods, and binding rules, in order to create these resources in a particular partition. + +### Configure storage for secrets + +Secure and store Consul Server CA certificates so that they are available to ECS tasks. You may require more than one certificate for different Consul protocols. Refer to the following documentation for instructions on how to store and pass secrets to ECS tasks: + +- [AWS Systems Manager Parameter Store](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data-parameters.html) +- [AWS Secrets Manager](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data-secrets.html) + +You can reference stored secrets using their ARN. The examples show ARNs for secrets stored in AWS Secrets Manager: + +- **Consul Server CA Cert for RPC**: `arn:aws:secretsmanager:us-west-2:000000000000:secret:my-consul-ca-cert` +- **Consul Server CA Cert for HTTPS**: `arn:aws:secretsmanager:us-west-2:000000000000:secret:my-consul-https-ca-cert` + +### Configure audit logging + +You can configure Consul servers connected to your ECS workloads to capture a log of authenticated events. Refer to [Audit Logging](/consul/docs/monitor/log/audit) for details. + +## Next steps + +After deploying the Consul service mesh infrastructure, you must still define routes between service instances as well as configure the bind address for your applications so that they only receive traffic through the mesh. Refer to the following topics: + +- [Configure routes between ECS tasks](/consul/docs/connect/ecs) +- [Configure the ECS task bind address](/consul/docs/register/service/ecs/task-bind-address) \ No newline at end of file diff --git a/website/content/docs/enterprise/fips.mdx b/website/content/docs/deploy/server/fips.mdx similarity index 76% rename from website/content/docs/enterprise/fips.mdx rename to website/content/docs/deploy/server/fips.mdx index dc4d2ed3bc9e..108efd59e7ce 100644 --- a/website/content/docs/enterprise/fips.mdx +++ b/website/content/docs/deploy/server/fips.mdx @@ -1,44 +1,40 @@ --- layout: docs -page_title: FIPS 140-2 +page_title: Deploy FIPS 140-2 compliant Consul servers description: >- - A version of Consul compliant with FIPS 140-2 is available to Enterprise users. Learn about where to find FIPS-compliant versions of Consul, its usage restrictions, technical details, and Leidos validation. + A version of Consul compliant with FIPS 140-2 is available to Enterprise users. Learn about where to find FIPS-compliant versions of Consul, as well as usage restrictions and technical details. --- -# FIPS 140-2 - - - This feature requires Consul Enterprise. - +# Deploy FIPS 140-2 compliant Consul servers Builds of Consul Enterprise marked with a `fips1402` feature name include built-in support for FIPS 140-2 compliance. -To use this feature, you must have an [active or trial license for Consul Enterprise](/consul/docs/enterprise/license/overview). To start a trial, contact HashiCorp sales. +To use this feature, you must have an [active or trial license for Consul +Enterprise](/consul/docs/enterprise/license). To start a trial, contact +HashiCorp sales. 
+ + ## Using FIPS 140-2 Consul Enterprise -FIPS 140-2 builds of Consul Enterprise behave in the same way as non-FIPS builds. There are no restrictions on Consul algorithms and ensuring that Consul remains in a FIPS-compliant mode of operation is your responsibility. To maintain FIPS-compliant operation, you must [ensure that TLS is enabled](/consul/tutorials/archive/tls-encryption-secure) so that communication is encrypted. Consul products surface some helpful warnings where settings are insecure. +FIPS 140-2 builds of Consul Enterprise behave in the same way as non-FIPS builds. There are no restrictions on Consul algorithms, and ensuring that Consul remains in a FIPS-compliant mode of operation is your responsibility. To maintain FIPS-compliant operation, you must [ensure that TLS is enabled](/consul/docs/secure/encryption/tls/enable/new/builtin) so that communication is encrypted. Consul products surface some helpful warnings where settings are insecure. -Encryption is disabled in Consul Enterprise by default. As a result, Consul may transmit sensitive control plane information. You must ensure that gossip encryption and mTLS is enabled for all agents when running Consul with FIPS-compliant settings. In addition, be aware that TLS v1.3 does not work with FIPS 140-2, as HKDF is not a certified primitive. +Encryption is disabled in Consul Enterprise by default. As a result, Consul may transmit sensitive control plane information. You must ensure that gossip encryption and mTLS is enabled for all agents when running Consul with FIPS-compliant settings. In addition, be aware that TLSv1.3 does not work with FIPS 140-2, as HKDF is not a certified primitive. HashiCorp is not a NIST-certified testing laboratory and can only provide general guidance about using Consul Enterprise in a FIPS-compliant manner. We recommend consulting an approved auditor for further information. The FIPS 140-2 variant of Consul uses separate binaries that are available from the following sources: -- From the [HashiCorp Releases page](https://releases.hashicorp.com/consul), releases ending with the `+ent.fips1402` suffix. -- From the [Docker Hub `hashicorp/consul-enterprise-fips`](https://hub.docker.com/r/hashicorp/consul-enterprise-fips) container repository. -- From the [AWS ECR `hashicorp/consul-enterprise-fips`](https://gallery.ecr.aws/hashicorp/consul-enterprise-fips) container repository. -- From the [Red Hat Access `hashicorp/consul-enterprise-fips`](https://catalog.redhat.com/software/containers/hashicorp/consul-enterprise-fips/628d50e37ff70c66a88517ea) container repository. - -The above naming conventions, which append `.fips1402` to binary names and tags, and `-fips` to registry names, also apply to `consul-k8s`, `consul-k8s-control-plane`, `consul-dataplane`, and `consul-ecs`, which are packaged separately from Consul Enterprise. - -### Cluster peering support +- The [HashiCorp Releases page](https://releases.hashicorp.com/consul), releases ending with the `+ent.fips1402` suffix. +- The [Docker Hub `hashicorp/consul-enterprise-fips`](https://hub.docker.com/r/hashicorp/consul-enterprise-fips) container repository. +- The [AWS ECR `hashicorp/consul-enterprise-fips`](https://gallery.ecr.aws/hashicorp/consul-enterprise-fips) container repository. +- The [Red Hat Access `hashicorp/consul-enterprise-fips`](https://catalog.redhat.com/software/containers/hashicorp/consul-enterprise-fips/628d50e37ff70c66a88517ea) container repository. 
-A Consul cluster running FIPS variants of Consul can peer with any combination of standard (non-FIPS) Consul clusters and FIPS Consul clusters. Consul supports all cluster peering features between FIPS clusters and non-FIPS clusters. +The aforementioned naming conventions, which append `.fips1402` to binary names and tags, and `-fips` to registry names, also apply to `consul-k8s`, `consul-k8s-control-plane`, `consul-dataplane`, and `consul-ecs`, which are packaged separately from Consul Enterprise. ### Usage restrictions -When using Consul Enterprise with FIPS 140-2, be aware of the following operation restrictions: +When using Consul Enterprise with FIPS 140-2, be aware of the following operation restrictions. #### Migration restrictions @@ -80,7 +76,7 @@ Depending on your Consul runtime, there are additional requirements for using FI If using Consul on VMs, you must use a FIPS compliant version of Envoy. Contact HashiCorp sales to learn how to obtain a FIPS compliant version of Envoy. -### Consul-k8s and Helm +### consul-k8s and Helm When deploying the FIPS builds of Consul on Kubernetes using `consul-k8s` or Helm, you must ensure that the Helm chart is updated to use FIPS builds of Consul Enterprise, Consul Dataplane, and Envoy images. @@ -92,16 +88,24 @@ Consul's FIPS 140-2 products on Windows use the CNGCrypto integration in Microso To ensure your build of Consul Enterprise includes FIPS support, confirm that a line with `FIPS: Enabled` appears when you run a `version` command. For example, the following message appears for Linux users: -```log hideClipboard + + +```log FIPS: FIPS 140-2 Enabled, crypto module boringcrypto ``` + + The following message appears for Windows users: -```log hideClipboard + + +```log FIPS: FIPS 140-2 Enabled, crypto module cngcrypto ``` + + FIPS 140-2 Linux binaries depend on cgo, which require that a GNU C Library (glibc) Linux distribution be used to run Consul. Refer to [instructions for Windows FIPS mode](https://github.com/microsoft/go/blob/microsoft/main/eng/doc/fips/README.md#windows-fips-mode-cng) for more information on running CNGCrypto-enabled Go binaries in FIPS mode. The NIST Cryptographic Module Validation Program certifications and accompanying security policies for BoringCrypto and CNG are available through the following external links: @@ -111,7 +115,7 @@ The NIST Cryptographic Module Validation Program certifications and accompanying ### Validating FIPS crypto modules -To validate that a FIPS 140-2 Linux binary correctly includes BoringCrypto, run `go tool nm` on the binary to get a symbol dump. On FIPS-enabled builds, many results appear, as in the following example: +To validate that a FIPS 140-2 Linux binary correctly includes BoringCrypto, run `go tool nm` on the binary to get a symbol dump. On FIPS-enabled builds, many results appear, as in the following example. ```shell-session $ go tool nm consul | grep -i goboringcrypto @@ -122,7 +126,7 @@ $ go tool nm consul | grep -i goboringcrypto 401560 T _cgo_6880f0fbb71e_Cfunc__goboringcrypto_AES_set_decrypt_key ``` -Similarly, on a FIPS Windows binary, run `go tool nm` on the binary to get a symbol dump, and then search for `go-crypto-winnative`. +Similarly, on a FIPS Windows binary, run `go tool nm` on the binary to get a symbol dump, and then search for `go-crypto-winnative`. On both Linux and Windows non-FIPS builds, the search output yields no results. 
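As a hedged sketch, the Windows check might look like the following; the binary name `consul.exe` is an illustrative assumption, and the command assumes a Unix-style shell such as Git Bash:

```shell-session
$ go tool nm consul.exe | grep -i go-crypto-winnative
```

A FIPS-enabled Windows build prints matching symbols, while a non-FIPS build prints nothing.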
@@ -143,4 +147,6 @@ In 2024, Leidos certified the integration of FIPS 140-2 cryptographic module [Bo - [`consul-k8s-control-plane_1.2.0+fips1402`](https://releases.hashicorp.com/consul-k8s-control-plane/1.2.0+fips1402/) - [`consul-k8s-control-plane_1.2.1+fips1402`](https://releases.hashicorp.com/consul-k8s-control-plane/1.2.2+fips1402/) -For more information about verified platform architectures and confirmed feature support, [review the Leidos certification letter](https://www.datocms-assets.com/2885/1715791547-boringcrypto_compliance_letter_signed.pdf). \ No newline at end of file +For more information about verified platform architectures and confirmed feature +support, [review the Leidos certification +letter](https://www.datocms-assets.com/2885/1715791547-boringcrypto_compliance_letter_signed.pdf). diff --git a/website/content/docs/deploy/server/hcp.mdx b/website/content/docs/deploy/server/hcp.mdx new file mode 100644 index 000000000000..607941b4e6b7 --- /dev/null +++ b/website/content/docs/deploy/server/hcp.mdx @@ -0,0 +1,46 @@ +--- +layout: docs +page_title: Deploy servers on HCP Consul +description: >- + This page provides an overview for deploying clusters of Consul servers using HCP Consul. You can deploy HashiCorp-managed clusters or create and link a new self-managed cluster. +--- + +# Deploy servers on HCP Consul + +@include 'alerts/hcp-dedicated-eol.mdx' + +This page provides an overview for deploying Consul servers using HCP Consul. You can deploy HashiCorp-managed clusters or you can use guided workflows to create a self-managed cluster and link it to HCP Consul Central. + +For more information about HashiCorp-managed clusters and how they differ from self-managed clusters, refer to [cluster management](/hcp/docs/consul/cluster-management) in the HCP Consul documentation. + +## Introduction + +Creating a HashiCorp-managed server cluster simplifies the overall process of bootstrapping Consul servers. Additional cluster maintenance operations are also simplified through the HCP Consul UI. The HCP platform automates the following parts of a server's lifecycle: + +- Generating and distributing a gossip key between servers +- Starting the certificate authority and distributing TLS certificates to servers +- Bootstrapping the ACL system and saving tokens to a secure Vault environment +- Rotating expired TLS certificates after expiration +- Upgrading servers to new versions of Consul + +You can also link your own self-managed Consul server clusters, or clusters you install, configure, and manage yourself, to HCP Consul. Using self-managed clusters with HCP Consul can help you streamline network operations while maintaining control over your clusters. It also enables HCP Consul Central features, such as observability dashboards, for your self-managed clusters. + +## Guidance + +The following resources are available in the [HCP Consul documentation](/hcp/docs/consul) to help you deploy HashiCorp-managed servers. + +### Tutorials + +- [Deploy HCP Consul](/consul/tutorials/get-started-hcp/hcp-gs-deploy) demonstrates the end-to-end deployment for a development tier cluster using the automated Terraform workflow. + +### Concepts and reference + +- [Cluster management](/hcp/docs/consul/concepts/cluster-management) explains the difference between HashiCorp-managed clusters and self-managed clusters. +- [Cluster tiers](/hcp/docs/consul/concepts/cluster-tiers) explains how the tier you select when creating a HashiCorp-managed cluster determines its multi-cloud functionality. 
- [Cluster configuration reference](/hcp/docs/consul/hcp-managed/reference) provides reference information about cluster properties, including the [ports HashiCorp-managed servers listen on](/hcp/docs/consul/hcp-managed/reference#cluster-server-ports).

### Usage documentation

- [Create a HashiCorp-managed cluster](/hcp/docs/consul/hcp-managed/create)
- [Access a HashiCorp-managed cluster](/hcp/docs/consul/hcp-managed/access)
- [Link new self-managed clusters](/hcp/docs/consul/self-managed/new)
diff --git a/website/content/docs/deploy/server/index.mdx b/website/content/docs/deploy/server/index.mdx
new file mode 100644
index 000000000000..b0f69265a38a
--- /dev/null
+++ b/website/content/docs/deploy/server/index.mdx
@@ -0,0 +1,71 @@
---
layout: docs
page_title: Deploy Consul servers overview
description: >-
  Learn about the process to deploy Consul servers and find guidance for runtime-specific resources for deploying servers on virtual machines, Kubernetes, and AWS ECS.
---

# Deploy Consul servers overview

This topic provides an overview of the processes involved in deploying a Consul server agent and joining it to your control plane. The specific steps vary depending on your runtime or cloud provider.

For information about how to deploy Consul client agents or dataplanes, refer to [Configure Consul workloads for your dataplane](/consul/docs/deploy/workload).

## Introduction

Consul server agents are the backbone of a Consul deployment.

The basic process to deploy the first server agent in a cluster consists of the following steps:

1. Create a Consul server configuration with bootstrap mode enabled.
1. Start the Consul server.
1. Configure access to the Consul server with the CLI or HTTP API.

The process to deploy additional server agents in a cluster consists of the following steps:

1. Update the `retry_join` stanza of the server agent configuration with the address of the bootstrapped server agent.
1. Start the Consul server agents using the updated configuration.
1. Verify cluster members with the `consul members` command.
1. Remove the servers from bootstrap mode.

After you deploy a Consul cluster, you must secure it to prepare it for production environments. Refer to [Secure Consul](/consul/docs/secure) for more information about Consul's security model.

### Automatic agent discovery

You can configure the `retry_join` stanza so that when a Consul agent starts, it automatically joins a cluster. This ability is called _cloud auto-join_. Configuration requirements vary by cloud provider. For more information, refer to [automatically join clusters to a cloud provider](/consul/docs/deploy/server/cloud-auto-join).

## Guidance

Runtime-specific instructions are available to help you deploy Consul server agents.

### Virtual machines (VMs)

For an overview of the process to deploy a server, refer to the [Deploy Consul on VMs overview](/consul/docs/deploy/server/vm).

The following guides are also available:

- [Consul server requirements on VMs](/consul/docs/deploy/server/vm/requirements)
- [Bootstrap a Consul cluster](/consul/docs/deploy/server/vm/bootstrap)

### Kubernetes

For an overview of our documentation related to deploying Consul servers on Kubernetes clusters, refer to [Deploy a Consul server on Kubernetes](/consul/docs/deploy/server/k8s).
+ +Platform-specific guidance is also available: + +- [Deploy Consul on AKS](/consul/docs/deploy/server/k8s/platform/aks) +- [Deploy Consul on EKS](/consul/docs/deploy/server/k8s/platform/eks) +- [Deploy Consul on GKE](/consul/docs/deploy/server/k8s/platform/gke) +- [Deploy Consul on kind](/consul/docs/deploy/server/k8s/platform/kind) +- [Deploy Consul on Minikube](/consul/docs/deploy/server/k8s/platform/minikube) +- [Deploy Consul on OpenShift](/consul/docs/deploy/server/k8s/platform/openshift) +- [Deploy Consul on self-hosted Kubernetes](/consul/docs/deploy/server/k8s/platform/self-hosted) + +### AWS ECS + +- [Deploy Consul to ECS using the Terraform module](/consul/docs/deploy/server/ecs) +- [Deploy Consul to ECS manually](/consul/docs/deploy/server/ecs/manual) + +### HCP Consul Dedicated + +To learn about how to deploy HCP Consul Dedicated servers, refer to [Deploy servers on HCP Consul](/consul/docs/deploy/server/hcp). You can access the UI workflow to deploy Dedicated clusters from [the HCP portal](https://portal.cloud.hashicorp.com). \ No newline at end of file diff --git a/website/content/docs/deploy/server/k8s/consul-k8s.mdx b/website/content/docs/deploy/server/k8s/consul-k8s.mdx new file mode 100644 index 000000000000..f8179272be0e --- /dev/null +++ b/website/content/docs/deploy/server/k8s/consul-k8s.mdx @@ -0,0 +1,97 @@ +--- +layout: docs +page_title: Install Consul on Kubernetes with the Consul K8s CLI +description: >- + You can use the Consul K8s CLI tool to schedule Kubernetes deployments instead of using Helm. Learn how to download and install the tool to interact with Consul on Kubernetes using the `consul-k8s` command. +--- + +# Install Consul on Kubernetes with the Consul K8s CLI + +This topic describes how to install Consul on Kubernetes using the Consul K8s CLI (`consul-k8s`). The Consul K8s CLI lets you quickly install and interact with Consul on Kubernetes. Use the Consul K8s CLI tool to install Consul on Kubernetes if you are deploying a single cluster. + +We recommend using the [Helm chart installation method](/consul/docs/deploy/server/k8s/helm) if you are installing Consul on Kubernetes for multi-cluster deployments that involve cross-partition or cross datacenter communication. + +## Requirements + +You must meet the following requirements to install Consul on Kubernetes with `consul-k8s`: + +- `kubectl` configured to point to the Kubernetes cluster you want to install Consul on +- `consul-k8s` installed on your local machine. Refer to [Consul on Kubernetes CLI reference](/consul/docs/reference/cli/consul-k8s#install-the-cli) for instructions. + + + + You must install the correct version of the CLI for your Consul on Kubernetes deployment. To deploy a previous version of Consul on Kubernetes, download the specific version of the CLI that matches the version of the control plane that you would like to deploy. Refer to the [compatibility matrix](/consul/docs/upgrade/k8s/compatibility) for details. + + + + +## Install Consul on Kubernetes + +Issue the `install` subcommand to install Consul on your existing Kubernetes cluster. If you do not include any additional options, the `consul-k8s` CLI installs Consul on Kubernetes using the default settings from the Consul Helm chart values. + +You can specify options to configure your Consul install. For example, you can include the `-auto-approve` option set to `true` to proceed with the installation if the pre-install checks pass. 
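As a hedged sketch, the following command skips the interactive confirmation and applies overrides from a values file; the `values.yaml` file name is an illustrative assumption:

```shell-session
$ consul-k8s install -auto-approve -f values.yaml
```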
Refer to the [Consul K8s CLI reference](/consul/docs/reference/cli/consul-k8s) for details about all commands and available options. + +The following example installs Consul on Kubernetes with service mesh and CRDs enabled. + +```shell-session +$ consul-k8s install +==> Pre-Install Checks +No existing installations found. + ✓ No previous persistent volume claims found + ✓ No previous secrets found +==> Consul Installation Summary + Installation name: consul + Namespace: consul + Overrides: + connectInject: + enabled: true + + Proceed with installation? (y/N) y + +==> Running Installation + ✓ Downloaded charts +--> creating 1 resource(s) +--> creating 45 resource(s) +--> beginning wait for 45 resources with timeout of 10m0s + ✓ Consul installed into namespace "consul" +``` + +The pre-install checks may fail if existing `PersistentVolumeClaims` (PVC) are detected. Refer to the [uninstall instructions](/consul/docs/deploy/server/k8s/uninstall) for information about removing PVCs. + +## Custom installation + +You can create a values file and specify parameters to overwrite the default Helm chart installation. Add the `-f` and specify your values file to implement your configuration, for example: + +```shell-session +$ consul-k8s install -f values.yaml +``` + +@include 'alerts/k8s-namespace.mdx' + +### Install Consul on OpenShift clusters + +[Red Hat OpenShift](https://www.redhat.com/en/technologies/cloud-computing/openshift) is a security-conscious, opinionated wrapper for Kubernetes. To install Consul on OpenShift-managed Kubernetes, set `global.openshift.enabled=true` in your [custom installation](#custom-installation) values file: + +```yaml +global: + openshift: + enabled: true +``` + +Refer to [`openshift` in the Helm chart reference](/consul/docs/reference/k8s/helm#v-global-openshift) for additional information. + +## Check the Consul cluster status + +Issue the `consul-k8s status` command to view the status of the installed Consul cluster. + +```shell-session +$ consul-k8s status + +==> Consul-K8s Status Summary + NAME | NAMESPACE | STATUS | CHARTVERSION | APPVERSION | REVISION | LAST UPDATED +---------+-----------+----------+--------------+------------+----------+-------------------------- + consul | consul | deployed | 0.40.0 | 1.14.0 | 1 | 2022/01/31 16:58:51 PST + +✓ Consul servers healthy (3/3) +✓ Consul clients healthy (3/3) +``` diff --git a/website/content/docs/deploy/server/k8s/enterprise.mdx b/website/content/docs/deploy/server/k8s/enterprise.mdx new file mode 100644 index 000000000000..f0cb0b929670 --- /dev/null +++ b/website/content/docs/deploy/server/k8s/enterprise.mdx @@ -0,0 +1,174 @@ +--- +layout: docs +page_title: Deploy Consul Enterprise on Kubernetes +description: >- + Consul Enterprise features are available when running Consul on Kubernetes. Learn how to apply your license in the Helm chart and return the license information with the `consul license get` command. +--- + +# Deploy Consul Enterprise on Kubernetes + +This topic describes how to install Consul Enterprise on Kubernetes using the Helm chart. It assumes you are storing your license as a Kubernetes Secret. If you would like to store the enterprise license in Vault, reference [Store Enterprise license in Vault](/consul/docs/deploy/server/k8s/vault/data/enterprise-license). + +## Requirements + +Find the license file that you received in your welcome email. It should have a `.hclic` extension. You will use the contents of this file to create a Kubernetes secret before installing the Helm chart. 
+ + + +If you cannot find your `.hclic` file, please contact your sales team or Technical Account Manager. + + + + +## Create Kubernetes secret + +Export an environment variable named `secret` with the contents of your Enterprise license file. + +```shell-session +$ secret=$(cat 1931d1f4-bdfd-6881-f3f5-19349374841f.hclic) +``` + +Create a Kubernetes secret named `consul-ent-license` and key `key` with the value of the license file. + +```shell-session +$ kubectl create secret generic consul-ent-license --from-literal="key=${secret}" +``` + +## Configure Helm chart + +In your `values.yaml`, change the value of `global.image` to one of the enterprise [release tags](https://hub.docker.com/r/hashicorp/consul-enterprise/tags). Enterprise images end with `-ent`. + + + +```yaml +global: + image: 'hashicorp/consul-enterprise:1.20.0-ent' +``` + + + +Define your Consul Enterprise secret. + + + + + +If you are using Consul v1.10+, add the name and key of the secret you just created to `server.enterpriseLicense`. + + + +```yaml +global: + image: 'hashicorp/consul-enterprise:1.20.0-ent' + enterpriseLicense: + secretName: 'consul-ent-license' + secretKey: 'key' +``` + + + + + + +If you are using a Consul version older than 1.10, use the following configuration with the name and key of the secret you just created. These values are required on top of your normal configuration. + +You must set `server.enterpriseLicense.enableLicenseAutoload` to `false`. + + + +```yaml +global: + image: 'hashicorp/consul-enterprise:1.8.3-ent' + enterpriseLicense: + secretName: 'consul-ent-license' + secretKey: 'key' + enableLicenseAutoload: false +``` + + + + + + + +## Install Consul Enterprise + +Now, install Consul Enterprise on your Kubernetes cluster using the updated Helm chart. + +```shell-session +$ helm install --wait hashicorp hashicorp/consul --values values.yaml +``` + +@include 'alerts/k8s-namespace.mdx' + +## Verify installation + +Once the cluster is up, you can verify the nodes are running Consul Enterprise by using the `consul license get` command. + +First, forward your local port 8500 to the Consul servers so you can run `consul` commands locally against the Consul servers in Kubernetes. + +```shell-session +$ kubectl port-forward service/hashicorp-consul-server 8500:8500 +``` + +In a separate tab, run the `consul license get` command. + +```shell-session +$ consul license get +License is valid +License ID: 1931d1f4-bdfd-6881-f3f5-19349374841f +Customer ID: b2025a4a-8fdd-f268-95ce-1704723b9996 +Expires At: 2020-03-09 03:59:59.999 +0000 UTC +Datacenter: * +Package: premium +Licensed Features: + Automated Backups + Automated Upgrades + Enhanced Read Scalability + Network Segments + Redundancy Zone + Advanced Network Federation +``` + +List the Consul servers. Notice the `+ent` in the `Build` column, indicating that the servers are running Consul Enterprise. + +```shell-session +$ consul members +Node Address Status Type Build Protocol DC Segment +hashicorp-consul-server-0 10.60.0.187:8301 alive server 1.10.0+ent 2 dc1 +hashicorp-consul-server-1 10.60.1.229:8301 alive server 1.10.0+ent 2 dc1 +hashicorp-consul-server-2 10.60.2.197:8301 alive server 1.10.0+ent 2 dc1 +``` + +### ACLs enabled + +If the commands return the following error message, your Consul deployment likely has ACLs enabled. + +```bash +Error getting license: invalid character 'r' looking for beginning of value +``` + +You need to specify your ACL token when running the `license get` command. 
First, assign the ACL token to the `CONSUL_HTTP_TOKEN` environment variable: + +```shell-session +$ export CONSUL_HTTP_TOKEN=$(kubectl get secrets/hashicorp-consul-bootstrap-acl-token --template='{{.data.token | base64decode }}') +``` + +Now, the token will be used when running Consul commands. + +```shell-session +$ consul license get +License is valid +License ID: 1931d1f4-bdfd-6881-f3f5-19349374841f +Customer ID: b2025a4a-8fdd-f268-95ce-1704723b9996 +Expires At: 2020-03-09 03:59:59.999 +0000 UTC +Datacenter: * +Package: premium +Licensed Features: + Automated Backups + Automated Upgrades + Enhanced Read Scalability + Network Segments + Redundancy Zone + Advanced Network Federation +``` diff --git a/website/content/docs/deploy/server/k8s/external.mdx b/website/content/docs/deploy/server/k8s/external.mdx new file mode 100644 index 000000000000..e82673bc418a --- /dev/null +++ b/website/content/docs/deploy/server/k8s/external.mdx @@ -0,0 +1,170 @@ +--- +layout: docs +page_title: Join Kubernetes Clusters to external Consul Servers +description: >- + Kubernetes clusters can be joined to existing Consul clusters in a much simpler way with the introduction of Consul Dataplane. Learn how to add Kubernetes Clusters into an existing Consul cluster and bootstrap ACLs by configuring the Helm chart. +--- + +# Join Kubernetes Clusters to external Consul Servers + +If you have a Consul cluster already running, you can configure your +Consul on Kubernetes installation to join this existing cluster. + +The below `values.yaml` file shows how to configure the Helm chart to install +Consul so that it joins an existing Consul server cluster. + +The `global.enabled` value first disables all chart components by default +so that each component is opt-in. + +Next, configure `externalServers` to point it to Consul servers. +The `externalServers.hosts` value must be provided and should be set to a DNS, an IP, +or an `exec=` string with a command returning Consul IPs. Please see [this documentation](https://github.com/hashicorp/go-netaddrs) +on how the `exec=` string works. +Other values in the `externalServers` section are optional. Please refer to +[Helm Chart configuration](/consul/docs/reference/k8s/helm#h-externalservers) for more details. + + + +```yaml +global: + enabled: false + +externalServers: + hosts: [] +``` + + + +With the introduction of [Consul Dataplane](/consul/docs/connect/dataplane#what-is-consul-dataplane), Consul installation on Kubernetes is simplified by removing the Consul Client agents. +This requires the Helm installation and rest of the consul-k8s components installed on Kubernetes to talk to Consul Servers directly on various ports. +Before starting the installation, ensure that the Consul Servers are configured to have the gRPC port enabled `8502/tcp` using the [`ports.grpc = 8502`](/consul/docs/reference/agent/configuration-file/general#grpc) configuration option. + + +## Configuring TLS + +-> **Note:** Consul on Kubernetes currently does not support external servers that require mutual authentication +for the HTTPS clients of the Consul servers, that is when servers have either +`tls.defaults.verify_incoming` or `tls.https.verify_incoming` set to `true`. +As noted in the [Security Model](/consul/docs/security#secure-configuration), +that setting isn't strictly necessary to support Consul's threat model as it is recommended that +all requests contain a valid ACL token. 
+ +If the Consul server has TLS enabled, you need to provide the CA certificate so that Consul on Kubernetes can communicate with the server. Save the certificate in a Kubernetes secret and then provide it in your Helm values, as demonstrated in the following example: + + + +```yaml +global: + tls: + enabled: true + caCert: + secretName: + secretKey: +externalServers: + enabled: true + hosts: [] +``` + + + +If your HTTPS port is different from Consul's default `8501`, you must also set +`externalServers.httpsPort`. If the Consul servers are not running TLS enabled, use this config to set the HTTP port the servers are configured with (default `8500`). + +## Configuring ACLs + +If you are running external servers with ACLs enabled, there are a couple of ways to configure the Helm chart +to help initialize ACL tokens for Consul clients and consul-k8s components for you. + +### Manually Bootstrapping ACLs + +If you would like to call the [ACL bootstrapping API](/consul/api-docs/acl#bootstrap-acls) yourself or if your cluster has already been bootstrapped with ACLs, +you can provide the bootstrap token to the Helm chart. The Helm chart will then use this token to configure ACLs +for Consul clients and any consul-k8s components you are enabling. + +First, create a Kubernetes secret containing your bootstrap token: + +```shell +kubectl create secret generic bootstrap-token --from-literal='token=' +``` + +Then provide that secret to the Helm chart: + + + +```yaml +global: + acls: + manageSystemACLs: true + bootstrapToken: + secretName: bootstrap-token + secretKey: token +``` + + + +The bootstrap token requires the following minimal permissions: + +- `acl:write` +- `operator:write` if enabling Consul namespaces +- `agent:read` if using WAN federation over mesh gateways + +Next, configure external servers. The Helm chart will use this configuration to talk to the Consul server's API +to create policies, tokens, and an auth method. If you are [enabling Consul service mesh](/consul/docs/k8s/connect), +`k8sAuthMethodHost` should be set to the address of your Kubernetes API server +so that the Consul servers can validate a Kubernetes service account token when using the [Kubernetes auth method](/consul/docs/secure/acl/auth-method/k8s) +with `consul login`. + +-> **Note:** If `externalServers.k8sAuthMethodHost` is set and you are also using WAN federation +(`global.federation.enabled` is set to `true`), ensure that `global.federation.k8sAuthMethodHost` is set to the same +value as `externalServers.k8sAuthMethodHost`. + + + +```yaml +externalServers: + enabled: true + hosts: [] + k8sAuthMethodHost: 'https://kubernetes.example.com:443' +``` + + + +Your resulting Helm configuration will end up looking similar to this: + + + +```yaml +global: + enabled: false + acls: + manageSystemACLs: true + bootstrapToken: + secretName: bootstrap-token + secretKey: token +externalServers: + enabled: true + hosts: [] + k8sAuthMethodHost: 'https://kubernetes.example.com:443' +``` + + + +### Bootstrapping ACLs via the Helm chart + +If you would like the Helm chart to call the bootstrapping API and set the server tokens for you, then the steps are similar. +The only difference is that you don't need to set the bootstrap token. The Helm chart will save the bootstrap token as a Kubernetes secret. 
+ + + +```yaml +global: + enabled: false + acls: + manageSystemACLs: true +externalServers: + enabled: true + hosts: [] + k8sAuthMethodHost: 'https://kubernetes.example.com:443' +``` + + diff --git a/website/content/docs/deploy/server/k8s/helm.mdx b/website/content/docs/deploy/server/k8s/helm.mdx new file mode 100644 index 000000000000..413b944e5377 --- /dev/null +++ b/website/content/docs/deploy/server/k8s/helm.mdx @@ -0,0 +1,370 @@ +--- +layout: docs +page_title: Install Consul on Kubernetes with Helm +description: >- + You can use Helm to configure Consul on Kubernetes deployments. Learn how to add the official Helm chart to your repository and the parameters that enable the service mesh, CNI plugins, Consul UI, and Consul HTTP API. +--- + +# Install Consul on Kubernetes with Helm + +This topic describes how to install Consul on Kubernetes using the official Consul Helm chart. We recommend using this method if you are installing Consul on Kubernetes for multi-cluster deployments that involve cross-partition or cross datacenter communication. + +For instruction on how to install Consul on Kubernetes using the Consul K8s CLI, refer to [Install Consul on Kubernetes with the Consul K8s CLI](/consul/docs/deploy/server/k8s/consul-k8s). + +## Introduction + +We recommend using the Consul Helm chart to install Consul on Kubernetes for multi-cluster installations that involve cross-partition or cross datacenter communication. The Helm chart installs and configures all necessary components to run Consul. + +Consul can run directly on Kubernetes so that you can leverage Consul functionality if your workloads are fully deployed to Kubernetes. For heterogeneous workloads, Consul agents can join a server running inside or outside of Kubernetes. Refer to the [Consul on Kubernetes architecture](/consul/docs/architecture/control-plane/k8s) to learn more about its general architecture. + +The Helm chart exposes several useful configurations and automatically sets up complex resources, but it does not automatically operate Consul. You must still become familiar with how to monitor, backup, and upgrade the Consul cluster. + +The Helm chart has no required configuration, so it installs a Consul cluster with default configurations. We strongly recommend that you [learn about the configuration options](/consul/docs/reference/k8s/helm) before going to production. + + + +By default, Helm installs Consul with security configurations disabled so that the out-of-box experience is optimized for new users. We strongly recommend using a properly-secured Kubernetes cluster or making sure that you understand and enable [Consul's security features](/consul/docs/secure-consul) before going into production. Some security features are not supported in the Helm chart and require additional manual configuration. + + + +For a hands-on experience with Consul as a service mesh for Kubernetes, follow the [Getting Started with Consul service mesh](/consul/tutorials/get-started-kubernetes) tutorial. + +## Requirements + +Using the Helm Chart requires Helm version 3.6+. Visit the [Helm website](https://helm.sh/docs/intro/install/) to download the latest version. + +## Install Consul + +1. Add the HashiCorp Helm Repository: + + ```shell-session + $ helm repo add hashicorp https://helm.releases.hashicorp.com + "hashicorp" has been added to your repositories + ``` + +1. 
Verify that you have access to the Consul chart: + + ```shell-session + $ helm search repo hashicorp/consul + NAME CHART VERSION APP VERSION DESCRIPTION + hashicorp/consul 1.3.1 1.17.1 Official HashiCorp Consul Chart + ``` + +1. Before you install Consul on Kubernetes with Helm, ensure that the `consul` Kubernetes namespace does not exist. We recommend installing Consul on a dedicated namespace. + + ```shell-session + $ kubectl get namespace + NAME STATUS AGE + default Active 18h + kube-node-lease Active 18h + kube-public Active 18h + kube-system Active 18h + ``` + +@include 'alerts/k8s-namespace.mdx' + +1. Install Consul on Kubernetes using Helm. The Helm chart does everything to set up your deployment: after installation, agents automatically form clusters, elect leaders, and run the necessary agents. + + - Run the following command to install the latest version of Consul on Kubernetes with its default configuration. + + ```shell-session + $ helm install consul hashicorp/consul --set global.name=consul --create-namespace --namespace consul + ``` + + You can also install Consul on a dedicated namespace of your choosing by modifying the value of the `-n` flag for the Helm install. + + - To install a specific version of Consul on Kubernetes, issue the following command with `--version` flag: + + ```shell-session + $ export VERSION=1.0.1 && \ + helm install consul hashicorp/consul + --set global.name=consul + --version ${VERSION} + --create-namespace + --namespace consul + ``` + +### Update your Consul on Kubernetes configuration + +If you already installed Consul and want to make changes, you need to run `helm upgrade`. Refer to [Upgrading](/consul/docs/upgrade/k8s) for more details. + +## Custom installation + +If you want to customize your installation, create a `values.yaml` file to override the default settings. To learn what settings are available, run `helm inspect values hashicorp/consul` or review the [Helm Chart Reference](/consul/docs/reference/k8s/helm). + +### Minimal `values.yaml` for Consul service mesh + +The following `values.yaml` config file contains the minimum required settings to enable [Consul service mesh](/consul/docs/k8s/connect): + + + +```yaml +global: + name: consul +``` + + + +After you create your `values.yaml` file, run `helm install` with the `--values` flag: + +```shell-session +$ helm install consul hashicorp/consul --create-namespace --namespace consul --values values.yaml +NAME: consul +... +``` + +### Install Consul on OpenShift clusters + +[Red Hat OpenShift](https://www.redhat.com/en/technologies/cloud-computing/openshift) is a security-conscious, opinionated wrapper for Kubernetes. To install Consul on OpenShift-managed Kubernetes, set `global.openshift.enabled=true` in your values file: + +```yaml +global: + openshift: + enabled: true +``` + +Refer to [`openshift` in the Helm chart reference](/consul/docs/reference/k8s/helm#v-global-openshift) for additional information. + +### Enable the Consul CNI plugin + +By default, Consul injects a `connect-inject-init` init container as part of the Kubernetes pod startup process when Consul is in [transparent proxy mode](/consul/docs/connect/transparent-proxy). The container configures traffic redirection in the service mesh through the sidecar proxy. To configure redirection, the container requires elevated `CAP_NET_ADMIN` privileges, which may not be compatible with security policies in your organization. 
+ +Instead, you can enable the Consul container network interface (CNI) plugin to perform traffic redirection. Because the plugin is executed by the local Kubernetes kubelet, the plugin already has the elevated privileges necessary to configure the network. + +The Consul Helm Chart is responsible for installing the Consul CNI plugin. To configure the plugin to be installed, add the following configuration to your `values.yaml` file: + + + + + + + + ```yaml + global: + name: consul + connectInject: + enabled: true + cni: + enabled: true + logLevel: info + cniBinDir: "/opt/cni/bin" + cniNetDir: "/etc/cni/net.d" + ``` + + + + + + + + + ```yaml + global: + name: consul + connectInject: + enabled: true + cni: + enabled: true + logLevel: info + cniBinDir: "/home/kubernetes/bin" + cniNetDir: "/etc/cni/net.d" + ``` + + + + + + + + ```yaml + global: + name: consul + openshift: + enabled: true + connectInject: + enabled: true + cni: + enabled: true + logLevel: info + multus: true + cniBinDir: "/var/lib/cni/bin" + cniNetDir: "/etc/kubernetes/cni/net.d" +``` + + + + + +The following table describes the available CNI plugin options: + +| Option | Description | Default | +| ---------- | ----------- | ------------- | +| `cni.enabled` | Boolean value that enables or disables the CNI plugin. If `true`, the plugin is responsible for redirecting traffic in the service mesh. If `false`, redirection is handled by the `connect-inject init` container. | `false` | +| `cni.logLevel` | String value that specifies the log level for the installer and plugin. You can specify the following values: `info`, `debug`, `error`. | `info` | +| `cni.namespace` | Set the namespace to install the CNI plugin into. Overrides global namespace settings for CNI resources, for example `kube-system` | namespace used for `consul-k8s` install, for example `consul` | +| `cni.multus` | Boolean value that enables multus CNI plugin support. If `true`, multus will be enabled. If `false`, Consul CNI will operate as a chained plugin. | `false` | +| `cni.cniBinDir` | String value that specifies the location on the Kubernetes node where the CNI plugin is installed. | `/opt/cni/bin` | +| `cni.cniNetDir` | String value that specifies the location on the Kubernetes node for storing the CNI configuration. | `/etc/cni/net.d` | + +### Enable Consul service mesh on select namespaces + +By default, Consul service mesh is enabled on almost all namespaces within a Kubernetes cluster, except `kube-system` and `local-path-storage`. To restrict the service mesh to a subset of namespaces: + +1. Specify a `namespaceSelector` that matches a label attached to each namespace where you want to deploy the service mesh. To enable service mesh on select namespaces by label by default, you must set the `connectInject.default` value to `true`. + + + + ```yaml + global: + name: consul + connectInject: + enabled: true + default: true + namespaceSelector: | + matchLabels: + connect-inject : enabled + ``` + + + +1. Label the namespaces where you would like to enable Consul service mesh. + + ```shell-session + $ export NAMESPACE=foo && \ + kubectl create ns $NAMESPACE && \ + kubectl label namespace $NAMESPACE connect-inject=enabled + ``` + +1. Run `helm install` with the `--values` flag: + + ```shell-session + $ helm install consul hashicorp/consul --create-namespace --namespace consul --values values.yaml + NAME: consul + ``` + +## Usage + +You can view the Consul UI and access the Consul HTTP API after installation. 
+ +### Viewing the Consul UI + +The Consul UI is enabled by default when using the Helm chart. + +For security reasons, it is not exposed through a `LoadBalancer` service by default. To visit the UI, you must use `kubectl port-forward`. + +#### Port forward with TLS disabled + +If running with TLS disabled, the Consul UI is accessible through http on port 8500: + +```shell-session +$ kubectl port-forward service/consul-server --namespace consul 8500:8500 +``` + +After you set up the port forward, navigate to [http://localhost:8500](http://localhost:8500). + +#### Port forward with TLS enabled + +If running with TLS enabled, the Consul UI is accessible through https on port 8501: + +```shell-session +$ kubectl port-forward service/consul-server --namespace consul 8501:8501 +``` + +After you set up the port forward, navigate to [https://localhost:8501](https://localhost:8501). + + + +You need to click through an SSL warning from your browser because the Consul certificate authority is self-signed and not in the browser's trust store. + + + +#### ACLs enabled + +If ACLs are enabled, you need to input an ACL token to display all resources and make modifications in the UI. + +To retrieve the bootstrap token that has full permissions, run: + +```shell-session +$ kubectl get secrets/consul-bootstrap-acl-token --template='{{.data.token | base64decode }}' +e7924dd1-dc3f-f644-da54-81a73ba0a178% +``` + +Then paste the token into the UI under the ACLs tab (without the `%`). + + + +If using multi-cluster federation, your `kubectl` context must be in the primary datacenter to retrieve the bootstrap token since secondary datacenters use a separate token with less permissions. + + + +#### Exposing the UI through a service + +If you want to expose the UI via a Kubernetes Service, configure the [`ui.service` chart values](/consul/docs/reference/k8s/helm#v-ui-service). Because this service allows requests to the Consul servers, it should not be open to the world. + +### Access the Consul HTTP API + +While technically any listening agent can respond to the HTTP API, communicating with the local Consul node has important caching behavior and allows you to use the simpler [`/agent` endpoints for services and checks](/consul/api-docs/agent). + +To find information about a node, you can use the [downward API](https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/). + +The following is an example pod specification. In addition to pods, anything with a pod template can also access the Consul API and can therefore also access Consul: StatefulSets, Deployments, Jobs, etc. 
+ +```yaml +apiVersion: v1 +kind: Pod +metadata: + name: consul-example +spec: + containers: + - name: example + image: 'hashicorp/consul:latest' + env: + - name: HOST_IP + valueFrom: + fieldRef: + fieldPath: status.hostIP + command: + - '/bin/sh' + - '-ec' + - | + export CONSUL_HTTP_ADDR="${HOST_IP}:8500" + consul kv put hello world + restartPolicy: Never +``` + +The following example `Deployment` shows how you can access the host IP can be accessed from nested pod specifications: + + + +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: consul-example-deployment +spec: + replicas: 1 + selector: + matchLabels: + app: consul-example + template: + metadata: + labels: + app: consul-example + spec: + containers: + - name: example + image: 'hashicorp/consul:latest' + env: + - name: HOST_IP + valueFrom: + fieldRef: + fieldPath: status.hostIP + command: + - '/bin/sh' + - '-ec' + - | + export CONSUL_HTTP_ADDR="${HOST_IP}:8500" + consul kv put hello world +``` + + diff --git a/website/content/docs/deploy/server/k8s/index.mdx b/website/content/docs/deploy/server/k8s/index.mdx new file mode 100644 index 000000000000..3ea608acda8c --- /dev/null +++ b/website/content/docs/deploy/server/k8s/index.mdx @@ -0,0 +1,38 @@ +--- +layout: docs +page_title: Deploy Consul server on Kubernetes +description: >- + To deploy a Consul server on Kubernetes, use either Helm or the `consul-k8s` CLI to schedule the server. Learn about additional steps for deploying a Consul server to different Kubernetes platforms. +--- + +# Deploy a Consul server on Kubernetes + +This topic provides an overview for deploying a Consul server when running Consul on Kubernetes + +## Basic deployments + +To schedule a Consul server on a Kubernetes cluster, you can use either Helm or the Consul on Kubernetes CLI. Depending on your Kubernetes platform and cloud provider, there may be additional steps. + +To get started, refer to the following topics: + +- [Deploy Consul Community edition with Helm](/consul/docs/deploy/server/k8s/helm) +- [Deploy Consul Enterprise with Helm](/consul/docs/deploy/server/k8s/enterprise) +- [Deploy Consul with the `consul-k8s` CLI](/consul/docs/deploy/server/k8s/consul-k8s) + +Refer to the following platform deployment guides more specific information: + +- [Azure Kubernetes Service (AKS)](/consul/docs/deploy/server/k8s/platform/aks) +- [AWS Elastic Kubernetes Service (EKS)](/consul/docs/deploy/server/k8s/platform/eks) +- [Google Kubernetes Engine (GKE)](/consul/docs/deploy/server/k8s/platform/gke) +- [Kind](/consul/docs/deploy/server/k8s/platform/kind) +- [Minikube](/consul/docs/deploy/server/k8s/platform/minikube) +- [Openshift](/consul/docs/deploy/server/k8s/platform/openshift) +- [Self-hosted](/consul/docs/deploy/server/k8s/platform/self-hosted) + +## Advanced deployments + +You can configure Consul to run on Kubernetes clusters that are deployed in dispersed networks. 
Refer to the following topics for more information: + +- [Deploy a single Consul datacenter across multiple Kubernetes clusters](/consul/docs/deploy/server/k8s/multi-cluster) +- [Deploy Consul servers outside of Kubernetes](/consul/docs/deploy/server/k8s/external) +- [Deploy Consul with Vault as Secrets backend](/consul/docs/deploy/server/k8s/vault) \ No newline at end of file diff --git a/website/content/docs/deploy/server/k8s/multi-cluster.mdx b/website/content/docs/deploy/server/k8s/multi-cluster.mdx new file mode 100644 index 000000000000..4d265a6f789a --- /dev/null +++ b/website/content/docs/deploy/server/k8s/multi-cluster.mdx @@ -0,0 +1,320 @@ +--- +layout: docs +page_title: Deploy Single Consul Datacenter Across Multiple K8s Clusters +description: >- + A single Consul datacenter can run across multiple Kubernetes pods in a flat network as long as only one pod has server agents. Learn how to configure the Helm chart, deploy pods in sequence, and verify your service mesh. +--- + +# Deploy Single Consul Datacenter Across Multiple Kubernetes Clusters + +~> **Note:** When running Consul across multiple Kubernetes clusters, we recommend using [admin partitions](/consul/docs/multi-tenant/admin-partition) for production environments. This Consul Enterprise feature allows you to accommodate multiple tenants without resource collisions when administering a cluster at scale. Admin partitions also enable you to run Consul on Kubernetes clusters across a non-flat network. + +This page describes deploying a single Consul datacenter in multiple Kubernetes clusters, +with servers running in one cluster and only Consul on Kubernetes components in the rest of the clusters. +This example uses two Kubernetes clusters, but this approach could be extended to using more than two. + +## Requirements + +* `consul-k8s` v1.0.x or higher, and Consul 1.14.x or higher +* Kubernetes clusters must be able to communicate over LAN on a flat network. +* Either the Helm release name for each Kubernetes cluster must be unique, or `global.name` for each Kubernetes cluster must be unique to prevent collisions of ACL resources with the same prefix. + +## Prepare Helm release name ahead of installs + +The Helm release name must be unique for each Kubernetes cluster. +The Helm chart uses the Helm release name as a prefix for the +ACL resources that it creates, such as tokens and auth methods. If the names of the Helm releases are identical, or if `global.name` for each cluster is identical, subsequent Consul on Kubernetes clusters will overwrite existing ACL resources and cause the clusters to fail. + +Before proceeding with installation, prepare the Helm release names as environment variables for both the server and client install. + +```shell-session + $ export HELM_RELEASE_SERVER=server + $ export HELM_RELEASE_CONSUL=consul + ... + $ export HELM_RELEASE_CONSUL2=consul2 +``` + +## Deploying Consul servers in the first cluster + +First, deploy the first cluster with Consul servers according to the following example Helm configuration. 
+ + + +```yaml +global: + datacenter: dc1 + tls: + enabled: true + enableAutoEncrypt: true + acls: + manageSystemACLs: true + gossipEncryption: + secretName: consul-gossip-encryption-key + secretKey: key +server: + exposeService: + enabled: true + type: NodePort + nodePort: + ## all are random nodePorts and you can set your own + http: 30010 + https: 30011 + serf: 30012 + rpc: 30013 + grpc: 30014 +ui: + service: + type: NodePort +``` + + + +Note that this will deploy a secure configuration with gossip encryption, +TLS for all components and ACLs. In addition, this will enable the Consul Service Mesh and the controller for CRDs +that can be used later to verify the connectivity of services across clusters. + +The UI's service type is set to be `NodePort`. +This is needed to connect to servers from another cluster without using the pod IPs of the servers, +which are likely going to change. + +Other services are exposed as `NodePort` services and configured with random port numbers. In this example, the `grpc` port is set to `30014`, which enables services to discover Consul servers using gRPC when connecting from another cluster. + +To deploy, first generate the Gossip encryption key and save it as a Kubernetes secret. + +```shell-session +$ kubectl create secret generic consul-gossip-encryption-key --from-literal=key=$(consul keygen) +``` + +Now install Consul cluster with Helm: +```shell-session +$ helm install ${HELM_RELEASE_SERVER} --values cluster1-values.yaml hashicorp/consul +``` + +Once the installation finishes and all components are running and ready, the following information needs to be extracted (using the below command) and applied to the second Kubernetes cluster. + * The CA certificate generated during installation + * The ACL bootstrap token generated during installation + +```shell-session +$ kubectl get secret ${HELM_RELEASE_SERVER}-consul-ca-cert ${HELM_RELEASE_SERVER}-consul-bootstrap-acl-token --output yaml > cluster1-credentials.yaml +``` + +## Deploying Consul Kubernetes in the second cluster +~> **Note:** If multiple Kubernetes clusters will be joined to the Consul Datacenter, then the following instructions will need to be repeated for each additional Kubernetes cluster. + +Switch to the second Kubernetes cluster where Consul clients will be deployed +that will join the first Consul cluster. + +```shell-session +$ kubectl config use-context +``` + +First, apply the credentials extracted from the first cluster to the second cluster: + +```shell-session +$ kubectl apply --filename cluster1-credentials.yaml +``` +To deploy in the second cluster, the following example Helm configuration will be used: + + + +```yaml +global: + enabled: false + datacenter: dc1 + acls: + manageSystemACLs: true + bootstrapToken: + secretName: cluster1-consul-bootstrap-acl-token + secretKey: token + tls: + enabled: true + caCert: + secretName: cluster1-consul-ca-cert + secretKey: tls.crt +externalServers: + enabled: true + # This should be any node IP of the first k8s cluster or the load balancer IP if using LoadBalancer service type for the UI. + hosts: ["10.0.0.4"] + # The node port of the UI's NodePort service or the load balancer port. + httpsPort: 31557 + # Matches the gRPC port of the Consul servers in the first cluster. 
+ grpcPort: 30014 + tlsServerName: server.dc1.consul + # The address of the kube API server of this Kubernetes cluster + k8sAuthMethodHost: https://kubernetes.example.com:443 +connectInject: + enabled: true +``` + + + +Note the references in the ACL and TLS configuration to the secrets extracted from the first cluster and applied to this cluster. + +The `externalServers.hosts` and `externalServers.httpsPort` +refer to the IP and port of the UI's NodePort service deployed in the first cluster. +Set `externalServers.hosts` to any node IP of the first cluster, +which can be seen by running `kubectl get nodes --output wide`. +Set `externalServers.httpsPort` to the `nodePort` of the `cluster1-consul-ui` service. +In our example, the port is `31557`. + +```shell-session +$ kubectl get service cluster1-consul-ui --context cluster1 +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +cluster1-consul-ui NodePort 10.0.240.80 443:31557/TCP 40h +``` + +The `grpcPort: 30014` configuration refers to the gRPC port number specified in the `NodePort` configuration in the first cluster. + +Set `externalServers.tlsServerName` to `server.dc1.consul`. This is the DNS SAN +(Subject Alternative Name) that is present in the Consul server's certificate. +This is required because the connection to the Consul servers uses the node IP, +but that IP isn't present in the server's certificate. +To make sure that the hostname verification succeeds during the TLS handshake, set the TLS +server name to a DNS name that *is* present in the certificate. + +Next, set `externalServers.k8sAuthMethodHost` to the address of the second Kubernetes API server. +This should be the address that is reachable from the first cluster, so it cannot be the internal DNS +available in each Kubernetes cluster. Consul needs it so that `consul login` with the Kubernetes auth method will work +from the second cluster. +More specifically, the Consul server will need to perform the verification of the Kubernetes service account +whenever `consul login` is called, and to verify service accounts from the second cluster, it needs to +reach the Kubernetes API in that cluster. +The easiest way to get it is from the `kubeconfig` by running `kubectl config view` and grabbing +the value of `cluster.server` for the second cluster. + +Now, proceed with the installation of the second cluster. + +```shell-session +$ helm install ${HELM_RELEASE_CONSUL} --values cluster2-values.yaml hashicorp/consul +``` + +## Verifying the Consul Service Mesh works + +~> When transparent proxy is enabled, services in one Kubernetes cluster that need to communicate with a service in another Kubernetes cluster must have an explicit upstream configured through the ["consul.hashicorp.com/connect-service-upstreams"](/consul/docs/k8s/annotations-and-labels#consul-hashicorp-com-connect-service-upstreams) annotation. + +Now that the Consul datacenter spanning multiple Kubernetes clusters is up and running, deploy two services in separate Kubernetes clusters and verify that they can connect to each other.
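+Because the previous steps were run against the second cluster, you may first need to switch your `kubectl` context back to the first cluster. The following is a minimal sketch; the context name is a placeholder, so substitute the name reported by `kubectl config get-contexts` for your first cluster.
+
+```shell-session
+$ kubectl config use-context <cluster1-context>  # replace <cluster1-context> with your first cluster's context name
+```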
+ +First, deploy `static-server` service in the first cluster: + + + +```yaml +--- +apiVersion: consul.hashicorp.com/v1alpha1 +kind: ServiceIntentions +metadata: + name: static-server +spec: + destination: + name: static-server + sources: + - name: static-client + action: allow +--- +apiVersion: v1 +kind: Service +metadata: + name: static-server +spec: + type: ClusterIP + selector: + app: static-server + ports: + - protocol: TCP + port: 80 + targetPort: 8080 +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: static-server +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: static-server +spec: + replicas: 1 + selector: + matchLabels: + app: static-server + template: + metadata: + name: static-server + labels: + app: static-server + annotations: + "consul.hashicorp.com/connect-inject": "true" + spec: + containers: + - name: static-server + image: hashicorp/http-echo:latest + args: + - -text="hello world" + - -listen=:8080 + ports: + - containerPort: 8080 + name: http + serviceAccountName: static-server +``` + + + +Note that defining a Service intention is required so that our services are allowed to talk to each other. + +Next, deploy `static-client` in the second cluster with the following configuration: + + + +```yaml +apiVersion: v1 +kind: Service +metadata: + name: static-client +spec: + selector: + app: static-client + ports: + - port: 80 +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: static-client +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: static-client +spec: + replicas: 1 + selector: + matchLabels: + app: static-client + template: + metadata: + name: static-client + labels: + app: static-client + annotations: + "consul.hashicorp.com/connect-inject": "true" + "consul.hashicorp.com/connect-service-upstreams": "static-server:1234" + spec: + containers: + - name: static-client + image: curlimages/curl:latest + command: [ "/bin/sh", "-c", "--" ] + args: [ "while true; do sleep 30; done;" ] + serviceAccountName: static-client +``` + + + +Once both services are up and running, try connecting to the `static-server` from `static-client`: + +```shell-session +$ kubectl exec deploy/static-client -- curl --silent localhost:1234 +"hello world" +``` + +A successful installation would return `hello world` for the above curl command output. \ No newline at end of file diff --git a/website/content/docs/deploy/server/k8s/platform/aks.mdx b/website/content/docs/deploy/server/k8s/platform/aks.mdx new file mode 100644 index 000000000000..618e0662f98b --- /dev/null +++ b/website/content/docs/deploy/server/k8s/platform/aks.mdx @@ -0,0 +1,308 @@ +--- +layout: docs +page_title: Deploy Consul on AKS +description: >- + Deploy Consul on AKS and learn how to manage your Consul datacenter with the Consul CLI, UI, and API. +--- + +# Deploy Consul on AKS + +This topic describes how to deploy a Consul datacenter to an Azure Kubernetes Services (AKS) cluster. After deploying Consul, you will interact with Consul using the CLI, UI, and/or API. + +## Requirements + +To deploy Consul on AKS, you will need: + +- An Azure account with the ability to create a Kubernetes cluster +- [Azure Cloud Shell](https://shell.azure.com/) +- [consul v1.14.0+](/consul/install/) +- [Helm v3.6+](https://helm.sh/docs/helm/helm_install/) +- [consul-k8s v1.0.2+](/consul/docs/reference/cli/consul-k8s#install-the-cli) + +This example uses the latest version of Helm and `kubectl`, which are installed with the Azure Cloud Shell. 
+ +### Create an AKS cluster + +You need at least a three node AKS cluster to deploy Consul using the official Consul Helm chart or the Consul K8S CLI. Create a three node cluster on AKS by following the [AKS documentation](https://docs.microsoft.com/en-us/azure/aks/). + +### Configure kubectl to interact to your cluster + +Login to the Azure Shell on your console. + +```shell-session +$ az login +``` + +Configure `kubectl` to connect to your Kubernetes cluster. This command downloads credentials and configures the Kubernetes CLI to use them. Replace `myResourceGroup` and `myAKSCluster` with your resource group and AKS cluster name. + +```shell-session +$ az aks get-credentials --resource-group myResourceGroup --name myAKSCluster +``` + +Verify you are connected to your Kubernetes cluster. + +```shell-session +$ kubectl cluster-info +Kubernetes control plane is running at +CoreDNS is running at https:///api/v1/namespaces/kube-system/services/kube-dns:dns/proxy +Metrics-server is running at https:///api/v1/namespaces/kube-system/services/https:metrics-server:/proxy + +To further debug and diagnose cluster problems, use the command`kubectl cluster-info dump`. +``` + +## Deploy Consul + +You can deploy a complete Consul datacenter using the official Consul Helm chart or the Consul K8S CLI. By default, these methods will install a total of three Consul servers. You can review the Consul Kubernetes installation [documentation](/consul/docs/deploy/server/k8s/helm) to learn more about these installation options. + +### Create a values file + +To customize your deployment, create a `values.yaml` file to customization your Consul deployment. + + + +```yaml +# Contains values that affect multiple components of the chart. +global: + # The main enabled/disabled setting. + # If true, servers, clients, Consul DNS and the Consul UI will be enabled. + enabled: true + # The prefix used for all resources created in the Helm chart. + name: consul + # The name of the datacenter that the agents should register as. + datacenter: dc1 + # Enables TLS across the cluster to verify authenticity of the Consul servers and clients. + tls: + enabled: true + # Enables ACLs across the cluster to secure access to data and APIs. + acls: + # If true, automatically manage ACL tokens and policies for all Consul components. + manageSystemACLs: true +# Configures values that configure the Consul server cluster. +server: + enabled: true + # The number of server agents to run. This determines the fault tolerance of the cluster. + replicas: 3 +# Contains values that configure the Consul UI. +ui: + enabled: true + # Registers a Kubernetes Service for the Consul UI as a LoadBalancer. + service: + type: LoadBalancer +# Configures and installs the automatic Consul Connect sidecar injector. +connectInject: + enabled: true +``` + + + +### Install Consul in your cluster + +You can now deploy a complete Consul datacenter in your Kubernetes cluster using the official Consul Helm chart or the Consul K8S CLI. + + + + +Install Consul onto the AKS cluster with `consul-k8s`. Confirm the installation with a `y` when prompted. + +```shell-session +$ consul-k8s install -config-file=values.yaml -set global.image=hashicorp/consul:1.14.0 +``` + +Review the official [Consul K8S CLI documentation](/consul/docs/reference/cli/consul-k8s) to learn more about additional settings. + + + + +Install Consul onto the kind cluster with Helm. Confirm the installation with a `y` when prompted. 
+ +```shell-session +$ helm install --values values.yaml consul hashicorp/consul --create-namespace --namespace consul --version "1.0.0" +``` + +Review the official [Helm chart values](/consul/docs/reference/k8s/helm#configuration-values) to learn more about the default settings. + + + + +Verify the Consul resources were successfully created. + +```shell-session +$ kubectl get pods --namespace consul +NAME READY STATUS RESTARTS AGE +consul-connect-injector-6fc8d669b8-2n82l 1/1 Running 0 2m34s +consul-connect-injector-6fc8d669b8-9mqfm 1/1 Running 0 2m34s +consul-controller-554c7f79c4-2xc64 1/1 Running 0 2m34s +consul-server-0 1/1 Running 0 2m34s +consul-server-1 1/1 Running 0 2m34s +consul-server-2 1/1 Running 0 2m34s +consul-webhook-cert-manager-64889c4964-wxc9b 1/1 Running 0 2m34s +``` + +## Configure your CLI to interact with Consul cluster + +In this section, you will set environment variables in your terminal so your Consul CLI can interact with your Consul cluster. The Consul CLI reads these environment variables for behavior defaults and will reference these values when you run `consul` commands. + +Tokens are artifacts in the ACL system used to authenticate users, services, and Consul agents. Since ACLs are enabled in this Consul datacenter, entities requesting access to a resource must include a token that is linked with a policy, service identity, or node identity that grants permission to the resource. The ACL system checks the token and grants or denies access to resources based on the associated permissions. A bootstrap token has unrestricted privileges to all resources and APIs. + +Retrieve the ACL bootstrap token from the respective Kubernetes secret and set it as an environment variable. + +```shell-session +$ export CONSUL_HTTP_TOKEN=$(kubectl get --namespace consul secrets/consul-bootstrap-acl-token --template={{.data.token}} | base64 -d) +``` + +Set the Consul destination address. + +```shell-session +$ export CONSUL_HTTP_ADDR=https://$(kubectl get services/consul-ui --namespace consul -o jsonpath='{.status.loadBalancer.ingress[0].ip}') +``` + +Remove SSL verification checks to simplify communication to your Consul cluster. + +```shell-session +$ export CONSUL_HTTP_SSL_VERIFY=false +``` + + + + In a production environment, we recommend keeping this SSL verification set to `true`. Only remove this verification for if you have a Consul cluster without TLS configured in development environment and demonstration purposes. + + + +## View Consul services + +In this section, you will view your Consul services with the CLI, UI, and/or API to explore the details of your service mesh. + + + + +Run the CLI command `consul catalog services` to return the list of services registered in Consul. Notice this returns only the `consul` service since it is the only running service in your Consul cluster. + +```shell-session +$ consul catalog services +consul +``` + +Agents run in either server or client mode. Server agents store all state information, including service and node IP addresses, health checks, and configuration. Client agents are lightweight processes that make up the majority of the datacenter. They report service health status to the server agents. Clients must run on every pod where services are running. + +Run the CLI command `consul members` to return the list of Consul agents in your environment. 
+ +```shell-session +$ consul members +Node Address Status Type Build Protocol DC Partition Segment +consul-server-0 10.0.4.117:8301 alive server 1.14.0beta1 2 dc1 default +consul-server-1 10.0.5.11:8301 alive server 1.14.0beta1 2 dc1 default +consul-server-2 10.0.4.55:8301 alive server 1.14.0beta1 2 dc1 default +``` + + + + +Output the token value to your terminal and copy the value to your clipboard. You will use this ACL token to authenticate in the Consul UI. + +```shell-session +$ echo $CONSUL_HTTP_TOKEN +fe0dd5c3-f2e1-81e8-cde8-49d26cee5efc +``` + +Open a separate terminal window and expose the Consul UI with `kubectl port-forward` using the `consul-ui` service name as the target. By default, Consul UI runs on port `6443` when you enable TLS, and port `8500` when TLS is disabled. + +```shell-session +$ kubectl port-forward svc/consul-ui --namespace consul 6443:443 +``` + +Open [https://localhost:6443](https://localhost:6443) in your browser to find the Consul UI. Since this environment uses a self-signed TLS certificate for its resources, click to proceed through the certificate warnings. + +On the left navigation pane, click **Services** to review your deployed services. At this time, you will only find the `consul` service. + +![Consul UI Services Page](/img/k8s/deploy/kubernetes-gs-deploy_consul_ui_services.png 'Consul services page in UI with Consul services') + +By default, the anonymous ACL policy allows you to view the contents of Consul services, nodes, and intentions. To make changes and see more details within the Consul UI, click **Log In** in the top right and insert your bootstrap ACL token. + +![Consul UI Login Page](/img/k8s/deploy/kubernetes-gs-deploy_consul_ui_login.png 'Consul login page in UI') + +After successfully authenticating with your ACL token, you are now able to view additional Consul components and make changes in the UI. Notice you can view and manage more options under the **Access Controls** section on the left navigation pane. + +![Consul UI Post Authentication](/img/k8s/deploy/kubernetes-gs-deploy_consul_ui_post_authentication.png 'Consul UI post authentication') + +On the left navigation pane, click on **Nodes**. + +Agents run in either server or client mode. Server agents store all state information, including service and node IP addresses, health checks, and configuration. Client agents are lightweight processes that make up the majority of the datacenter. They report service health status to the server agents. Clients must run on every pod where services are running. + +![Consul UI Nodes](/img/k8s/deploy/kubernetes-eks-aws-consul_view_nodes.png 'Consul UI Nodes') + + + + +View the list of services registered in Consul. + +```shell-session +$ curl -k \ + --header "X-Consul-Token: $CONSUL_HTTP_TOKEN" \ + $CONSUL_HTTP_ADDR/v1/catalog/services +``` + +Sample output: + +```json hideClipboard +{"consul":[]} +``` + +Agents run in either server or client mode. Server agents store all state information, including service and node IP addresses, health checks, and configuration. Client agents are lightweight processes that make up the majority of the datacenter. They report service health status to the server agents. Clients must run on every pod where services are running. + +View the list of server and client Consul agents in your environment. 
+ +```shell-session +$ curl -k \ + --header "X-Consul-Token: $CONSUL_HTTP_TOKEN" \ + $CONSUL_HTTP_ADDR/v1/agent/members\?pretty +``` + +Sample output: + +```json hideClipboard +[ + { + "Name": "consul-server-0", + "Addr": "10.244.0.13", + "Port": 8301, + "Tags": { + "acls": "1", + "bootstrap": "1", + "build": "1.14.0", + "dc": "dc1", + "ft_fs": "1", + "ft_si": "1", + "grpc_port": "8502", + "id": "8016fc4d-767f-8552-b018-0812228bd135", + "port": "8300", + "raft_vsn": "3", + "role": "consul", + "segment": "", + "use_tls": "1", + "vsn": "2", + "vsn_max": "3", + "vsn_min": "2", + "wan_join_port": "8302" + }, + "Status": 1, + "ProtocolMin": 1, + "ProtocolMax": 5, + "ProtocolCur": 2, + "DelegateMin": 2, + "DelegateMax": 5, + "DelegateCur": 4 + } + ## ... +] +``` + + + + +All services listed in your Consul catalog are empowered with Consul's service discovery capabilities that simplify scalability challenges and improve application resiliency. Review the [Service Discovery overview](/consul/docs/use-case/service-discovery) page to learn more. + +## Next steps + +You learned how to deploy a Consul datacenter onto an Azure Kubernetes Service (AKS) cluster. After deploying Consul, you interacted with Consul using the CLI, UI, and API. + +To learn more about deployment best practices, review the [Kubernetes reference architecture tutorial](/consul/tutorials/kubernetes/kubernetes-reference-architecture). \ No newline at end of file diff --git a/website/content/docs/deploy/server/k8s/platform/eks.mdx b/website/content/docs/deploy/server/k8s/platform/eks.mdx new file mode 100644 index 000000000000..8b383cdd7669 --- /dev/null +++ b/website/content/docs/deploy/server/k8s/platform/eks.mdx @@ -0,0 +1,300 @@ +--- +layout: docs +page_title: Deploy Consul on EKS +description: >- + Deploy Consul on EKS and learn how to manage your Consul datacenter with the Consul CLI, UI, and API. +--- + +# Deploy Consul on EKS + +This topic describes how to deploy a Consul datacenter to an AWS Elastic Kubernetes Services (EKS) cluster. After deploying Consul, you will interact with Consul using the CLI, UI, and/or API. + +## Requirements + +To deploy Consul on AKS, you will need: + +- An AWS account with the ability to create a Kubernetes cluster +- [aws-cli](https://aws.amazon.com/cli/) +- [kubectl >= 1.21](https://kubernetes.io/docs/tasks/tools/install-kubectl/) +- [consul >= 1.14.0](/consul/install/) +- [Helm v3.6+](https://helm.sh/docs/helm/helm_install/) +- [consul-k8s v1.0.2+](/consul/docs/reference/cli/consul-k8s#install-the-cli) + +### Create an EKS cluster + +You need at least a three node EKS cluster to deploy Consul using the official Consul Helm chart or the Consul K8S CLI. Create a three node cluster on EKS by following the [EKS documentation](https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html). + +### Configure kubectl to talk to your cluster + +Configure `kubectl` to talk to your EKS cluster. + +```shell-session +$ aws eks update-kubeconfig --region --name +``` + +Verify you are connected to your Kubernetes cluster. + +```shell-session +$ kubectl cluster-info +Kubernetes master is running at https://.eks.amazonaws.com +CoreDNS is running at https://.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy +``` + +Refer to the [AWS documentation](https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html) for additional information and help to configuring `kubectl` and EKS. 
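+Optionally, confirm that the cluster has the three worker nodes required for this guide before continuing. This is a quick sanity check rather than a required step.
+
+```shell-session
+$ kubectl get nodes
+```
+
+All three nodes should report a `Ready` status.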
+ +## Deploy Consul + +You can deploy a complete Consul datacenter using the official Consul Helm chart or the Consul K8S CLI. By default, these methods will install a total of three Consul servers. You can review the Consul Kubernetes installation [documentation](/consul/docs/deploy/server/k8s/helm) to learn more about these installation options. + +### Create a values file + +To customize your deployment, create a `values.yaml` file to customization your Consul deployment. + + + +```yaml +# Contains values that affect multiple components of the chart. +global: + # The main enabled/disabled setting. + # If true, servers, clients, Consul DNS and the Consul UI will be enabled. + enabled: true + # The prefix used for all resources created in the Helm chart. + name: consul + # The name of the datacenter that the agents should register as. + datacenter: dc1 + # Enables TLS across the cluster to verify authenticity of the Consul servers and clients. + tls: + enabled: true + # Enables ACLs across the cluster to secure access to data and APIs. + acls: + # If true, automatically manage ACL tokens and policies for all Consul components. + manageSystemACLs: true +# Configures values that configure the Consul server cluster. +server: + enabled: true + # The number of server agents to run. This determines the fault tolerance of the cluster. + replicas: 3 +# Contains values that configure the Consul UI. +ui: + enabled: true + # Registers a Kubernetes Service for the Consul UI as a LoadBalancer. + service: + type: LoadBalancer +# Configures and installs the automatic Consul Connect sidecar injector. +connectInject: + enabled: true +``` + + + +### Install Consul in your cluster + +You can now deploy a complete Consul datacenter in your Kubernetes cluster using the official Consul Helm chart or the Consul K8S CLI. + + + + +Install Consul onto the AKS cluster with `consul-k8s`. Confirm the installation with a `y` when prompted. + +```shell-session +$ consul-k8s install -config-file=values.yaml -set global.image=hashicorp/consul:1.14.0 +``` + +Review the official [Consul K8S CLI documentation](/consul/docs/reference/cli/consul-k8s) to learn more about additional settings. + + + + +Install Consul onto the kind cluster with Helm. Confirm the installation with a `y` when prompted. + +```shell-session +$ helm install --values values.yaml consul hashicorp/consul --create-namespace --namespace consul --version "1.0.0" +``` + +Review the official [Helm chart values](/consul/docs/reference/k8s/helm#configuration-values) to learn more about the default settings. + + + + +Verify the Consul resources were successfully created. + +```shell-session +$ kubectl get pods --namespace consul +NAME READY STATUS RESTARTS AGE +consul-connect-injector-6fc8d669b8-2n82l 1/1 Running 0 2m34s +consul-connect-injector-6fc8d669b8-9mqfm 1/1 Running 0 2m34s +consul-controller-554c7f79c4-2xc64 1/1 Running 0 2m34s +consul-server-0 1/1 Running 0 2m34s +consul-server-1 1/1 Running 0 2m34s +consul-server-2 1/1 Running 0 2m34s +consul-webhook-cert-manager-64889c4964-wxc9b 1/1 Running 0 2m34s +``` + +## Configure your CLI to interact with Consul cluster + +In this section, you will set environment variables in your terminal so your Consul CLI can interact with your Consul cluster. The Consul CLI reads these environment variables for behavior defaults and will reference these values when you run `consul` commands. + +Tokens are artifacts in the ACL system used to authenticate users, services, and Consul agents. 
Since ACLs are enabled in this Consul datacenter, entities requesting access to a resource must include a token that is linked with a policy, service identity, or node identity that grants permission to the resource. The ACL system checks the token and grants or denies access to resources based on the associated permissions. A bootstrap token has unrestricted privileges to all resources and APIs. + +Retrieve the ACL bootstrap token from the respective Kubernetes secret and set it as an environment variable. + +```shell-session +$ export CONSUL_HTTP_TOKEN=$(kubectl get --namespace consul secrets/consul-bootstrap-acl-token --template={{.data.token}} | base64 -d) +``` + +Set the Consul destination address. + +```shell-session +$ export CONSUL_HTTP_ADDR=https://$(kubectl get services/consul-ui --namespace consul -o jsonpath='{.status.loadBalancer.ingress[0].hostname}') +``` + +Remove SSL verification checks to simplify communication to your Consul cluster. + +```shell-session +$ export CONSUL_HTTP_SSL_VERIFY=false +``` + + + + In a production environment, we recommend keeping this SSL verification set to `true`. Only remove this verification for if you have a Consul cluster without TLS configured in development environment and demonstration purposes. + + + +## View Consul services + +In this section, you will view your Consul services with the CLI, UI, and/or API to explore the details of your service mesh. + + + + +Run the CLI command `consul catalog services` to return the list of services registered in Consul. Notice this returns only the `consul` service since it is the only running service in your Consul cluster. + +```shell-session +$ consul catalog services +consul +``` + +Agents run in either server or client mode. Server agents store all state information, including service and node IP addresses, health checks, and configuration. Client agents are lightweight processes that make up the majority of the datacenter. They report service health status to the server agents. Clients must run on every pod where services are running. + +Run the CLI command `consul members` to return the list of Consul agents in your environment. + +```shell-session +$ consul members +Node Address Status Type Build Protocol DC Partition Segment +consul-server-0 10.0.4.117:8301 alive server 1.14.0beta1 2 dc1 default +consul-server-1 10.0.5.11:8301 alive server 1.14.0beta1 2 dc1 default +consul-server-2 10.0.4.55:8301 alive server 1.14.0beta1 2 dc1 default +``` + + + + +Output the token value to your terminal and copy the value to your clipboard. You will use this ACL token to authenticate in the Consul UI. + +```shell-session +$ echo $CONSUL_HTTP_TOKEN +fe0dd5c3-f2e1-81e8-cde8-49d26cee5efc +``` + +Open a separate terminal window and expose the Consul UI with `kubectl port-forward` using the `consul-ui` service name as the target. By default, Consul UI runs on port `6443` when you enable TLS, and port `8500` when TLS is disabled. + +```shell-session +$ kubectl port-forward svc/consul-ui --namespace consul 6443:443 +``` + +Open [https://localhost:6443](https://localhost:6443) in your browser to find the Consul UI. Since this environment uses a self-signed TLS certificate for its resources, click to proceed through the certificate warnings. + +On the left navigation pane, click **Services** to review your deployed services. At this time, you will only find the `consul` service. 
+ +![Consul UI Services Page](/img/k8s/deploy/kubernetes-gs-deploy_consul_ui_services.png 'Consul services page in UI with Consul services') + +By default, the anonymous ACL policy allows you to view the contents of Consul services, nodes, and intentions. To make changes and see more details within the Consul UI, click **Log In** in the top right and insert your bootstrap ACL token. + +![Consul UI Login Page](/img/k8s/deploy/kubernetes-gs-deploy_consul_ui_login.png 'Consul login page in UI') + +After successfully authenticating with your ACL token, you are now able to view additional Consul components and make changes in the UI. Notice you can view and manage more options under the **Access Controls** section on the left navigation pane. + +![Consul UI Post Authentication](/img/k8s/deploy/kubernetes-gs-deploy_consul_ui_post_authentication.png 'Consul UI post authentication') + +On the left navigation pane, click on **Nodes**. + +Agents run in either server or client mode. Server agents store all state information, including service and node IP addresses, health checks, and configuration. Client agents are lightweight processes that make up the majority of the datacenter. They report service health status to the server agents. Clients must run on every pod where services are running. + +![Consul UI Nodes](/img/k8s/deploy/kubernetes-eks-aws-consul_view_nodes.png 'Consul UI Nodes') + + + + +View the list of services registered in Consul. + +```shell-session +$ curl -k \ + --header "X-Consul-Token: $CONSUL_HTTP_TOKEN" \ + $CONSUL_HTTP_ADDR/v1/catalog/services +``` + +Sample output: + +```json hideClipboard +{"consul":[]} +``` + +Agents run in either server or client mode. Server agents store all state information, including service and node IP addresses, health checks, and configuration. Client agents are lightweight processes that make up the majority of the datacenter. They report service health status to the server agents. Clients must run on every pod where services are running. + +View the list of server and client Consul agents in your environment. + +```shell-session +$ curl -k \ + --header "X-Consul-Token: $CONSUL_HTTP_TOKEN" \ + $CONSUL_HTTP_ADDR/v1/agent/members\?pretty +``` + +Sample output: + +```json hideClipboard +[ + { + "Name": "consul-server-0", + "Addr": "10.244.0.13", + "Port": 8301, + "Tags": { + "acls": "1", + "bootstrap": "1", + "build": "1.14.0", + "dc": "dc1", + "ft_fs": "1", + "ft_si": "1", + "grpc_port": "8502", + "id": "8016fc4d-767f-8552-b018-0812228bd135", + "port": "8300", + "raft_vsn": "3", + "role": "consul", + "segment": "", + "use_tls": "1", + "vsn": "2", + "vsn_max": "3", + "vsn_min": "2", + "wan_join_port": "8302" + }, + "Status": 1, + "ProtocolMin": 1, + "ProtocolMax": 5, + "ProtocolCur": 2, + "DelegateMin": 2, + "DelegateMax": 5, + "DelegateCur": 4 + } + ## ... +] +``` + + + + +All services listed in your Consul catalog are empowered with Consul's service discovery capabilities that simplify scalability challenges and improve application resiliency. Review the [Service Discovery overview](/consul/docs/use-case/service-discovery) page to learn more. + +## Next steps + +You learned how to deploy a Consul datacenter onto an EKS cluster. After deploying Consul, you interacted with Consul using the CLI, UI, and API. + +To learn more about deployment best practices, review the [Kubernetes reference architecture tutorial](/consul/tutorials/kubernetes/kubernetes-reference-architecture). 
diff --git a/website/content/docs/deploy/server/k8s/platform/gke.mdx b/website/content/docs/deploy/server/k8s/platform/gke.mdx new file mode 100644 index 000000000000..f0eadda1da07 --- /dev/null +++ b/website/content/docs/deploy/server/k8s/platform/gke.mdx @@ -0,0 +1,324 @@ +--- +layout: docs +page_title: Deploy Consul on GKE +description: >- + Deploy Consul on GKE and learn how to manage your Consul datacenter with the Consul CLI, UI, and API. +--- + +# Deploy Consul on GKE + +This topic describes how to deploy a Consul datacenter to a Google Kubernetes Engine (GKE) cluster on Google Cloud. After deploying Consul, you will interact with Consul using the CLI, UI, and/or API. + +## Prerequisites + +To deploy Consul on GKE, you will need: + +- A GCP account with the ability to create a Kubernetes cluster +- [Google Cloud CLI](https://cloud.google.com/sdk/docs/downloads-interactive) +- [kubectl >= 1.21](https://kubernetes.io/docs/tasks/tools/install-kubectl/) +- [consul >= 1.14.0](/consul/downloads/) +- [Helm v3.6+](https://helm.sh/docs/helm/helm_install/) +- [consul-k8s v1.0.2+](/consul/docs/reference/cli/consul-k8s#install-the-cli) + +### Initialize Google Cloud CLI + +Initialize the Google Cloud CLI. + +```shell-session +$ gcloud init +``` + +### Service account authentication (optional) + +We recommend that you create a GCP IAM service account and authenticate with it on the command line. + +- To review the GCP IAM service account documentation, go [here](https://cloud.google.com/sdk/gcloud/reference/auth/activate-service-account) +- To interact with GCP IAM service accounts, go [here](https://console.cloud.google.com/iam-admin/serviceaccounts) + +Once you have obtained your GCP IAM service account `key-file`, authenticate `gcloud`. + +```shell-session +$ gcloud auth activate-service-account --key-file="" +``` + +### Create a GKE cluster + +You need at least a three node GKE cluster to deploy Consul using the official Consul Helm chart or the Consul K8S CLI. Create a three node cluster on GKE by following the [GKE documentation](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-cluster). + +### Configure kubectl to talk to your cluster + +Configure `kubectl` to connect to your Kubernetes cluster. This command downloads credentials and configures the Kubernetes CLI to use them. Replace `my-consul-cluster`, `us-west1-b`, and `my-project` with your cluster name, zone, and project name. + +```shell-session +$ gcloud container clusters get-credentials my-consul-cluster --zone us-west1-b --project my-project +``` + +Verify you are connected to your Kubernetes cluster. + +```shell-session +$ kubectl cluster-info +Kubernetes master is running at https:// +GLBCDefaultBackend is running at https:///api/v1/namespaces/kube-system/services/default-http-backend:http/proxy +Heapster is running at https:///api/v1/namespaces/kube-system/services/heapster/proxy +KubeDNS is running at https:///api/v1/namespaces/kube-system/services/kube-dns:dns/proxy +Metrics-server is running at https:///api/v1/namespaces/kube-system/services/https:metrics-server:/proxy + +To further debug and diagnose cluster problems, use `kubectl cluster-info dump`. +``` + +## Deploy Consul + +You can deploy a complete Consul datacenter using the official Consul Helm chart or the Consul K8S CLI. By default, these methods will install a total of three Consul servers. You can review the Consul Kubernetes installation [documentation](/consul/docs/deploy/server/k8s/helm) to learn more about these installation options.
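+If you plan to install with Helm and have not already added the HashiCorp Helm repository, add and update it first. These commands assume your workstation can reach the public HashiCorp chart repository.
+
+```shell-session
+$ helm repo add hashicorp https://helm.releases.hashicorp.com
+$ helm repo update
+```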
+ +### Create a values file + +To customize your deployment, create a `values.yaml` file to customization your Consul deployment. + + + +```yaml +# Contains values that affect multiple components of the chart. +global: + # The main enabled/disabled setting. + # If true, servers, clients, Consul DNS and the Consul UI will be enabled. + enabled: true + # The prefix used for all resources created in the Helm chart. + name: consul + # The name of the datacenter that the agents should register as. + datacenter: dc1 + # Enables TLS across the cluster to verify authenticity of the Consul servers and clients. + tls: + enabled: true + # Enables ACLs across the cluster to secure access to data and APIs. + acls: + # If true, automatically manage ACL tokens and policies for all Consul components. + manageSystemACLs: true +# Configures values that configure the Consul server cluster. +server: + enabled: true + # The number of server agents to run. This determines the fault tolerance of the cluster. + replicas: 3 +# Contains values that configure the Consul UI. +ui: + enabled: true + # Registers a Kubernetes Service for the Consul UI as a LoadBalancer. + service: + type: LoadBalancer +# Configures and installs the automatic Consul Connect sidecar injector. +connectInject: + enabled: true +``` + + + +### Install Consul in your cluster + +You can now deploy a complete Consul datacenter in your Kubernetes cluster using the official Consul Helm chart or the Consul K8S CLI. + + + + +Install Consul onto the AKS cluster with `consul-k8s`. Confirm the installation with a `y` when prompted. + +```shell-session +$ consul-k8s install -config-file=values.yaml -set global.image=hashicorp/consul:1.14.0 +``` + +Review the official [Consul K8S CLI documentation](/consul/docs/reference/cli/consul-k8s) to learn more about additional settings. + + + + +Install Consul onto the kind cluster with Helm. Confirm the installation with a `y` when prompted. + +```shell-session +$ helm install --values values.yaml consul hashicorp/consul --create-namespace --namespace consul --version "1.0.0" +``` + +Review the official [Helm chart values](/consul/docs/reference/k8s/helm#configuration-values) to learn more about the default settings. + + + + +Verify the Consul resources were successfully created. + +```shell-session +$ kubectl get pods --namespace consul +NAME READY STATUS RESTARTS AGE +consul-connect-injector-6fc8d669b8-2n82l 1/1 Running 0 2m34s +consul-connect-injector-6fc8d669b8-9mqfm 1/1 Running 0 2m34s +consul-controller-554c7f79c4-2xc64 1/1 Running 0 2m34s +consul-server-0 1/1 Running 0 2m34s +consul-server-1 1/1 Running 0 2m34s +consul-server-2 1/1 Running 0 2m34s +consul-webhook-cert-manager-64889c4964-wxc9b 1/1 Running 0 2m34s +``` + +## Configure your CLI to interact with Consul cluster + +In this section, you will set environment variables in your terminal so your Consul CLI can interact with your Consul cluster. The Consul CLI reads these environment variables for behavior defaults and will reference these values when you run `consul` commands. + +Tokens are artifacts in the ACL system used to authenticate users, services, and Consul agents. Since ACLs are enabled in this Consul datacenter, entities requesting access to a resource must include a token that is linked with a policy, service identity, or node identity that grants permission to the resource. The ACL system checks the token and grants or denies access to resources based on the associated permissions. 
A bootstrap token has unrestricted privileges to all resources and APIs. + +Retrieve the ACL bootstrap token from the respective Kubernetes secret and set it as an environment variable. + +```shell-session +$ export CONSUL_HTTP_TOKEN=$(kubectl get --namespace consul secrets/consul-bootstrap-acl-token --template={{.data.token}} | base64 -d) +``` + +Set the Consul destination address. + +```shell-session +$ export CONSUL_HTTP_ADDR=https://$(kubectl get services/consul-ui --namespace consul -o jsonpath='{.status.loadBalancer.ingress[0].ip}') +``` + +Remove SSL verification checks to simplify communication to your Consul cluster. + +```shell-session +$ export CONSUL_HTTP_SSL_VERIFY=false +``` + + + + In a production environment, we recommend keeping this SSL verification set to `true`. Only remove this verification for if you have a Consul cluster without TLS configured in development environment and demonstration purposes. + + + +## View Consul services + +In this section, you will view your Consul services with the CLI, UI, and/or API to explore the details of your service mesh. + + + + +Run the CLI command `consul catalog services` to return the list of services registered in Consul. Notice this returns only the `consul` service since it is the only running service in your Consul cluster. + +```shell-session +$ consul catalog services +consul +``` + +Agents run in either server or client mode. Server agents store all state information, including service and node IP addresses, health checks, and configuration. Client agents are lightweight processes that make up the majority of the datacenter. They report service health status to the server agents. Clients must run on every pod where services are running. + +Run the CLI command `consul members` to return the list of Consul agents in your environment. + +```shell-session +$ consul members +Node Address Status Type Build Protocol DC Partition Segment +consul-server-0 10.0.4.117:8301 alive server 1.14.0beta1 2 dc1 default +consul-server-1 10.0.5.11:8301 alive server 1.14.0beta1 2 dc1 default +consul-server-2 10.0.4.55:8301 alive server 1.14.0beta1 2 dc1 default +``` + + + + +Output the token value to your terminal and copy the value to your clipboard. You will use this ACL token to authenticate in the Consul UI. + +```shell-session +$ echo $CONSUL_HTTP_TOKEN +fe0dd5c3-f2e1-81e8-cde8-49d26cee5efc +``` + +Open a separate terminal window and expose the Consul UI with `kubectl port-forward` using the `consul-ui` service name as the target. By default, Consul UI runs on port `6443` when you enable TLS, and port `8500` when TLS is disabled. + +```shell-session +$ kubectl port-forward svc/consul-ui --namespace consul 6443:443 +``` + +Open [https://localhost:6443](https://localhost:6443) in your browser to find the Consul UI. Since this environment uses a self-signed TLS certificate for its resources, click to proceed through the certificate warnings. + +On the left navigation pane, click **Services** to review your deployed services. At this time, you will only find the `consul` service. + +![Consul UI Services Page](/img/k8s/deploy/kubernetes-gs-deploy_consul_ui_services.png 'Consul services page in UI with Consul services') + +By default, the anonymous ACL policy allows you to view the contents of Consul services, nodes, and intentions. To make changes and see more details within the Consul UI, click **Log In** in the top right and insert your bootstrap ACL token. 
+ +![Consul UI Login Page](/img/k8s/deploy/kubernetes-gs-deploy_consul_ui_login.png 'Consul login page in UI') + +After successfully authenticating with your ACL token, you are now able to view additional Consul components and make changes in the UI. Notice you can view and manage more options under the **Access Controls** section on the left navigation pane. + +![Consul UI Post Authentication](/img/k8s/deploy/kubernetes-gs-deploy_consul_ui_post_authentication.png 'Consul UI post authentication') + +On the left navigation pane, click on **Nodes**. + +Agents run in either server or client mode. Server agents store all state information, including service and node IP addresses, health checks, and configuration. Client agents are lightweight processes that make up the majority of the datacenter. They report service health status to the server agents. Clients must run on every pod where services are running. + +![Consul UI Nodes](/img/k8s/deploy/kubernetes-eks-aws-consul_view_nodes.png 'Consul UI Nodes') + + + + +View the list of services registered in Consul. + +```shell-session +$ curl -k \ + --header "X-Consul-Token: $CONSUL_HTTP_TOKEN" \ + $CONSUL_HTTP_ADDR/v1/catalog/services +``` + +Sample output: + +```json hideClipboard +{"consul":[]} +``` + +Agents run in either server or client mode. Server agents store all state information, including service and node IP addresses, health checks, and configuration. Client agents are lightweight processes that make up the majority of the datacenter. They report service health status to the server agents. Clients must run on every pod where services are running. + +View the list of server and client Consul agents in your environment. + +```shell-session +$ curl -k \ + --header "X-Consul-Token: $CONSUL_HTTP_TOKEN" \ + $CONSUL_HTTP_ADDR/v1/agent/members\?pretty +``` + +Sample output: + +```json hideClipboard +[ + { + "Name": "consul-server-0", + "Addr": "10.244.0.13", + "Port": 8301, + "Tags": { + "acls": "1", + "bootstrap": "1", + "build": "1.14.0", + "dc": "dc1", + "ft_fs": "1", + "ft_si": "1", + "grpc_port": "8502", + "id": "8016fc4d-767f-8552-b018-0812228bd135", + "port": "8300", + "raft_vsn": "3", + "role": "consul", + "segment": "", + "use_tls": "1", + "vsn": "2", + "vsn_max": "3", + "vsn_min": "2", + "wan_join_port": "8302" + }, + "Status": 1, + "ProtocolMin": 1, + "ProtocolMax": 5, + "ProtocolCur": 2, + "DelegateMin": 2, + "DelegateMax": 5, + "DelegateCur": 4 + } + ## ... +] +``` + + + + +All services listed in your Consul catalog are empowered with Consul's service discovery capabilities that simplify scalability challenges and improve application resiliency. Review the [Service Discovery overview](/consul/docs/use-case/service-discovery) page to learn more. + +## Next steps + +You learned how to deploy a Consul datacenter onto an Google Kubernetes Engine (GKE) cluster. After deploying Consul, you interacted with Consul using the CLI, UI, and API. + +To learn more about deployment best practices, review the [Kubernetes reference architecture tutorial](/consul/tutorials/kubernetes/kubernetes-reference-architecture). 
\ No newline at end of file diff --git a/website/content/docs/deploy/server/k8s/platform/kind.mdx b/website/content/docs/deploy/server/k8s/platform/kind.mdx new file mode 100644 index 000000000000..e0fab8b47bcd --- /dev/null +++ b/website/content/docs/deploy/server/k8s/platform/kind.mdx @@ -0,0 +1,307 @@ +--- +layout: docs +page_title: Deploy Consul on kind +description: >- + Deploy Consul locally on kind and learn how to manage your Consul datacenter with the Consul CLI, UI, and API. Finally, configure Consul service mesh for services in your Kubernetes cluster. +--- + +# Deploy Consul on kind + +This topic describes how to create a local Kubernetes cluster with `kind`, and deploy a Consul datacenter to your `kind` cluster. After deploying Consul, you will interact with Consul using the CLI, UI, and/or API. + +## Requirements + +To deploy Consul on kind, you will need: + +- [kind >= 0.17.0](https://kind.sigs.k8s.io/docs/user/quick-start/) +- [kubectl >= 1.23](https://kubernetes.io/docs/tasks/tools/install-kubectl/) +- [docker >= 20.0](https://docs.docker.com/get-docker/) +- [consul >= 1.15.1](/consul/install/) +- [Helm v3.6+](https://helm.sh/docs/helm/helm_install/) +- [consul-k8s v1.0.2+](/consul/docs/reference/cli/consul-k8s#install-the-cli) + +## Create a kind cluster + +With `kind`, you can quickly create a local Kubernetes cluster. By default, `kind` names your cluster "kind", but you may name it anything you like by specifying the `--name` option. These instructions assume the cluster is named `dc1`. Refer to the [kind documentation](https://kind.sigs.k8s.io/docs/user/quick-start/#setting-kubernetes-version) for information about how to specify additional parameters using a YAML configuration file. + +```shell-session +$ kind create cluster --name dc1 +Creating cluster "dc1" ... + ✓ Ensuring node image (kindest/node:v1.25.1) 🖼 + ✓ Preparing nodes 📦 + ✓ Writing configuration 📜 + ✓ Starting control-plane 🕹️ + ✓ Installing CNI 🔌 + ✓ Installing StorageClass 💾 +Set kubectl context to "kind-dc1" +You can now use your cluster with: + +kubectl cluster-info --context kind-dc1 + +Have a nice day! 👋 +``` + +## Deploy Consul + +You can deploy a complete Consul datacenter using the official Consul Helm chart or the Consul K8S CLI. The chart comes with reasonable defaults; however, you will override a few values to integrate more easily with `kind` and enable useful features. You can review the Consul Kubernetes installation [documentation](/consul/docs/deploy/server/k8s/helm) to learn more about these installation options. + +### Create a values file + +Create a `values.yaml` file to customize your Consul deployment. + + + +```yaml +# Contains values that affect multiple components of the chart. +global: + # The main enabled/disabled setting. + # If true, servers, clients, Consul DNS and the Consul UI will be enabled. + enabled: true + # The prefix used for all resources created in the Helm chart. + name: consul + # The name of the datacenter that the agents should register as. + datacenter: dc1 + # Enables TLS across the cluster to verify authenticity of the Consul servers and clients. + tls: + enabled: true + # Enables ACLs across the cluster to secure access to data and APIs. + acls: + # If true, automatically manage ACL tokens and policies for all Consul components. + manageSystemACLs: true +# Configures values that configure the Consul server cluster. +server: + enabled: true + # The number of server agents to run.
This determines the fault tolerance of the cluster. + replicas: 1 +# Contains values that configure the Consul UI. +ui: + enabled: true + # Registers a Kubernetes Service for the Consul UI as a NodePort. + service: + type: NodePort +``` + + + +### Install Consul in your cluster + +You can now deploy a complete Consul datacenter in your Kubernetes cluster using the official Consul Helm chart or the Consul K8S CLI. + + + + +Install Consul onto the kind cluster with `consul-k8s`. Confirm the installation with a `y` when prompted. + +```shell-session +$ consul-k8s install -config-file=values.yaml -set global.image=hashicorp/consul:1.15.1 +## ... +Proceed with installation? (y/N) y +``` + +Review the official [Consul K8S CLI documentation](/consul/docs/reference/cli/consul-k8s) to learn more about additional settings. + + + + +Install Consul onto the kind cluster with Helm. Confirm the installation with a `y` when prompted. + +```shell-session +$ helm install --values values.yaml consul hashicorp/consul --create-namespace --namespace consul --version "1.1.0" +``` + + +Review the official [Helm chart values](/consul/docs/reference/k8s/helm#configuration-values) to learn more about the default settings. + + + + +Verify the Consul resources were successfully created. + +```shell-session +$ kubectl get pods --namespace consul +NAME READY STATUS RESTARTS AGE +consul-connect-injector-6fc8d669b8-2n82l 1/1 Running 0 2m34s +consul-server-0 1/1 Running 0 2m34s +consul-webhook-cert-manager-64889c4964-wxc9b 1/1 Running 0 2m34s +``` + +## Configure your CLI to interact with Consul cluster + +In this section, you will set environment variables in your terminal so your Consul CLI can interact with your Consul cluster. The Consul CLI reads these environment variables for behavior defaults and will reference these values when you run `consul` commands. + +Tokens are artifacts in the ACL system used to authenticate users, services, and Consul agents. Since ACLs are enabled in this Consul datacenter, entities requesting access to a resource must include a token that is linked with a policy, service identity, or node identity that grants permission to the resource. The ACL system checks the token and grants or denies access to resources based on the associated permissions. A bootstrap token has unrestricted privileges to all resources and APIs. + +Retrieve the ACL bootstrap token from the respective Kubernetes secret and set it as an environment variable. + +```shell-session +$ export CONSUL_HTTP_TOKEN=$(kubectl get --namespace consul secrets/consul-bootstrap-acl-token --template={{.data.token}} | base64 -d) +``` + +Set the Consul destination address. By default, Consul runs on port `8500` for `http` and `8501` for `https`. + +```shell-session +$ export CONSUL_HTTP_ADDR=https://127.0.0.1:8501 +``` + +Remove SSL verification checks to simplify communication to your Consul cluster. + +```shell-session +$ export CONSUL_HTTP_SSL_VERIFY=false +``` + + + + In a production environment, we recommend keeping this SSL verification set to `true`. Only remove this verification for if you have a Consul cluster without TLS configured in development environment and demonstration purposes. + + + +## View Consul services + +In this section, you will view your Consul services with the CLI, UI, and/or API to explore the details of your service mesh. + + + + +Open a separate terminal window and expose the Consul server with `kubectl port-forward` using the `consul-ui` service name as the target. 
+ +```shell-session +$ kubectl port-forward svc/consul-ui --namespace consul 8501:443 +``` + +In your original terminal, run the CLI command `consul catalog services` to return the list of services registered in Consul. Notice this returns only the `consul` service since it is the only running service in your Consul cluster. + +```shell-session +$ consul catalog services +consul +``` + +Agents run in either server or client mode. Server agents store all state information, including service and node IP addresses, health checks, and configuration. Client agents are lightweight processes that make up the majority of the datacenter. They report service health status to the server agents. Clients must run on every pod where services are running. + +Run the CLI command `consul members` to return the list of Consul agents in your environment. + +```shell-session +$ consul members +Node Address Status Type Build Protocol DC Partition Segment +consul-server-0 10.244.0.12:8301 alive server 1.14.0 2 dc1 default +``` + + + + +Output the token value to your terminal and copy the value to your clipboard. You will use this ACL token to authenticate in the Consul UI. + +```shell-session +$ echo $CONSUL_HTTP_TOKEN +fe0dd5c3-f2e1-81e8-cde8-49d26cee5efc +``` + +Open a separate terminal window and expose the Consul UI with `kubectl port-forward` using the `consul-ui` service name as the target. + +```shell-session +$ kubectl port-forward svc/consul-ui --namespace consul 8501:443 +``` + +Open [https://localhost:8501](https://localhost:8501) in your browser to find the Consul UI. Since this environment uses a self-signed TLS certificate for its resources, click to proceed through the certificate warnings. + +On the left navigation pane, click **Services** to review your deployed services. At this time, you will only find the `consul` service. + +![Consul UI Services Page](/img/k8s/deploy/kubernetes-gs-deploy_consul_ui_services.png 'Consul services page in UI with Consul services') + +By default, the anonymous ACL policy allows you to view the contents of Consul services, nodes, and intentions. To make changes and see more details within the Consul UI, click **Log In** in the top right and insert your bootstrap ACL token. + +![Consul UI Login Page](/img/k8s/deploy/kubernetes-gs-deploy_consul_ui_login.png 'Consul login page in UI') + +After successfully authenticating with your ACL token, you are now able to view additional Consul components and make changes in the UI. Notice you can view and manage more options under the **Access Controls** section on the left navigation pane. + +![Consul UI Post Authentication](/img/k8s/deploy/kubernetes-gs-deploy_consul_ui_post_authentication.png 'Consul UI post authentication') + +On the left navigation pane, click on **Nodes**. + +Agents run in either server or client mode. Server agents store all state information, including service and node IP addresses, health checks, and configuration. Client agents are lightweight processes that make up the majority of the datacenter. They report service health status to the server agents. Clients must run on every pod where services are running. + +![Consul UI Post Authentication](/img/k8s/deploy/kubernetes-gs-deploy_consul_view_nodes.png 'Consul UI post authentication') + + + + +Open a separate terminal window and expose the Consul server with `kubectl port-forward` using the `consul-ui` service name as the target. 
+ +```shell-session +$ kubectl port-forward svc/consul-ui --namespace consul 8501:443 +``` + +In your original terminal, view the list of services registered in Consul. + +```shell-session +$ curl -k \ + --header "X-Consul-Token: $CONSUL_HTTP_TOKEN" \ + $CONSUL_HTTP_ADDR/v1/catalog/services +``` + +Sample output: + +```json hideClipboard +{"consul":[]} +``` + +Agents run in either server or client mode. Server agents store all state information, including service and node IP addresses, health checks, and configuration. Client agents are lightweight processes that make up the majority of the datacenter. They report service health status to the server agents. Clients must run on every pod where services are running. + +View the list of server and client Consul agents in your environment. + +```shell-session +$ curl -k \ + --header "X-Consul-Token: $CONSUL_HTTP_TOKEN" \ + $CONSUL_HTTP_ADDR/v1/agent/members\?pretty +``` + +Sample output: + +```json hideClipboard +[ + { + "Name": "consul-server-0", + "Addr": "10.244.0.13", + "Port": 8301, + "Tags": { + "acls": "1", + "bootstrap": "1", + "build": "1.14.0", + "dc": "dc1", + "ft_fs": "1", + "ft_si": "1", + "grpc_port": "8502", + "id": "8016fc4d-767f-8552-b018-0812228bd135", + "port": "8300", + "raft_vsn": "3", + "role": "consul", + "segment": "", + "use_tls": "1", + "vsn": "2", + "vsn_max": "3", + "vsn_min": "2", + "wan_join_port": "8302" + }, + "Status": 1, + "ProtocolMin": 1, + "ProtocolMax": 5, + "ProtocolCur": 2, + "DelegateMin": 2, + "DelegateMax": 5, + "DelegateCur": 4 + } +] +``` + + + + +All services listed in your Consul catalog are empowered with Consul's service discovery capabilities that simplify scalability challenges and improve application resiliency. Review the [Service Discovery overview](/consul/docs/use-case/service-discovery) page to learn more. + +## Clean up + +Run `kind delete cluster` to clean up your local demo environment. + +```shell-session +$ kind delete cluster --name dc1 +Deleting cluster "dc1" ... +``` \ No newline at end of file diff --git a/website/content/docs/deploy/server/k8s/platform/minikube.mdx b/website/content/docs/deploy/server/k8s/platform/minikube.mdx new file mode 100644 index 000000000000..655cc1140803 --- /dev/null +++ b/website/content/docs/deploy/server/k8s/platform/minikube.mdx @@ -0,0 +1,317 @@ +--- +layout: docs +page_title: Deploy Consul on Minikube +description: >- + Deploy Consul locally on Minikube and learn how to manage your Consul datacenter with the Consul CLI, UI, and API. Finally, configure Consul service mesh for services in your Kubernetes cluster. +--- + +# Deploy Consul on Minikube + +This topic describes how to create a local Kubernetes cluster with `minikube`, and deploy a Consul datacenter to your `minikube` cluster. After deploying Consul, you will interact with Consul using the CLI, UI, and/or API. + +## Requirements + +To deploy Consul on Minikube, you will need: + +- [minkube >= 1.22](https://kubernetes.io/docs/tasks/tools/install-minikube/) +- [kubectl >= 1.22](https://kubernetes.io/docs/tasks/tools/install-kubectl/) +- [docker >= 20.0](https://docs.docker.com/get-docker/) +- [consul >= 1.14.0](/consul/install/) +- [Helm v3.6+](https://helm.sh/docs/helm/helm_install/) +- [consul-k8s v1.0.2+](/consul/docs/reference/cli/consul-k8s#install-the-cli) + +## Create a Minikube cluster + +The following command will create a Kubernetes cluster with `minikube` that sets the cluster name to `dc1`, uses 4GB of memory, and specifies a Kubernetes version of `v1.22.0`. 
+
+```shell-session
+$ minikube start --profile dc1 --memory 4096 --kubernetes-version=v1.22.0
+😄  minikube v1.28.0 on Darwin 12.6
+🎉  minikube 1.25.3 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.25.1
+💡  To disable this notice, run: 'minikube config set WantUpdateNotification false'
+
+✨  Automatically selected the docker driver. Other choices: hyperkit, virtualbox
+👍  Starting control plane node minikube in cluster minikube
+🔥  Creating docker container (CPUs=2, Memory=4096MB) ...
+🐳  Preparing Kubernetes v1.22.0 on Docker 20.10.0 ...
+    ▪ Generating certificates and keys ...
+    ▪ Booting up control plane ...
+    ▪ Configuring RBAC rules ...
+🔎  Verifying Kubernetes components...
+🌟  Enabled addons: storage-provisioner, default-storageclass
+🏄  Done! kubectl is now configured to use "dc1" cluster and "default" namespace by default
+```
+
+Refer to the [minikube documentation](https://minikube.sigs.k8s.io/docs/commands/start/) for information about how to specify additional cluster parameters.
+
+## Deploy Consul
+
+You can deploy a complete Consul datacenter using the official Consul Helm chart or the Consul K8S CLI. The chart comes with reasonable defaults; however, you will override a few values to integrate more easily with `minikube` and enable useful features. You can review the Consul Kubernetes installation [documentation](/consul/docs/deploy/server/k8s/helm) to learn more about these installation options.
+
+### Create a values file
+
+To customize your deployment, create a `values.yaml` file for your Consul deployment.
+
+
+
+```yaml
+# Contains values that affect multiple components of the chart.
+global:
+  # The main enabled/disabled setting.
+  # If true, servers, clients, Consul DNS and the Consul UI will be enabled.
+  enabled: true
+  # The prefix used for all resources created in the Helm chart.
+  name: consul
+  # The name of the datacenter that the agents should register as.
+  datacenter: dc1
+  # Enables TLS across the cluster to verify authenticity of the Consul servers and clients.
+  tls:
+    enabled: true
+  # Enables ACLs across the cluster to secure access to data and APIs.
+  acls:
+    # If true, automatically manage ACL tokens and policies for all Consul components.
+    manageSystemACLs: true
+# Configures values that configure the Consul server cluster.
+server:
+  enabled: true
+  # The number of server agents to run. This determines the fault tolerance of the cluster.
+  replicas: 1
+# Contains values that configure the Consul UI.
+ui:
+  enabled: true
+  # Registers a Kubernetes Service for the Consul UI as a NodePort.
+  service:
+    type: NodePort
+# Configures and installs the automatic Consul Connect sidecar injector.
+connectInject:
+  enabled: true
+```
+
+
+
+### Install Consul in your cluster
+
+You can now deploy a complete Consul datacenter in your Kubernetes cluster using the official Consul Helm chart or the Consul K8S CLI.
+
+
+
+
+
+Install Consul onto the minikube cluster with `consul-k8s`. Confirm the installation with a `y` when prompted.
+
+```shell-session
+$ consul-k8s install -config-file=values.yaml -set global.image=hashicorp/consul:1.14.0
+## ...
+Proceed with installation? (y/N) y
+```
+
+Review the official [Consul K8S CLI documentation](/consul/docs/reference/cli/consul-k8s) to learn more about additional settings.
+
+
+
+
+
+Install Consul onto the minikube cluster with Helm. Confirm the installation with a `y` when prompted.
+ +```shell-session +$ helm install --values values.yaml consul hashicorp/consul --create-namespace --namespace consul --version "1.0.0" +``` + +Review the official [Helm chart values](/consul/docs/reference/k8s/helm#configuration-values) to learn more about the default settings. + + + + +Verify the Consul resources were successfully created. + +```shell-session +$ kubectl get pods --namespace consul +NAME READY STATUS RESTARTS AGE +consul-connect-injector-6fc8d669b8-2n82l 1/1 Running 0 2m34s +consul-connect-injector-6fc8d669b8-9mqfm 1/1 Running 0 2m34s +consul-controller-554c7f79c4-2xc64 1/1 Running 0 2m34s +consul-server-0 1/1 Running 0 2m34s +consul-webhook-cert-manager-64889c4964-wxc9b 1/1 Running 0 2m34s +``` + +## Configure your CLI to interact with Consul cluster + +In this section, you will set environment variables in your terminal so your Consul CLI can interact with your Consul cluster. The Consul CLI reads these environment variables for behavior defaults and will reference these values when you run `consul` commands. + +Tokens are artifacts in the ACL system used to authenticate users, services, and Consul agents. Since ACLs are enabled in this Consul datacenter, entities requesting access to a resource must include a token that is linked with a policy, service identity, or node identity that grants permission to the resource. The ACL system checks the token and grants or denies access to resources based on the associated permissions. A bootstrap token has unrestricted privileges to all resources and APIs. + +Retrieve the ACL bootstrap token from the respective Kubernetes secret and set it as an environment variable. + +```shell-session +$ export CONSUL_HTTP_TOKEN=$(kubectl get --namespace consul secrets/consul-bootstrap-acl-token --template={{.data.token}} | base64 -d) +``` + +Set the Consul destination address. By default, Consul runs on port `8500` for `http` and `8501` for `https`. + +```shell-session +$ export CONSUL_HTTP_ADDR=https://127.0.0.1:8501 +``` + +Remove SSL verification checks to simplify communication to your Consul cluster. + +```shell-session +$ export CONSUL_HTTP_SSL_VERIFY=false +``` + + + + In a production environment, we recommend keeping this SSL verification set to `true`. Only remove this verification for if you have a Consul cluster without TLS configured in development environment and demonstration purposes. + + + +## View Consul services + +In this section, you will view your Consul services with the CLI, UI, and/or API to explore the details of your service mesh. + + + + +Open a separate terminal window and expose the Consul server with `kubectl port-forward` using the `consul-ui` service name as the target. + +```shell-session +$ kubectl port-forward svc/consul-ui --namespace consul 8501:443 +``` + +In your original terminal, run the CLI command `consul catalog services` to return the list of services registered in Consul. Notice this returns only the `consul` service since it is the only running service in your Consul cluster. + +```shell-session +$ consul catalog services +consul +``` + +Agents run in either server or client mode. Server agents store all state information, including service and node IP addresses, health checks, and configuration. Client agents are lightweight processes that make up the majority of the datacenter. They report service health status to the server agents. Clients must run on every pod where services are running. + +Run the CLI command `consul members` to return the list of Consul agents in your environment. 
+ +```shell-session +$ consul members +Node Address Status Type Build Protocol DC Partition Segment +consul-server-0 10.244.0.12:8301 alive server 1.14.0 2 dc1 default +``` + + + + +Output the token value to your terminal and copy the value to your clipboard. You will use this ACL token to authenticate in the Consul UI. + +```shell-session +$ echo $CONSUL_HTTP_TOKEN +fe0dd5c3-f2e1-81e8-cde8-49d26cee5efc +``` + +Open a separate terminal window and expose the Consul UI with `kubectl port-forward` using the `consul-ui` service name as the target. + +```shell-session +$ kubectl port-forward svc/consul-ui --namespace consul 8501:443 +``` + +Open [https://localhost:8501](https://localhost:8501) in your browser to find the Consul UI. Since this environment uses a self-signed TLS certificate for its resources, click to proceed through the certificate warnings. + +On the left navigation pane, click **Services** to review your deployed services. At this time, you will only find the `consul` service. + +![Consul UI Services Page](/img/k8s/deploy/kubernetes-gs-deploy_consul_ui_services.png 'Consul services page in UI with Consul services') + +By default, the anonymous ACL policy allows you to view the contents of Consul services, nodes, and intentions. To make changes and see more details within the Consul UI, click **Log In** in the top right and insert your bootstrap ACL token. + +![Consul UI Login Page](/img/k8s/deploy/kubernetes-gs-deploy_consul_ui_login.png 'Consul login page in UI') + +After successfully authenticating with your ACL token, you are now able to view additional Consul components and make changes in the UI. Notice you can view and manage more options under the **Access Controls** section on the left navigation pane. + +![Consul UI Post Authentication](/img/k8s/deploy/kubernetes-gs-deploy_consul_ui_post_authentication.png 'Consul UI post authentication') + +On the left navigation pane, click on **Nodes**. + +Agents run in either server or client mode. Server agents store all state information, including service and node IP addresses, health checks, and configuration. Client agents are lightweight processes that make up the majority of the datacenter. They report service health status to the server agents. Clients must run on every pod where services are running. + +![Consul UI Post Authentication](/img/k8s/deploy/kubernetes-gs-deploy_consul_view_nodes.png 'Consul UI post authentication') + + + + +Open a separate terminal window and expose the Consul server with `kubectl port-forward` using the `consul-ui` service name as the target. + +```shell-session +$ kubectl port-forward svc/consul-ui --namespace consul 8501:443 +``` + +In your original terminal, view the list of services registered in Consul. + +```shell-session +$ curl -k \ + --header "X-Consul-Token: $CONSUL_HTTP_TOKEN" \ + $CONSUL_HTTP_ADDR/v1/catalog/services +``` + +Sample output: + +```json hideClipboard +{"consul":[]} +``` + +Agents run in either server or client mode. Server agents store all state information, including service and node IP addresses, health checks, and configuration. Client agents are lightweight processes that make up the majority of the datacenter. They report service health status to the server agents. Clients must run on every pod where services are running. + +View the list of server and client Consul agents in your environment. 
+ +```shell-session +$ curl -k \ + --header "X-Consul-Token: $CONSUL_HTTP_TOKEN" \ + $CONSUL_HTTP_ADDR/v1/agent/members\?pretty +``` + +Sample output: + +```json hideClipboard +[ + { + "Name": "consul-server-0", + "Addr": "10.244.0.13", + "Port": 8301, + "Tags": { + "acls": "1", + "bootstrap": "1", + "build": "1.14.0", + "dc": "dc1", + "ft_fs": "1", + "ft_si": "1", + "grpc_port": "8502", + "id": "8016fc4d-767f-8552-b018-0812228bd135", + "port": "8300", + "raft_vsn": "3", + "role": "consul", + "segment": "", + "use_tls": "1", + "vsn": "2", + "vsn_max": "3", + "vsn_min": "2", + "wan_join_port": "8302" + }, + "Status": 1, + "ProtocolMin": 1, + "ProtocolMax": 5, + "ProtocolCur": 2, + "DelegateMin": 2, + "DelegateMax": 5, + "DelegateCur": 4 + } +] +``` + + + + +All services listed in your Consul catalog are empowered with Consul's service discovery capabilities that simplify scalability challenges and improve application resiliency. Review the [Service Discovery overview](/consul/docs/use-case/service-discovery) page to learn more. + +## Clean up + +Run `minikube delete` to clean up your local demo environment. + +```shell-session +$ minikube delete --profile dc1 +🔥 Deleting "dc1" in docker ... +🔥 Deleting container "dc1" ... +🔥 Removing /Users/consul-user/.minikube/machines/dc1 ... +💀 Removed all traces of the "dc1" cluster. +``` \ No newline at end of file diff --git a/website/content/docs/deploy/server/k8s/platform/openshift.mdx b/website/content/docs/deploy/server/k8s/platform/openshift.mdx new file mode 100644 index 000000000000..e9285dee2818 --- /dev/null +++ b/website/content/docs/deploy/server/k8s/platform/openshift.mdx @@ -0,0 +1,417 @@ +--- +layout: docs +page_title: Deploy Consul servers on OpenShift +description: >- + Deploy Consul on OpenShift with the official Helm chart and deploy services into Consul's service mesh. +--- + +# Deploy Consul servers on OpenShift + +This page describes the process to deploy a Consul datacenter to an Openshift Kubernetes cluster. + +## Overview + +[Red Hat OpenShift](https://www.openshift.com/learn/what-is-openshift) is a distribution of the Kubernetes platform that provides a number of usability and security enhancements. The process to deploy Consul on Openshift clusters resembles the process on Kubernetes. + +These instructions begin with the process to deploy a new OpenShift cluster. If you already have an OpenShift cluster running, go to [Deploy Consul](#deploy-consul) to begin Helm chart configurations. + +## Requirements + +To deploy Consul on Openshift, you will need: + +- Access to a Kubernetes cluster deployed with OpenShift +- A Red Hat account +- [Red Hat OpenShift Local 2.49.0](https://console.redhat.com/openshift/create/local) +- [consul v1.20.5+](/consul/install/) +- [Helm v3.17.2+](https://helm.sh/docs/helm/helm_install/) +- [consul-k8s v1.6.3+](/consul/docs/reference/cli/consul-k8s#install-the-cli) + +## Deploy OpenShift + +You can deploy OpenShift on multiple platforms, and there are several installation options available for installing OpenShift on either production and development environments. This guide requires a running OpenShift cluster to deploy Consul on Kubernetes. If an OpenShift cluster is already provisioned in a production or development environment to be used for deploying Consul on Kubernetes, skip ahead to [Deploy Consul](#deploy-consul). + +These instructions use OpenShift Local to provide a pre-configured development OpenShift environment on your local machine. 
OpenShift Local is bundled as a Red Hat Enterprise Linux virtual machine that supports native hypervisors for Linux, macOS, and Windows 10. OpenShift Local is the quickest way to get started building OpenShift clusters. It is designed to run on a local computer to simplify setup and emulate the cloud development environment locally with all the tools needed to develop container-based apps. While we use OpenShift Local, the Consul Helm deployment process works on any OpenShift cluster and is production ready. + + +### OpenShift Local Setup + +After installing OpenShift Local, issue the following command to setup your environment. + +```shell-session +$ crc setup +INFO Using bundle path /Users/hashicorp/.crc/cache/crc_vfkit_4.17.14_arm64.crcbundle +INFO Checking if running macOS version >= 13.x +INFO Checking if running as non-root +INFO Checking if crc-admin-helper executable is cached +INFO Checking if running on a supported CPU architecture +INFO Checking if crc executable symlink exists +INFO Checking minimum RAM requirements +INFO Check if Podman binary exists in: /Users/hashicorp/.crc/bin/oc +INFO Checking if running emulated on Apple silicon +INFO Checking if vfkit is installed +INFO Checking if CRC bundle is extracted in '$HOME/.crc' +INFO Checking if /Users/hashicorp/.crc/cache/crc_vfkit_4.17.14_arm64.crcbundle exists +INFO Getting bundle for the CRC executable +INFO Downloading bundle: /Users/hashicorp/.crc/cache/crc_vfkit_4.17.14_arm64.crcbundle... +4.86 GiB / 4.86 GiB [---------------------------------------] 100.00% 1.81 MiB/s +INFO Uncompressing /Users/hashicorp/.crc/cache/crc_vfkit_4.17.14_arm64.crcbundle +crc.img: 31.00 GiB / 31.00 GiB [-------------------------------------------------------------] 100.00% +oc: 125.37 MiB / 125.37 MiB [----------------------------------------------------------------] 100.00% +INFO Checking if old launchd config for tray and/or daemon exists +INFO Checking if crc daemon plist file is present and loaded +INFO Adding crc daemon plist file and loading it +INFO Checking SSH port availability +Your system is correctly setup for using CRC. Use 'crc start' to start the instance +``` + +### OpenShift Local start + +After the setup is complete, you can start the CRC service that runs OpenShift Local with the following command. The command performs a few system checks to ensure your system meets the minimum requirements and then asks you to provide an image pull secret. [Open your Red Hat account](https://console.redhat.com/openshift/create/local) so that you can easily copy your image pull secret when prompted. + +```shell-session +$ crc start +INFO Using bundle path /Users/hashicorp/.crc/cache/crc_vfkit_4.17.14_arm64.crcbundle +INFO Checking if running macOS version >= 13.x +INFO Checking if running as non-root +INFO Checking if crc-admin-helper executable is cached +INFO Checking if running on a supported CPU architecture +INFO Checking if crc executable symlink exists +INFO Checking minimum RAM requirements +INFO Check if Podman binary exists in: /Users/hashicorp/.crc/bin/oc +INFO Checking if running emulated on Apple silicon +INFO Checking if vfkit is installed +INFO Checking if old launchd config for tray and/or daemon exists +INFO Checking if crc daemon plist file is present and loaded +INFO Checking SSH port availability +INFO Loading bundle: crc_vfkit_4.17.14_arm64... +CRC requires a pull secret to download content from Red Hat. +You can copy it from the Pull Secret section of https://console.redhat.com/openshift/create/local. +? 
Please enter the pull secret +``` + +Next, paste the image pull secret into the terminal, press enter, and wait for the process to complete. + + + +```plain-text +INFO Creating CRC VM for OpenShift 4.17.14... +INFO Generating new SSH key pair... +INFO Generating new password for the kubeadmin user + +##... + +Started the OpenShift cluster. + +The server is accessible via web console at: + https://console-openshift-console.apps-crc.testing + +Log in as administrator: + Username: kubeadmin + Password: + +Log in as user: + Username: developer + Password: developer + +Use the 'oc' command line interface: + $ eval $(crc oc-env) + $ oc login -u developer https://api.crc.testing:6443 +``` + + + +Notice that the output instructs you to configure your `oc-env`, and also includes a login command and secret password. The secret is specific to your installation. Make note of this command. You need it to login to OpenShift Local (CRC) on your development host later. + +### Configure OpenShift Local (CRC) environment + +Next, configure the environment as instructed by CRC using the following command. + +```shell-session +$ eval $(crc oc-env) +``` + +### Login to the OpenShift cluster + +Next, use the login command you made note of before to authenticate with the OpenShift cluster. + +-> **Note** You will have to replace the secret password below with the value output +by OpenShift Local CRC. + +```shell-session +$ oc login -u kubeadmin -p https://api.crc.testing:6443 +Login successful. + +You don't have any projects. You can try to create a new project, by running + + oc new-project +``` + +### Verify configuration + +Validate that your OpenShift Local CRC setup was successful with the following command. + +```shell-session +$ kubectl cluster-info +Kubernetes control plane is running at https://api.crc.testing:6443 + +To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'. +``` + +### Create a new project + +First, create an OpenShift project to install Consul on Kubernetes. Creating an OpenShift project creates a Kubernetes namespace to deploy Kubernetes resources. + +```shell-session +$ oc new-project consul +Now using project "consul" on server "https://api.crc.testing:6443". + +You can add applications to this project with the 'new-app' command. For example, try: + + oc new-app rails-postgresql-example + +to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application: + + kubectl create deployment hello-node --image=registry.k8s.io/e2e-test-images/agnhost:2.43 -- /agnhost serve-hostname +``` + +### Create an image pull secret for a RedHat Registry service account + +You must create an image pull secret before authenticating to the RedHat Registry and pulling images from the container registry. First, create a registry service account on the [RedHat Customer Portal](https://access.redhat.com/terms-based-registry/). Then download and apply the OpenShift secret that is associated with the registry service account. In the following example, the name of the downloaded file was `15490118-openshift-secret.yml`, but your filename will differ. + +```shell-session +$ kubectl create -f 15490118-openshift-secret.yml --namespace=consul +secret/15490118-openshift-secret-secret created +``` + +You receive a confirmation for the creation of a Kubernetes secret, alongside its name. Make a note of this name now. You will use it in the Helm chart values file, specifically in the `imagePullSecrets` stanza. 
In the following example case, the name is `15490118-openshift-secret-secret`. + +## Deploy Consul + +The process to deploy Consul servers on Openshift clusters consists of the following steps: + +1. Configure Helm values. +1. Confirm Helm chart version. +1. Import images from RedHat Catalog. +1. Install Consul with Helm or the `consul-k8s` CLI. + +## Configure Helm values + +To customize your deployment, you can pass a YAML configuration file to be used during the deployment. Any values specified in the values file will override the Helm chart's default settings. The following example file sets the `global.openshift.enabled` entry to true, which is required to operate Consul on OpenShift. + +Create a file named `values.yaml`, and modify it for your deployment. Remember to modify `imagePullSecrets` with to the name generated in the previous step. + + + +```yaml +global: + name: consul + datacenter: dc1 + image: registry.connect.redhat.com/hashicorp/consul:1.20.5-ubi + imagePullSecrets: + - name: + openshift: + enabled: true + +server: + replicas: 1 + bootstrapExpect: 1 + disruptionBudget: + enabled: true + maxUnavailable: 0 + +ui: + enabled: true + +connectInject: + enabled: true + default: true + cni: + enabled: true + logLevel: info + multus: true + cniBinDir: /var/lib/cni/bin + cniNetDir: /etc/kubernetes/cni/net.d + +``` + + + +## Verify chart version + +Search your local repo for the Consul Helm chart. + +```shell-session +$ helm search repo hashicorp/consul +NAME CHART VERSION APP VERSION DESCRIPTION +hashicorp/consul 1.6.3 1.20.5 Official HashiCorp Consul Chart +``` + +If the correct version is not displayed in the output, try +updating your helm repo. + +```shell-session +$ helm repo update +Hang tight while we grab the latest from your chart repositories... +...Successfully got an update from the "hashicorp" chart repository +Update Complete. ⎈Happy Helming!⎈ +``` + +## Import images from RedHat Catalog + +Instead of pulling images directly from the RedHat Registry, Consul and Consul on Kubernetes images could also be pre-loaded onto the internal OpenShift registry using the `oc import` command. Read more about [importing images into the internal OpenShift Registry in the RedHat OpenShift cookbook](https://cookbook.openshift.org/image-registry-and-image-streams/how-do-i-import-an-image-from-an-external-image.html). + +```shell-session +$ oc import-image hashicorp/consul:1.20.5-ubi --from=registry.connect.redhat.com/hashicorp/consul:1.20.5-ubi --confirm +imagestream.image.openshift.io/consul imported + + +``` + +```shell-session +$ oc import-image hashicorp/consul-k8s-control-plane:1.6.1-ubi --from=registry.connect.redhat.com/hashicorp/consul-k8s-control-plane:1.6.1-ubi --confirm +imagestream.image.openshift.io/consul-k8s-control-plane imported + + +``` + +```shell-session +$ oc import-image hashicorp/consul-dataplane:1.6.1-ubi --from=registry.connect.redhat.com/hashicorp/consul-dataplane:1.6.1-ubi --confirm +imagestream.image.openshift.io/consul-dataplane imported + + +``` + +## Install Consul in your cluster + +You can now deploy a complete Consul datacenter in your Kubernetes cluster using the official Consul Helm chart or the Consul K8S CLI. + + + + +Now, issue the `helm install` command. 
The following command specifies that the +installation should: + +- Use the custom values file you created earlier +- Use the `hashicorp/consul` chart you downloaded in the last step +- Set your Consul installation name to `consul` +- Create Consul resources in the `consul` namespace +- Use `consul-helm` chart version `1.6.3` + + +```shell-session +$ helm install consul hashicorp/consul --values values.yaml --create-namespace --namespace consul --version "1.6.3" --wait +``` + +The output will be similar to the following. + +```shell-session +NAME: consul +LAST DEPLOYED: Wed Sep 28 11:00:16 2022 +NAMESPACE: consul +STATUS: deployed +REVISION: 1 +NOTES: +Thank you for installing HashiCorp Consul! + +Your release is named consul. + +To learn more about the release, run: + + $ helm status consul + $ helm get all consul + +Consul on Kubernetes Documentation: +https://developer.hashicorp.com/docs/platform/k8s + +Consul on Kubernetes CLI Reference: +https://developer.hashicorp.com/docs/reference/k8s/consul-k8s-cli +``` + + + + +Consul K8s CLI is a tool for quickly installing and interacting with Consul on Kubernetes. Ensure that you are installing the correct version of the CLI for your Consul on Kubernetes deployment, as the CLI and the control plane are version dependent. + +To get started with the Consul K8s CLI, follow the instructions [to install the CLI to your local system](/consul/docs/k8s/installation/install-cli#install-the-cli). Refer to the [Consul K8S CLI documentation](/consul/docs/reference/cli/consul-k8s) to learn more about additional settings. + +```shell-session +$ consul-k8s install -config-file=values.yaml +``` + +When Consul is installed successfully, expect the following output: + +```shell-session +==> Installing Consul + ✓ Downloaded charts + --> creating 1 resource(s) + --> creating 46 resource(s) + --> beginning wait for 46 resources with timeout of 10m0s + +... + + ✓ Consul installed in namespace "consul". +``` + + + + +## Verify installation + +Use `kubectl get pods` to verify your installation. + +```shell-session +$ watch kubectl get pods --namespace consul +NAME READY STATUS RESTARTS AGE +consul-cni-45fgb 1/1 Running 0 3m +consul-connect-injector-574799b944-n6jf6 1/1 Running 0 3m +consul-connect-injector-574799b944-xvksv 1/1 Running 0 3m +consul-server-0 1/1 Running 0 3m +consul-webhook-cert-manager-74467cdd8d-88m6j 1/1 Running 0 3m +``` + +Once all pods have a status of `Running`, enter `CTRL-C` to stop the watch. + +## Access the Consul UI + +After you deploy Consul, you can access the Consul UI to verify that the Consul installation was successful, and that the environment is healthy. + +### Expose the UI service to the host + +Since the application is running on your local development host, you can expose +the Consul UI to the development host using `kubectl port-forward`. The UI and the +HTTP API Server run on the `consul-server-0` pod. Issue the following command to +expose the server endpoint at port `8500` to your local development host. + +```shell-session +$ kubectl port-forward consul-server-0 --namespace consul 8500:8500 +Forwarding from 127.0.0.1:8500 -> 8500 +Forwarding from [::1]:8500 -> 8500 +``` + +Open [`http://localhost:8500`](http://localhost:8500) in a new browser tab. + +## Access Consul with the CLI and HTTP API + +To access Consul with the CLI, set the `CONSUL_HTTP_ADDR` following environment variable on the development host so that the Consul CLI knows which Consul server to interact with. 
+
+```shell-session
+$ export CONSUL_HTTP_ADDR=http://127.0.0.1:8500
+```
+
+You should be able to issue the `consul members` command to view all available
+Consul datacenter members.
+
+```shell-session
+$ consul members
+Node                Address            Status  Type    Build   Protocol  DC   Partition  Segment
+consul-server-0     10.217.0.106:8301  alive   server  1.20.5  2         dc1  default
+crc-dzk9v-master-0  10.217.0.104:8301  alive   client  1.20.5  2         dc1  default
+```
+
+You can use the same URL to make HTTP API requests with your custom code.
+
+## Next steps
+
+The Consul server you created is not ready for production. The chart was installed with an insecure configuration of Consul that you still need to secure. Refer to [Secure Consul](/consul/docs/secure/) to learn how you can secure Consul servers for use in production environments.
+
+For more information on the Consul Helm chart configuration options, review the [Consul Helm chart reference documentation](/consul/docs/reference/k8s/helm).
\ No newline at end of file
diff --git a/website/content/docs/deploy/server/k8s/platform/self-hosted.mdx b/website/content/docs/deploy/server/k8s/platform/self-hosted.mdx
new file mode 100644
index 000000000000..e126bbd6aaa8
--- /dev/null
+++ b/website/content/docs/deploy/server/k8s/platform/self-hosted.mdx
@@ -0,0 +1,43 @@
+---
+layout: docs
+page_title: Deploy Consul on self-hosted Kubernetes
+description: >-
+  The process for installing Consul on Kubernetes is the same as installing it on cloud-hosted k8s platforms, but requires additional configuration. Learn how to pre-define Persistent Volume Claims (PVCs) and a default storage class for server agents.
+---
+
+# Deploy Consul on self-hosted Kubernetes
+
+This topic describes how to deploy a Consul datacenter to a self-hosted Kubernetes cluster. The instructions are the same as for cloud-hosted Kubernetes, but you may need to pre-define Persistent Volume Claims (PVCs) and a default storage class for server agents.
+
+Refer to [Deploy Consul with Helm](/consul/docs/deploy/server/k8s/helm) and [Deploy Consul with `consul-k8s`](/consul/docs/deploy/server/k8s/consul-k8s) for the general installation process.
+
+## Predefined persistent volume claims (PVCs)
+
+If running a self-hosted Kubernetes installation, you may need to pre-create the persistent volumes for the stateful set that the Consul servers run in.
+
+The only way to use pre-created PVCs is to name them in the format Kubernetes expects.
+
+```text
+data-<kubernetes namespace>-<helm release name>-consul-server-<ordinal>
+```
+
+The Kubernetes namespace you are installing into, Helm release name, and ordinal must match between your Consul servers and your pre-created PVCs. You only need as many PVCs as you have Consul servers. For example, given a Kubernetes namespace of "vault," a release name of "consul," and 5 servers, you would need to create PVCs with the following names.
+
+```text
+data-vault-consul-consul-server-0
+data-vault-consul-consul-server-1
+data-vault-consul-consul-server-2
+data-vault-consul-consul-server-3
+data-vault-consul-consul-server-4
+```
+
+## Storage class
+
+Your Kubernetes installation must either have a default storage class specified (refer to [Storage classes](https://kubernetes.io/docs/concepts/storage/storage-classes/) and [Change default storage class](https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/)) or you must specify the storage class for the Consul servers.
+ +```yaml +server: + storageClass: your-class +``` + +Refer tothe [Helm reference](/consul/docs/reference/k8s/helm#v-server-storageclass) for more information. \ No newline at end of file diff --git a/website/content/docs/deploy/server/k8s/requirements.mdx b/website/content/docs/deploy/server/k8s/requirements.mdx new file mode 100644 index 000000000000..83640b06dc1f --- /dev/null +++ b/website/content/docs/deploy/server/k8s/requirements.mdx @@ -0,0 +1,50 @@ +--- +layout: docs +page_title: Consul on Kubernetes Version Compatibility +description: >- + New releases require corresponding version updates to Consul on Kubernetes and its Helm chart. Review the compatibility matrix for Consul and consul-k8s and additional notes for integrating Vault and third-party platforms. +--- + +# Consul on Kubernetes Version Compatibility + +For every release of Consul on Kubernetes, a Helm chart, `consul-k8s-control-plane` binary and a `consul-k8s` CLI binary is built and distributed through a single version. When deploying via Helm, the recommended best path for upgrading Consul on Kubernetes, is to upgrade using the same `consul-k8s-control-plane` version as the Helm Chart, as the Helm Chart and Control Plane binary are tightly coupled. + +## Supported Consul and Kubernetes versions + +Consul Kubernetes versions all of its components (`consul-k8s` CLI, `consul-k8s-control-plane`, and Helm chart) with a single semantic version. When installing or upgrading to a specific versions, ensure that you are using the correct Consul version with the compatible Helm chart or `consul-k8s` CLI. + +| Consul version | Compatible `consul-k8s` versions | Compatible Kubernetes versions | Compatible OpenShift versions | +| -------------- | -------------------------------- | -------------------------------| -------------------------------------- | +| 1.17.x | 1.3.x | 1.25.x - 1.28.x | 4.12.x - 4.14.x (4.15.x not available) | +| 1.16.x | 1.2.x | 1.24.x - 1.27.x | 4.11.x - 4.14.x | +| 1.15.x | 1.1.x | 1.23.x - 1.26.x | 4.10.x - 4.13.x | + +### Version-specific upgrade requirements + +As of Consul v1.14.0, Kubernetes deployments use [Consul Dataplane](/consul/docs/architecture/control-plane/dataplane) instead of client agents. If you upgrade Consul from a version that uses client agents to a version that uses dataplanes, you must follow specific steps to update your Helm chart and remove client agents from the existing deployment. Refer to [Upgrading to Consul Dataplane](/consul/docs/upgrade/k8s#upgrading-to-consul-dataplane) for more information. + +The v1.0.0 release of the Consul on Kubernetes Helm chart also introduced a change to the [`externalServers[].hosts` parameter](/consul/docs/reference/k8s/helm#v-externalservers-hosts). Previously, you were able to enter a provider lookup as a string in this field. Now, you must include `exec=` at the start of a string containing a provider lookup. Otherwise, the string is treated as a DNS name. Refer to the [`go-netaddrs`](https://github.com/hashicorp/go-netaddrs) library and command line tool for more information. + +## Supported Envoy versions + +Supported versions of Envoy and `consul-dataplane` (for Consul K8s 1.0 and above) for Consul versions are also found in [Envoy - Supported Versions](/consul/docs/reference/proxy/envoy#supported-versions). Starting with `consul-k8s` 1.0, `consul-dataplane` will include a bundled version of Envoy. 
The recommended best practice is to use the default version of Envoy or `consul-dataplane` that is provided in the Helm `values.yaml` file, as that is the version that has been tested with the default Consul and Consul Kubernetes binaries for a given Helm chart. + +## Vault as a Secrets Backend compatibility + +Starting with Consul K8s 0.39.0 and Consul 1.11.x, Consul Kubernetes supports the ability to utilize Vault as the secrets backend for all the secrets utilized by Consul on Kubernetes. + +| `consul-k8s` Versions | Compatible Vault Versions | Compatible `vault-k8s` Versions | +| ------------------------ | --------------------------| ----------------------------- | +| 0.39.0 - latest | 1.9.0 - latest | 0.14.0 - latest | + +## Platform specific compatibility notes + +### Red Hat OpenShift + +You can enable support for Red Hat OpenShift by setting `enabled: true` in the `global.openshift` stanza. Refer to the [Deploy Consul on RedHat OpenShift tutorial](/consul/deploy/server/k8s/platform/openshift) for instructions on deploying to OpenShift. + +### VMware Tanzu Kubernetes Grid and Tanzu Kubernetes Grid Integrated Edition + +Consul Kubernetes is [certified](https://marketplace.cloud.vmware.com/services/details/hashicorp-consul-1?slug=true) for both VMware Tanzu Kubernetes Grid, and VMware Tanzu Kubernetes Integrated Edition. + +- Tanzu Kubernetes Grid is certified for version 1.3.0 and above. Only Calico is supported as the CNI Plugin. \ No newline at end of file diff --git a/website/content/docs/deploy/server/k8s/uninstall.mdx b/website/content/docs/deploy/server/k8s/uninstall.mdx new file mode 100644 index 000000000000..aed19e7c919e --- /dev/null +++ b/website/content/docs/deploy/server/k8s/uninstall.mdx @@ -0,0 +1,115 @@ +--- +layout: docs +page_title: Uninstall Consul on Kubernetes +description: >- + You can use the Consul-K8s CLI tool to remove all or part of a Consul installation on Kubernetes. You can also use Helm and then manually remove resources that Helm does not delete. +--- + +# Uninstall Consul on Kubernetes + +You can uninstall Consul using Helm commands or the Consul K8s CLI. + +## Consul K8s CLI + +Issue the `consul-k8s uninstall` command to remove Consul on Kubernetes. You can specify the installation name, namespace, and data retention behavior using the applicable options. By default, the uninstall preserves the secrets and PVCs that are provisioned by Consul on Kubernetes. + +```shell-session +$ consul-k8s uninstall +``` + +In the following example, Consul will be uninstalled and the data removed without prompting you to verify the operations: + +```shell-session +$ consul-k8s uninstall -auto-approve=true -wipe-data=true +``` + +Refer to the [Consul K8s CLI reference](/consul/docs/reference/cli/consul-k8s#uninstall) topic for details. + +## Helm commands + +Run the `helm uninstall` **and** manually remove resources that Helm does not delete. + +1. Although the Helm chart automates the deletion of CRDs upon uninstall, sometimes the finalizers tied to those CRDs may not complete because the deletion of the CRDs rely on the Consul K8s controller running. Ensure that previously created CRDs for Consul on Kubernetes are deleted, so subsequent installs of Consul on Kubernetes on the same Kubernetes cluster do not get blocked. + + ```shell-session + $ kubectl delete crd --selector app=consul + ``` + +1. (Optional) If Consul is installed in a dedicated namespace, set the kubeConfig context to the `consul` namespace. 
Otherwise, subsequent commands will need to include `--namespace consul`. + + ```shell-session + $ kubectl config set-context --current --namespace=consul + ``` + +1. Run the `helm uninstall ` command and specify the release name you've installed Consul with, e.g.,: + + ```shell-session + $ helm uninstall consul + release "consul" uninstalled + ``` + +1. After deleting the Helm release, you need to delete the `PersistentVolumeClaim`'s + for the persistent volumes that store Consul's data. A [bug](https://github.com/helm/helm/issues/5156) in Helm prevents PVCs from being deleted. Issue the following commands: + + ```shell-session + $ kubectl get pvc --selector="chart=consul-helm" + NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE + data-default-hashicorp-consul-server-0 Bound pvc-32cb296b-1213-11ea-b6f0-42010a8001db 10Gi RWO standard 17m + data-default-hashicorp-consul-server-1 Bound pvc-32d79919-1213-11ea-b6f0-42010a8001db 10Gi RWO standard 17m + data-default-hashicorp-consul-server-2 Bound pvc-331581ea-1213-11ea-b6f0-42010a8001db 10Gi RWO standard 17m + + $ kubectl delete pvc --selector="chart=consul-helm" + persistentvolumeclaim "data-default-hashicorp-consul-server-0" deleted + persistentvolumeclaim "data-default-hashicorp-consul-server-1" deleted + persistentvolumeclaim "data-default-hashicorp-consul-server-2" deleted + ``` + + ~> **NOTE:** This will delete **all** data stored in Consul and it can't be + recovered unless you've taken other backups. + +1. If installing with ACLs enabled, you will need to then delete the ACL secrets: + + ```shell-session + $ kubectl get secrets --field-selector="type=Opaque" | grep consul + consul-acl-replication-acl-token Opaque 1 41m + consul-bootstrap-acl-token Opaque 1 41m + consul-client-acl-token Opaque 1 41m + consul-connect-inject-acl-token Opaque 1 37m + consul-controller-acl-token Opaque 1 37m + consul-federation Opaque 4 41m + consul-mesh-gateway-acl-token Opaque 1 41m + ``` + +1. Ensure that the secrets you're about to delete are all created by Consul and not + created by another user with the word `consul`. + + ```shell-session + $ kubectl get secrets --field-selector="type=Opaque" | grep consul | awk '{print $1}' | xargs kubectl delete secret + secret "consul-acl-replication-acl-token" deleted + secret "consul-bootstrap-acl-token" deleted + secret "consul-client-acl-token" deleted + secret "consul-connect-inject-acl-token" deleted + secret "consul-controller-acl-token" deleted + secret "consul-federation" deleted + secret "consul-mesh-gateway-acl-token" deleted + secret "consul-gossip-encryption-key" deleted + ``` + +1. If installing with `tls.enabled` then, run the following commands to delete the `ServiceAccount` left behind: + + ```shell-session + $ kubectl get serviceaccount consul-tls-init + NAME SECRETS AGE + consul-tls-init 1 47m + ``` + + ```shell-session + $ kubectl delete serviceaccount consul-tls-init + serviceaccount "consul-tls-init" deleted + ``` + +1. (Optional) Delete the namespace (i.e. `consul` in the following example) that you have dedicated for installing Consul on Kubernetes. 
+ + ```shell-session + $ kubectl delete ns consul + ``` \ No newline at end of file diff --git a/website/content/docs/deploy/server/k8s/vault/backend.mdx b/website/content/docs/deploy/server/k8s/vault/backend.mdx new file mode 100644 index 000000000000..f803aeee3f0a --- /dev/null +++ b/website/content/docs/deploy/server/k8s/vault/backend.mdx @@ -0,0 +1,219 @@ +--- +layout: docs +page_title: Vault as secrets backend — System integrations +description: >- + Overview of the systems integration aspects to using Vault as the secrets backend for Consul on Kubernetes. +--- + +# Vault as secrets backend — System integrations + +Integrating Vault with Consul on Kubernetes includes a one-time setup on Vault and setting up the secrets backend for each Consul datacenter via Helm. + +Complete the following steps once: + - Enabling Vault KV Secrets Engine - Version 2 to store arbitrary secrets + - Enabling Vault PKI Engine if you are choosing to store and manage either [Consul Server TLS credentials](/consul/docs/deploy/server/k8s/vault/data/tls-certificate) or [Service Mesh and Consul client TLS credentials](/consul/docs/deploy/server/k8s/vault/data/tls-certificate) + +Repeat the following steps for each datacenter in the cluster: + - Installing the Vault Injector within the Consul datacenter installation + - Configuring a Kubernetes Auth Method in Vault to authenticate and authorize operations from the Consul datacenter + - Enable Vault as the Secrets Backend in the Consul datacenter + +Please read [Run Vault on Kubernetes](/vault/docs/platform/reference/k8s/helm/run) if instructions on setting up a Vault cluster are needed. + +## Vault KV Secrets Engine - Version 2 + +The following secrets can be stored in Vault KV secrets engine, which is meant to handle arbitrary secrets: +- ACL Bootstrap token ([`global.acls.bootstrapToken`](/consul/docs/reference/k8s/helm#v-global-acls-bootstraptoken)) +- ACL Partition token ([`global.acls.partitionToken`](/consul/docs/reference/k8s/helm#v-global-acls-partitiontoken)) +- ACL Replication token ([`global.acls.replicationToken`](/consul/docs/reference/k8s/helm#v-global-acls-replicationtoken)) +- Gossip encryption key ([`global.gossipEncryption`](/consul/docs/reference/k8s/helm#v-global-gossipencryption)) +- Enterprise license ([`global.enterpriseLicense`](/consul/docs/reference/k8s/helm#v-global-enterpriselicense)) +- Snapshot Agent config ([`client.snapshotAgent.configSecret`](/consul/docs/reference/k8s/helm#v-client-snapshotagent-configsecret)) + +In order to store any of these secrets, we must enable the [Vault KV secrets engine - Version 2](/vault/docs/secrets/kv/kv-v2). + +```shell-session +$ vault secrets enable -path=consul kv-v2 +``` + +## Vault PKI Engine + +The Vault PKI Engine must be enabled in order to leverage Vault for issuing Consul Server TLS certificates. More details for configuring the PKI Engine is found in [Bootstrapping the PKI Engine](/consul/docs/deploy/server/k8s/vault/data/tls-certificate#bootstrapping-the-pki-engine) under the Server TLS section. + +```shell-session +$ vault secrets enable pki +``` + +## Set Environment Variables + +Before installing the Vault Injector and configuring the Vault Kubernetes Auth Method, some environment variables need to be set to better ensure consistent mapping between Vault and Consul on Kubernetes. + + - DATACENTER + + We recommend using the value for `global.datacenter` in your Consul Helm values file for this variable. 
+    ```shell-session
+    $ export DATACENTER=dc1
+    ```
+
+  - VAULT_AUTH_METHOD_NAME
+
+    We recommend using a concatenation of a `kubernetes-` prefix (to denote the auth method type) with the `DATACENTER` environment variable for this variable.
+    ```shell-session
+    $ export VAULT_AUTH_METHOD_NAME=kubernetes-${DATACENTER}
+    ```
+
+  - VAULT_SERVER_HOST
+
+    We recommend using the external IP address of your Vault cluster for this variable.
+
+    If Vault is installed in a Kubernetes cluster, get the external IP or DNS name of the Vault server load balancer.
+
+
+
+    On EKS, you can get the hostname of the Vault server's load balancer with the following command:
+
+    ```shell-session
+    $ export VAULT_SERVER_HOST=$(kubectl get svc vault-dc1 -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
+    ```
+
+
+
+
+
+    On GKE, you can get the IP address of the Vault server's load balancer with the following command:
+
+    ```shell-session
+    $ export VAULT_SERVER_HOST=$(kubectl get svc vault-dc1 -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+    ```
+
+
+
+
+
+    On AKS, you can get the IP address of the Vault server's load balancer with the following command:
+
+    ```shell-session
+    $ export VAULT_SERVER_HOST=$(kubectl get svc vault-dc1 --output jsonpath='{.status.loadBalancer.ingress[0].ip}')
+    ```
+
+
+
+
+    If Vault is not running on Kubernetes, utilize the `api_addr` as defined in the Vault [High Availability Parameters](/vault/docs/configuration#high-availability-parameters) configuration:
+    ```shell-session
+    $ export VAULT_SERVER_HOST=
+    ```
+
+  - VAULT_ADDR
+
+    We recommend connecting to port 8200 of the Vault server.
+    ```shell-session
+    $ export VAULT_ADDR=http://${VAULT_SERVER_HOST}:8200
+    ```
+
+    If your Vault installation is currently exposed using SSL, this address will need to use `https` instead of `http`. You will also need to set up the [`VAULT_CACERT`](/vault/docs/commands#vault_cacert) environment variable.
+
+  - VAULT_TOKEN
+
+    We recommend using your allocated Vault token as the value for this variable. If running Vault in dev mode, this can be set to `root`.
+    ```shell-session
+    $ export VAULT_TOKEN=
+    ```
+
+## Install Vault Injector in Consul k8s cluster
+
+A minimal valid installation of Vault Kubernetes must include the Agent Injector, which is utilized for accessing secrets from Vault. Vault servers can be deployed external to the Kubernetes cluster by setting the [`injector.externalVaultAddr`](/vault/docs/platform/reference/k8s/helm/configuration#externalvaultaddr) value in the Vault Helm configuration.
+
+```shell-session
+$ cat <<EOF >> vault-injector.yaml
+# vault-injector.yaml
+global:
+  enabled: true
+  externalVaultAddr: ${VAULT_ADDR}
+server:
+  enabled: false
+injector:
+  enabled: true
+  authPath: auth/${VAULT_AUTH_METHOD_NAME}
+EOF
+```
+
+Issue the Helm `install` command to install the Vault agent injector using the HashiCorp Vault Helm chart.
+
+```shell-session
+$ helm install vault-${DATACENTER} -f vault-injector.yaml hashicorp/vault --wait
+```
+
+## Configure the Kubernetes Auth Method in Vault
+
+Ensure that the Vault Kubernetes Auth method is enabled.
+
+```shell-session
+$ vault auth enable -path=kubernetes-${DATACENTER} kubernetes
+```
+
+After enabling the Kubernetes auth method in Vault, ensure that you have configured it properly as described in [Kubernetes Auth Method Configuration](/vault/docs/auth/kubernetes#configuration).
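+
+Optionally, you can confirm that the auth method was mounted at the path you expect before configuring it. The following check is a minimal sketch that assumes the `VAULT_AUTH_METHOD_NAME` environment variable exported earlier in this guide; `vault auth list` prints every enabled auth method and its mount path.
+
+```shell-session
+$ vault auth list | grep ${VAULT_AUTH_METHOD_NAME}
+```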
+
+First, while targeting your Consul cluster, get the externally reachable address of the Consul Kubernetes cluster.
+
+```shell-session
+$ export KUBE_API_URL=$(kubectl config view -o jsonpath="{.clusters[?(@.name == \"$(kubectl config current-context)\")].cluster.server}")
+```
+
+Next, you will configure the Vault Kubernetes Auth Method for the datacenter. You will need to provide it with:
+- `token_reviewer_jwt` - this is a JWT token from the Consul datacenter cluster that the Vault Kubernetes Auth Method will use to query the Consul datacenter Kubernetes API when services in the Consul datacenter request data from Vault.
+- `kubernetes_host` - this is the URL of the Consul datacenter's Kubernetes API that Vault will query to authenticate the service account of an incoming request from a Consul datacenter Kubernetes service.
+- `kubernetes_ca_cert` - this is the CA certificate that is currently being used by the Consul datacenter Kubernetes cluster.
+
+```shell-session
+$ vault write auth/${VAULT_AUTH_METHOD_NAME}/config \
+    token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
+    kubernetes_host="${KUBE_API_URL}" \
+    kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
+```
+
+## Update the Consul on Kubernetes Helm chart
+
+Finally, configure the Consul on Kubernetes Helm chart for the datacenter so that it retrieves the following values (if you have configured them) from Vault:
+- ACL Bootstrap token ([`global.acls.bootstrapToken`](/consul/docs/reference/k8s/helm#v-global-acls-bootstraptoken))
+- ACL Partition token ([`global.acls.partitionToken`](/consul/docs/reference/k8s/helm#v-global-acls-partitiontoken))
+- ACL Replication token ([`global.acls.replicationToken`](/consul/docs/reference/k8s/helm#v-global-acls-replicationtoken))
+- Enterprise license ([`global.enterpriseLicense`](/consul/docs/reference/k8s/helm#v-global-enterpriselicense))
+- Gossip encryption key ([`global.gossipEncryption`](/consul/docs/reference/k8s/helm#v-global-gossipencryption))
+- Snapshot Agent config ([`client.snapshotAgent.configSecret`](/consul/docs/reference/k8s/helm#v-client-snapshotagent-configsecret))
+- TLS CA certificates ([`global.tls.caCert`](/consul/docs/reference/k8s/helm#v-global-tls-cacert))
+- Server TLS certificates ([`server.serverCert`](/consul/docs/reference/k8s/helm#v-server-servercert))
+
+
+
+```yaml
+global:
+  secretsBackend:
+    vault:
+      enabled: true
+```
+
+
+
+## Next Steps
+
+As a next step, proceed to the [Data Integration](/consul/docs/deploy/server/k8s/vault/data) section of the Vault integration with Consul on Kubernetes.
+
+## Troubleshooting
+
+The Vault integration with Consul on Kubernetes makes use of the Vault Agent Injector. Kubernetes annotations are added to the deployments of the Consul components, which cause the Vault Agent Injector to be added as an init-container that attaches Vault secrets to Consul's pods at startup. Additionally, the Vault Agent sidecar is added to the Consul component pods and is responsible for synchronizing and reissuing secrets at runtime. As a result of these additional sidecar containers, the typical location for logging is expanded in the Consul components.
+
+As a general rule, the best way to troubleshoot startup issues for your Consul installation when using the Vault integration is to establish whether the `vault-agent-init` container has completed via `kubectl logs -f <pod name> -c vault-agent-init` and to check whether the secrets have completed rendering.
+* If the secrets are not properly rendered the underlying problem will be logged in `vault-agent-init` init-container + and generally is related to the Vault Kube Auth Role not having the correct policies for the specific secret + e.g. `global.secretsBackend.vault.consulServerRole` not having the correct policies for TLS. +* If the secrets are rendered and the `vault-agent-init` container has completed AND the Consul component has not become `Ready`, + this generally points to an issue with Consul being unable to utilize the Vault secret. This can occur if, for example, the Vault Role + created for the PKI engine does not have the correct `alt_names` or otherwise is not properly configured. The best logs for this + circumstance are the Consul container logs: `kubectl logs -f -c consul`. diff --git a/website/content/docs/k8s/deployment-configurations/vault/data-integration/bootstrap-token.mdx b/website/content/docs/deploy/server/k8s/vault/data/bootstrap-token.mdx similarity index 81% rename from website/content/docs/k8s/deployment-configurations/vault/data-integration/bootstrap-token.mdx rename to website/content/docs/deploy/server/k8s/vault/data/bootstrap-token.mdx index ea63dc992428..b0749f350ae3 100644 --- a/website/content/docs/k8s/deployment-configurations/vault/data-integration/bootstrap-token.mdx +++ b/website/content/docs/deploy/server/k8s/vault/data/bootstrap-token.mdx @@ -1,15 +1,17 @@ --- layout: docs -page_title: Storing the ACL Bootstrap Token in Vault +page_title: Store ACL bootstrap token on Vault secrets backend description: >- Configuring the Consul Helm chart to use an ACL bootstrap token stored in Vault. --- -# Storing the ACL Bootstrap Token in Vault +# Store ACL bootstrap token on Vault secrets backend This topic describes how to configure the Consul Helm chart to use an ACL bootstrap token stored in Vault. + ## Overview -To use an ACL bootstrap token stored in Vault, follow the steps outlined in the [Data Integration](/consul/docs/k8s/deployment-configurations/vault/data-integration) section. + +To use an ACL bootstrap token stored in Vault, follow the steps outlined in the [Data Integration](/consul/docs/deploy/server/k8s/vault/data) section. Complete the following steps once: 1. Store the secret in Vault. @@ -21,15 +23,15 @@ Repeat the following steps for each datacenter in the cluster: ## Prerequisites Prior to setting up the data integration between Vault and Consul on Kubernetes, you will need to have: -1. Read and completed the steps in the [Systems Integration](/consul/docs/k8s/deployment-configurations/vault/systems-integration) section of [Vault as a Secrets Backend](/consul/docs/k8s/deployment-configurations/vault). -2. Read the [Data Integration Overview](/consul/docs/k8s/deployment-configurations/vault/data-integration) section of [Vault as a Secrets Backend](/consul/docs/k8s/deployment-configurations/vault). +1. Read and completed the steps in the [Systems Integration](/consul/docs/deploy/server/k8s/vault/backend) section of [Vault as a Secrets Backend](/consul/docs/deploy/server/k8s/vault). +2. Read the [Data Integration Overview](/consul/docs/deploy/server/k8s/vault/data) section of [Vault as a Secrets Backend](/consul/docs/deploy/server/k8s/vault). ## Store the Secret in Vault First, generate and store the ACL bootstrap token in Vault. 
You will only need to perform this action once: ```shell-session -$ vault kv put consul-kv/secret/bootstrap-token token="$(uuidgen | tr '[:upper:]' '[:lower:]')" +$ vault kv put secret/consul/bootstrap-token token="$(uuidgen | tr '[:upper:]' '[:lower:]')" ``` ## Create Vault policy @@ -41,7 +43,7 @@ The path to the secret referenced in the `path` resource is the same value that ```HCL -path "consul-kv/data/secret/bootstrap-token" { +path "secret/data/consul/bootstrap-token" { capabilities = ["read"] } ``` @@ -88,7 +90,7 @@ global: manageSystemACLsRole: consul-server-acl-init acls: bootstrapToken: - secretName: consul-kv/data/secret/bootstrap-token + secretName: secret/data/consul/bootstrap-token secretKey: token ``` diff --git a/website/content/docs/k8s/deployment-configurations/vault/data-integration/enterprise-license.mdx b/website/content/docs/deploy/server/k8s/vault/data/enterprise-license.mdx similarity index 82% rename from website/content/docs/k8s/deployment-configurations/vault/data-integration/enterprise-license.mdx rename to website/content/docs/deploy/server/k8s/vault/data/enterprise-license.mdx index 9b32cf468ebb..1b455765d96f 100644 --- a/website/content/docs/k8s/deployment-configurations/vault/data-integration/enterprise-license.mdx +++ b/website/content/docs/deploy/server/k8s/vault/data/enterprise-license.mdx @@ -1,15 +1,17 @@ --- layout: docs -page_title: Storing the Enterprise License in Vault +page_title: Store Consul Enterprise license on Vault secrets backend description: >- Configuring the Consul Helm chart to use an enterprise license stored in Vault. --- -# Storing the Enterprise License in Vault +# Store Consul Enterprise license on Vault secrets backend This topic describes how to configure the Consul Helm chart to use an enterprise license stored in Vault. + ## Overview -Complete the steps outlined in the [Data Integration](/consul/docs/k8s/deployment-configurations/vault/data-integration) section to use an enterprise license stored in Vault. + +Complete the steps outlined in the [Data Integration](/consul/docs/deploy/server/k8s/vault/data) section to use an enterprise license stored in Vault. Complete the following steps once: 1. Store the secret in Vault. @@ -21,15 +23,15 @@ Repeat the following steps for each datacenter in the cluster: ## Prerequisites Prior to setting up the data integration between Vault and Consul on Kubernetes, you will need to have: -1. Read and completed the steps in the [Systems Integration](/consul/docs/k8s/deployment-configurations/vault/systems-integration) section of [Vault as a Secrets Backend](/consul/docs/k8s/deployment-configurations/vault). -2. Read the [Data Integration Overview](/consul/docs/k8s/deployment-configurations/vault/data-integration) section of [Vault as a Secrets Backend](/consul/docs/k8s/deployment-configurations/vault). +1. Read and completed the steps in the [Systems Integration](/consul/docs/deploy/server/k8s/vault/backend) section of [Vault as a Secrets Backend](/consul/docs/deploy/server/k8s/vault). +2. Read the [Data Integration Overview](/consul/docs/deploy/server/k8s/vault/data) section of [Vault as a Secrets Backend](/consul/docs/deploy/server/k8s/vault). 
## Store the Secret in Vault First, store the enterprise license in Vault: ```shell-session -$ vault kv put consul-kv/secret/enterpriselicense key="" +$ vault kv put secret/consul/license key="" ``` ## Create Vault policy @@ -41,7 +43,7 @@ The path to the secret referenced in the `path` resource is the same value that ```HCL -path "consul-kv/data/secret/enterpriselicense" { +path "secret/data/consul/license" { capabilities = ["read"] } ``` @@ -103,7 +105,7 @@ global: consulServerRole: consul-server consulClientRole: consul-client enterpriseLicense: - secretName: consul-kv/data/secret/enterpriselicense + secretName: secret/data/consul/enterpriselicense secretKey: key ``` diff --git a/website/content/docs/deploy/server/k8s/vault/data/gossip-key.mdx b/website/content/docs/deploy/server/k8s/vault/data/gossip-key.mdx new file mode 100644 index 000000000000..6f08f00182b9 --- /dev/null +++ b/website/content/docs/deploy/server/k8s/vault/data/gossip-key.mdx @@ -0,0 +1,114 @@ +--- +layout: docs +page_title: Store Consul gossip encryption key on Vault secrets backend +description: >- + Configuring the Consul Helm chart to use a gossip encryption key stored in Vault. +--- + +# Store Consul gossip encryption key on Vault secrets backend + +This topic describes how to configure the Consul Helm chart to use a gossip encryption key stored in Vault. + +## Overview + +Complete the steps outlined in the [Data Integration](/consul/docs/deploy/server/k8s/vault/data) section to use a gossip encryption key stored in Vault. + +Complete the following steps once: + 1. Store the secret in Vault. + 1. Create a Vault policy that authorizes the desired level of access to the secret. + +Repeat the following steps for each datacenter in the cluster: + 1. Create Vault Kubernetes auth roles that link the policy to each Consul on Kubernetes service account that requires access. + 1. Update the Consul on Kubernetes helm chart. + +## Prerequisites +Prior to setting up the data integration between Vault and Consul on Kubernetes, you will need to have: +1. Read and completed the steps in the [Systems Integration](/consul/docs/deploy/server/k8s/vault/backend) section of [Vault as a Secrets Backend](/consul/docs/deploy/server/k8s/vault). +2. Read the [Data Integration Overview](/consul/docs/deploy/server/k8s/vault/data) section of [Vault as a Secrets Backend](/consul/docs/deploy/server/k8s/vault). + +## Store the Secret in Vault +First, generate and store the gossip key in Vault. You will only need to perform this action once: + +```shell-session +$ vault kv put secret/consul/gossip key="$(consul keygen)" +``` +## Create Vault policy + +Next, create a policy that allows read access to this secret. + +The path to the secret referenced in the `path` resource is the same value that you will configure in the `global.gossipEncryption.secretName` Helm configuration (refer to [Update Consul on Kubernetes Helm chart](#update-consul-on-kubernetes-helm-chart)). 
+ + + +```HCL +path "secret/data/consul/gossip" { + capabilities = ["read"] +} +``` + + + +Apply the Vault policy by issuing the `vault policy write` CLI command: + +```shell-session +$ vault policy write gossip-policy gossip-policy.hcl +``` + +## Create Vault Authorization Roles for Consul + +Next, we will create Kubernetes auth roles for the Consul server and client: + +```shell-session +$ vault write auth/kubernetes/role/consul-server \ + bound_service_account_names= \ + bound_service_account_namespaces= \ + policies=gossip-policy \ + ttl=1h +``` + +```shell-session +$ vault write auth/kubernetes/role/consul-client \ + bound_service_account_names= \ + bound_service_account_namespaces= \ + policies=gossip-policy \ + ttl=1h +``` + +To find out the service account names of the Consul server and client, +you can run the following `helm template` commands with your Consul on Kubernetes values file: + +- Generate Consul server service account name + ```shell-session + $ helm template --release-name ${RELEASE_NAME} -s templates/server-serviceaccount.yaml hashicorp/consul -f values.yaml + ``` + +- Generate Consul client service account name + ```shell-session + $ helm template --release-name ${RELEASE_NAME} -s templates/client-serviceaccount.yaml hashicorp/consul -f values.yaml + ``` + +## Update Consul on Kubernetes Helm chart + +Now that we've configured Vault, you can configure the Consul Helm chart to +use the gossip key in Vault: + + + +```yaml +global: + secretsBackend: + vault: + enabled: true + consulServerRole: consul-server + consulClientRole: consul-client + gossipEncryption: + secretName: secret/data/consul/gossip + secretKey: key +``` + + + +Note that `global.gossipEncryption.secretName` is the path of the secret in Vault. +This should be the same path as the one you'd include in your Vault policy. +`global.gossipEncryption.secretKey` is the key inside the secret data. This should be the same +as the key we passed when we created the gossip secret in Vault. diff --git a/website/content/docs/deploy/server/k8s/vault/data/index.mdx b/website/content/docs/deploy/server/k8s/vault/data/index.mdx new file mode 100644 index 000000000000..284e659bd87e --- /dev/null +++ b/website/content/docs/deploy/server/k8s/vault/data/index.mdx @@ -0,0 +1,137 @@ +--- +layout: docs +page_title: Vault as secrets backend — Data integration overview +description: >- + Overview of the data integration aspects to using Vault as the secrets backend for Consul on Kubernetes. +--- + +# Vault as secrets backend — Data integration overview + +his topic describes how to configure Vault and Consul in order to share secrets for use within Consul. + +## Prerequisites + +Before you set up the data integration between Vault and Consul on Kubernetes, read and complete the steps in the [Systems Integration](/consul/docs/deploy/server/k8s/vault/backend) section of [Vault as a Secrets Backend](/consul/docs/deploy/server/k8s/vault). + +## General integration steps + +For each secret you want to store in Vault, you must complete two multi-step procedures. + +Complete the following steps once: + 1. Store the secret in Vault. + 1. Create a Vault policy that authorizes the desired level of access to the secret. + +Repeat the following steps for each datacenter in the cluster: + 1. Create Vault Kubernetes auth roles that link the policy to each Consul on Kubernetes service account that requires access. + 1. Update the Consul on Kubernetes Helm chart. 
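+
+As a compact sketch of this pattern, the following commands walk a hypothetical secret through the first three steps. The secret path, policy name, role name, and service account placeholders are illustrative only; the detailed guides below show the actual values for each Consul secret, along with the corresponding Helm chart changes for step four.
+
+```shell-session
+# 1. Store the secret in Vault (hypothetical path).
+$ vault kv put secret/consul/example key="example-value"
+
+# 2. Create a policy that grants read access to the secret.
+$ vault policy write example-policy - <<EOF
+path "secret/data/consul/example" {
+  capabilities = ["read"]
+}
+EOF
+
+# 3. Link the policy to a Consul on Kubernetes service account (repeat per datacenter).
+$ vault write auth/kubernetes/role/example-role \
+    bound_service_account_names=<consul service account> \
+    bound_service_account_namespaces=<consul namespace> \
+    policies=example-policy \
+    ttl=1h
+```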
+ +## Secrets-to-service account mapping + +At the most basic level, the goal of this configuration is to authorize a Consul on Kubernetes service account to access a secret in Vault. + +The following table associates Vault secrets and the Consul on Kubernetes service accounts that require access. +(NOTE: `Consul components` refers to all other services and jobs that are not Consul servers or clients. +It includes things like terminating gateways, ingress gateways, etc.) + +### Primary datacenter + +| Secret | Service Account For | Configurable Role in Consul k8s Helm | +| ------ | ------------------- | ------------------------------------ | +|[ACL Bootstrap token](/consul/docs/deploy/server/k8s/vault/data/bootstrap-token) | Consul server-acl-init job | [`global.secretsBackend.vault.manageSystemACLsRole`](/consul/docs/reference/k8s/helm#v-global-secretsbackend-vault-managesystemaclsrole)| +|[ACL Partition token](/consul/docs/deploy/server/k8s/vault/data/partition-token) | Consul server-acl-init job | [`global.secretsBackend.vault.manageSystemACLsRole`](/consul/docs/reference/k8s/helm#v-global-secretsbackend-vault-managesystemaclsrole)| +|[ACL Replication token](/consul/docs/deploy/server/k8s/vault/data/replication-token) | Consul server-acl-init job | [`global.secretsBackend.vault.manageSystemACLsRole`](/consul/docs/reference/k8s/helm#v-global-secretsbackend-vault-managesystemaclsrole)| +|[Enterprise license](/consul/docs/deploy/server/k8s/vault/data/enterprise-license) | Consul servers
    Consul clients | [`global.secretsBackend.vault.consulServerRole`](/consul/docs/reference/k8s/helm#v-global-secretsbackend-vault-consulserverrole)
    [`global.secretsBackend.vault.consulClientRole`](/consul/docs/reference/k8s/helm#v-global-secretsbackend-vault-consulclientrole)| +|[Gossip encryption key](/consul/docs/deploy/server/k8s/vault/data/gossip-key) | Consul servers
    Consul clients | [`global.secretsBackend.vault.consulServerRole`](/consul/docs/reference/k8s/helm#v-global-secretsbackend-vault-consulserverrole)
    [`global.secretsBackend.vault.consulClientRole`](/consul/docs/reference/k8s/helm#v-global-secretsbackend-vault-consulclientrole)| +|[Snapshot Agent config](/consul/docs/deploy/server/k8s/vault/data/snapshot-agent) | Consul servers | [`global.secretsBackend.vault.consulServerRole`](/consul/docs/reference/k8s/helm#v-global-secretsbackend-vault-consulserverrole)| +|[Server TLS credentials](/consul/docs/deploy/server/k8s/vault/data/tls-certificate) | Consul servers
    Consul clients
    Consul components | [`global.secretsBackend.vault.consulServerRole`](/consul/docs/reference/k8s/helm#v-global-secretsbackend-vault-consulserverrole)
    [`global.secretsBackend.vault.consulClientRole`](/consul/docs/reference/k8s/helm#v-global-secretsbackend-vault-consulclientrole)
    [`global.secretsBackend.vault.consulCARole`](/consul/docs/reference/k8s/helm#v-global-secretsbackend-vault-consulcarole)| +|[Service Mesh and Consul client TLS credentials](/consul/docs/deploy/server/k8s/vault/data/tls-certificate) | Consul servers | [`global.secretsBackend.vault.consulServerRole`](/consul/docs/reference/k8s/helm#v-global-secretsbackend-vault-consulserverrole)| +|[Webhook TLS certificates for controller and connect inject](/consul/docs/deploy/server/k8s/vault/data/webhook-certificate) | Consul controllers
    Consul connect inject | [`global.secretsBackend.vault.controllerRole`](/consul/docs/reference/k8s/helm#v-global-secretsbackend-vault-controllerrole)
    [`global.secretsBackend.vault.connectInjectRole`](/consul/docs/reference/k8s/helm#v-global-secretsbackend-vault-controllerrole)| + +### Secondary datacenters + +The mapping for secondary data centers is similar with the following differences: + +- There is no use of bootstrap token because ACLs would have been bootstrapped in the primary datacenter. +- ACL Partition token is mapped to both the `server-acl-init` job and the `partition-init` job service accounts. +- ACL Replication token is mapped to both the `server-acl-init` job and Consul service accounts. + +| Secret | Service Account For | Configurable Role in Consul k8s Helm | +| ------ | ------------------- | ------------------------------------ | +|[ACL Partition token](/consul/docs/deploy/server/k8s/vault/data/partition-token) | Consul server-acl-init job
    Consul partition-init job | [`global.secretsBackend.vault.manageSystemACLsRole`](/consul/docs/reference/k8s/helm#v-global-secretsbackend-vault-managesystemaclsrole)
    [`global.secretsBackend.vault.adminPartitionsRole`](/consul/docs/reference/k8s/helm#v-global-secretsbackend-vault-adminpartitionsrole)| +|[ACL Replication token](/consul/docs/deploy/server/k8s/vault/data/replication-token) | Consul server-acl-init job
    Consul servers | [`global.secretsBackend.vault.manageSystemACLsRole`](/consul/docs/reference/k8s/helm#v-global-secretsbackend-vault-managesystemaclsrole)
    [`global.secretsBackend.vault.consulServerRole`](/consul/docs/reference/k8s/helm#v-global-secretsbackend-vault-consulserverrole)| +|[Enterprise license](/consul/docs/deploy/server/k8s/vault/data/enterprise-license) | Consul servers
    Consul clients | [`global.secretsBackend.vault.consulServerRole`](/consul/docs/reference/k8s/helm#v-global-secretsbackend-vault-consulserverrole)
    [`global.secretsBackend.vault.consulClientRole`](/consul/docs/reference/k8s/helm#v-global-secretsbackend-vault-consulclientrole)| +|[Gossip encryption key](/consul/docs/deploy/server/k8s/vault/data/gossip-key) | Consul servers
    Consul clients | [`global.secretsBackend.vault.consulServerRole`](/consul/docs/reference/k8s/helm#v-global-secretsbackend-vault-consulserverrole)
    [`global.secretsBackend.vault.consulClientRole`](/consul/docs/reference/k8s/helm#v-global-secretsbackend-vault-consulclientrole)| +|[Snapshot Agent config](/consul/docs/deploy/server/k8s/vault/data/snapshot-agent) | Consul servers | [`global.secretsBackend.vault.consulServerRole`](/consul/docs/reference/k8s/helm#v-global-secretsbackend-vault-consulserverrole)| +|[Server TLS credentials](/consul/docs/deploy/server/k8s/vault/data/tls-certificate) | Consul servers
    Consul clients
    Consul components | [`global.secretsBackend.vault.consulServerRole`](/consul/docs/reference/k8s/helm#v-global-secretsbackend-vault-consulserverrole)
    [`global.secretsBackend.vault.consulClientRole`](/consul/docs/reference/k8s/helm#v-global-secretsbackend-vault-consulclientrole)
    [`global.secretsBackend.vault.consulCARole`](/consul/docs/reference/k8s/helm#v-global-secretsbackend-vault-consulcarole)| +|[Service Mesh and Consul client TLS credentials](/consul/docs/deploy/server/k8s/vault/data/tls-certificate) | Consul servers | [`global.secretsBackend.vault.consulServerRole`](/consul/docs/reference/k8s/helm#v-global-secretsbackend-vault-consulserverrole)| +|[Webhook TLS certificates for controller and connect inject](/consul/docs/deploy/server/k8s/vault/data/webhook-certificate) | Consul controllers
    Consul connect inject | [`global.secretsBackend.vault.controllerRole`](/consul/docs/reference/k8s/helm#v-global-secretsbackend-vault-controllerrole)
    [`global.secretsBackend.vault.connectInjectRole`](/consul/docs/reference/k8s/helm#v-global-secretsbackend-vault-controllerrole)| + +### Combining policies within roles + +Depending upon your needs, a Consul on Kubernetes service account may need to request more than one secret. To request multiple secrets, create one role for the Consul on Kubernetes service account that is mapped to multiple policies associated with the required secrets. + +For example, if your Consul on Kubernetes servers need access to [Consul Server TLS credentials](/consul/docs/deploy/server/k8s/vault/data/tls-certificate) and an [Enterprise license](/consul/docs/deploy/server/k8s/vault/data/enterprise-license): + +1. Create a policy for each secret. + + 1. Consul Server TLS credentials + + + + ```HCL + path "pki/cert/ca" { + capabilities = ["read"] + } + ``` + + + + ```shell-session + $ vault policy write ca-policy ca-policy.hcl + ``` + + 1. Enterprise License + + + + ```HCL + path "secret/data/consul/license" { + capabilities = ["read"] + } + ``` + + + + ```shell-session + $ vault policy write license-policy license-policy.hcl + ``` + +1. Create one role that maps the Consul on Kubernetes service account to the 3 policies. + ```shell-session + $ vault write auth/kubernetes/role/consul-server \ + bound_service_account_names= \ + bound_service_account_namespaces= \ + policies=ca-policy,license-policy \ + ttl=1h + ``` + +## Detailed data integration guides + +The following secrets can be stored in Vault KV secrets engine, which is meant to handle arbitrary secrets: + +- [ACL Bootstrap token](/consul/docs/deploy/server/k8s/vault/data/bootstrap-token) +- [ACL Partition token](/consul/docs/deploy/server/k8s/vault/data/partition-token) +- [ACL Replication token](/consul/docs/deploy/server/k8s/vault/data/replication-token) +- [Enterprise license](/consul/docs/deploy/server/k8s/vault/data/enterprise-license) +- [Gossip encryption key](/consul/docs/deploy/server/k8s/vault/data/gossip-key) +- [Snapshot Agent config](/consul/docs/deploy/server/k8s/vault/data/snapshot-agent) + +The following TLS certificates and keys can generated and managed by Vault the Vault PKI Engine, which is meant to handle things like certificate expiration and rotation: + +- [Server TLS credentials](/consul/docs/deploy/server/k8s/vault/data/tls-certificate) +- [Service Mesh and Consul client TLS credentials](/consul/docs/deploy/server/k8s/vault/data/tls-certificate) +- [Vault as the Webhook Certificate Provider for Consul Controller and Connect Inject on Kubernetes](/consul/docs/deploy/server/k8s/vault/data/webhook-certificate) + +## Secrets-to-service account mapping + +Read through the [detailed data integration guides](#detailed-data-integration-guides) that are pertinent to your environment. diff --git a/website/content/docs/deploy/server/k8s/vault/data/mesh-ca.mdx b/website/content/docs/deploy/server/k8s/vault/data/mesh-ca.mdx new file mode 100644 index 000000000000..922c0ecef153 --- /dev/null +++ b/website/content/docs/deploy/server/k8s/vault/data/mesh-ca.mdx @@ -0,0 +1,102 @@ +--- +layout: docs +page_title: Vault as the Consul service mesh certificate authority on Kubernetes +description: >- + Using Vault as the provider for the Service Mesh certificates on Kubernetes. +--- + +# Vault as the Consul service mesh certificate authority on Kubernetes + +This topic describes how to configure the Consul Helm chart to use TLS certificates issued by Vault for Consul service mesh communication. 
+ +-> **Note:** This feature requires Consul 1.11 or higher. As of v1.11, +Consul allows using Kubernetes auth methods to configure the service mesh CA. +This allows for automatic token rotation once the renewal is no longer possible. + +~> **Compatibility note:** If you use Vault 1.11.0+ as Consul's service mesh CA, versions of Consul released before Dec 13, 2022 will develop an issue with Consul control plane or service mesh communication ([GH-15525](https://github.com/hashicorp/consul/pull/15525)). Use or upgrade to a [Consul version that includes the fix](https://support.hashicorp.com/hc/en-us/articles/11308460105491#01GMC24E6PPGXMRX8DMT4HZYTW) to avoid this problem. + +## Overview + +To use Vault as the service mesh certificate provider on Kubernetes, you will complete a modified version of the steps outlined in the [Data Integration](/consul/docs/deploy/server/k8s/vault/data) section. + +Complete the following steps once: + 1. Create a Vault policy that authorizes the desired level of access to the secret. + +Repeat the following steps for each datacenter in the cluster: + 1. Create Vault Kubernetes auth roles that link the policy to each Consul on Kubernetes service account that requires access. + 1. Update the Consul on Kubernetes helm chart. + +## Prerequisites + +Prior to setting up the data integration between Vault and Consul on Kubernetes, you will need to have: +1. Read and completed the steps in the [Systems Integration](/consul/docs/deploy/server/k8s/vault/backend) section of [Vault as a Secrets Backend](/consul/docs/deploy/server/k8s/vault). +2. Read the [Data Integration Overview](/consul/docs/deploy/server/k8s/vault/data) section of [Vault as a Secrets Backend](/consul/docs/deploy/server/k8s/vault). + +## Create Vault policy + +To configure [Vault as the provider](/consul/docs/secure-mesh/certificate/vault) for the Consul service mesh certificates, +you will first need to decide on the type of policy that is suitable for you. +To see the permissions that Consul would need in Vault, please see [Vault ACL policies](/consul/docs/secure-mesh/certificate/vault#vault-acl-policies) +documentation. + +## Create Vault Authorization Roles for Consul + +Next, you will create Kubernetes auth roles for the Consul servers: + +```shell-session +$ vault write auth/kubernetes/role/consul-server \ + bound_service_account_names= \ + bound_service_account_namespaces= \ + policies= \ + ttl=1h +``` + +To find out the service account name of the Consul server, +you can run: + +```shell-session +$ helm template --release-name ${RELEASE_NAME} --show-only templates/server-serviceaccount.yaml hashicorp/consul -f values.yaml +``` + +## Update Consul on Kubernetes Helm chart +Now you can configure the Consul Helm chart to use Vault as the service mesh (connect) CA provider: + + + +```yaml +global: + secretsBackend: + vault: + enabled: true + consulServerRole: consul-server + consulClientRole: consul-client + consulCARole: consul-ca + connectCA: + address: + rootPKIPath: + intermediatePKIPath: + ca: + secretName: +``` + + + +The `address` you provide to the `connectCA` configuration can be a Kubernetes DNS +address if the Vault cluster is running the same Kubernetes cluster. +The `rootPKIPath` and `intermediatePKIPath` should be the same as the ones +defined in your service mesh CA policy. Behind the scenes, Consul will authenticate to Vault using a Kubernetes +service account using the [Kubernetes auth method](/vault/docs/auth/kubernetes) and will use the Vault token for any API calls to Vault. 
If the Vault token can not be renewed, Consul will re-authenticate to +generate a new Vault token. + +The `vaultCASecret` is the Kubernetes secret that stores the CA Certificate that is used for Vault communication. To provide a CA, you first need to create a Kubernetes secret containing the CA. For example, you may create a secret with the Vault CA like so: + +```shell-session +$ kubectl create secret generic vault-ca --from-file vault.ca=/path/to/your/vault/ca +``` + +### Secondary Datacenters + +To configure Vault as the service mesh (connect) CA in secondary datacenters, you need to make sure that the Root CA path is the same, +but the intermediate is different for each datacenter. In the `connectCA` Helm configuration for a secondary datacenter, +you can specify a `intermediatePKIPath` that is, for example, prefixed with the datacenter +for which this configuration is intended (e.g. `dc2/connect-intermediate`). diff --git a/website/content/docs/k8s/deployment-configurations/vault/data-integration/partition-token.mdx b/website/content/docs/deploy/server/k8s/vault/data/partition-token.mdx similarity index 79% rename from website/content/docs/k8s/deployment-configurations/vault/data-integration/partition-token.mdx rename to website/content/docs/deploy/server/k8s/vault/data/partition-token.mdx index 329c96ebef09..7318a87a6cdb 100644 --- a/website/content/docs/k8s/deployment-configurations/vault/data-integration/partition-token.mdx +++ b/website/content/docs/deploy/server/k8s/vault/data/partition-token.mdx @@ -1,16 +1,17 @@ --- layout: docs -page_title: Storing the ACL Partition Token in Vault +page_title: Store ACL partition token on Vault secrets backend description: >- Configuring the Consul Helm chart to use an ACL partition token stored in Vault. --- -# Storing the ACL Partition Token in Vault +# Store ACL partition token on Vault secrets backend -This topic describes how to configure the Consul Helm chart to use an ACL partition token stored in Vault when using [Admin Partitions](/consul/docs/enterprise/admin-partitions) in Consul Enterprise. +This topic describes how to configure the Consul Helm chart to use an ACL partition token stored in Vault when using [Admin Partitions](/consul/docs/multi-tenant/admin-partition) in Consul Enterprise. ## Overview -Complete the steps outlined in the [Data Integration](/consul/docs/k8s/deployment-configurations/vault/data-integration) section to use an ACL partition token stored in Vault. + +Complete the steps outlined in the [Data Integration](/consul/docs/deploy/server/k8s/vault/data) section to use an ACL partition token stored in Vault. Complete the following steps once: 1. Store the secret in Vault. @@ -21,16 +22,17 @@ Repeat the following steps for each datacenter in the cluster: 1. Update the Consul on Kubernetes helm chart. ## Prerequisites + Prior to setting up the data integration between Vault and Consul on Kubernetes, you will need to have: -1. Read and completed the steps in the [Systems Integration](/consul/docs/k8s/deployment-configurations/vault/systems-integration) section of [Vault as a Secrets Backend](/consul/docs/k8s/deployment-configurations/vault). -2. Read the [Data Integration Overview](/consul/docs/k8s/deployment-configurations/vault/data-integration) section of [Vault as a Secrets Backend](/consul/docs/k8s/deployment-configurations/vault). +1. 
Read and completed the steps in the [Systems Integration](/consul/docs/deploy/server/k8s/vault/backend) section of [Vault as a Secrets Backend](/consul/docs/deploy/server/k8s/vault). +2. Read the [Data Integration Overview](/consul/docs/deploy/server/k8s/vault/data) section of [Vault as a Secrets Backend](/consul/docs/deploy/server/k8s/vault). ## Store the Secret in Vault First, generate and store the ACL partition token in Vault. You will only need to perform this action once: ```shell-session -$ vault kv put consul-kv/secret/partition-token token="$(uuidgen | tr '[:upper:]' '[:lower:]')" +$ vault kv put secret/consul/partition-token token="$(uuidgen | tr '[:upper:]' '[:lower:]')" ``` ## Create Vault policy @@ -42,7 +44,7 @@ The path to the secret referenced in the `path` resource is the same value that ```HCL -path "consul-kv/data/secret/consul/partition-token" { +path "secret/data/consul/partition-token" { capabilities = ["read"] } ``` @@ -90,7 +92,7 @@ global: adminPartitionsRole: consul-partition-init acls: partitionToken: - secretName: consul-kv/data/secret/partition-token + secretName: secret/data/consul/partition-token secretKey: token ``` diff --git a/website/content/docs/k8s/deployment-configurations/vault/data-integration/replication-token.mdx b/website/content/docs/deploy/server/k8s/vault/data/replication-token.mdx similarity index 80% rename from website/content/docs/k8s/deployment-configurations/vault/data-integration/replication-token.mdx rename to website/content/docs/deploy/server/k8s/vault/data/replication-token.mdx index 6d6077facdac..9e316ab9c1b2 100644 --- a/website/content/docs/k8s/deployment-configurations/vault/data-integration/replication-token.mdx +++ b/website/content/docs/deploy/server/k8s/vault/data/replication-token.mdx @@ -1,15 +1,17 @@ --- layout: docs -page_title: Storing the ACL Replication Token in Vault +page_title: Store ACL replication token in Vault secrets backend description: >- Configuring the Consul Helm chart to use an ACL replication token stored in Vault. --- -# Storing the ACL Replication Token in Vault +# Store ACL replication token in Vault secrets backend This topic describes how to configure the Consul Helm chart to use an ACL replication token stored in Vault. + ## Overview -To use an ACL replication token stored in Vault, follow the steps outlined in the [Data Integration](/consul/docs/k8s/deployment-configurations/vault/data-integration) section. + +To use an ACL replication token stored in Vault, follow the steps outlined in the [Data Integration](/consul/docs/deploy/server/k8s/vault/data) section. Complete the following steps once: 1. Store the secret in Vault. @@ -20,16 +22,17 @@ Repeat the following steps for each datacenter in the cluster: 1. Update the Consul on Kubernetes helm chart. ## Prerequisites + Prior to setting up the data integration between Vault and Consul on Kubernetes, you will need to have: -1. Read and completed the steps in the [Systems Integration](/consul/docs/k8s/deployment-configurations/vault/systems-integration) section of [Vault as a Secrets Backend](/consul/docs/k8s/deployment-configurations/vault). -2. Read the [Data Integration Overview](/consul/docs/k8s/deployment-configurations/vault/data-integration) section of [Vault as a Secrets Backend](/consul/docs/k8s/deployment-configurations/vault). +1. Read and completed the steps in the [Systems Integration](/consul/docs/deploy/server/k8s/vault/backend) section of [Vault as a Secrets Backend](/consul/docs/deploy/server/k8s/vault). +2. 
Read the [Data Integration Overview](/consul/docs/deploy/server/k8s/vault/data) section of [Vault as a Secrets Backend](/consul/docs/deploy/server/k8s/vault). ## Store the Secret in Vault First, generate and store the ACL replication token in Vault. You will only need to perform this action once: ```shell-session -$ vault kv put consul-kv/secret/replication-token token="$(uuidgen | tr '[:upper:]' '[:lower:]')" +$ vault kv put secret/consul/replication-token token="$(uuidgen | tr '[:upper:]' '[:lower:]')" ``` ## Create Vault policy @@ -41,7 +44,7 @@ The path to the secret referenced in the `path` resource is the same value that ```HCL -path "consul-kv/data/secret/replication-token" { +path "secret/data/consul/replication-token" { capabilities = ["read"] } ``` @@ -88,7 +91,7 @@ global: manageSystemACLsRole: consul-server-acl-init acls: replicationToken: - secretName: consul-kv/data/secret/replication-token + secretName: secret/data/consul/replication-token secretKey: token ``` diff --git a/website/content/docs/deploy/server/k8s/vault/data/snapshot-agent.mdx b/website/content/docs/deploy/server/k8s/vault/data/snapshot-agent.mdx new file mode 100644 index 000000000000..831b5a11c46c --- /dev/null +++ b/website/content/docs/deploy/server/k8s/vault/data/snapshot-agent.mdx @@ -0,0 +1,106 @@ +--- +layout: docs +page_title: Store snapshot agent configuration in Vault secrets backend +description: >- + Configuring the Consul Helm chart to use a snapshot agent config stored in Vault. +--- + +# Store snapshot agent configuration in Vault secrets backend + +This topic describes how to configure the Consul Helm chart to use a snapshot agent configuration stored in Vault. + +## Overview + +To use a snapshot agent configuration stored in Vault, follow the steps outlined in the [Data Integration](/consul/docs/deploy/server/k8s/vault/data) section. + +Complete the following steps once: + 1. Store the secret in Vault. + 1. Create a Vault policy that authorizes the desired level of access to the secret. + +Repeat the following steps for each datacenter in the cluster: + 1. Create Vault Kubernetes auth roles that link the policy to each Consul on Kubernetes service account that requires access. + 1. Update the Consul on Kubernetes helm chart. + +## Prerequisites + +Before you set up data integration between Vault and Consul on Kubernetes, complete the following prerequisites: +1. Read and completed the steps in the [Systems Integration](/consul/docs/deploy/server/k8s/vault/backend) section of [Vault as a Secrets Backend](/consul/docs/deploy/server/k8s/vault). +2. Read the [Data Integration Overview](/consul/docs/deploy/server/k8s/vault/data) section of [Vault as a Secrets Backend](/consul/docs/deploy/server/k8s/vault). + +## Store the Secret in Vault + +First, store the snapshot agent config in Vault: + +```shell-session +$ vault kv put secret/consul/snapshot-agent-config key="" +``` + +## Create Vault policy + +Next, you will need to create a policy that allows read access to this secret. + +The path to the secret referenced in the `path` resource is the same values that you will configure in the `client.snapshotAgent.configSecret.secretName` Helm configuration (refer to [Update Consul on Kubernetes Helm chart](#update-consul-on-kubernetes-helm-chart)). 
+ + + +```HCL +path "secret/data/consul/snapshot-agent-config" { + capabilities = ["read"] +} +``` + + + +Apply the Vault policy by issuing the `vault policy write` CLI command: + +```shell-session +$ vault policy write snapshot-agent-config-policy snapshot-agent-config-policy.hcl +``` + +## Create Vault Authorization Roles for Consul + +Next, add this policy to your Consul server Kubernetes auth role: + +```shell-session +$ vault write auth/kubernetes/role/consul-server \ + bound_service_account_names= \ + bound_service_account_namespaces= \ + policies=snapshot-agent-config-policy \ + ttl=1h +``` +Note that if you have other policies associated +with the Consul server service account that are not in the example, you need to include those as well. + +To find out the service account name of the Consul snapshot agent, +you can run the following `helm template` command with your Consul on Kubernetes values file: + +```shell-session +$ helm template --release-name ${RELEASE_NAME} -s templates/server-serviceaccount.yaml hashicorp/consul -f values.yaml +``` + +## Update Consul on Kubernetes Helm chart + +Now that you have configured Vault, you can configure the Consul Helm chart to +use the snapshot agent configuration in Vault: + + + +```yaml +global: + secretsBackend: + vault: + enabled: true + consulServerRole: consul-server +client: + snapshotAgent: + configSecret: + secretName: secret/data/consul/snapshot-agent-config + secretKey: key +``` + + + +Note that `client.snapshotAgent.configSecret.secretName` is the path of the secret in Vault. +This should be the same path as the one you included in your Vault policy. +`client.snapshotAgent.configSecret.secretKey` is the key inside the secret data. This should be the same +as the key you passed when creating the snapshot agent config secret in Vault. diff --git a/website/content/docs/deploy/server/k8s/vault/data/tls-certificate.mdx b/website/content/docs/deploy/server/k8s/vault/data/tls-certificate.mdx new file mode 100644 index 000000000000..ebaeaea597cd --- /dev/null +++ b/website/content/docs/deploy/server/k8s/vault/data/tls-certificate.mdx @@ -0,0 +1,205 @@ +--- +layout: docs +page_title: Vault as the Consul server TLS certificate provider on Kubernetes +description: >- + Configuring the Consul Helm chart to use TLS certificates issued by Vault for the Consul server. +--- + +# Vault as the Consul server TLS certificate provider on Kubernetes + +To use Vault as the server TLS certificate provider on Kubernetes, complete a modified version of the steps outlined in the [Data Integration](/consul/docs/deploy/server/k8s/vault/data) section. + +Complete the following steps once: + 1. Create a Vault policy that authorizes the desired level of access to the secret. + +Repeat the following steps for each datacenter in the cluster: + 1. (Added) Configure allowed domains for PKI certificates + 1. Create Vault Kubernetes auth roles that link the policy to each Consul on Kubernetes service account that requires access. + 1. Update the Consul on Kubernetes helm chart. + +## Prerequisites + +Prior to setting up the data integration between Vault and Consul on Kubernetes, you will need to have: +1. Read and completed the steps in the [Systems Integration](/consul/docs/deploy/server/k8s/vault/backend) section of [Vault as a Secrets Backend](/consul/docs/deploy/server/k8s/vault). +2. Read the [Data Integration Overview](/consul/docs/deploy/server/k8s/vault/data) section of [Vault as a Secrets Backend](/consul/docs/deploy/server/k8s/vault). +3. 
Complete the [Bootstrapping the PKI Engine](#bootstrapping-the-pki-engine) section. + +## Bootstrapping the PKI Engine + +Issue the following commands to enable and configure the PKI Secrets Engine to server +TLS certificates to Consul. + +* Enable the PKI Secrets Engine: + + ```shell-session + $ vault secrets enable pki + ``` + +* Tune the engine to enable longer TTL: + + ```shell-session + $ vault secrets tune -max-lease-ttl=87600h pki + ``` + +* Generate the root CA: + + -> **Note:** The `common_name` value is comprised of combining `global.datacenter` dot `global.domain`. + + ```shell-session + $ vault write -field=certificate pki/root/generate/internal \ + common_name="dc1.consul" \ + ttl=87600h + ``` +## Create Vault policies +To use Vault to issue Server TLS certificates, you will need to create the following: + +1. Create a policy that allows `["create", "update"]` access to the + [certificate issuing URL](/vault/api-docs/secret/pki#generate-certificate) so the Consul servers can + fetch a new certificate/key pair. + + The path to the secret referenced in the `path` resource is the same value that you will configure in the `server.serverCert.secretName` Helm configuration (refer to [Update Consul on Kubernetes Helm chart](#update-consul-on-kubernetes-helm-chart)). + + + + ```HCL + path "pki/issue/consul-server" { + capabilities = ["create", "update"] + } + ``` + + + +1. Apply the Vault policy by issuing the `vault policy write` CLI command: + + ```shell-session + $ vault policy write consul-server consul-server-policy.hcl + ``` + +1. Create a policy that allows `["read"]` access to the [CA URL](/vault/api-docs/secret/pki), +this is required for the Consul components to communicate with the Consul servers in order to fetch their auto-encryption certificates. + + The path to the secret referenced in the `path` resource is the same value that you will configure in the `global.tls.caCert.secretName` Helm configuration (refer to [Update Consul on Kubernetes Helm chart](#update-consul-on-kubernetes-helm-chart)). + + + + ```HCL + path "pki/cert/ca" { + capabilities = ["read"] + } + ``` + + + + ```shell-session + $ vault policy write ca-policy ca-policy.hcl + ``` + +1. Configure allowed domains for PKI certificates. + + Next, a Vault role for the PKI engine will set the default certificate issuance parameters: + + ```shell-session + $ vault write pki/roles/consul-server \ + allowed_domains="" \ + allow_subdomains=true \ + allow_bare_domains=true \ + allow_localhost=true \ + max_ttl="720h" + ``` + + To generate the `` use the following script as a template: + + ```shell-session + #!/bin/sh + + # NAME is set to either the value from `global.name` from your Consul K8s value file, or your $HELM_RELEASE_NAME-consul + export NAME=consulk8s + # NAMESPACE is where the Consul on Kubernetes is installed + export NAMESPACE=consul + # DATACENTER is the value of `global.datacenter` from your Helm values config file + export DATACENTER=dc1 + + echo allowed_domains=\"$DATACENTER.consul, $NAME-server, $NAME-server.$NAMESPACE, $NAME-server.$NAMESPACE.svc\" + ``` + +1. Finally, Kubernetes auth roles need to be created for servers, clients, and components. 
+ + Role for Consul servers: + ```shell-session + $ vault write auth/kubernetes/role/consul-server \ + bound_service_account_names= \ + bound_service_account_namespaces= \ + policies=consul-server \ + ttl=1h + ``` + + To find out the service account name of the Consul server, + you can run: + + ```shell-session + $ helm template --release-name ${RELEASE_NAME} --show-only templates/server-serviceaccount.yaml hashicorp/consul -f values.yaml + ``` + + Role for Consul clients: + + ```shell-session + $ vault write auth/kubernetes/role/consul-client \ + bound_service_account_names= \ + bound_service_account_namespaces=default \ + policies=ca-policy \ + ttl=1h + ``` + + To find out the service account name of the Consul client, use the command below. + ```shell-session + $ helm template --release-name ${RELEASE_NAME} --show-only templates/client-serviceaccount.yaml hashicorp/consul -f values.yaml + ``` + + Role for CA components: + ```shell-session + $ vault write auth/kubernetes/role/consul-ca \ + bound_service_account_names="*" \ + bound_service_account_namespaces= \ + policies=ca-policy \ + ttl=1h + ``` + + The above Vault Roles will now be your Helm values for `global.secretsBackend.vault.consulServerRole` and + `global.secretsBackend.vault.consulCARole` respectively. + +## Update Consul on Kubernetes Helm chart + +Next, configure the Consul Helm chart to +use the server TLS certificates from Vault: + + + +```yaml +global: + secretsBackend: + vault: + enabled: true + consulServerRole: consul-server + consulClientRole: consul-client + consulCARole: consul-ca + tls: + enableAutoEncrypt: true + enabled: true + caCert: + secretName: "pki/cert/ca" +server: + serverCert: + secretName: "pki/issue/consul-server" + extraVolumes: + - type: "secret" + name: + load: "false" +``` + + + +The `vaultCASecret` is the Kubernetes secret that stores the CA Certificate that is used for Vault communication. To provide a CA, you first need to create a Kubernetes secret containing the CA. For example, you may create a secret with the Vault CA like so: + +```shell-session +$ kubectl create secret generic vault-ca --from-file vault.ca=/path/to/your/vault/ +``` diff --git a/website/content/docs/deploy/server/k8s/vault/data/webhook-certificate.mdx b/website/content/docs/deploy/server/k8s/vault/data/webhook-certificate.mdx new file mode 100644 index 000000000000..1913c9141429 --- /dev/null +++ b/website/content/docs/deploy/server/k8s/vault/data/webhook-certificate.mdx @@ -0,0 +1,237 @@ +--- +layout: docs +page_title: Vault as service mesh webhook certificate provider on Kubernetes +description: >- + Configuring the Consul Helm chart to use TLS certificates issued by Vault for the Consul Controller and Connect Inject webhooks. +--- + +# Vault as service mesh webhook certificate provider on Kubernetes + +This topic describes how to configure the Consul Helm chart to use TLS certificates issued by Vault in the Consul controller and connect inject webhooks. + +## Overview + +In a Consul Helm chart configuration that does not use Vault, `webhook-cert-manager` ensures that a valid certificate is updated to the `mutatingwebhookconfiguration` of either the controller or connect inject to ensure that Kubernetes can communicate with each of these services. + +When Vault is configured as the controller and connect inject Webhook Certificate Provider on Kubernetes: + - `webhook-cert-manager` is no longer deployed to the cluster. 
+ - Controller and connect inject each get their webhook certificates from its own Vault PKI mount via the injected Vault Agent. + - Controller and connect inject each need to be configured with its own Vault Role that has necessary permissions to receive certificates from its respective PKI mount. + - Controller and connect inject each locally update its own `mutatingwebhookconfiguration` so that Kubernetes can relay events. + - Vault manages certificate rotation and rotates certificates to each webhook. + +To use Vault as the controller and connect inject Webhook Certificate Provider, we will need to modify the steps outlined in the [Data Integration](/consul/docs/deploy/server/k8s/vault/data) section: + +These following steps will be repeated for each datacenter: + 1. Create a Vault policy that authorizes the desired level of access to the secret. + 1. (Added) Create Vault PKI roles for controller and connect inject that each establish the domains that each is allowed to issue certificates for. + 1. Create Vault Kubernetes auth roles that link the policy to each Consul on Kubernetes service account that requires access. + 1. Configure the Vault Kubernetes auth roles in the Consul on Kubernetes helm chart. + +## Prerequisites + +Complete the following prerequisites prior to implementing the integration described in this topic: +1. Verify that you have completed the steps described in [Systems Integration](/consul/docs/deploy/server/k8s/vault/backend) section of [Vault as a Secrets Backend](/consul/docs/deploy/server/k8s/vault). +1. You should be familiar with the [Data Integration Overview](/consul/docs/deploy/server/k8s/vault/data) section of [Vault as a Secrets Backend](/consul/docs/deploy/server/k8s/vault). +1. Configure [Vault as the Server TLS Certificate Provider on Kubernetes](/consul/docs/deploy/server/k8s/vault/data/tls-certificate) +1. Configure [Vault as the Service Mesh Certificate Provider on Kubernetes](/consul/docs/deploy/server/k8s/vault/data/mesh-ca) + +## Bootstrapping the PKI Engines +Issue the following commands to enable and configure the PKI Secrets Engine to serve TLS certificates for the controller and connect inject webhooks: + +* Mount the PKI Secrets Engine for each: + + ```shell-session + $ vault secrets enable -path=controller pki + ``` + + ```shell-session + $ vault secrets enable -path=connect-inject pki + ``` + +* Tune the engine mounts to enable longer TTL: + + ```shell-session + $ vault secrets tune -max-lease-ttl=87600h controller + ``` + + ```shell-session + $ vault secrets tune -max-lease-ttl=87600h connect-inject + ``` + +* Generate the root CA for each: + + ```shell-session + $ vault write -field=certificate controller/root/generate/internal \ + common_name="-controller-webhook" \ + ttl=87600h + ``` + + ```shell-session + $ vault write -field=certificate connect-inject/root/generate/internal \ + common_name="-connect-injector" \ + ttl=87600h + ``` +## Create Vault Policies +1. Create a policy that allows `["create", "update"]` access to the +[certificate issuing URL](/vault/api-docs/secret/pki) so Consul controller and connect inject can fetch a new certificate/key pair and provide it to the Kubernetes `mutatingwebhookconfiguration`. 
+ + The path to the secret referenced in the `path` resource is the same value that you will configure in the `global.secretsBackend.vault.controller.tlsCert.secretName` and `global.secretsBackend.vault.connectInject.tlsCert.secretName` Helm configuration (refer to [Update Consul on Kubernetes Helm chart](#update-consul-on-kubernetes-helm-chart)). + + ```shell-session + $ vault policy write controller-tls-policy - <` for each use the following script as a template: + + ```shell-session + #!/bin/sh + + # NAME is set to either the value from `global.name` from your Consul K8s value file, or your $HELM_RELEASE_NAME-consul + export NAME=consulk8s + # NAMESPACE is where the Consul on Kubernetes is installed + export NAMESPACE=consul + # DATACENTER is the value of `global.datacenter` from your Helm values config file + export DATACENTER=dc1 + + echo allowed_domains_controller=\"${NAME}-controller-webhook,${NAME}-controller-webhook.${NAMESPACE},${NAME}-controller-webhook.${NAMESPACE}.svc,${NAME}-controller-webhook.${NAMESPACE}.svc.cluster.local\"" + + echo allowed_domains_connect_inject=\"${NAME}-connect-injector,${NAME}-connect-injector.${NAMESPACE},${NAME}-connect-injector.${NAMESPACE}.svc,${NAME}-connect-injector.${NAMESPACE}.svc.cluster.local\"" + ``` + +1. Finally, Kubernetes auth roles need to be created for controller and connect inject webhooks. + + The path to the secret referenced in the `path` resource is the same values that you will configure in the `global.secretsBackend.vault.controllerRole` and `global.secretsBackend.vault.connectInjectRole` Helm configuration (refer to [Update Consul on Kubernetes Helm chart](#update-consul-on-kubernetes-helm-chart)). + + Role for Consul controller webhooks: + + ```shell-session + $ vault write auth/kubernetes/role/controller-role \ + bound_service_account_names= \ + bound_service_account_namespaces= \ + policies=controller-ca-policy \ + ttl=1h + ``` + + To find out the service account name of the Consul controller, + you can run: + + ```shell-session + $ helm template --release-name ${RELEASE_NAME} --show-only templates/controller-serviceaccount.yaml hashicorp/consul -f values.yaml + ``` + + Role for Consul connect inject webhooks: + + ```shell-session + $ vault write auth/kubernetes/role/connect-inject-role \ + bound_service_account_names= \ + bound_service_account_namespaces= \ + policies=connect-inject-ca-policy \ + ttl=1h + ``` + + To find out the service account name of the Consul connect inject, use the command below. 
+ ```shell-session + $ helm template --release-name ${RELEASE_NAME} --show-only templates/connect-inject-serviceaccount.yaml hashicorp/consul -f values.yaml + ``` + +## Update Consul on Kubernetes Helm chart + +Now that we've configured Vault, you can configure the Consul Helm chart to +use the Server TLS certificates from Vault: + + + +```yaml +global: + secretsBackend: + vault: + enabled: true + consulServerRole: "consul-server" + consulClientRole: "consul-client" + consulCARole: "consul-ca" + controllerRole: "controller-role" + connectInjectRole: "connect-inject-role" + connectInject: + caCert: + secretName: "connect-inject/cert/ca" + tlsCert: + secretName: "connect-inject/issue/connect-inject-role" + tls: + enabled: true + enableAutoEncrypt: true + caCert: + secretName: "pki/cert/ca" +server: + serverCert: + secretName: "pki/issue/consul-server" + extraVolumes: + - type: "secret" + name: + load: "false" +connectInject: + enabled: true +``` + + + +The `vaultCASecret` is the Kubernetes secret that stores the CA Certificate that is used for Vault communication. To provide a CA, you first need to create a Kubernetes secret containing the CA. For example, you may create a secret with the Vault CA like so: + +```shell-session +$ kubectl create secret generic vault-ca --from-file vault.ca=/path/to/your/vault/ +``` diff --git a/website/content/docs/deploy/server/k8s/vault/index.mdx b/website/content/docs/deploy/server/k8s/vault/index.mdx new file mode 100644 index 000000000000..f38623939dbe --- /dev/null +++ b/website/content/docs/deploy/server/k8s/vault/index.mdx @@ -0,0 +1,54 @@ +--- +layout: docs +page_title: Vault as secrets backend — Overview +description: >- + Using Vault as the secrets backend for Consul on Kubernetes. +--- + +# Vault as secrets backend — Overview + +By default, Consul Helm chart will expect that any credentials it needs are stored as Kubernetes secrets. +As of Consul 1.11 and Consul Helm chart v0.38.0, we integrate more natively with Vault making it easier +to use Consul Helm chart with Vault as the secrets storage backend. + +## Secrets Overview + +By default, Consul on Kubernetes leverages Kubernetes secrets which are base64 encoded and unencrypted. In addition, the following limitations exist with managing sensitive data within Kubernetes secrets: + +- There are no lease or time-to-live properties associated with these secrets. +- Kubernetes can only manage resources, such as secrets, within a cluster boundary. If you have sets of clusters, the resources across them need to be managed separately. + +By leveraging Vault as a secrets backend for Consul on Kubernetes, you can now manage and store Consul related secrets within a centralized Vault cluster to use across one or many Consul on Kubernetes datacenters. + +### Secrets stored in the Vault KV Secrets Engine + +The following secrets can be stored in Vault KV secrets engine, which is meant to handle arbitrary secrets: +- ACL Bootstrap token +- ACL Partition token +- ACL Replication token +- Enterprise license +- Gossip encryption key +- Snapshot Agent config + + +### Secrets generated and managed by the Vault PKI Engine + +The following TLS certificates and keys can be generated and managed by the Vault PKI Engine, which is meant to handle things like certificate expiration and rotation: +- Server TLS credentials +- Service Mesh and Consul client TLS credentials + +## Requirements + +1. Vault 1.9+ and Vault-k8s 0.14+ is required. +1. Vault must be installed and accessible to the Consul on Kubernetes installation. 
+1. `global.tls.enableAutoEncrypt=true` is required if TLS is enabled for the Consul installation when using the Vault secrets backend.
+1. The Vault installation must be initialized and unsealed, with the KV2 and PKI secrets engines and the Kubernetes Auth Method enabled.
+
+## Next Steps
+
+The Vault integration with Consul on Kubernetes has two aspects or phases:
+- [Systems Integration](/consul/docs/deploy/server/k8s/vault/backend) - Configure Vault and Consul on Kubernetes systems to leverage Vault as the secrets store.
+- [Data Integration](/consul/docs/deploy/server/k8s/vault/data) - Configure specific secrets to be stored and
+retrieved from Vault for use with Consul on Kubernetes.
+
+As a next step, please proceed to the [Systems Integration](/consul/docs/deploy/server/k8s/vault/backend) overview to understand how to first set up Vault and Consul on Kubernetes to leverage Vault as a secrets backend.
\ No newline at end of file
diff --git a/website/content/docs/deploy/server/vm/bootstrap.mdx b/website/content/docs/deploy/server/vm/bootstrap.mdx
new file mode 100644
index 000000000000..4863343ad61e
--- /dev/null
+++ b/website/content/docs/deploy/server/vm/bootstrap.mdx
@@ -0,0 +1,140 @@
+---
+layout: docs
+page_title: Bootstrap a Consul datacenter (VM)
+description: >-
+  Bootstrapping a datacenter is the initial deployment process in Consul that starts server agents and joins them together. Learn how to deploy and join Consul servers running on a virtual machine.
+---
+
+# Bootstrap a Consul datacenter
+
+This page describes the process to bootstrap a Consul datacenter running on a virtual machine (VM).
+
+## Background
+
+In Consul, a datacenter is an organizational unit that governs the assignment of resources such as servers, proxies, and gateways. Objects in a datacenter share security resources, including tokens and mTLS certificates, which are managed by a Consul server cluster. This server cluster is a group of one to five Consul servers deployed within a single cloud region.
+
+When a cluster is first created, it must initiate the consensus protocol and elect a leader before it can service requests. _Bootstrapping a datacenter_ refers to the process of configuring and joining the initial server nodes in a cluster. After you bootstrap a datacenter, you can use [cloud auto-join](/consul/docs/deploy/server/cloud-auto-join) to automatically connect to Consul agents in the control plane and data plane as you deploy them.
+
+## Prerequisites
+
+Bootstrapping a Consul datacenter requires at least one node with a local [Consul binary installation](/consul/install). For testing and development scenarios, you can use your local node to deploy a datacenter that runs as a single Consul server.
+
+## Bootstrap a datacenter
+
+You can bootstrap a datacenter by passing configurations through flags on the `consul agent` command. You can also define multiple servers using a single agent configuration file, and then start each agent using that file. When the Consul servers detect the expected number of servers, they hold an election and communicate according to the Raft protocol.
+
+Complete the following steps to bootstrap a cluster:
+
+1. Initiate the cluster and specify the number of server agents to start.
+1. Join the servers either automatically or manually.
+
+Use the [`-bootstrap-expect`](/consul/commands/agent#_bootstrap_expect) flag on the command line or configure `bootstrap_expect` in the agent configuration.
This option declares the expected number of servers and automatically bootstraps them when that number of servers becomes available. To prevent inconsistencies and deployments where multiple servers consider themselves the leader, you should either specify the same value for each server's `bootstrap_expect` parameter or specify no value at all on all the servers. When `-bootstrap-expect` is omitted, Consul defaults to `1` for the number of expected servers.
+
+The following command bootstraps a datacenter consisting of three Consul servers:
+
+```shell-session
+$ consul agent -data-dir /tmp/consul -server -bootstrap-expect 3
+```
+
+You can also create an agent configuration file to use when deploying multiple Consul servers. The following example demonstrates a basic agent configuration for bootstrapping a datacenter with three servers.
+
+```hcl
+datacenter = "dc1"
+data_dir = "/tmp/consul"
+log_level = "INFO"
+server = true
+bootstrap_expect = 3
+```
+
+To apply the agent configuration to each server, run the `consul agent` command on each VM.
+
+```shell-session
+$ consul agent -config-file=bootstrap.hcl
+```
+
+Consul prints a warning message to the console when the number of servers in a cluster is less than the expected bootstrap number.
+
+```log
+[WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
+```
+
+## Join the servers
+
+After you start the servers, you must join them in a cluster to initiate the Raft election. To join servers automatically, specify network addresses or [cloud auto-join](/consul/docs/deploy/server/cloud-auto-join) tags for supported cloud environments using either the [`-retry-join` CLI flag](/consul/commands/agent#_retry_join) or the [`retry_join` configuration option](/consul/docs/reference/agent/configuration-file/join#retry_join).
+
+The following examples demonstrate address options and their formatting for the `-retry-join` CLI flag.
+
+```shell-session
+$ consul agent -retry-join "consul.domain.internal"
+```
+
+```shell-session
+$ consul agent -retry-join "10.0.4.67"
+```
+
+```shell-session
+$ consul agent -retry-join "192.0.2.10:8304"
+```
+
+```shell-session
+$ consul agent -retry-join "[::1]:8301"
+```
+
+```shell-session
+$ consul agent -retry-join "consul.domain.internal" -retry-join "10.0.4.67"
+```
+
+```shell-session
+$ consul agent -retry-join "provider=aws tag_key=..."
+```
+
+## Verify the Raft status
+
+To verify that the bootstrap process completed successfully, use the [`consul info`](/consul/commands/info) command to check the cluster's current Raft status. In particular, verify the following:
+
+- The `raft.num_peers` value should be one less than the number of expected bootstrap servers.
+- The `raft.last_log_index` value should be a non-zero number.
+
+## Next steps
+
+After you bootstrap a datacenter, you can make additional changes to the datacenter by modifying the agent configuration and then running the [`consul reload` command](/consul/commands/reload).
+
+We recommend removing `bootstrap_expect` from agent configurations and reloading the agents after the initial bootstrap process is complete. This action prevents server agents that fail from unintentionally bootstrapping again after they restart. Instead, they will rejoin a datacenter's cluster automatically.
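+
+For example, a sketch of that cleanup on one server might look like the following. The filename `bootstrap.hcl` matches the example configuration above, and the reload assumes the agent's HTTP API is reachable at its default local address:
+
+```shell-session
+$ sed -i '/bootstrap_expect/d' bootstrap.hcl   # remove the bootstrap_expect setting
+$ consul reload
+Configuration reload triggered
+```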
+
+You can also enable Consul's browser-based user interface, deploy client agents, and register services in the Consul catalog for service discovery and service mesh use cases. Refer to the following topics for more information:
+
+- [Consul UI visualization](/consul/docs/fundamentals/interface/ui)
+- [Configure client agents](/consul/docs/deploy/workload/client/vm)
+- [Register service](/consul/docs/register/service/vm)
diff --git a/website/content/docs/deploy/server/vm/index.mdx b/website/content/docs/deploy/server/vm/index.mdx
new file mode 100644
index 000000000000..003863be311f
--- /dev/null
+++ b/website/content/docs/deploy/server/vm/index.mdx
@@ -0,0 +1,20 @@
+---
+layout: docs
+page_title: Deploy Consul on virtual machines overview
+description: >-
+  To deploy a Consul server on VMs, install the binary, bootstrap the server, and then join additional servers to the cluster.
+---
+
+# Deploy Consul server on virtual machines
+
+This topic provides an overview for deploying a Consul server when running Consul on virtual machines (VMs).
+
+## Overview
+
+The process to deploy a Consul server on virtual machines consists of the following steps:
+
+1. Install the Consul binary on the VM
+1. [Bootstrap the Consul server](/consul/docs/deploy/server/vm/bootstrap)
+1. [Join additional servers to the cluster](/consul/docs/deploy/server/cloud-auto-join)
+
+For step-by-step guidance on the processes described in this topic, refer to the [Deploy Consul on VMs getting started tutorial](/consul/tutorials/get-started-vms/virtual-machine-gs-deploy?variants=consul-workflow%3Alab).
\ No newline at end of file
diff --git a/website/content/docs/deploy/server/vm/requirements.mdx b/website/content/docs/deploy/server/vm/requirements.mdx
new file mode 100644
index 000000000000..35d6ddc9bd03
--- /dev/null
+++ b/website/content/docs/deploy/server/vm/requirements.mdx
@@ -0,0 +1,215 @@
+---
+layout: docs
+page_title: Consul server requirements on VMs
+description: >-
+  Consul servers require sufficient compute resources to communicate and process data quickly. Learn about Consul's minimum server requirements and recommendations for different workloads.
+---
+
+# Consul server requirements on VMs
+
+Since Consul servers run a [consensus protocol](/consul/docs/concept/consensus) to
+process all write operations and are contacted on nearly all read operations, server
+performance is critical for the overall throughput and health of a Consul cluster. Servers
+are generally I/O bound for writes because the underlying Raft log store performs a sync
+to disk every time an entry is appended. Servers are generally CPU bound for reads since
+reads work from a fully in-memory data store that is optimized for concurrent access.
+
+## Minimum Server Requirements ((#minimum))
+
+In Consul 0.7, the default server [performance parameters](/consul/docs/reference/agent/configuration-file/general#performance)
+were tuned to allow Consul to run reliably (but relatively slowly) on a server cluster of three
+[AWS t2.micro](https://aws.amazon.com/ec2/instance-types/) instances. These thresholds
+were determined empirically using a leader instance that was under sufficient read, write,
+and network load to cause it to permanently be at zero CPU credits, forcing it to the baseline
+performance mode for that instance type. Real-world workloads typically have more bursts of
+activity, so this is a conservative and pessimistic tuning strategy.
+ +This default was chosen based on feedback from users, many of whom wanted a low cost way +to run small production or development clusters with low cost compute resources, at the +expense of some performance in leader failure detection and leader election times. + +The default performance configuration is equivalent to this: + +```json +{ + "performance": { + "raft_multiplier": 5 + } +} +``` + +## Production Server Requirements ((#production)) + +When running Consul 0.7 and later in production, it is recommended to configure the server +[performance parameters](/consul/docs/reference/agent/configuration-file/general#performance) back to Consul's original +high-performance settings. This will let Consul servers detect a failed leader and complete +leader elections much more quickly than the default configuration which extends key Raft +timeouts by a factor of 5, so it can be quite slow during these events. + +The high performance configuration is simple and looks like this: + +```json +{ + "performance": { + "raft_multiplier": 1 + } +} +``` + +This value must take into account the network latency between the servers and the read/write load on the servers. + +The value of `raft_multiplier` is a scaling factor and directly affects the following parameters: + +| Param | Value | | +| ------------------ | -----: | ------: | +| HeartbeatTimeout | 1000ms | default | +| ElectionTimeout | 1000ms | default | +| LeaderLeaseTimeout | 500ms | default | + +By default, Consul uses a scaling factor of `5` (i.e. `raft_multiplier: 5`), which results in the following values: + +| Param | Value | Calculation | +| ------------------ | -----: | ----------: | +| HeartbeatTimeout | 5000ms | 5 x 1000ms | +| ElectionTimeout | 5000ms | 5 x 1000ms | +| LeaderLeaseTimeout | 2500ms | 5 x 500ms | + +~> **NOTE** Wide networks with more latency will perform better with larger values of `raft_multiplier`. + +The trade off is between leader stability and time to recover from an actual +leader failure. A short multiplier minimizes failure detection and election time +but may be triggered frequently in high latency situations. This can cause +constant leadership churn and associated unavailability. A high multiplier +reduces the chances that spurious failures will cause leadership churn but it +does this at the expense of taking longer to detect real failures and thus takes +longer to restore cluster availability. + +Leadership instability can also be caused by under-provisioned CPU resources and +is more likely in environments where CPU cycles are shared with other workloads. +In order for a server to remain the leader, it must send frequent heartbeat +messages to all other servers every few hundred milliseconds. If some number of +these are missing or late due to the leader not having sufficient CPU to send +them on time, the other servers will detect it as failed and hold a new +election. + +It's best to benchmark with a realistic workload when choosing a production server for Consul. +Here are some general recommendations: + +- Consul will make use of multiple cores, and at least 2 cores are recommended. + +- Spurious leader elections can be caused by networking + issues between the servers or insufficient CPU resources. 
Users in cloud environments + often bump their servers up to the next instance class with improved networking + and CPU until leader elections stabilize, and in Consul 0.7 or later the [performance + parameters](/consul/docs/reference/agent/configuration-file/general#performance) configuration now gives you tools + to trade off performance instead of upsizing servers. You can use the [`consul.raft.leader.lastContact` + telemetry](/consul/docs/reference/agent/telemetry#leadership-changes) to observe how the Raft timing is + performing and guide the decision to de-tune Raft performance or add more powerful + servers. + +- For DNS-heavy workloads, configuring all Consul agents in a cluster with the + [`allow_stale`](/consul/docs/reference/agent/configuration-file/dns#allow_stale) configuration option will allow reads to + scale across all Consul servers, not just the leader. Consul 0.7 and later enables stale reads + for DNS by default. See [Stale Reads](/consul/tutorials/networking/dns-caching#stale-reads) in the + [DNS Caching](/consul/tutorials/networking/dns-caching) guide for more details. It's also good to set + reasonable, non-zero [DNS TTL values](/consul/tutorials/networking/dns-caching#ttl-values) if your clients will + respect them. + +- In other applications that perform high volumes of reads against Consul, consider using the + [stale consistency mode](/consul/api-docs/features/consistency#stale) available to allow reads to scale + across all the servers and not just be forwarded to the leader. + +- In Consul 0.9.3 and later, a new [`limits`](/consul/docs/reference/agent/configuration-file/general#limits) configuration is + available on Consul clients to limit the RPC request rate they are allowed to make against the + Consul servers. After hitting the limit, requests will start to return rate limit errors until + time has passed and more requests are allowed. Configuring this across the cluster can help with + enforcing a max desired application load level on the servers, and can help mitigate abusive + applications. + +## Memory Requirements + +Consul server agents operate on a working set of data comprised of key/value +entries, the service catalog, prepared queries, access control lists, and +sessions in memory. These data are persisted through Raft to disk in the form +of a snapshot and log of changes since the previous snapshot for durability. + +When planning for memory requirements, you should typically allocate +enough RAM for your server agents to contain between 2 to 4 times the working +set size. You can determine the working set size by noting the value of +`consul.runtime.alloc_bytes` in the [Telemetry data](/consul/docs/reference/agent/telemetry). + +> NOTE: Consul is not designed to serve as a general purpose database, and you +> should keep this in mind when choosing what data are populated to the +> key/value store. + +## Read/Write Tuning + +Consul is write limited by disk I/O and read limited by CPU. Memory requirements will be dependent on the total size of KV pairs stored and should be sized according to that data (as should the hard drive storage). The limit on a key's value size is `512KB`. + +-> Consul is write limited by disk I/O and read limited by CPU. 
+ +For **write-heavy** workloads, the total RAM available for overhead must approximately be equal to + +``` +RAM NEEDED = number of keys * average key size * 2-3x +``` + +Since writes must be synced to disk (persistent storage) on a quorum of servers before they are committed, deploying a disk with high write throughput (or an SSD) will enhance performance on the write side. ([Documentation](/consul/commands/agent#_data_dir)) + +For a **read-heavy** workload, configure all Consul server agents with the `allow_stale` DNS option, or query the API with the `stale` [consistency mode](/consul/api-docs/features/consistency). By default, all queries made to the server are RPC forwarded to and serviced by the leader. By enabling stale reads, any server will respond to any query, thereby reducing overhead on the leader. Typically, the stale response is `100ms` or less from consistent mode but it drastically improves performance and reduces latency under high load. + +If the leader server is out of memory or the disk is full, the server eventually stops responding, loses its election and cannot move past its last commit time. However, by configuring `max_stale` and setting it to a large value, Consul will continue to respond to queries during such outage scenarios. ([max_stale documentation](/consul/docs/reference/agent/configuration-file/dns#max_stale)). + +It should be noted that `stale` is not appropriate for coordination where strong consistency is important (i.e. locking or application leader election). For critical cases, the optional `consistent` API query mode is required for true linearizability; the trade off is that this turns a read into a full quorum write so requires more resources and takes longer. + +**Read-heavy** clusters may take advantage of the [enhanced reading](/consul/docs/manage/scale/read-replica) feature (Enterprise) for better scalability. This feature allows additional servers to be introduced as non-voters. Being a non-voter, the server will still participate in data replication, but it will not block the leader from committing log entries. + +Consul's agents use network sockets for communicating with the other nodes (gossip) and with the server agent. In addition, file descriptors are also opened for watch handlers, health checks, and log files. For a **write heavy** cluster, the `ulimit` size must be increased from the default value (`1024`) to prevent the leader from running out of file descriptors. + +To prevent any CPU spikes from a misconfigured client, RPC requests to the server should be [rate limited](/consul/docs/reference/agent/configuration-file/general#limits) + +~> **NOTE** Rate limiting is configured on the client agent only. + +In addition, two [performance indicators](/consul/docs/reference/agent/telemetry) — `consul.runtime.alloc_bytes` and `consul.runtime.heap_objects` — can help diagnose if the current sizing is not adequately meeting the load. + +## Service Mesh Certificate Signing CPU Limits + +If you enable [service mesh](/consul/docs/connect), the leader server will need +to perform public key signing operations for every service instance in the +cluster. Typically these operations are fast on modern hardware, however when +the CA is changed or its key rotated, the leader will face an influx of +requests for new certificates for every service instance running. 
+ +While the client agents distribute these randomly over 30 seconds to avoid an +immediate thundering herd, they don't have enough information to tune that +period based on the number of certificates in use in the cluster so picking +longer smearing results in artificially slow rotations for small clusters. + +Smearing requests over 30s is sufficient to bring RPC load to a reasonable level +in all but the very largest clusters, but the extra CPU load from cryptographic +operations could impact the server's normal work. To limit that, Consul since +1.4.1 exposes two ways to limit the impact Certificate signing has on the leader +[`csr_max_per_second`](/consul/docs/reference/agent/configuration-file/service-mesh#ca_csr_max_per_second) and +[`csr_max_concurrent`](/consul/docs/reference/agent/configuration-file/service-mesh#ca_csr_max_concurrent). + +By default we set a limit of 50 per second which is reasonable on modest +hardware but may be too low and impact rotation times if more than 1500 service +instances are using service mesh in the cluster. `csr_max_per_second` is likely best +if you have fewer than four cores available since a whole core being used by +signing is likely to impact the server stability if it's all or a large portion +of the cores available. The downside is that you need to capacity plan: how many +service instances will need service mesh certificates? What CSR rate can your server +tolerate without impacting stability? How fast do you want CA rotations to +process? + +For larger production deployments, we generally recommend multiple CPU cores for +servers to handle the normal workload. With four or more cores available, it's +simpler to limit signing CPU impact with `csr_max_concurrent` rather than tune +the rate limit. This effectively sets how many CPU cores can be monopolized by +certificate signing work (although it doesn't pin that work to specific cores). +In this case `csr_max_per_second` should be disabled (set to `0`). + +For example if you have an 8 core server, setting `csr_max_concurrent` to `1` +would allow you to process CSRs as fast as a single core can (which is likely +sufficient for the very large clusters), without consuming all available +CPU cores and impacting normal server work or stability. diff --git a/website/content/docs/deploy/server/wal/index.mdx b/website/content/docs/deploy/server/wal/index.mdx new file mode 100644 index 000000000000..90b4c8fc9f17 --- /dev/null +++ b/website/content/docs/deploy/server/wal/index.mdx @@ -0,0 +1,46 @@ +--- +layout: docs +page_title: WAL LogStore backend overview +description: >- + The WAL (write-ahead log) LogStore backend shipped in Consul v1.15. It became the default backend in v1.20. +--- + +# WAL LogStore backend overview + +This topic provides an overview of the WAL (write-ahead log) LogStore backend. The WAL LogStore replaces the BoltDB backend used in previous versions of Consul. + +## WAL versus BoltDB + +WAL implements a traditional log with rotating, append-only log files. WAL resolves many issues with the existing `LogStore` provided by the BoltDB backend. The BoltDB `LogStore` is a copy-on-write BTree, which is not optimized for append-only, write-heavy workloads. + +### BoltDB storage scalability issues + +The existing BoltDB log store inefficiently stores append-only logs to disk because it was designed as a full key-value database. It is a single file that only ever grows. 
Deleting the oldest logs, which Consul does regularly when it makes new snapshots of the state, leaves free space in the file. The free space must be tracked in a `freelist` so that BoltDB can reuse it on future writes. By contrast, a simple segmented log can delete the oldest log files from disk.
+
+A burst of writes at double or triple the normal volume can suddenly cause the log file to grow to several times its steady-state size. After Consul takes the next snapshot and truncates the oldest logs, the resulting file is mostly empty space.
+
+To track the free space, Consul must write extra metadata to disk with every write. The metadata is proportional to the amount of free pages, so after a large burst, write latencies tend to increase. In some cases, the latencies cause serious performance degradation to the cluster.
+
+To mitigate the risks associated with sudden bursts of log data, Consul tries to prevent too many logs from accumulating in the LogStore. Significantly larger BoltDB files are slower to append to because the tree is deeper and the freelist larger.
+
+But the larger the file, the more likely it is to have a large freelist or to suddenly form one after a burst of writes. For this reason, many of Consul's default options associated with snapshots, truncating logs, and keeping the log history aggressively keep BoltDB small rather than using disk IO more efficiently.
+
+Other reliability issues, such as [raft replication capacity issues](/consul/docs/agent/monitor/telemetry#raft-replication-capacity-issues), are much simpler to solve without the performance concerns caused by storing more logs in BoltDB.
+
+### WAL approaches storage issues differently
+
+When directly measured, WAL is more performant than BoltDB because it solves a simpler storage problem. Despite this, some users may not notice a significant performance improvement from the upgrade with the same configuration and workload. In this case, the benefit of WAL is that retaining more logs does not affect write performance. As a result, strategies for reducing disk IO with slower snapshots, or for keeping more logs so that slower followers can catch up with cluster state, are all possible, increasing the reliability of the deployment.
+
+## WAL quality assurance
+
+The WAL backend has been tested thoroughly during development:
+
+- Every component in the WAL, from [metadata management](https://github.com/hashicorp/raft-wal/blob/main/types/meta.go) and [log file encoding](https://github.com/hashicorp/raft-wal/blob/main/types/segment.go) to actual [file-system interaction](https://github.com/hashicorp/raft-wal/blob/main/types/vfs.go), is abstracted so unit tests can simulate difficult-to-reproduce disk failures.
+
+- We used the [application-level intelligent crash explorer (ALICE)](https://github.com/hashicorp/raft-wal/blob/main/alice/README.md) to exhaustively simulate thousands of possible crash failure scenarios. WAL correctly recovered from all scenarios.
+
+- We ran hundreds of tests in a performance testing cluster with checksum verification enabled and did not detect data loss or corruption. We will continue testing before making WAL the default backend.
+
+We are aware of how complex and critical disk-persistence is for your data.
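+
+If you want to opt a server in to WAL on a version where it is not yet the default, a minimal sketch of the relevant agent configuration follows. It mirrors the `raft_logstore` block shown in the revert guide, with the backend set to `wal`; the `verification` block is optional, and the interval shown is only illustrative:
+
+```hcl
+raft_logstore {
+  backend = "wal"
+  verification {
+    enabled  = true
+    interval = "60s"
+  }
+}
+```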
+
+We hope that many users at different scales will try WAL in their environments after upgrading to 1.15 or later and report success or failure so that we can confidently replace BoltDB as the default for new clusters in a future release.
diff --git a/website/content/docs/deploy/server/wal/monitor-raft.mdx b/website/content/docs/deploy/server/wal/monitor-raft.mdx
new file mode 100644
index 000000000000..ff9af90f45ff
--- /dev/null
+++ b/website/content/docs/deploy/server/wal/monitor-raft.mdx
@@ -0,0 +1,83 @@
+---
+layout: docs
+page_title: Monitor Raft metrics and logs for WAL
+description: >-
+  Learn how to monitor Raft metrics emitted by the experimental WAL (write-ahead log) LogStore backend shipped in Consul 1.15.
+---
+
+# Monitor Raft metrics and logs for WAL
+
+This topic describes how to monitor Raft metrics and logs if you are testing the WAL backend. We strongly recommend monitoring the Consul cluster, especially the target server, for evidence that the WAL backend is not functioning correctly. Refer to [WAL LogStore backend](/consul/docs/deploy/server/wal) for additional information about the WAL backend.
+
+## Monitor for checksum failures
+
+Log store verification failures on any server, regardless of whether you are running the BoltDB or WAL backend, are unrecoverable errors. Consul may report the following errors in logs.
+
+### Read failures: Disk Corruption
+
+```log hideClipboard
+2022-11-15T22:41:23.546Z [ERROR] agent.raft.logstore: verification checksum FAILED: storage corruption rangeStart=1234 rangeEnd=3456 leaderChecksum=0xc1... readChecksum=0x45...
+```
+
+This error indicates that the server read back data that is different from what it wrote to disk, which points to corruption in the storage backend or filesystem.
+
+For convenience, Consul also increments the `consul.raft.logstore.verifier.read_checksum_failures` metric when this occurs.
+
+### Write failures: In-flight Corruption
+
+The following error indicates that the checksum on the follower did not match the leader when the follower received the logs. The error implies that the corruption happened in the network or software and not the log store:
+
+```log hideClipboard
+2022-11-15T22:41:23.546Z [ERROR] agent.raft.logstore: verification checksum FAILED: in-flight corruption rangeStart=1234 rangeEnd=3456 leaderChecksum=0xc1... followerWriteChecksum=0x45...
+```
+
+It is unlikely that this error indicates an issue with the storage backend, but you should take the same steps to resolve and report it.
+
+The `consul.raft.logstore.verifier.write_checksum_failures` metric increments when this error occurs.
+
+## Resolve checksum failures
+
+If either type of corruption is detected, complete the instructions for [reverting to BoltDB](/consul/docs/deploy/server/wal/revert-boltdb). If the server already uses BoltDB, the errors likely indicate a latent bug in BoltDB or a bug in the verification code. In both cases, you should follow the revert instructions.
+
+Report all verification failures as a [GitHub
+issue](https://github.com/hashicorp/consul/issues/new?assignees=&labels=&template=bug_report.md&title=WAL:%20Checksum%20Failure).
+
+In your report, include the following:
+  - Details of your server cluster configuration and hardware
+  - Logs around the failure message
+  - Context for how long you have been running the configuration
+  - Any metrics or description of the workload you have. For example, how many raft
+    commits per second. Also include the performance metrics described on this page.
+ +We recommend setting up an alert on Consul server logs containing `verification checksum FAILED` or on the `consul.raft.logstore.verifier.{read|write}_checksum_failures` metrics. The sooner you respond to a corrupt server, the lower the chance of any of the [potential risks](/consul/docs/agent/wal-logstore/enable#risks) causing problems in your cluster. + +## Performance metrics + +The key performance metrics to watch are: + +- `consul.raft.commitTime` measures the time to commit new writes on a quorum of + servers. It should be the same or lower after deploying WAL. Even if WAL is + faster for your workload and hardware, it may not be reflected in `commitTime` + until enough followers are using WAL that the leader does not have to wait for + two slower followers in a cluster of five to catch up. + +- `consul.raft.rpc.appendEntries.storeLogs` measures the time spent persisting + logs to disk on each _follower_. It should be the same or lower for + WAL-enabled followers. + +- `consul.raft.replication.appendEntries.rpc` measures the time taken for each + `AppendEntries` RPC from the leader's perspective. If this is significantly + higher than `consul.raft.rpc.appendEntries` on the follower, it indicates a + known queuing issue in the Raft library and is unrelated to the backend. + Followers with WAL enabled should not be slower than the others. You can + determine which follower is associated with which metric by running the + `consul operator raft list-peers` command and matching the + `peer_id` label value to the server IDs listed. + +- `consul.raft.compactLogs` measures the time take to truncate the logs after a + snapshot. WAL-enabled servers should not be slower than BoltDB servers. + +- `consul.raft.leader.dispatchLog` measures the time spent persisting logs to + disk on the _leader_. It is only relevant if a WAL-enabled server becomes a + leader. It should be the same or lower than before when the leader was using + BoltDB. \ No newline at end of file diff --git a/website/content/docs/deploy/server/wal/revert-boltdb.mdx b/website/content/docs/deploy/server/wal/revert-boltdb.mdx new file mode 100644 index 000000000000..1bf2ac5bca51 --- /dev/null +++ b/website/content/docs/deploy/server/wal/revert-boltdb.mdx @@ -0,0 +1,76 @@ +--- +layout: docs +page_title: Revert to BoltDB +description: >- + Learn how to revert Consul to the BoltDB backend instead of using the WAL (write-ahead log) LogStore backend. +--- + +# Revert storage backend to BoltDB from WAL + +This topic describes how to revert your Consul storage backend from the WAL LogStore backend to BoltDB. + +The overall process for reverting to BoltDB consists of the following steps. Repeat the steps for all Consul servers that you need to revert. + +1. Stop target server gracefully. +1. Remove data directory from target server. +1. Update target server's configuration. +1. Start target server. + +## Stop target server gracefully + +Stop the target server gracefully. For example, if you are using `systemd`, +run the following command: + +```shell-session +$ systemctl stop consul +``` + +If your environment uses configuration management automation that might interfere with this process, such as Chef or Puppet, you must disable them until you have completely reverted the storage backend. + +## Remove data directory from target server + +Temporarily moving the data directory to a different location is less destructive than deleting it. 
We recommend moving the data directory instead of deleting it in cases where you were not able to successfully enable WAL. Do not use the old data directory (`/data-dir/raft.wal.bak`) for recovery after restarting the server. We recommend eventually deleting the old directory.
+
+The following example assumes the `data_dir` in the server's configuration is `/data-dir` and renames the raft directory to `/data-dir/raft.wal.bak`.
+
+```shell-session
+$ mv /data-dir/raft /data-dir/raft.wal.bak
+```
+
+When switching backend, you must always remove _the entire raft directory_, not just the `raft.db` file or `wal` directory. This is because the log must always be consistent with the snapshots to avoid undefined behavior or data loss.
+
+## Update target server's configuration
+
+Modify the `backend` in the target server's configuration file:
+
+```hcl
+raft_logstore {
+  backend = "boltdb"
+  verification {
+    enabled = true
+    interval = "60s"
+  }
+}
+```
+
+## Start target server
+
+Start the target server. For example, if you are using `systemd`, run the following command:
+
+```shell-session
+$ systemctl start consul
+```
+
+Watch for the server to become a healthy voter again.
+
+```shell-session
+$ consul operator raft list-peers
+```
+
+### Clean up old data directories
+
+If necessary, clean up any `raft.wal.bak` directories. Replace `/data-dir` with the value you specified in your configuration file.
+
+```shell-session
+$ rm -rf /data-dir/raft.wal.bak
+```
diff --git a/website/content/docs/deploy/workload/client/docker.mdx b/website/content/docs/deploy/workload/client/docker.mdx
new file mode 100644
index 000000000000..68d302d81b6c
--- /dev/null
+++ b/website/content/docs/deploy/workload/client/docker.mdx
@@ -0,0 +1,128 @@
+---
+layout: docs
+page_title: Deploy Consul client agent on Docker
+description: >-
+  Learn how to deploy a Consul client agent on a Docker container.
+---
+
+# Deploy Consul client agent on Docker
+
+This topic provides an overview for deploying a Consul client agent when running Consul on Docker containers.
+
+## Deploy and run a Consul client
+
+After you [deploy one or more server agents](/consul/docs/deploy/server/docker), you can deploy a containerized Consul client agent that joins the datacenter. Do not use detached mode so that you can reference the client logs later.
+
+The following command deploys a Docker container and instructs it to join the Consul cluster by including a Consul server's hostname or IP address in the `retry-join` parameter.
+
+```shell-session
+$ docker run --name=consul-client hashicorp/consul consul agent -node=consul-client -data-dir=/consul/data -retry-join=consul-server
+==> Starting Consul agent...
+ Version: '1.21.2' + Build Date: '2025-06-18 08:16:39 +0000 UTC' + Node ID: '63a0c0ae-4762-2fa5-4b70-1cf526a1395b' + Node name: 'consul-client' + Datacenter: 'dc1' (Segment: '') + Server: false (Bootstrap: false) + Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, gRPC: -1, gRPC-TLS: -1, DNS: 8600) + Cluster Addr: consul-server (LAN: 8301, WAN: 8302) + Gossip Encryption: false + Auto-Encrypt-TLS: false + ACL Enabled: false + ACL Default Policy: allow + HTTPS TLS: Verify Incoming: false, Verify Outgoing: false, Min Version: TLSv1_2 + gRPC TLS: Verify Incoming: false, Min Version: TLSv1_2 + Internal RPC TLS: Verify Incoming: false, Verify Outgoing: false (Verify Hostname: false), Min Version: TLSv1_2 + +==> Log data will now stream in as it occurs: + +2025-07-22T23:16:33.667Z [INFO] agent.client.serf.lan: serf: EventMemberJoin: consul-client consul-server +2025-07-22T23:16:33.667Z [INFO] agent.router: Initializing LAN area manager +2025-07-22T23:16:33.667Z [INFO] agent: Started DNS server: address=127.0.0.1:8600 network=udp +2025-07-22T23:16:33.667Z [INFO] agent: Started DNS server: address=127.0.0.1:8600 network=tcp +2025-07-22T23:16:33.667Z [INFO] agent: Starting server: address=127.0.0.1:8500 network=tcp protocol=http +2025-07-22T23:16:33.668Z [INFO] agent: started state syncer +2025-07-22T23:16:33.668Z [INFO] agent: Retry join is supported for the following discovery methods: cluster=LAN discovery_methods="aliyun aws azure digitalocean gce hcp k8s linode mdns os packet scaleway softlayer tencentcloud triton vsphere" +2025-07-22T23:16:33.668Z [INFO] agent: Joining cluster...: cluster=LAN +2025-07-22T23:16:33.668Z [INFO] agent: (LAN) joining: lan_addresses=["consul-server"] +2025-07-22T23:16:33.668Z [INFO] agent: Consul agent running! + +##... + +2022-12-15T18:59:46.454Z [INFO] agent: Synced node info +``` + +In a new terminal session, run the `consul members` command in the Consul client container to confirm the agent joined the datacenter. + +```shell-session +$ docker exec consul-client consul members +``` + +```plaintext hideClipboard +Node Address Status Type Build Protocol DC Partition Segment +consul-server 172.17.0.2:8301 alive server 1.21.2 2 dc1 default +consul-client 172.17.0.3:8301 alive client 1.21.2 2 dc1 default +``` + +The output confirms that the client joined the cluster, and is ready to accept service definitions. + +## Multi-agent Consul deployment + +You can start a multi-agent Consul deployment with multiple client containers. The following example uses a Docker compose file to start three Consul client containers that try to connect to a Consul server `consul-server1`. For more information about starting a Consul server cluster, see [Deploy Consul server agents on Docker](/consul/docs/deploy/server/docker). 
+ + + +```yaml +version: '3.7' +services: + consul-client1: + image: hashicorp/consul:1.21.3 + container_name: consul-client1 + restart: always + networks: + - consul + command: "agent -node=consul-client1 -client=0.0.0.0 -data-dir='/consul/data' -retry-join=consul-server1" + consul-client2: + image: hashicorp/consul:1.21.3 + container_name: consul-client2 + restart: always + networks: + - consul + command: "agent -node=consul-client2 -client=0.0.0.0 -data-dir='/consul/data' -retry-join=consul-server1" + consul-client3: + image: hashicorp/consul:1.21.3 + container_name: consul-client3 + restart: always + networks: + - consul + command: "agent -node=consul-client3 -client=0.0.0.0 -data-dir='/consul/data' -retry-join=consul-server1" +networks: + consul: + driver: bridge +``` + + + +You can start the cluster with the following command: + +```shell-session +$ docker-compose -f consul-clients.yml up -d +[+] Running 4/4 + ✔ Network docker_consul Created 0.0s + ✔ Container consul-client3 Started 0.2s + ✔ Container consul-client1 Started 0.2s + ✔ Container consul-client2 Started 0.2s +``` + +This command starts the three Consul client containers in detached mode. Each client is configured to join the cluster by retrying to connect to `consul-server1`. + +You can verify the status of the cluster by executing the `consul members` command inside any of the client containers: + +```shell-session +$ docker exec consul-client1 consul members +Node Address Status Type Build Protocol DC Partition Segment +consul-server1 172.19.0.2:8301 alive server 1.21.3 2 dc1 default +consul-client1 172.19.0.3:8301 alive client 1.21.3 2 dc1 default +consul-client2 172.19.0.4:8301 alive client 1.21.3 2 dc1 default +consul-client3 172.19.0.5:8301 alive client 1.21.3 2 dc1 default +``` diff --git a/website/content/docs/deploy/workload/client/k8s.mdx b/website/content/docs/deploy/workload/client/k8s.mdx new file mode 100644 index 000000000000..f229be6fc2f1 --- /dev/null +++ b/website/content/docs/deploy/workload/client/k8s.mdx @@ -0,0 +1,151 @@ +--- +layout: docs +page_title: Deploy Consul clients outside of Kubernetes +description: >- + Services running on a virtual machine (VM) can join a Consul datacenter running on Kubernetes. Learn how to configure the Kubernetes installation to accept communication from external services. +--- + +# Deploy Consul clients outside of Kubernetes + +Services running on non-Kubernetes nodes can join a Consul cluster running within Kubernetes. + +## Auto-join + +The recommended way to join a cluster running within Kubernetes is to +use the ["k8s" cloud auto-join provider](/consul/docs/deploy/server/cloud-auto-join#kubernetes-k8s). + +The auto-join provider dynamically discovers IP addresses to join using +the Kubernetes API. It authenticates with Kubernetes using a standard +`kubeconfig` file. Auto-join works with all major hosted Kubernetes offerings +as well as self-hosted installations. The token in the `kubeconfig` file +needs to have permissions to list pods in the namespace where Consul servers +are deployed. + +The auto-join string below joins a Consul server agent to a cluster using the [official Helm chart](/consul/docs/reference/k8s/helm): + +```shell-session +$ consul agent -retry-join 'provider=k8s label_selector="app=consul,component=server"' +``` + +-> **Note:** This auto-join command only connects on the default gossip port +8301, whether you are joining on the pod network or via host ports. 
A +Consul server that is already a member of the datacenter should be +listening on this port for the external service to connect through +auto-join. + +### Auto-join on the Pod network + +In the default Consul Helm chart installation, Consul servers are +routable through their pod IPs for server RPCs. As a result, any +external agents joining the Consul cluster running on Kubernetes +need to be able to connect to those pod IPs. + +In many hosted Kubernetes environments, you need to explicitly configure +your hosting provider to ensure that pod IPs are routable from external VMs. +For more information, refer to [Azure AKS CNI](https://docs.microsoft.com/en-us/azure/aks/concepts-network#azure-cni-advanced-networking), +[AWS EKS CNI](https://docs.aws.amazon.com/eks/latest/userguide/pod-networking.html) and +[GKE VPC-native clusters](https://cloud.google.com/kubernetes-engine/docs/concepts/alias-ips). + +To join external agents with Consul on Kubernetes deployments installed with default values through the [official Helm chart](/consul/docs/reference/k8s/helm): + + 1. Make sure the pod IPs of the servers in Kubernetes are + routable from the VM and that the VM can access port 8301 (for gossip) and + port 8300 (for server RPC) on those pod IPs. + + 1. Make sure that the server pods running in Kubernetes can route + to the VM's advertise IP on its gossip port (default 8301). + + 1. Make sure you have the `kubeconfig` file for the Kubernetes cluster in `$HOME/.kube/config` on the external VM. + + 1. On the external VM, run: + + ```shell-session + consul agent \ + -advertise="$ADVERTISE_IP" \ + -retry-join='provider=k8s label_selector="app=consul,component=server"' \ + -bind=0.0.0.0 \ + -hcl='leave_on_terminate = true' \ + -hcl='ports { grpc = 8502 }' \ + -config-dir=$CONFIG_DIR \ + -datacenter=$DATACENTER \ + -data-dir=$DATA_DIR \ + ``` + + 1. Run `consul members` to check if the join was successful. + + ```shell-session + / $ consul members + Node Address Status Type Build Protocol DC Segment + consul-consul-server-0 10.138.0.43:9301 alive server 1.9.1 2 dc1 + external-agent 10.138.0.38:8301 alive client 1.9.0 2 dc1 + gke-external-agent-default-pool-32d15192-grs4 10.138.0.43:8301 alive client 1.9.1 2 dc1 + gke-external-agent-default-pool-32d15192-otge 10.138.0.44:8301 alive client 1.9.1 2 dc1 + gke-external-agent-default-pool-32d15192-vo7k 10.138.0.42:8301 alive client 1.9.1 2 dc1 + ``` + +### Auto-join through host ports + +If your external VMs cannot connect to Kubernetes pod IPs but they can connect +to the internal host IPs of the nodes in the Kubernetes cluster, you can join the two by exposing ports on the host IP instead. + + 1. Install the [official Helm chart](/consul/docs/reference/k8s/helm) with the following values: + ```yaml + client: + exposeGossipPorts: true # exposes client gossip ports as hostPorts + server: + exposeGossipAndRPCPorts: true # exposes the server gossip and RPC ports as hostPorts + ports: + # Configures the server gossip port + serflan: + # Note that this needs to be different than 8301, to avoid conflicting with the client gossip hostPort + port: 9301 + ``` + This installation exposes the client gossip ports, the server gossip ports and the server RPC port at `hostIP:hostPort`. Note that `hostIP` is the **internal** IP of the VM that the client/server pods are deployed on. + + 1. Make sure the IPs of the Kubernetes nodes are routable from the VM and + that the VM can access ports 8301 and 9301 (for gossip) and port 8300 (for + server RPC) on those node IPs. 
+ + 1. Make sure the server pods running in Kubernetes can route to + the VM's advertise IP on its gossip port (default 8301). + + 1. Make sure you have the `kubeconfig` file for the Kubernetes cluster in `$HOME/.kube/config` on the external VM. + + 1. On the external VM, run: + + ```shell-session + consul agent \ + -advertise="$ADVERTISE_IP" \ + -retry-join='provider=k8s host_network=true label_selector="app=consul,component=server"' + -bind=0.0.0.0 \ + -hcl='leave_on_terminate = true' \ + -hcl='ports { grpc = 8502 }' \ + -config-dir=$CONFIG_DIR \ + -datacenter=$DATACENTER \ + -data-dir=$DATA_DIR \ + ``` + + Note the addition of `host_network=true` in the retry-join argument. + + 1. Run `consul members` to check if the join was successful. + + ```shell-session + / $ consul members + Node Address Status Type Build Protocol DC Segment + consul-consul-server-0 10.138.0.43:9301 alive server 1.9.1 2 dc1 + external-agent 10.138.0.38:8301 alive client 1.9.0 2 dc1 + gke-external-agent-default-pool-32d15192-grs4 10.138.0.43:8301 alive client 1.9.1 2 dc1 + gke-external-agent-default-pool-32d15192-otge 10.138.0.44:8301 alive client 1.9.1 2 dc1 + gke-external-agent-default-pool-32d15192-vo7k 10.138.0.42:8301 alive client 1.9.1 2 dc1 + ``` + +## Manual join + +If you are unable to use auto-join, try following the instructions in +either of the auto-join sections, but instead of using a `provider` key in the +`-retry-join` flag, pass the address of at least one Consul server. Example: `-retry-join=$CONSUL_SERVER_IP:$SERVER_SERFLAN_PORT`. + +A `kubeconfig` file is not required when using manual join. + +Instead of hardcoding an IP address, we recommend you set up a DNS entry +that resolves to the pod IPs or host IPs that the Consul server pods are running on. \ No newline at end of file diff --git a/website/content/docs/deploy/workload/client/vm.mdx b/website/content/docs/deploy/workload/client/vm.mdx new file mode 100644 index 000000000000..ee82043c4fa8 --- /dev/null +++ b/website/content/docs/deploy/workload/client/vm.mdx @@ -0,0 +1,37 @@ +--- +layout: docs +page_title: Configure client agents on virtual machines +description: >- + Learn how to enable and configure Consul client agents. +--- + +# Configure client agents on virtual machines + +This page describes the process to configure a Consul client agent on virtual machines (VMs). An automated process for production environments is demonstrated in the [Consul getting started on Virtual Machines tutorials](/consul/tutorials/get-started-vms/virtual-machine-gs-service-discovery). + +## Overview + +The Consul client agent is a long-running process that you must deploy on the nodes as your application workloads run on. They use [encrypted gossip communication](/consul/docs/concept/gossip) to communicate with Consul server agents so that Consul DNS requests return healthy results when you query the Consul catalog. + +The process to deploy the client agent consists of the following steps: + +1. Verify that the correct version of the Consul binary is installed. +1. Update the client agent configuration. +1. Add the client agent configuration to the node. +1. Ensure the Consul data directory has the correct permissions. +1. Start the Consul server process. + +You also have the option to configure the `retry_join` stanza so that when a Consul agent starts, it automatically joins a cluster. This ability is called _cloud auto-join_. Configuration requirements vary by cloud provider. 
For more information, refer to [automatically join clusters to a cloud provider](/consul/docs/deploy/server/cloud-auto-join).
+
+## Client agent configuration
+
+Update the configuration with the following information required to join the cluster:
+
+- Datacenter name
+- Consul server address
+
+Depending on the cluster's existing security, you may need to update the client configuration with these additional parameters:
+
+- CA certificate
+- TLS server name
+- Valid ACL token
\ No newline at end of file
diff --git a/website/content/docs/deploy/workload/dataplane/ecs.mdx b/website/content/docs/deploy/workload/dataplane/ecs.mdx
new file mode 100644
index 000000000000..40704901f18d
--- /dev/null
+++ b/website/content/docs/deploy/workload/dataplane/ecs.mdx
@@ -0,0 +1,89 @@
+---
+layout: docs
+page_title: Deploy Consul Dataplane on ECS
+description: >-
+  Consul Dataplane removes the need to run a client agent for service discovery and service mesh by leveraging orchestrator functions.
+---
+
+# Deploy Consul Dataplane on ECS
+
+This page describes the requirements to set up Consul on ECS deployments to use dataplanes instead of client agents, as well as the process to update existing deployments to use dataplanes.
+
+If you already have a Consul cluster deployed on Kubernetes and
+would like to turn on TLS for internal Consul communication,
+refer to [Configuring TLS on an Existing Cluster](/consul/docs/secure/encryption/tls/enable/existing/k8s).
+
+## Requirements
+
+- Dataplanes can connect to Consul servers v1.14.0 and newer.
+- Dataplanes on AWS Elastic Container Service (ECS) require Consul ECS v0.7.0 and newer.
+
+## Installation
+
+Refer to the following documentation for Consul on ECS workloads:
+
+- [Deploy Consul with the Terraform module](/consul/docs/register/service/ecs/)
+- [Deploy Consul manually](/consul/ecs/install-manual)
+
+## Upgrading to Consul Dataplanes
+
+Since v0.7.0, Consul service mesh on ECS uses [Consul dataplanes](/consul/docs/architecture/control-plane/dataplane), which are lightweight processes for managing Envoy proxies in containerized networks. Refer to the [release notes](/consul/docs/release-notes/consul-ecs/v0_7_x) for additional information about the switch to Consul dataplanes.
+
+### Requirements
+
+Before you upgrade to the dataplane-based architecture, you must upgrade your Consul servers to a version compatible with Consul ECS:
+
+- Consul 1.14.x and later
+- Consul dataplane 1.3.x and later
+
+### Deploy the latest version of the ECS controller module
+
+In an ACL-enabled cluster, deploy the latest version of the ECS controller module in `hashicorp/terraform-aws-consul-ecs` alongside the older version of the ACL controller. Note that both controllers must coexist until the upgrade is complete. The new version of the controller only tracks tasks that use dataplanes.
+
+### Upgrade workloads
+
+For application tasks, upgrade the individual task definitions to `v0.7.0` or later of the `mesh-task` module. You must upgrade each task one at a time.
+
+```hcl
+module "my_task" {
+  source  = "hashicorp/consul-ecs/aws//modules/mesh-task"
+  version = "v0.7.0"
+}
+```
+
+For gateway tasks, upgrade the individual task definitions to `v0.7.0` or later of the `gateway-task` module. You must upgrade each task one at a time. ECS creates new versions of tasks before shutting down the older tasks to support zero-downtime deployments.
+ +```hcl +module "my_task" { + source = "hashicorp/consul-ecs/aws//modules/gateway-task" + version = "v0.7.0" +} +``` + +### Delete previous tasks + +After upgrading all tasks, you can destroy the `acl-controller` containers, which are replaced by the ECS controller. You can manually remove any artifacts related to the old architecture, including Consul clients and ACL controllers, by executing the following commands: + +1. Run `consul acl policy delete` to delete the client policy. You can pass either the ID of the policy or the name of the policy, for example: + + ```shell-session + $ consul acl policy delete -name="consul-ecs-client-policy" + ``` + + Refer to the [`consul acl policy delete`](/consul/commands/acl/policy/delete) documentation for additional information. + +1. Run the `consul acl role delete` command to delete the client role. You can pass either the ID of the role or the name of the role, for example: + + ```shell-session + $ consul acl role delete -name="consul-ecs-client-role" + ``` + + Refer to the [`consul acl role delete`](/consul/commands/acl/role/delete) documentation for additional information. + +1. Run the `consul acl auth-method delete` command and specify the auth method name to delete. + + ```shell-session + $ consul acl auth-method delete -name="iam-ecs-client-token" + ``` + + Refer to the [`consul acl auth-method delete`](/consul/commands/acl/auth-method/delete) documentation for additional information. \ No newline at end of file diff --git a/website/content/docs/deploy/workload/dataplane/k8s.mdx b/website/content/docs/deploy/workload/dataplane/k8s.mdx new file mode 100644 index 000000000000..3a38e5be1b1e --- /dev/null +++ b/website/content/docs/deploy/workload/dataplane/k8s.mdx @@ -0,0 +1,79 @@ +--- +layout: docs +page_title: Deploy Consul Dataplane on Kubernetes +description: >- + Consul Dataplane removes the need to run a client agent for service discovery and service mesh by leveraging orchestrator functions. +--- + +# Deploy Consul Dataplane on Kubernetes + +This page describes the requirements to set up Consul on Kubernetes deployments to use dataplanes instead of client agents, as well as the process to update existing deployments to use dataplanes instead of agents. + +If you already have a Consul cluster deployed on Kubernetes and +would like to turn on TLS for internal Consul communication, +refer to [Configuring TLS on an Existing Cluster](/consul/docs/secure/encryption/tls/enable/existing/k8s). + +## Requirements + +- Dataplanes can connect to Consul servers v1.14.0 and newer. +- Dataplanes on Kubernetes requires Consul K8s v1.0.0 and newer. +- Consul Dataplane is not supported on Windows. +- Consul Dataplane requires the `NET_BIND_SERVICE` capability. Refer to [Set capabilities for a Container](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-capabilities-for-a-container) in the Kubernetes Documentation for more information. + +## Installation + +To install Consul Dataplane, set `VERSION` to `1.0.0` and then follow the instructions to install a specific version of Consul [with the Helm Chart](/consul/docs/k8s/installation/install#install-consul) or [with the Consul-k8s CLI](/consul/docs/k8s/installation/install-cli#install-a-previous-version). 
+ +### Helm + +```shell-session +$ export VERSION=1.0.0 +$ helm install consul hashicorp/consul --set global.name=consul --version ${VERSION} --create-namespace --namespace consul +``` + +### Consul-k8s CLI + +```shell-session +$ export VERSION=1.0.0 && \ + curl --location "https://releases.hashicorp.com/consul-k8s/${VERSION}/consul-k8s_${VERSION}_darwin_amd64.zip" --output consul-k8s-cli.zip +``` + +## Upgrading to Consul Dataplane + +In earlier versions, Consul on Kubernetes used client agents in its deployments. As of v1.14.0, Consul uses [Consul Dataplane](/consul/docs/connect/dataplane/) in Kubernetes deployments instead of client agents. + +If you upgrade Consul from a version that uses client agents to a version the uses dataplanes, complete the following steps to upgrade your deployment safely and without downtime. + +1. If ACLs are enabled, you must first upgrade to consul-k8s 0.49.8 or above. These versions expose the setting `connectInject.prepareDataplanesUpgrade` + which is required for no-downtime upgrades when ACLs are enabled. + + Set `connectInject.prepareDataplanesUpgrade` to `true` and then perform the upgrade to 0.49.8 or above (whichever is the latest in the 0.49.x series) + + ```yaml filename="values.yaml" + connectInject: + prepareDataplanesUpgrade: true + ``` + +1. Consul dataplanes disables Consul clients by default, but during an upgrade you need to ensure Consul clients continue to run. Edit your Helm chart configuration and set the [`client.enabled`](/consul/docs/reference/k8s/helm#v-client-enabled) field to `true` and specify an action for Consul to take during the upgrade process in the [`client.updateStrategy`](/consul/docs/reference/k8s/helm#v-client-updatestrategy) field: + + ```yaml filename="values.yaml" + client: + enabled: true + updateStrategy: | + type: OnDelete + ``` + +1. Follow our [recommended procedures to upgrade servers](#upgrade-consul-servers) on Kubernetes deployments to upgrade Helm values for the new version of Consul. The latest version of consul-k8s components may be in a CrashLoopBackoff state during the performance of the server upgrade from versions <1.14.x until all Consul servers are on versions >=1.14.x. Components in CrashLoopBackoff will not negatively affect the cluster because older versioned components will still be operating. Once all servers have been fully upgraded, the latest consul-k8s components will automatically restore from CrashLoopBackoff and older component versions will be spun down. + +1. Run `kubectl rollout restart` to restart your service mesh applications. Restarting service mesh application causes Kubernetes to re-inject them with the webhook for dataplanes. + +1. Restart all gateways in your service mesh. + +1. Now that all services and gateways are using Consul dataplanes, disable client agents in your Helm chart by deleting the `client` stanza or setting `client.enabled` to `false` and running a `consul-k8s` or Helm upgrade. + +1. If ACLs are enabled, outdated ACL tokens will persist a result of the upgrade. You can manually delete the tokens to declutter your Consul environment. + + Outdated connect-injector tokens have the following description: `token created via login: {"component":"connect-injector"}`. Do not delete + the tokens that have a description where `pod` is a key, for example `token created via login: {"component":"connect-injector","pod":"default/consul-connect-injector-576b65747c-9547x"}`). The dataplane-enabled connect inject pods use these tokens. 
+ + You can also review the creation date for the tokens and only delete the injector tokens created before your upgrade, but do not delete all old tokens without considering if they are still in use. Some tokens, such as the server tokens, are still necessary. \ No newline at end of file diff --git a/website/content/docs/deploy/workload/index.mdx b/website/content/docs/deploy/workload/index.mdx new file mode 100644 index 000000000000..e8986fbab819 --- /dev/null +++ b/website/content/docs/deploy/workload/index.mdx @@ -0,0 +1,53 @@ +--- +layout: docs +page_title: Deploy Consul client agents and dataplanes for application workloads +description: >- + Learn how to enable and configure Consul workloads for your applications. Client agents and dataplanes enable Consul's core features in VM, Kubernetes, Docker, and AWS ECS deployments. +--- + +# Deploy Consul client agents and dataplanes for application workloads + +This topic provides an overview of the process to configure and deploy Consul alongside your workloads so that server agents can communicate with the nodes where your services run. + +## Introduction + +In addition to the server agents you deploy in your datacenter, you must run Consul workloads alongside services in your application's data plane. These workloads provide your application's services with the following Consul functions: + +- Register services to the Consul catalog. +- Update service status in the catalog in response to health checks.. +- Resolve services with Consul DNS. + +When you run a Consul _client agent_ on a node, your application can also take advantage of the following Consul functions: + +- Deploy and manage sidecar proxies. +- Secure service-to-service communication. +- Monitor services. +- Manage service traffic. + +When you run a Consul _dataplane_, a lightweight process communicates with the Envoy proxies already running alongside your application workloads. Dataplanes let you use Consul's security, monitoring, and traffic management features when you already have sidecar proxies and health checks configured in Kubernetes or ECS. For more information, refer to [Consul Dataplane](/consul/docs/architecture/control-plane/dataplane). + +## Guidance + +The following resources are available to help you deploy Consul alongside your application workloads. + +### Deploy Consul client agents + +The process to deploy a Consul client agent depends on your runtime: + +- [Deploy a Consul client agent on virtual machines](/consul/docs/deploy/workload/client/vm) +- [Deploy a Consul client agent on Kubernetes](/consul/docs/deploy/workload/client/k8s) +- [Deploy a Consul client agent on Docker](/consul/docs/deploy/workload/client/docker) + +### Deploy Consul dataplanes + +The process to deploy a Consul dataplane depends on your runtime: + +- [Deploy Consul dataplanes on Kubernetes](/consul/docs/deploy/workload/dataplane/k8s) +- [Deploy Consul dataplanes on AWS ECS](/consul/docs/deploy/workload/dataplane/ecs) + +### HCP Consul Dedicated + +When you use HCP Consul Dedicated clusters, you must deploy Consul alongside your workloads. 
Refer to the following pages in the [HCP Consul Dedicated documentation](/hcp/docs/consul): + +- [Deploy Consul clients](/hcp/docs/consul/dedicated/clients) +- [Deploy Consul dataplanes](/hcp/docs/consul/dedicated/dataplanes) \ No newline at end of file diff --git a/website/content/docs/discover/dns/configure.mdx b/website/content/docs/discover/dns/configure.mdx new file mode 100644 index 000000000000..d6c20ec8aa3a --- /dev/null +++ b/website/content/docs/discover/dns/configure.mdx @@ -0,0 +1,80 @@ +--- +layout: docs +page_title: Configure Consul DNS behavior +description: >- + Learn how to modify the default DNS behavior so that services and nodes can easily discover other services and nodes in your network. +--- + +# Configure Consul DNS behavior + +This topic describes the default behavior of the Consul DNS functionality and how to customize how Consul performs queries. + +## Introduction + +@include 'text/descriptions/consul-dns.mdx' + +## Configure DNS behaviors + +By default, the Consul DNS listens for queries at `127.0.0.1:8600` and uses the `consul` domain. Specify the following parameters in the agent configuration to determine DNS behavior when querying services: + +- [`client_addr`](/consul/docs/reference/agent/configuration-file/general#client_addr) +- [`ports.dns`](/consul/docs/reference/agent/configuration-file/general#dns_port): By default, Consul does not use port `53`, which is typically reserved for the default port for DNS resolvers, because it requires an escalated privilege to bind to. +- [`recursors`](/consul/docs/reference/agent/configuration-file/general#recursors) +- [`domain`](/consul/docs/reference/agent/configuration-file/dns#domain) +- [`alt_domain`](/consul/docs/reference/agent/configuration-file/general#alt_domain) +- [`dns_config`](/consul/docs/reference/agent/configuration-file/dns#dns_config) + +### Configure WAN address translation + +By default, Consul DNS queries return a node's local address, even when being queried from a remote datacenter. You can configure the DNS to reach a node from outside its datacenter by specifying the address in the following configuration fields in the Consul agent: + +- [advertise-wan](/consul/commands/agent#_advertise-wan) +- [translate_wan_addrs](/consul/docs/reference/agent/configuration-file/general#translate_wan_addrs) + +### Use a custom DNS resolver library + +You can specify a list of addresses in the agent's [`recursors`](/consul/docs/reference/agent/configuration-file/general#recursors) field to provide upstream DNS servers that recursively resolve queries that are outside the service domain for Consul. + +Nodes that query records outside the `consul.` domain resolve to an upstream DNS. You can specify IP addresses or use `go-sockaddr` templates. Consul resolves IP addresses in the specified order and ignores duplicates. + +### Enable non-Consul queries + +You enable non-Consul queries to be resolved by setting Consul as the DNS server for a node and providing a [`recursors`](/consul/docs/reference/agent/configuration-file/general##recursors) configuration. + +### Forward queries to an agent + +You can forward all queries sent to the `consul.` domain from the existing DNS server to a Consul agent. Refer to [Forward DNS for Consul Service Discovery](/consul/tutorials/networking/dns-forwarding) for instructions. 
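+
+For example, a minimal dnsmasq rule, shown only as a sketch and assuming the local Consul agent serves DNS on the default `127.0.0.1:8600`, forwards the `consul.` domain to the agent while leaving every other query to your existing resolvers:
+
+```dnsmasq
+# /etc/dnsmasq.d/10-consul
+# Send queries for the consul domain to the local Consul agent's DNS port.
+server=/consul/127.0.0.1#8600
+```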
+
+### Query an alternate domain
+
+By default, Consul responds to DNS queries in the `consul` domain, but you can set a specific domain for responding to DNS queries by configuring the [`domain`](/consul/docs/reference/agent/configuration-file/dns#domain) parameter.
+
+You can also specify an additional domain in the [`alt_domain`](/consul/docs/reference/agent/configuration-file/general#alt_domain) agent configuration option, which configures Consul to respond to queries in a secondary domain. Configuring an alternate domain may be useful during a DNS migration or to distinguish between internal and external queries, for example.
+
+Consul's DNS response uses the same domain as the query.
+
+In the following example, the `alt_domain` parameter in the agent configuration is set to `test-domain`, which enables operators to query the domain:
+
+```shell-session
+$ dig @127.0.0.1 -p 8600 consul.service.test-domain SRV
+;; QUESTION SECTION:
+;consul.service.test-domain. IN SRV
+
+;; ANSWER SECTION:
+consul.service.test-domain. 0 IN SRV 1 1 8300 machine.node.dc1.test-domain.
+
+;; ADDITIONAL SECTION:
+machine.node.dc1.test-domain. 0 IN A 127.0.0.1
+machine.node.dc1.test-domain. 0 IN TXT "consul-network-segment="
+```
+
+#### PTR queries
+
+Responses to pointer record (PTR) queries, such as `.in-addr.arpa.`, always use the [primary domain](/consul/docs/reference/agent/configuration-file/general#domain) and not the alternative domain.
+
+### Caching
+
+By default, DNS results served by Consul are not cached. Refer to the [DNS
+Caching tutorial](/consul/tutorials/networking/dns-caching) for instructions on
+how to enable caching.
+
diff --git a/website/content/docs/discover/dns/docker.mdx b/website/content/docs/discover/dns/docker.mdx
new file mode 100644
index 000000000000..5032c993bb77
--- /dev/null
+++ b/website/content/docs/discover/dns/docker.mdx
@@ -0,0 +1,45 @@
+---
+layout: docs
+page_title: Discover services running on Docker containers
+description: >-
+  Learn how to discover services running on a Docker container.
+---
+
+# Discover services running on Docker containers
+
+This topic provides an overview of how to discover services with Consul DNS when running Consul on Docker containers.
+
+## Use Consul DNS to discover the counting service
+
+Now you can query Consul for the location of your service using the following `dig` command against Consul's DNS interface.
+
+```shell-session
+$ dig @127.0.0.1 -p 8600 counting.service.consul
+```
+
+```plaintext hideClipboard
+; <<>> DiG 9.10.6 <<>> @127.0.0.1 -p 8600 counting.service.consul
+; (1 server found)
+;; global options: +cmd
+;; Got answer:
+;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 61865
+;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
+;; WARNING: recursion requested but not available
+
+;; OPT PSEUDOSECTION:
+; EDNS: version: 0, flags:; udp: 4096
+;; QUESTION SECTION:
+;counting.service.consul. IN A
+
+;; ANSWER SECTION:
+counting.service.consul. 0 IN A 172.17.0.3
+
+;; Query time: 4 msec
+;; SERVER: 127.0.0.1#8600(127.0.0.1)
+;; WHEN: Thu Dec 15 13:03:30 CST 2022
+;; MSG SIZE rcvd: 68
+```
+
+You can also access your newly registered service from Consul's UI, [http://localhost:8500](http://localhost:8500).
+ +![Consul UI with Registered Service](/img/consul-containers-ui-services.png 'Consul UI with Registered Service') \ No newline at end of file diff --git a/website/content/docs/discover/dns/index.mdx b/website/content/docs/discover/dns/index.mdx new file mode 100644 index 000000000000..489988b9c6a1 --- /dev/null +++ b/website/content/docs/discover/dns/index.mdx @@ -0,0 +1,42 @@ +--- +layout: docs +page_title: Consul DNS overview +description: >- + For service discovery use cases, Domain Name Service (DNS) is the main interface to look up, query, and address Consul nodes and services. Learn how a Consul DNS lookup can help you find services by tag, name, namespace, partition, datacenter, or domain. +--- + +# Consul DNS overview + +This topic provides overview information about how to look up Consul nodes and services using the Consul DNS. + +## Consul DNS + +@include 'text/descriptions/consul-dns.mdx' + +### DNS versus native app integration + +You can use DNS to reach services registered with Consul or modify your application to natively consume the Consul service discovery HTTP APIs. + +We recommend using the DNS because it is less invasive. You do not have to modify your application with Consul to retrieve the service's connection information. Instead, you can use a DNS fully qualified domain (FQDN) that conforms to Consul's lookup format to retrieve the relevant information. + +Refer to [Native App Integration](/consul/docs/automate/native) and its [Go package](/consul/docs/automate/native/go) for additional information. + +### DNS versus upstreams + +If you are using Consul for service discovery and have not enabled service mesh features, then use the DNS to discover services and nodes in the Consul catalog. + +If you are using Consul for service mesh on VMs, you can use upstreams or DNS. We recommend using upstreams because you can query services and nodes without modifying the application code or environment variables. Refer to [Upstream Configuration Reference](/consul/docs/reference/proxy/connect-proxy#upstream-configuration-reference) for additional information. + +If you are using Consul on Kubernetes, refer to [the upstreams annotation documentation](/consul/docs/reference/k8s/annotation-label#consul-hashicorp-com-connect-service-upstreams) for additional information. + +## Static queries + +Node lookups and service lookups are the fundamental types of static queries. Depending on your use case, you may need to use different query methods and syntaxes to query the DNS for services and nodes. + +Consul relies on a specific format for queries to resolve names. Note that all queries are case-sensitive. + +Refer to [Perform static DNS lookups](/consul/docs/discover/service/static) for details about how to perform node and service lookups. + +## Prepared queries + +Prepared queries are configurations that enable you to register complex DNS queries. They provide lookup features that extend Consul's service discovery capabilities, such as filtering by multiple tags and automatically querying remote datacenters for services if no healthy nodes are available in the local datacenter. You can also create prepared query templates that match names using a prefix match, allowing a single template to apply to potentially many services. Refer to [Perform dynamic DNS queries](/consul/docs/discover/service/dynamic) for additional information. 
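+
+As a simple illustration of the static lookup format, the following sketch assumes a service named `web` registered with the tag `primary` in the local datacenter; the tag-qualified SRV lookup returns the nodes and ports of matching healthy instances:
+
+```shell-session
+$ dig @127.0.0.1 -p 8600 primary.web.service.consul SRV
+```
+
+Prepared queries are addressed the same way but under the `query` subdomain, for example `<query-name>.query.consul`.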
diff --git a/website/content/docs/discover/dns/k8s.mdx b/website/content/docs/discover/dns/k8s.mdx new file mode 100644 index 000000000000..2800c5544108 --- /dev/null +++ b/website/content/docs/discover/dns/k8s.mdx @@ -0,0 +1,197 @@ +--- +layout: docs +page_title: Resolve Consul DNS requests in Kubernetes +description: >- + Use a k8s ConfigMap to configure KubeDNS or CoreDNS so that you can use Consul's `.service.consul` syntax for queries and other DNS requests. In Kubernetes, this process uses either stub-domain or proxy configuration. +--- + +# Resolve Consul DNS requests in Kubernetes + +This topic describes how to resolve Consul DNS requests in Kubernetes. Before you do so, you need to configure KubeDNS or CoreDNS to interface with Consul. + +## Introduction + +[Consul DNS](/consul/docs/discover/dns) is the primary interface for querying records. You can configure Consul DNS in Kubernetes using: + +1. A [stub-domain configuration](https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#configure-stub-domain-and-upstream-dns-servers) if using KubeDNS +1. A [proxy configuration](https://coredns.io/plugins/forward/) if using CoreDNS + +Once configured, DNS requests in the form `.service.consul` will resolve for services in Consul. This will work from all Kubernetes namespaces. + +If you want resolve requests using `` (without the `.service.consul`), then you need to enable [service sync](/consul/docs/register/service/k8s/service-sync#consul-to-kubernetes). + +## Retrieve Consul DNS cluster IP + +To configure KubeDNS or CoreDNS, you need the `ClusterIP` of the Consul DNS service created by the [Helm chart](/consul/docs/reference/k8s/helm). + +The default name of the Consul DNS service will be `consul-dns`. Use that name to get the `ClusterIP`. For this installation, the `ClusterIP` is `10.35.240.78`. + +```shell-session +$ kubectl get svc consul-dns --output jsonpath='{.spec.clusterIP}' +10.35.240.78% +``` + +If you've installed Consul using a different Helm release name than `consul`, then the DNS service name will be `-consul-dns`. + +Once you have the `ClusterIP`, you can go to the KubeDNS or CoreDNS section that matches your Kubernetes setup. + +## Configure KubeDNS + +If using KubeDNS, you need to create a `ConfigMap` that tells KubeDNS to use Consul DNS to resolve all domains ending with `.consul`. + +First, export the Consul DNS IP as an environment variable. This is the IP address you retrieved in the previous step. + +```bash +$ export CONSUL_DNS_IP=10.35.240.78 +``` + +Then, define the `ConfigMap`. If using a different zone than `.consul`, change the stub domain to that zone. + + + +```yaml +apiVersion: v1 +kind: ConfigMap +metadata: + labels: + addonmanager.kubernetes.io/mode: EnsureExists + name: kube-dns + namespace: kube-system +data: + stubDomains: | + { "consul": ["$CONSUL_DNS_IP"] } +``` + + + +Create the `ConfigMap`. + +```shell-session +$ kubectl apply --filename consul-kubedns-config-map.yaml +``` + +Verify that the `ConfigMap` was created successfully. + +```shell-session +$ kubectl get configmap kube-dns --namespace kube-system --output yaml +apiVersion: v1 +data: + stubDomains: | + {"consul": ["10.35.240.78"]} +kind: ConfigMap +## ... +``` + +The `stubDomain` can only point to a static IP. If the cluster IP of Consul DNS changes, then you must update the `ConfigMap` to match the new service IP. The cluster IP of Consul DNS may change if the service is deleted and recreated, such as in full cluster rebuilds. 
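+
+If you need to confirm that the stub domain still points at the right address, you can compare the service's current cluster IP with the value stored in the `ConfigMap`. The commands below are a sketch that assumes the default `consul-dns` service name:
+
+```shell-session
+$ kubectl get svc consul-dns --output jsonpath='{.spec.clusterIP}'
+$ kubectl get configmap kube-dns --namespace kube-system --output jsonpath='{.data.stubDomains}'
+```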
+ +Now, skip ahead to the [Verify DNS](#verify-dns) section. + +## Configure CoreDNS + +If using CoreDNS, you need to update your existing `coredns` ConfigMap in the `kube-system` namespace to include a `forward` definition for `consul` that points to the cluster IP of the Consul DNS service. + +Edit the `coreDNS` `ConfigMap`. + +```shell-session +$ kubectl edit configmap coredns --namespace kube-system +``` + +You need to edit the following items: + +- Add the `consul` block below the default `.:53` block. +- Replace `` with the DNS Service's IP address you retrieved previously. +- If using a different zone than `.consul`, change the key accordingly. + + + +```yaml +apiVersion: v1 +kind: ConfigMap +metadata: + labels: + addonmanager.kubernetes.io/mode: EnsureExists + name: coredns + namespace: kube-system +data: + Corefile: | + .:53 { + ## Existing CoreDNS definition... + } + consul { + errors + cache 30 + forward . + } +``` + + + +The Consul proxy can only point to a static IP. If the cluster IP of Consul DNS changes, then you must update the `ConfigMap` to match the new service IP. The cluster IP of Consul DNS may change if the service is deleted and recreated, such as in full cluster rebuilds. + +## Verify DNS + +To verify DNS works, run the following example job to query DNS. Save the following job to the file `job.yaml`. + + + +```yaml +apiVersion: batch/v1 +kind: Job +metadata: + name: dns +spec: + template: + spec: + containers: + - name: dns + image: anubhavmishra/tiny-tools + command: ['dig', 'consul.service.consul'] + restartPolicy: Never + backoffLimit: 4 +``` + + + +Deploy the example job to your cluster. + +```shell-session +$ kubectl apply --filename job.yaml +``` + +Then, retrieve the pod name of the job. + +```shell-session +$ kubectl get pods --show-all | grep dns +dns-lkgzl 0/1 Completed 0 6m +``` + +Finally, retrieve the logs for the pod. The logs should show a successful DNS query, something similar to the following. If the logs show any errors, then DNS is not configured properly. + +```shell-session +$ kubectl logs dns-lkgzl +; <<>> DiG 9.11.2-P1 <<>> consul.service.consul +;; global options: +cmd +;; Got answer: +;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 4489 +;; flags: qr aa rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 4 + +;; OPT PSEUDOSECTION: +; EDNS: version: 0, flags:; udp: 4096 +;; QUESTION SECTION: +;consul.service.consul. IN A + +;; ANSWER SECTION: +consul.service.consul. 0 IN A 10.36.2.23 +consul.service.consul. 0 IN A 10.36.4.12 +consul.service.consul. 0 IN A 10.36.0.11 + +;; ADDITIONAL SECTION: +consul.service.consul. 0 IN TXT "consul-network-segment=" +consul.service.consul. 0 IN TXT "consul-network-segment=" +consul.service.consul. 0 IN TXT "consul-network-segment=" + +;; Query time: 5 msec +;; SERVER: 10.39.240.10#53(10.39.240.10) +;; WHEN: Wed Sep 12 02:12:30 UTC 2018 +;; MSG SIZE rcvd: 206 +``` diff --git a/website/content/docs/discover/dns/pas.mdx b/website/content/docs/discover/dns/pas.mdx new file mode 100644 index 000000000000..72cdbb657ef5 --- /dev/null +++ b/website/content/docs/discover/dns/pas.mdx @@ -0,0 +1,184 @@ +--- +layout: docs +page_title: Discover services with Pivotal Application Service (PAS) +description: >- + Use Consul DNS with Pivotal Application Service to discover external services. +--- + +# Discover services with Pivotal Application Service (PAS) + +This page describes the process to discover services external to your datacenter using the Pivotal Application Service (PAS). 
+ +## Overview + +Pivotal Application Service (PAS) is a cloud-agnostic platform for running +applications. It abstracts away the complexity of running your own +infrastructure and networking for an application and gives developers the +ability to focus on writing code. It has many features, including its own +internal service discovery. In hybrid environments, where workloads can run on +multiple platforms or clouds, applications running on PAS may need to discover +services external to the platform. + +Consul’s DNS interface is one of its primary querying interfaces for discovering +external services. This introductory tutorial describes the basics on how to +configure your PAS installation to delegate DNS lookups for Consul-specific +domains to the Consul servers. + +## Prerequisites + +This tutorial assumes basic familiarity with Consul, +[PAS](https://pivotal.io/platform/pivotal-application-service), as well as its +tools, like [BOSH](https://bosh.io/docs/) and [Ops +Manager](https://pivotal.io/platform/pcf-components/pcf-ops-manager). You will +be able complete this tutorial without production experience. + +Here is what you will need for this tutorial: + +- A running installation of PAS +- Ability to make and apply changes in Pivotal Ops Manager +- Ability to push applications to PAS +- IPs or DNS name(s) of the Consul servers that you will be connecting to + +You will also need to ensure that the container host VMs (aka Diego VMs) have +network connectivity to the Consul servers. + +![Consul DNS and PCF](/img/architecture/consul-pcf-arch.png 'Service discovery with Consul DNS') + +## Configure PAS to use Consul DNS + +Log in to your Pivotal Ops Manager instance, use [Pivotal +documentation](https://docs.pivotal.io/platform/ops-manager/2-7/) for +instructions specific to your installation. + +From your installation dashboard navigate to the “BOSH DNS Config Page” within +the BOSH Director tile. + +Add the following configuration to the `Handlers` field, setting `recursors` to +the IPs or DNS names of your Consul servers. + +```json +[ + { + "cache": { + "enabled": true + }, + "domain": "consul.", + "source": { + "recursors": ["10.52.0.63:8600", "10.52.0.55:8600", "10.52.0.43:8600"], + "type": "dns" + } + } +] +``` + +This configuration sets the Consul server running on `10.52.0.63` and the +default DNS port `8600`. + +If the Consul server is configured with a [different +domain](/consul/docs/reference/agent/configuration-file/dns#domain), make sure to +change the `domain` property above to reflect that. + +Your resulting page should include the configuration in the `Handlers` field. + +![PCF UI](/img/pcf-ui.png 'Configure PCF UI') + +When ready, click "Save". Note that you will need to review and apply pending +changes in Ops Manager to apply the configuration. + +## Discover a service with Consul DNS + +Now that you have configured Consul’s DNS as a query interface, you can test +that it works by deploying a [simple demo +application](https://github.com/cloudfoundry-samples/cf-sample-app-spring.git). +From the demo application container, you will be able to query Consul DNS. + +First, log in to PAS. + +```shell-session +$ cf login -a api.sys.example.com +``` + +Create an organization. + +```shell-session +$ cf create-org hashicorp +``` + +Create a space. + +```shell-session +$ cf create-space -o hashicorp consul-demo +``` + +Target an organization and a space. + +```shell-session +$ cf target -o hashicorp -s consul-demo +``` + +Next, download the demo application. 
+ +```shell-session +$ git clone https://github.com/cloudfoundry-samples/cf-sample-app-spring.git +``` + +Move into the directory for the downloaded app. + +```shell-session +$ cd cf-sample-app-spring +``` + +Deploy the demo application. + +```shell-session +$ cf push +``` + +The demo application is maintained by Cloud Foundry. + +Finally, SSH into the demo application container and use `dig` to discover all +running instances of the Consul service. + +```shell-session +$ cf ssh cf-demo -c 'dig consul.service.consul' +``` + +```plaintext +; <<>> DiG 9.11.3-1ubuntu1.11-Ubuntu <<>> consul.service.consul +;; global options: +cmd +;; Got answer: +;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 62872 +;; flags: qr aa rd; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 4 +;; WARNING: recursion requested but not available + +;; OPT PSEUDOSECTION: +; EDNS: version: 0, flags:; udp: 4096 +;; QUESTION SECTION: +;consul.service.consul. IN A + +;; ANSWER SECTION: +consul.service.consul. 5 IN A 10.52.0.63 +consul.service.consul. 5 IN A 10.52.0.55 +consul.service.consul. 5 IN A 10.52.0.43 + +;; ADDITIONAL SECTION: +consul.service.consul. 5 IN TXT "consul-network-segment=" +consul.service.consul. 5 IN TXT "consul-network-segment=" +consul.service.consul. 5 IN TXT "consul-network-segment=" + +;; Query time: 2 msec +;; SERVER: 169.254.0.2#53(169.254.0.2) +;; WHEN: Thu Dec 05 07:04:34 UTC 2019 +;; MSG SIZE rcvd: 332 +``` + +## Next steps + +In this tutorial you learned how to configure Pivotal Application Service (PAS) +to use Consul’s DNS interface for service discovery of external services. If you +have services deployed in hybrid environments, such as VMs or Kubernetes, you +will now be able to discover them using Consul’s DNS interface. + +Learn more about [Consul service discovery and +registration](/consul/tutorials/get-started-vms/virtual-machine-gs-service-discovery) and +[Consul’s DNS interface](/consul/docs/discover/dns). diff --git a/website/content/docs/discover/dns/scale.mdx b/website/content/docs/discover/dns/scale.mdx new file mode 100644 index 000000000000..d18aa5cec1eb --- /dev/null +++ b/website/content/docs/discover/dns/scale.mdx @@ -0,0 +1,250 @@ +--- +layout: docs +page_title: Scale Consul DNS +description: -> + You tune Consul DNS query handling to balance between current information and reducing request response time. Learn how to enable caching by modifying TTL values, how to return stale results from the DNS cache, and how to configure Consul for negative response caching. +--- + +# Scale Consul DNS + +This page describes the process to return cached results in response to DNS lookups. Consul agents can use DNS caching to reduce response time, but might provide stale information in the process. + +## Scaling techniques + +By default, Consul serves all DNS results with a `0` TTL value so that it returns the most recent information. When operating at scale, this configuration may result in additional latency because servers must respond to every DNS query. There are several strategies for distributing this burden in your datacenter: + +- [Allow Stale Reads](#stale-reads). Allows other servers besides the leader to answer the query rather than forwarding it to the leader. +- [Configure DNS TTLs](#ttl-values). Configure DNS time-to-live (TTL) values for nodes or services so that the DNS subsystem on the container’s operating system can cache responses. Services then resolve DNS queries locally without any external requests. +- [Add Read Replicas](/consul/docs/enterprise/read-scale). 
Enterprise users can use read replicas, which are additional Consul servers that replicate cluster data without participating in the Raft quorum. +- [Use Cache to prevent server requests](/consul/docs/reference/agent/configuration-file/dns#dns_use_cache). Configure the Consul client to use its agent cache to subscribe to events about a service or node. After you establish the watch, the local Consul client agent can resolve DNS queries about the service or node without querying Consul servers. + +The following table describes the availability of each scaling technique depending on whether you configure Consul to offload DNS requests from the cluster leader to a client agent, dataplane, or DNS proxy. + +| Scaling technique | Supported by client agents | Supported by dataplanes | Supported by Consul DNS Proxy | +| :---------------------------------- | :---------------------------: | :---------------------------: | :---------------------------: | +| Configure DNS TTLs | ✅ | ✅ | ✅ | +| Allow Stale Reads | ✅ | ✅ | ✅ | +| Add Read Replicas | ✅ | ✅ | ✅ | +| Use Cache to prevent server request | ✅ | ❌ | ❌ | + +For more information about considerations for Consul deployments that operate at scale, refer to [Operating Consul at Scale](/consul/docs/architecture/scale). + +## TTL values ((#ttl)) + +You can configure TTL values in the [agent configuration file](/consul/docs/reference/agent/configuration-file) to allow DNS results to be cached downstream of Consul. +Higher TTL values reduce the number of lookups on the Consul servers and speed +lookups for clients, at the cost of increasingly stale results. By default, all +TTLs are zero, preventing any caching. + + + +```hcl +dns_config { + service_ttl { + "*" = "0s" + } + node_ttl = "0s" +} +``` + +```json +{ + "dns_config": { + "service_ttl": { + "*": "0s" + }, + "node_ttl": "0s" + } +} +``` + + + +### Enable caching + +To enable caching of node lookups, set the +[`dns_config.node_ttl`](/consul/docs/reference/agent/configuration-file/dns#node_ttl) +value. This can be set to `10s` for example, and all node lookups will serve +results with a 10 second TTL. + +Service TTLs can be specified in a more granular fashion. You can set TTLs +per-service, with a wildcard TTL as the default. This is specified using the +[`dns_config.service_ttl`](/consul/docs/reference/agent/configuration-file/dns#service_ttl) +map. The `*` is supported at the end of any prefix and has a lower precedence +than strict match, so `my-service-x` has precedence over `my-service-*`. When +performing wildcard match, the longest path is taken into account, thus +`my-service-*` TTL will be used instead of `my-*` or `*`. With the same rule, +`*` is the default value when nothing else matches. If no match is found the TTL +defaults to 0. + +For example, a [`dns_config`](/consul/docs/reference/agent/configuration-file/dns#dns_config) +that provides a wildcard TTL and a specific TTL for a service might look like this: + + + +```hcl +dns_config { + service_ttl { + "*" = "5s" + "web" = "30s" + "db*" = "10s" + "db-master" = "3s" + } +} +``` + +```json +{ + "dns_config": { + "service_ttl": { + "*": "5s", + "web": "30s", + "db*": "10s", + "db-master": "3s" + } + } +} +``` + + + +This sets all lookups to "web.service.consul" to use a 30 second TTL +while lookups to "api.service.consul" will use the 5 second TTL from the wildcard. +All lookups matching "db\*" would get a 10 seconds TTL except "db-master" that +would have a 3 seconds TTL. 
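+
+After reloading the agent with these settings, you can confirm the TTL that downstream resolvers will see by checking the TTL column of a DNS answer. The following is a sketch that assumes a registered service named `web`; the address in the answer is illustrative:
+
+```shell-session
+$ dig @127.0.0.1 -p 8600 web.service.consul +noall +answer
+web.service.consul.    30    IN    A    10.0.0.10
+```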
+ +### Prepared queries + +[Prepared Queries](/consul/api-docs/query) provide an additional +level of control over TTL. They allow for the TTL to be defined along with +the query, and they can be changed on the fly by updating the query definition. +If a TTL is not configured for a prepared query, then it will fall back to the +service-specific configuration defined in the Consul agent as described above, +and ultimately to 0 if no TTL is configured for the service in the Consul agent. + +
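+
+For example, a prepared query that pins its own DNS TTL could be registered with a request along the following lines; this is only a sketch, and the query name, service name, and failover settings are placeholders:
+
+```shell-session
+$ curl --request POST --data '{
+    "Name": "web-nearest",
+    "Service": {
+      "Service": "web",
+      "Failover": {
+        "NearestN": 2
+      }
+    },
+    "DNS": {
+      "TTL": "10s"
+    }
+  }' http://127.0.0.1:8500/v1/query
+```
+
+Lookups against `web-nearest.query.consul` then carry the 10 second TTL, regardless of the agent's `service_ttl` settings for the underlying service.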
    + +## Stale reads + +Stale reads can be used to reduce latency and increase the throughput of DNS +queries. The [settings](/consul/docs/reference/agent/configuration-file) used to +control stale reads of DNS queries are: + +- [`dns_config.allow_stale`](/consul/docs/reference/agent/configuration-file/dns#allow_stale) must be + set to true to enable stale reads. +- [`dns_config.max_stale`](/consul/docs/reference/agent/configuration-file/dns#max_stale) limits how stale results + are allowed to be when querying DNS. + +With these two settings you can allow or prevent stale reads. Below we will +discuss the advantages and disadvantages of both. + +### Allow stale reads + +Since Consul 0.7.1, `allow_stale` is enabled by default and uses a `max_stale` +value that defaults to a near-indefinite threshold (10 years). This allows DNS +queries to continue to be served in the event of a long outage with no leader. A +new telemetry counter has also been added at `consul.dns.stale_queries` to track +when agents serve DNS queries that are stale by more than 5 seconds. + + + +```hcl +dns_config { + allow_stale = true + max_stale = "87600h" +} +``` + +```json +{ + "dns_config": { + "allow_stale": true, + "max_stale": "87600h" + } +} +``` + + + + + + + The above example is the default setting. You do not need to set it explicitly. + + + +Doing a stale read allows any Consul server to service a query, but non-leader +nodes may return data that is out-of-date. By allowing data to be slightly +stale, you get horizontal read scalability. Now any Consul server can service +the request, so you increase throughput by the number of servers in a datacenter. + +### Prevent stale reads + +If you want to prevent stale reads or limit how stale they can be, you can set +`allow_stale` to false or use a lower value for `max_stale`. Doing the first +will ensure that all reads are serviced by a +[single leader node](/consul/docs/architecture/consensus). +The reads will then be strongly consistent but will be limited by the throughput +of a single node. + + + +```hcl +dns_config { + allow_stale = false +} +``` + +```json +{ + "dns_config": { + "allow_stale": false + } +} +``` + + + +## Negative response caching + +Although DNS clients cache negative responses, Consul returns a "not +found" style response when a service exists but there are no healthy +endpoints. When using DNS for service discovery, cached negative responses may +cause a service to appear down for longer than it is actually unavailable. + +### Configure SOA + +In Consul v1.3.0 and newer, it is now possible to tune SOA responses and modify +the negative TTL cache for some resolvers. It can be achieved using the +[`soa.min_ttl`](/consul/docs/reference/agent/configuration-file/dns#soa_min_ttl) +configuration within the [`soa`](/consul/docs/reference/agent/configuration-file/dns#soa) configuration. + + + +```hcl +dns_config { + soa { + min_ttl = 60 + } +} +``` + +```json +{ + "dns_config": { + "soa": { + "min_ttl": 60 + } + } +} +``` + + + +One common example is that Windows will default to caching negative responses +for 15 minutes. DNS forwarders may also cache negative responses, with the same +effect. To avoid this problem, check the negative response cache defaults for +your client operating system and any DNS forwarder on the path between the +client and Consul and set the cache values appropriately. In many cases +"appropriately" means turning negative response caching off to get the best +recovery time when a service becomes available again. 
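+
+To check the value that downstream resolvers will use for negative caching, you can query the SOA record that Consul serves for its domain; the last field of the answer (the minimum TTL) reflects `soa.min_ttl`. A sketch, assuming the default `consul` domain and DNS port:
+
+```shell-session
+$ dig @127.0.0.1 -p 8600 consul. SOA +noall +answer
+```
+
+Resolvers that honor RFC 2308 negative caching should cache "no such record" answers for at most that many seconds.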
diff --git a/website/content/docs/discover/docker.mdx b/website/content/docs/discover/docker.mdx new file mode 100644 index 000000000000..dd767d520b20 --- /dev/null +++ b/website/content/docs/discover/docker.mdx @@ -0,0 +1,43 @@ +--- +layout: docs +page_title: Discover services with Consul on Docker +description: >- + This page provides an overview of the service discovery features and operations Consul supports when running on Docker. +--- + +# Discover services with Consul on Docker + +This page provides an overview of Consul service discovery operations when running on Docker. + +When running Consul on Docker, you can use the Consul DNS interface to perform service discovery operations. + +## Introduction + +When a service registers with Consul, the catalog records the address of the container running the service. Consul then updates the service's catalog entry with the results of each health check it performs. Consul agents replicate catalog information between each other using the [Raft consensus protocol](/consul/docs/concept/consensus), enabling high availability service networking through any Consul agent. + +Consul's service discovery operations use [Consul DNS addresses](/consul/docs/discover/dns) to route traffic to healthy service instances and return information about service nodes registered to Consul. + +You can also use the `docker exec` command to connect directly to a Consul container and interact with Consul's service discovery features. + +## Application load balancing + +@include 'text/descriptions/load-balancer.mdx' + +## Static lookups + +@include 'text/descriptions/static-query.mdx' + +## Prepared queries + +@include 'text/descriptions/prepared-query.mdx' + +## Reference documentation + +For reference material related to Consul's service discovery functions on Docker, refer to the following pages: + +- [Discover services on Docker containers](/consul/docs/discover/dns/docker) +- [Consul DNS reference](/consul/docs/reference/dns) + +## Constraints, limitations, and troubleshooting + +@include 'text/limitations/discover.mdx' diff --git a/website/content/docs/discover/index.mdx b/website/content/docs/discover/index.mdx new file mode 100644 index 000000000000..44e5903e3367 --- /dev/null +++ b/website/content/docs/discover/index.mdx @@ -0,0 +1,38 @@ +--- +layout: docs +page_title: Discover services overview +description: >- + This topic provides an overview of the service discovery features and operations enabled by Consul DNS, including application load balancing, static lookups, and prepared queries. +--- + +# Discover services overview + +This topic provides an overview of Consul service discovery operations. After you register services with Consul, you can address them using Consul DNS to perform application load balancing and static service lookups. You can also create prepared queries for dynamic service lookups and service failover. + +If you run Consul on Kubernetes or AWS ECS and prefer to use your runtime's built-in service discovery alongside Consul's service mesh features, refer to [onboard services in transparent proxy mode](/consul/docs/connect/proxy/transparent-proxy). + +## Introduction + +When a service registers with Consul, the catalog records the address of each service's node. Consul then updates an instance's catalog entry with the results of each health check it performs. 
Consul agents replicate catalog information between each other using the [Raft consensus protocol](/consul/docs/concept/consensus), enabling high availability service networking through any Consul agent.
+
+Consul's service discovery operations use [Consul DNS addresses](/consul/docs/discover/dns) to route traffic to healthy service instances and return information about service nodes registered to Consul.
+
+## Application load balancing
+
+@include 'text/descriptions/load-balancer.mdx'
+
+## Static lookups
+
+@include 'text/descriptions/static-query.mdx'
+
+## Prepared queries
+
+@include 'text/descriptions/prepared-query.mdx'
+
+## Guidance
+
+@include 'text/guidance/discover.mdx'
+
+## Constraints, limitations, and troubleshooting
+
+@include 'text/limitations/discover.mdx'
\ No newline at end of file
diff --git a/website/content/docs/discover/k8s.mdx b/website/content/docs/discover/k8s.mdx
new file mode 100644
index 000000000000..150ef4e52d93
--- /dev/null
+++ b/website/content/docs/discover/k8s.mdx
@@ -0,0 +1,53 @@
+---
+layout: docs
+page_title: Discover services with Consul on Kubernetes
+description: >-
+  This page provides an overview of the service discovery features and operations Consul supports when running on Kubernetes.
+---
+
+# Discover services with Consul on Kubernetes
+
+This page provides an overview of Consul service discovery operations when running on Kubernetes.
+
+By default, you can use KubeDNS to perform service discovery operations. If you use Consul's service mesh features and prefer to continue using KubeDNS, refer to [onboard services in transparent proxy mode](/consul/docs/connect/proxy/transparent-proxy).
+
+## Introduction
+
+Kubernetes deployments support Consul's service discovery features. When running Consul on Kubernetes, you can adopt either of the following service discovery strategies:
+
+- [Configure Kubernetes to use Consul DNS](/consul/docs/discover/dns/k8s)
+- Use KubeDNS to resolve DNS addresses and manage service discovery in your deployment
+
+You can also use the `kubectl exec` command to connect directly to a Consul container and interact with Consul's service discovery features.
+
+If you use KubeDNS with Consul's service mesh enabled, you must also configure [transparent proxy mode](/consul/docs/connect/proxy/transparent-proxy) to enable permissive mTLS between services.
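+
+For a quick check from inside the cluster, you can run Consul's catalog commands directly in a server pod. The pod name and namespace below assume a default Helm installation and are only examples:
+
+```shell-session
+$ kubectl exec --namespace consul --stdin --tty consul-server-0 -- consul catalog services
+```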
+ +## Application load balancing + +@include 'text/descriptions/load-balancer.mdx' + +## Static lookups + +@include 'text/descriptions/static-query.mdx' + +## Prepared queries + +@include 'text/descriptions/prepared-query.mdx' + +## Runtime specific usage documentation + +For runtime-specific guidance, refer to the following pages: + +- [Resolve Consul DNS requests in Kubernetes](/consul/docs/manage/dns/forwarding/k8s) +- [Consul DNS views for Kubernetes](/consul/docs/manage/dns/views) + +## Reference documentation + +For reference material related to Consul's service discovery functions on Kubernetes, refer to the following pages: + +- [Consul DNS reference](/consul/docs/reference/dns) +- [Consul on Kubernetes annotations and labels reference](/consul/docs/reference/k8s/annotation-label) + +## Constraints, limitations, and troubleshooting + +@include 'text/limitations/discover.mdx' diff --git a/website/content/docs/discover/load-balancer/envoy.mdx b/website/content/docs/discover/load-balancer/envoy.mdx new file mode 100644 index 000000000000..9e7677ad9911 --- /dev/null +++ b/website/content/docs/discover/load-balancer/envoy.mdx @@ -0,0 +1,353 @@ +--- +layout: docs +page_title: Load balancing services in Consul service mesh with Envoy +description: >- + Manage traffic across services within Consul service mesh with Envoy load balancing policies. +--- + +# Load balancing services in Consul service mesh with Envoy + +Load balancing is a mechanism of distributing traffic between multiple targets in order to make use of available resources in the most effective way. There are many different ways of accomplishing this, so Envoy provides several different load balancing strategies. To take advantage of this Envoy functionality, Consul supports changing the load balancing policy used by the Envoy data plane proxy. Consul configuration entries specify the desired load balancing algorithm. + +Load balancing policies apply to both requests from internal services inside the service mesh, and requests from external clients accessing services in your datacenter through an ingress gateway. + +The policies will be defined using a [`service-resolver`](/consul/docs/reference/config-entry/service-resolver) configuration entry. + +This tutorial will guide you through the steps needed to change the load balancing policy for a service in your mesh. You will learn how to configure sticky session for services using the _maglev_ policy and how to setup a _least_request_ policy that will take into account the amount of requests on service instances for load balancing traffic. + +## Prerequisites + +In order to deploy and test Consul load balancing policies for Envoy proxies, you will need the following resources deployed in a non-production environment: + +- A Consul version 1.13.1 cluster or later with Consul service mesh functionality enabled. Check [Secure Service-to-Service Communication](/consul/tutorials/developer-mesh/service-mesh-with-envoy-proxy) for configuration guidance. In this tutorial, this will be referred to as the _server_ node. + +- Two nodes, each hosting an instance of a service, called _backend_ that requires load balancing policy tuning. + +- One node, hosting a client service, you are going to use to route requests to the backend services. The client service is configured to use the _backend_ service as upstream. In this tutorial we will refer to this as the _client_ node. + +Optional: + +- One node running as ingress gateway. 
+ + + + The content of this tutorial also applies to Consul clusters hosted on HashiCorp Cloud (HCP). + + + +The diagram below shows the minimal architecture needed to demonstrate the functionality. + +![Architectural diagram showing two backend services and a client service within a Consul service mesh](/img/consul-lb-envoy-client-service.png) + + + + + + This tutorial is not for production use. By default, the lab will install an insecure configuration of Consul. + + + +## Available load balancing policies + +Consul implements load balancing by automating Envoy configuration to reflect the selected approach. + +The following policies are available: + +| Policy | Description | +| ------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `round_robin` _(default)_ | The request will be resolved to any healthy service, in a round robin order. | +| `least_request` | An _O(1)_ algorithm selects N random available hosts as specified in the configuration (2 by default) and picks the host which has the fewest active requests. | +| `ring_hash` | Implements consistent hashing to upstream hosts. Each host is mapped onto a circle (the “ring”) by hashing its address; each request is then routed to a host by hashing properties of the request, and finding the nearest corresponding host clockwise around the ring. | +| `maglev` | Implements consistent hashing to upstream hosts. It can be used as a replacement for the ring_hash load balancer in any place in which consistent hashing is desired. | +| `random` | The request will be resolved to any healthy service, randomly. | + + + + These policies are a reflection of the Envoy [supported load balancers](https://www.envoyproxy.io/docs/envoy/v1.23.0/intro/arch_overview/upstream/load_balancing/load_balancers). Refer to [Envoy supported versions](/consul/docs/connect/proxies/envoy#supported-versions) in the documentation to make sure your Envoy version is compatible with your Consul version. + + + +## Default load balancing policy + +Consul automatically balances traffic across all the healthy instances of the resolved service using the _round_robin_ policy. + +You can verify this by using the `curl` command from the node running the _client_ service. + +First login to the _client_ node, then run a request against the client service. + +```shell-session +$ curl --silent localhost:9192 +``` + +Example output: + +```json +{ + "name": "main", + "uri": "/", + "type": "HTTP", + "ip_addresses": ["172.18.0.4"], + "start_time": "2020-10-01T16:15:54.151406", + "end_time": "2020-10-01T16:15:54.151885", + "duration": "478.867µs", + "body": "Hello World", + "code": 200 +} +``` + +If you run the command multiple times you will be able to confirm, using the output, that the requests are being balanced across the two different instances of the service. + +## Configure service defaults + +In order to enable service resolution and apply load balancer policies, you first need to configure HTTP as the service protocol in the service's `service-defaults` configuration entry. + +First, login to the _server_ node, then create a `default.hcl` file with the following content: + + + + +```hcl +Kind = "service-defaults" +Name = "backend" +Protocol = "http" +``` + + + + +Then, apply the configuration to your Consul datacenter. 
+ +```shell-session +$ consul config write /etc/consul.d/default.hcl +``` + +Example output: + +```plaintext hideClipboard +Config entry written: service-defaults/backend +``` + +## Configure a sticky session for service resolution + +A common requirement for many applications is to have the ability to direct all the requests from a specific client to the same server. These configurations usually rely on an HTTP header to map client requests to a specific backend and route the traffic accordingly. + +For this tutorial you will configure a policy using the `x-user-id` header to implement sticky session. + +First login to the _server_ node and create a `hash-resolver.hcl` file using the following content: + + + + +```hcl +Kind = "service-resolver" +Name = "backend" +LoadBalancer = { + Policy = "maglev" + HashPolicies = [ + { + Field = "header" + FieldValue = "x-user-id" + } + ] +} +``` + + + + +This configuration creates a `service-resolver` configuration for the service `backend` that uses the `maglev` policy and relies on the content of the `x-user-id` header to resolve requests. + +You can apply the policy using the `consul config` command. + +```shell-session +$ consul config write /etc/consul.d/hash-resolver.hcl +``` + +Example output: + +```plaintext hideClipboard +Config entry written: service-resolver/backend +``` + +### Verify the policy is applied + +Once the policy is in place, you can test it using the `curl` command from the _client_ node and applying the `x-user-id` header to the request: + +```shell-session +$ curl --silent localhost:9192 --header "x-user-id: 12345" +``` + +Example output: + +```json +{ + "name": "main", + "uri": "/", + "type": "HTTP", + "ip_addresses": ["172.18.0.4"], + "start_time": "2020-10-01T16:15:47.950151", + "end_time": "2020-10-01T16:15:47.950581", + "duration": "430.088µs", + "body": "Hello World", + "code": 200 +} +``` + +If you execute the previous command multiple times, you will always be redirected to the same instance of the _backend_ service. + + + + sticky sessions are consistent given a stable service configuration. If the number of backend hosts changes, a fraction of the sessions will be routed to a new host after the change. + + + +## Use least_request LB policy + +The default load balancing policy, _round_robin_, is usually the best approach in scenarios where requests are homogeneous and the system is over-provisioned. + +In scenarios where the different instances might be subject to substantial differences in terms of workload, there are better approaches. + +Using the _least_request_ policy enables Consul to route requests using information on connection-level metrics and select the one with the lowest number of connections. + +### Configure least_request load balancing policy + +The tutorial's lab environment provides an example configuration file, `least-req-resolver.hcl` for the _least_request_ policy. + + + + +```hcl +Kind = "service-resolver" +Name = "backend" +LoadBalancer = { + Policy = "least_request" + LeastRequestConfig = { + ChoiceCount = "2" + } +} +``` + + + + +This configuration creates a `service-resolver` configuration for the service `backend`, which for every request will select 2 (as expressed by `ChoiceCount`) random instances of the service, and route to the one with the least amount of load. + +You can apply the policy using the `consul config` command. 
+ +```shell-session +$ consul config write least-req-resolver.hcl +``` + +Example output: + +```plaintext hideClipboard +Config entry written: service-resolver/backend +``` + +### Verify the policy is applied + +Once the policy is in place, you can test it using the `curl` command and applying the `x-user-id` header to the request. +First, log in to the _client_ node and then use the following command: + +```shell-session +$ curl --silent localhost:9192 --header "x-user-id: 12345" +``` + +Example output: + +```json +{ + "name": "main", + "uri": "/", + "type": "HTTP", + "ip_addresses": ["172.18.0.4"], + "start_time": "2020-10-01T16:25:47.950151", + "end_time": "2020-10-01T16:25:47.950581", + "duration": "420.066µs", + "body": "Hello World", + "code": 200 +} +``` + +If you execute the command multiple times, you will notice that despite applying the header used by the previous policy, the request is served by different instances of the service. + +-> The _least_request_ configuration with `ChoiceCount` set to 2 is also known as P2C (power of two choices). The P2C load balancer has the property that a host with the highest number of active requests in the cluster will never receive new requests. It will be allowed to drain until it is less than or equal to all of the other hosts. You can read more on this in [this paper](https://www.eecs.harvard.edu/~michaelm/postscripts/handbook2001.pdf) + +## Load balancer policies and ingress gateways (optional) + +The load balancing policy for the datacenter applies also to the service resolution performed by ingress gateways. Once you have configured the policies for the services and tested it internally using the client service, you can introduce an ingress gateway in your configuration, and the same policies will be now respected by external requests being served by your Consul datacenter. + +Consul reduces the amount of configuration needed for specifying load balancing policies by using a common policy to resolve internal requests from services inside the mesh, and external requests from clients outside the mesh. + +![Architectural diagram showing two backend services and a client service accessing the Consul service mesh using an ingress gateway](/img/consul-lb-envoy-ingress-gw.png) + +You can deploy an ingress gateway using the following configuration: + + + + +```hcl +Kind = "ingress-gateway" +Name = "ingress-service" + +Listeners = [ + { + Port = 8080 + Protocol = "http" + Services = [ + { + Name = "backend" + } + ] + } +] +``` + + + + +Once the configuration is applied, using command `consul config write` you can start the gateway on the gateway node using the following command: + +```shell-session +$ consul connect envoy -gateway=ingress -register -service ingress-service -address '{{ GetInterfaceIP "eth0" }}:8888' +``` + +Once the Envoy proxy is active, you can test the load balancing policy accessing the service from your browser using the ingress gateway at `http://backend.ingress.consul:8080`, and test the policy by refreshing the page. + +Alternatively, you can test it using the `curl` command as explained in the previous chapters but using the DNS name and port `8080`. 
+ +```shell-session +$ curl --silent backend.ingress.consul:8080 +``` + +Example output: + +```json +{ + "name": "main", + "uri": "/", + "type": "HTTP", + "ip_addresses": ["172.18.0.4"], + "start_time": "2020-10-01T18:15:27.650151", + "end_time": "2020-10-01T18:15:27.650581", + "duration": "420.066µs", + "body": "Hello World", + "code": 200 +} +``` + +-> Configuration Info: When using an `http` listener in the ingress gateway configuration, Consul will accept requests to the ingress gateway service only if the hostname matches with the internal naming convention `.ingress.*`. If you need the service to be accessible using a different hostname or an IP, please refer to the ingress gateway [configuration entry](/consul/docs/reference/config-entry/ingress-gateway#hosts) documentation. + + + + Try to configure the load balancing policy with a sticky session, and then use `curl` to verify the sticky session is configured for the traffic routed from the ingress gateway. + + + +## Next steps + +In this tutorial, you learned how to change the default load balancing policy for services in your service mesh. + +You can learn more about the other options for traffic splitting available in Consul from our [Deploy seamless canary deployments with service splitters](/consul/tutorials/developer-mesh/service-splitters-canary-deployment) tutorial. + +If you want to learn more on how to deploy an ingress gateway for your Consul service mesh, you can complete the [Allow External Traffic Inside Your Service Mesh With Ingress Gateways](/consul/tutorials/developer-mesh/service-mesh-ingress-gateways) tutorial. diff --git a/website/content/docs/discover/load-balancer/f5.mdx b/website/content/docs/discover/load-balancer/f5.mdx new file mode 100644 index 000000000000..9bead452206f --- /dev/null +++ b/website/content/docs/discover/load-balancer/f5.mdx @@ -0,0 +1,400 @@ +--- +layout: docs +page_title: Load balancing with F5 +description: >- + Use Consul to configure F5 BIG-IP nodes and server pools based on changes in Consul service discovery. +--- + +# Load balancing with F5 + +The F5 BIG-IP AS3 service discovery integration with Consul queries Consul's +catalog on a regular, configurable basis to get updates about changes for a +given service, and adjusts the node pools dynamically without operator +intervention. + +In this tutorial you will use Consul to configure F5 BIG-IP nodes and server pools. +You will set up a basic F5 BIG-IP AS3 declaration that generates the F5 load +balancer backend-server-pool configuration based on the available service +instances registered in Consul's service catalog. + +## Prerequisites + +To complete this tutorial, you will need previous experience with F5 BIG-IP and +Consul. You can either manually deploy the necessary infrastructure, or use the +terraform demo code. + +### Watch the video - Optional + +Consul's integration with F5 was demonstrated in a webinar. If you would prefer +to learn about the integration but aren't ready to try it out, you can watch the +webinar recording to see the integration in action. + +### Manually deploy your infrastructure + +You should configure the following infrastructure. + +- A single Consul datacenter with server and client nodes, and the configuration + directory for Consul agents at `/etc/consul.d/`. + +- A running instance of the F5 BIG-IP platform. If you don’t already have one + you can use a [hosted AWS + instance](https://aws.amazon.com/marketplace/pp/B079C44MFH) for this tutorial. 
+ +- The AS3 package version 3.7.0 + [installed](https://clouddocs.f5.com/products/extensions/f5-appsvcs-extension/latest/userguide/installation.html) + on your F5 BIG-IP platform. + +- Standard web server running on a node, listening on HTTP port 80. We will use + NGINX in this tutorial. + +**Note** The content of this tutorial also applies to Consul clusters hosted on HashiCorp Cloud (HCP) + +### Deploy a demo using Terraform - Optional + +You can set up the prerequisites on your own, or use the terraform +configuration in [this +repository](https://github.com/hashicorp/f5-terraform-consul-sd-webinar) to set +up the entire tutorial environment. + +Once your environment is set up, you'll be able to visit the F5 GUI at +`:8443/tmui/login.jsp` where `` is the address provided in your +Terraform output. Login with the username `admin` and the password from your +Terraform output. + +### Verify your environment + +Check your environment to ensure you have a healthy Consul datacenter by +checking your datacenter members. You can do this by running the `consul members` command on the machine where Consul is running, or by accessing the +Consul web UI at the IP address of your consul instances, on port 8500. + +If you deployed your infrastructure using the provided terraform you can SSH to +the machine running consul using the key you will find in the terraform +directory. For example `ssh ubuntu@ -i terraform-20190917053444504900000001.pem`. Remember to replace `` +with the address of your Consul node. + +```shell-session +$ consul members +``` + +```plaintext hideClipboard +Node Address Status Type Build Protocol DC Segment +consul 10.0.0.100:8301 alive server 1.5.3 2 dc1 +nginx 10.0.0.109:8301 alive client 1.5.3 2 dc1 +``` + +In this sample environment we have one Consul server node and one web server +node with a Consul client. + +## Register a web service + +To register the web service on your client node with Consul, create a +service definition in Consul's config directory `/etc/consul.d/` and +paste in the following configuration, which includes a tcp +check for the web server so that Consul can monitor its health. (you many need +to change directory permissions using `chmod` before writing the file). + + + + + +```hcl +service { + name = "nginx" + port = 80 + checks = [ + { + id = "nginx" + name = "nginx TCP Check" + tcp = "localhost:80" + interval = "5s" + timeout = "1s" + } + ] +} +``` + + + + + +```json +{ + "service": { + "name": "nginx", + "port": 80, + "checks": [ + { + "id": "nginx", + "name": "nginx TCP Check", + "tcp": "localhost:80", + "interval": "5s", + "timeout": "1s" + } + ] + } +} +``` + + + + + +Reload the client to read the new service definition. + +```shell-session +$ consul reload +``` + +In a browser window, visit the services page of the Consul web UI at +`:8500/ui/dc1/services/nginx`. Remember to add your own node IP +address. + +![Consul UI with NGINX registered](/img/consul-f5-nginx.png 'Consul web +UI with a healthy NGINX service') + +You should notice your instance of the nginx service listed and healthy. + +## Apply an AS3 declaration + +Next, you will configure BIG-IP to use Consul Service discovery with an AS3 +declaration. You will use cURL to apply the declaration to the BIG-IP Instance. + +First construct an authorization header to authenticate our API call with +BIG-IP. You will need to use a username and password for your instance. Below is +an example for username “admin”, and password “password”. 
+ +```shell-session +$ echo -n 'admin:password' | base64 +``` + +```plaintext hideClipboard +YWRtaW46YWRtaW4= +``` + +The below declaration does the following: + +- Creates a partition (tenant) named `Consul_SD`. + +- Defines a virtual server named `serviceMain` in `Consul_SD` partition with: + + - A pool named web_pool monitored by the http health monitor. + + - NGINX Pool members auto-discovered via Consul's [catalog HTTP API + endpoint](/consul/api-docs/catalog#list-nodes-for-service). + For the `virtualAddresses` make sure to substitute your BIG-IP Virtual + Server. + + - A URI specific to your Consul environment for the scheme, host, and port of + your consul address discovery. This could be a single server, load balanced + endpoint, or co-located agent, depending on your requirements. Make sure to + replace the `uri` in your configuration with the IP of your Consul client. + +Use cURL to send the authorized declaration to the BIG-IP Instance. It may +be useful to edit the below command in a text editor before pasting it into the +terminal. + +- Replace `` with the value you created above for + your BIG-IP instance in the authorization header. + +- Replace `` with the real IP address. + +- Replace `` with BIG-IP's virtual IP. + +- Replace `` with Consul's IP address. + +```shell-session +$ curl -X POST \ + https:///mgmt/shared/appsvcs/declare \ + -H 'authorization: Basic ' \ + -d '{ + "class": "ADC", + "schemaVersion": "3.7.0", + "id": "Consul_SD", + "controls": { + "class": "Controls", + "trace": true, + "logLevel": "debug" + }, + "Consul_SD": { + "class": "Tenant", + "Nginx": { + "class": "Application", + "template": "http", + "serviceMain": { + "class": "Service_HTTP", + "virtualPort": 8080, + "virtualAddresses": [ + "" + ], + "pool": "web_pool" + }, + "web_pool": { + "class": "Pool", + "monitors": [ + "http" + ], + "members": [ + { + "servicePort": 80, + "addressDiscovery": "consul", + "updateInterval": 5, + "uri": "http://:8500/v1/catalog/service/nginx" + } + ] + } + } + } +}' +``` + +You should get a similar output to the following after you’ve applied your +declaration. + +```json hideClipboard +{ + "results": [ + { + "message": "success", + "lineCount": 26, + "code": 200, + "host": "localhost", + "tenant": "Consul_SD", + "runTime": 3939 + } + ], + "declaration": { + "class": "ADC", + "schemaVersion": "3.7.0", + "id": "Consul_SD", + "controls": { + "class": "Controls", + "trace": true, + "logLevel": "debug", + "archiveTimestamp": "2019-09-06T03:12:06.641Z" + }, + "Consul_SD": { + "class": "Tenant", + "Nginx": { + "class": "Application", + "template": "http", + "serviceMain": { + "class": "Service_HTTP", + "virtualPort": 8080, + "virtualAddresses": ["10.0.0.200"], + "pool": "web_pool" + }, + "web_pool": { + "class": "Pool", + "monitors": ["http"], + "members": [ + { + "servicePort": 80, + "addressDiscovery": "consul", + "updateInterval": 5, + "uri": "http://10.0.0.100:8500/v1/catalog/service/nginx" + } + ] + } + } + }, + "updateMode": "selective" + } +} +``` + +You can find more information on Consul SD declarations in [F5’s Consul service +discovery +documentation](https://clouddocs.f5.com/products/extensions/f5-appsvcs-extension/latest/declarations/discovery.html#service-discovery-using-hashicorp-consul) + +You can read more about composing AS3 declarations in the [F5 +documentation](https://clouddocs.f5.com/products/extensions/f5-appsvcs-extension/latest/userguide/composing-a-declaration.html). 
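If you want to confirm that BIG-IP stored the declaration before moving on, you can retrieve the active declaration from the same AS3 endpoint. The following is only a sketch that reuses the authorization header you constructed earlier; `<BIG-IP address>` and `<your authorization header value>` are placeholders for your environment.

```shell-session
$ curl --silent -X GET \
    https://<BIG-IP address>/mgmt/shared/appsvcs/declare \
    -H 'authorization: Basic <your authorization header value>'
```

The response should include the `Consul_SD` tenant you posted.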
+The Terraform provider for BIG-IP [also supports AS3 +resources](https://registry.terraform.io/providers/F5Networks/bigip/latest/docs/resources/bigip_as3.html). + +## Verify BIG-IP Consul communication + +Use the `consul monitor` command on the Consul client specified in the AS3 URI +to verify that you are receiving catalog requests from the BIG-IP instance. + +```shell-session +$ consul monitor -log-level=debug +``` + +```log hideClipboard +[DEBUG] http: Request GET /v1/catalog/service/nginx (103.796µs) from=10.0.0.200:29487 +[DEBUG] http: Request GET /v1/catalog/service/nginx (104.95µs) from=10.0.0.200:42079 +[DEBUG] http: Request GET /v1/catalog/service/nginx (98.652µs) from=10.0.0.200:45536 +[DEBUG] http: Request GET /v1/catalog/service/nginx (101.242µs) from=10.0.0.200:45940 +``` + +Check that the interval matches the value you supplied in your AS3 declaration. + +## Verify the BIG-IP dynamic pool + +Check the network map of the BIG-IP instance to make sure that the NGINX +instances registered in Consul are also in your BIG-IP dynamic pool. + +To check the network map, open a browser window and navigate to +`https:///tmui/tmui/locallb/network_map/app/?xui=false#!/?p=Consul_SD`. +Remember to replace the IP address. + +![NGINX instances in BIG-IP](/img/consul-f5-partition.png 'NGINX +instances listed in the BIG-IP web graphical user interface') + +You can read more about the network map in the [F5 +documentation](https://support.f5.com/csp/article/K20448153#accessing%20map). + +## Test the BIG-IP virtual server + +Now that you have a healthy virtual service, you can use it to access your NGINX +web server. + +```shell-session +$ curl :8080 +``` + +```html + + + + Welcome to nginx! + + + +

    <h1>Welcome to nginx!</h1>
    <p>
      If you see this page, the nginx web server is successfully installed and
      working. Further configuration is required.
    </p>
    <p>
      For online documentation and support please refer to
      <a href="http://nginx.org/">nginx.org</a>.<br />
      Commercial support is available at
      <a href="http://nginx.com/">nginx.com</a>.
    </p>
    <p><em>Thank you for using nginx.</em></p>
  </body>
</html>

    + + +``` + +## Next steps + +The F5 BIG-IP AS3 service discovery integration with Consul queries Consul's +catalog on a regular, configurable basis to get updates about changes for a +given service, and adjusts the node pools dynamically without operator +intervention. + +In this tutorial you configured an F5 BIG-IP instance to natively integrate with +Consul for service discovery. You were able to monitor dynamic node registration +for a web server pool member and test it with a virtual server. + +As a follow up, you can add or remove web server nodes registered with Consul +and validate that the network map on the F5 BIG-IP updates automatically. diff --git a/website/content/docs/discover/load-balancer/ha.mdx b/website/content/docs/discover/load-balancer/ha.mdx new file mode 100644 index 000000000000..388f6f78b6fa --- /dev/null +++ b/website/content/docs/discover/load-balancer/ha.mdx @@ -0,0 +1,673 @@ +--- +layout: docs +page_title: Load balance with HA proxy +description: >- + Configure HAProxy to use Consul DNS interface to load balance traffic across multiple instances of the same service. +--- + +# Load balance with HAProxy + +This guide describes how to use HAProxy's native integration to automatically +configure the load balancer with service discovery data from Consul. This allows +HAProxy to automatically scale its backend server pools by leveraging its +`server-template` function and Consul's service discovery. + +In this guide, you set up a basic HAProxy configuration based on HAProxy's +`server-template`. This template generates the load balancer backend server pool +configuration based on the available service instances registered in Consul +service discovery. + +## Requirements + +To complete this guide, you need previous experience with Consul and +[HAProxy](https://www.haproxy.org). Additionally, you should have the following +infrastructure configured: + +- A Consul datacenter with the web UI enabled. +- At least three Consul client nodes, to register the web services and health + checks referenced in this guide. `enable_local_script_checks` must be set to `true` on the client nodes. +- Standard web servers running on each node, listening on `HTTP` port `80`. +- HAProxy of version 2.6+ (LTS) installed and co-located with a Consul client. + Refer to the [HAProxy documentation][haproxy-docs] for the manuals specific to your HA Proxy version. + +After you've completed the steps listed in this guide, your infrastructure +should look like the following diagram. + +![HAProxy and Consul architecture](/img/discover/load-balancer/haproxy/arch-haproxy-consul.png) + +## Create the HAProxy configuration + +Start with a default installation of HAProxy. Add the following configuration at +the end of the `/etc/haproxy/haproxy.cfg` file. 
+ + + +```plaintext +defaults + timeout connect 5s + timeout client 1m + timeout server 1m + +frontend stats + bind *:1936 + mode http + stats uri / + stats show-legends + no log + +frontend http_front + bind *:80 + default_backend http_back + +backend http_back + balance roundrobin + server-template mywebapp 1-10 _web._tcp.service.consul resolvers consul resolve-opts allow-dup-ip resolve-prefer ipv4 check + +resolvers consul + nameserver consul 127.0.0.1:8600 + accepted_payload_size 8192 + hold valid 5s +``` + + + +### Frontend + + + +```plaintext +frontend http_front + bind *:80 + default_backend http_back +``` + + + +The `frontend http_front` stanza instructs HAProxy to listen for `HTTP` requests +on TCP port `80` and to use the `http_back` server pool as the respective load +balancer backend. + +### Backend + + + +```plaintext +backend http_back + balance roundrobin + server-template mywebapp 1-10 _web._tcp.service.consul resolvers consul resolve-opts allow-dup-ip resolve-prefer ipv4 check +``` + + + +The `backend http_back` stanza includes `balance` type and `server-template` +settings. Refer to the [HAProxy Configuration Manual][haproxy-docs] for +explanations of these settings. + +The `balance` type is round robin, which load balances across the available service in order. + +The HAProxy `server-template` is what allows Consul service registrations to +configure HAProxy's backend server pool. This means you do not need to +explicitly add your backend servers' IP addresses. The configuration specifies a +`server-template` named `mywebapp`. The template name is not tied to the service +name, which is registered in Consul. + +The `server-template` provisions `1-10` slots for backend servers in HAProxy's +runtime configuration. It reserves the memory for up to ten instances even +if you do not use them all. If you have more than ten healthy and available +services, HAProxy only uses ten of them and backfills available slots in +case of a service error. You can configure the number of available slots. + +The `_web._tcp.service.consul` parameter instructs HAProxy to use the DNS SRV +record for the backend service `_web.service.consul_` to discover the available +services. + +This configuration also allows running two service instances on the same IP +address using different ports `resolve-opts allow-dup-ip` and resolves IPv4 +addresses for the service endpoints `resolve-prefer ipv4`. + +Finally, specify that the server template should look at the `resolvers consul` +stanza to discover where to find Consul's DNS interface. + + + +Consul has sophisticated distributed health checks, so the additional HAProxy +health `check` is not necessarily needed, depending on the configuration of +Consul's health checks and the update interval HAProxy uses to discover healthy +endpoints from Consul's service discovery. + + + +### Resolvers + + + +```plaintext +resolvers consul + nameserver consul 127.0.0.1:8600 + accepted_payload_size 8192 + hold valid 5s +``` + + + +The `resolvers consul` stanza defines the actual service discovery endpoint to +be used by HAProxy. + +`nameserver consul 127.0.0.1:8600` points HAProxy to the DNS interface of the +local Consul client. + +You need to allow for a larger payload by configuring +`accepted_payload_size 8192`, since DNS SRV records can result in larger DNS +replies from Consul service discovery. This is based on the number of available +and healthy services. 
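To get a sense of how large these SRV replies can become, you can run the same query that HAProxy issues and check the message size that `dig` reports. This is only a sketch; it assumes the local Consul client's DNS interface is listening on the default `127.0.0.1:8600`, and the answer section stays empty until you register the `web` service later in this guide.

```shell-session
$ dig @127.0.0.1 -p 8600 +bufsize=8192 -t srv _web._tcp.service.consul
```

The `MSG SIZE  rcvd` value at the end of the output grows with the number of healthy instances, which is what `accepted_payload_size 8192` accounts for.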
+ +Finally `hold valid 5s` instructs HAProxy to check Consul's service catalog +every 5 seconds for updates on available and healthy service endpoints. This +value is also tuned in this example for faster service discovery. + +## Reload HAProxy + +Reload your HAProxy instance to apply the adjusted `haproxy.cfg` configuration +file. + +```shell-session +$ service haproxy reload +``` + +### Check HAProxy settings + +Access the HAProxy statistics and monitoring page at +`http://:1936` to verify the new configuration settings. + +![HAProxy monitoring UI with no service instances.](/img/discover/load-balancer/haproxy/haproxy-ui-no_instances.png#light-theme-only) +![HAProxy monitoring UI with no service instances.](/img/discover/load-balancer/haproxy/haproxy-ui-no_instances-dark.png#dark-theme-only) + +## Register a web service + +To register the web service on your first node with Consul, create a service +definition in Consul's `/etc/consul.d/` configuration directory. + + + + + +```hcl +service { + name = "web" + port = 80 + check { + args = ["curl", "localhost"] + interval = "3s" + } +} +``` + + + + + +```json +{ + "service": { + "Name": "web", + "Port": 80, + "check": { + "args": ["curl", "localhost"], + "interval": "3s" + } + } +} +``` + + + + + +Reload the client to apply the new service definition. + +```shell-session +$ consul reload +``` + +Register a second web service by repeating this process on a second node. + +### Verify services are registered in Consul + +If the registration was successful you should see the change in Consul +catalog. Choose your preferred method to verify that the services are registered +correctly in Consul. + + + + +Use the `consul catalog` command to check available services. + +```shell-session +$ consul catalog services +consul +web +``` + + + + +Use the `/v1/catalog/services` API endpoint to to check available services. + +```shell-session +$ curl --silent http://127.0.0.1:8500/v1/catalog/services | jq +``` + +You should get the following output, showing both the `consul` service, +representing the Consul server agents, and the `web` service you just registered. + + + +```json +{ + "consul": [], + "web": [] +} +``` + + + +Use the `/v1/catalog/service/web` API endpoint to get more details on the `web` +service's instances. + +```shell-session +$ curl --silent http://127.0.0.1:8500/v1/catalog/service/web | jq +``` + +You should get an output similar to the following. 
+ + + +```json +[ + { + "ID": "1b183015-ef7f-52af-bad9-1e260c689a03", + "Node": "web-app-0", + "Address": "172.18.0.7", + "Datacenter": "dc1", + "TaggedAddresses": { + "lan": "172.18.0.7", + "lan_ipv4": "172.18.0.7", + "wan": "172.18.0.7", + "wan_ipv4": "172.18.0.7" + }, + "NodeMeta": { + "consul-network-segment": "", + "consul-version": "1.20.1" + }, + "ServiceKind": "", + "ServiceID": "web-0", + "ServiceName": "web", + "ServiceTags": [], + "ServiceAddress": "", + "ServiceWeights": { + "Passing": 1, + "Warning": 1 + }, + "ServiceMeta": {}, + "ServicePort": 80, + "ServiceSocketPath": "", + "ServiceEnableTagOverride": false, + "ServiceProxy": { + "Mode": "", + "MeshGateway": {}, + "Expose": {} + }, + "ServiceConnect": {}, + "ServiceLocality": null, + "CreateIndex": 14342, + "ModifyIndex": 14342 + }, + { + "ID": "86be7e7c-cfaf-931f-297b-6992f049c1ff", + "Node": "web-app-1", + "Address": "172.18.0.9", + "Datacenter": "dc1", + "TaggedAddresses": { + "lan": "172.18.0.9", + "lan_ipv4": "172.18.0.9", + "wan": "172.18.0.9", + "wan_ipv4": "172.18.0.9" + }, + "NodeMeta": { + "consul-network-segment": "", + "consul-version": "1.20.1" + }, + "ServiceKind": "", + "ServiceID": "web-1", + "ServiceName": "web", + "ServiceTags": [], + "ServiceAddress": "", + "ServiceWeights": { + "Passing": 1, + "Warning": 1 + }, + "ServiceMeta": {}, + "ServicePort": 80, + "ServiceSocketPath": "", + "ServiceEnableTagOverride": false, + "ServiceProxy": { + "Mode": "", + "MeshGateway": {}, + "Expose": {} + }, + "ServiceConnect": {}, + "ServiceLocality": null, + "CreateIndex": 14347, + "ModifyIndex": 14347 + } +] +``` + + + + + + +You should see both instances of the web service in the in the Consul UI. + +![Consul UI services page showing two instances of the web service.](/img/discover/load-balancer/haproxy/consul-ui-services-two_web_instances.png#light-theme-only) +![Consul UI services page showing two instances of the web service.](/img/discover/load-balancer/haproxy/consul-ui-services-two_web_instances-dark.png#dark-theme-only) + +Click on the `web` service to get details on the available instances. + +![Consul UI service web page showing two instances.](/img/discover/load-balancer/haproxy/consul-ui-service-web-two_instances.png#light-theme-only) +![Consul UI service web page showing two instances.](/img/discover/load-balancer/haproxy/consul-ui-service-web-two_instances-dark.png#dark-theme-only) + + + + +Query the Consul DNS service for instances of the `web` service. + +```shell-session +$ dig @127.0.0.1 -p 8600 -t srv web.service.consul +short +1 1 80 web-app-1.node.dc1.consul. +1 1 80 web-app-0.node.dc1.consul. +``` + + + + +## Check load balancing + +The HAProxy monitoring UI also reflects the services, so you may verify that two +instances of the ten available are now up and running. + +![HAProxy monitoring UI with two service instances.](/img/discover/load-balancer/haproxy/haproxy-ui-two_instances.png#light-theme-only) +![HAProxy monitoring UI with two service instances.](/img/discover/load-balancer/haproxy/haproxy-ui-two_instances-dark.png#dark-theme-only) + +Browse to the IP address of your HAProxy load balancer and reload the page +several times. Because you registered two services in Consul and configured +HAProxy to use round robin load balancing, you should see the connection +toggling between both your available web servers. + +## Scale your backend servers + +HAProxy queries Consul's DNS interface every five seconds to check if +something changed within the requested service `web`. 
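You can run the same SRV query that HAProxy's `server-template` issues to see exactly what it receives from Consul. A minimal sketch, assuming the local Consul client's DNS interface on the default port 8600:

```shell-session
$ dig @127.0.0.1 -p 8600 -t srv _web._tcp.service.consul +short
```

With the two instances registered so far, the output lists two SRV records; after you register the third instance below, the same query should return three.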
+ +Scale your web service registering the instance running on your third node. + +Create a service definition in the third Consul client's configuration directory +(`/etc/consul.d/`). + + + + + +```hcl +service { + name = "web" + port = 80 + check { + args = ["curl", "localhost"] + interval = "3s" + } +} +``` + + + + + +```json +{ + "service": { + "Name": "web", + "Port": 80, + "check": { + "args": ["curl", "localhost"], + "interval": "3s" + } + } +} +``` + + + + + +Reload the client to apply the new service definition. + +```shell-session +$ consul reload +``` + + + +In production deployments you should automate the registration of new instances +of a service, so that Consul knows when new services in the datacenter start. + + + +Once you register the new web service, HAProxy detects the change and trigger an +update in its runtime configuration. After scaling your backend server from two +to three instances, the HAProxy statistics page reflects the resulting runtime +load balancer configuration. + +![HAProxy monitoring UI with three service instances.](/img/discover/load-balancer/haproxy/haproxy-ui-three_instances.png#light-theme-only) +![HAProxy monitoring UI with three service instances.](/img/discover/load-balancer/haproxy/haproxy-ui-three_instances-dark.png#dark-theme-only) + +Traffic should now be load balanced across all three available services. + +### Stop one backend service + +HAProxy's service discovery integration scales your backend configuration +automatically depending on available services. It only uses +healthy services for rendering the final configuration. + +You configured Consul to perform a basic `curl` health check in your +service definition. Consul notices if the `web` server instance is in an +unhealthy state. + +To simulate an error and see how Consul health checks are working, stop the web +server process on one of you backend servers. + +```shell-session +$ service stop +``` + +The state of this service instance is immediately unhealthy in the Consul UI, + since the `curl` health check results in an error because no service responds +to requests on `HTTP` port `80`. + +![Consul UI service web page showing three instances, one unhealthy.](/img/discover/load-balancer/haproxy/consul-ui-service-web-three_instances_one_failing.png#light-theme-only) +![Consul UI service web page showing three instances, one unhealthy.](/img/discover/load-balancer/haproxy/consul-ui-service-web-three_instances_one_failing-dark.png#dark-theme-only) + +Due to its regular check against Consul's DNS interface, your HAProxy instance +notices that a change in the health of one of the services has occurred and +adjusts its runtime load balancer configuration. + +Your HAProxy instance now only balances traffic between the remaining +healthy services. + +You can again check the resulting runtime load balancer configuration for your +HAProxy instance in the statistics page. + +![HAProxy monitoring UI with three service instances one unhealthy.](/img/discover/load-balancer/haproxy/haproxy-ui-three_instances_one_failed.png#light-theme-only) +![HAProxy monitoring UI with three service instances one unhealthy.](/img/discover/load-balancer/haproxy/haproxy-ui-three_instances_one_failed-dark.png#dark-theme-only) + +Note that the unhealthy instance is not in an error state from HAProxy's +perspective. It is in maintenance state (`MAINT`) as DNS resolution for this +specific host is not working anymore. 
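To confirm that Consul itself has stopped advertising the failed instance, query the DNS interface again. This sketch assumes the default DNS port; your node names will differ.

```shell-session
$ dig @127.0.0.1 -p 8600 -t srv web.service.consul +short
```

Only the healthy instances appear in the answer, which is why HAProxy marks the remaining slot as `MAINT` rather than failed.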
+ + + +Consul health checks can look at CPU utilization, RAM usage, and other metrics +of your web services that central load balancer can't monitor. Learn more about +Consul's health check feature in the +[health check tutorial](/consul/tutorials/connect-services/monitor-applications-health-checks). + + + +Restart the stopped `web` server instance. The Consul health check marks the +service as healthy and adds the service back into the load balancer backend +configuration to serve traffic. + +## Add DNS weights to the HAProxy configuration + +In addition to Consul's DNS interface for querying only healthy instances, +HAProxy's service discovery integration can also evaluate the DNS weight. In +this section, you adjust one of your services to give it a higher weight. +This enables HAProxy to adjust its runtime value for the backend server +weight, which results in sending more traffic to the respective backend +server. + +This approach is helpful if your servers are different sizes, or some +instances of a service are able to handle more requests than others. + +### Check the current weight + +First, check the assigned DNS weights in Consul using the Consul DNS interface. + +```shell-session +$ dig @127.0.0.1 -p 8600 -t srv web.service.consul +short +1 1 80 web-app-0.node.dc1.consul. +1 1 80 web-app-1.node.dc1.consul. +1 1 80 web-app-2.node.dc1.consul. +``` + +The DNS weight is the column before the port number (which is `80`). The DNS +weight is currently set to one for all of your healthy services, which is the +default that Consul uses. + +### Add a weight to one service + +HAProxy's service discovery integration queries Consul's DNS interface on a +regular, configurable basis to get updates about changes for a given service and +adjusts the runtime configuration of HAProxy automatically. Adjust your +HAProxy runtime configuration by configuring additional options in Consul like +DNS weights. + +HAProxy supports weights between one and two hundred fifty-six. Consul's +officially supported DNS weights do not match the supported HAProxy backend +server weights, so you need to convert the desired HAProxy weight to a DNS +weight configured in Consul's service definition with the following formula. + + + +```plaintext +([Desired weight in HAProxy] * 256) - 1 = (Service weight in Consul) +``` + + + +In this guide, you want to have a HAProxy weight of two, which results in the +following calculation. + + + +```plaintext +(2 * 256) - 1 = 511 +``` + + + +Adjust one of your service definitions by adding the `weights` option to the +`web` service registration. + + + + + +```hcl +service { + name = "web" + port = 80 + check { + args = ["curl", "localhost"] + interval = "3s" + } + weights { + passing = 511 + warning = 1 + } +} +``` + + + + + +```json +{ + "service": { + "Name": "web", + "Port": 80, + "check": { + "args": ["curl", "localhost"], + "interval": "3s" + }, + "weights": { + "passing": 511, + "warning": 1 + } + } +} +``` + + + + + +Reload the local Consul client to apply the new service definition. + +```shell-session +$ consul reload +``` + +### Check the new weight + +First, check Consul's DNS interface to make sure Consul now shows another weight +for the service instance you just configured. + +```shell-session +$ dig @127.0.0.1 -p 8600 -t srv web.service.consul +short +1 511 80 web-app-1.node.dc1.consul. +1 1 80 web-app-0.node.dc1.consul. +1 1 80 web-app-2.node.dc1.consul. +``` + +Second, you can check the HAProxy statistics page, `http://:1936`. 
+ +![HAProxy monitoring UI with three service instances one with different weight.](/img/discover/load-balancer/haproxy/haproxy-ui-three_instances_weight.png#light-theme-only) +![HAProxy monitoring UI with three service instances one with different weight.](/img/discover/load-balancer/haproxy/haproxy-ui-three_instances_weight-dark.png#dark-theme-only) + +Finally, check that traffic is load balanced unequally across the +three available service endpoints. Browse to the IP address of your HAProxy load +balancer and reload the page several times. One of your instances serves +twice as many requests as the others. + +## Next steps + +Refer to the [NGINX](/consul/docs/discover/load-balancer/nginx), +[F5](/consul/docs/discover/load-balancer/f5), and +[Envoy](/consul/docs/discover/load-balancer/envoy) guides for other load +balancing options. + + +[haproxy-docs]: https://docs.haproxy.org diff --git a/website/content/docs/discover/load-balancer/nginx.mdx b/website/content/docs/discover/load-balancer/nginx.mdx new file mode 100644 index 000000000000..cfcbc0a4b80e --- /dev/null +++ b/website/content/docs/discover/load-balancer/nginx.mdx @@ -0,0 +1,416 @@ +--- +layout: docs +page_title: Load Balancing with NGINX and Consul Template +description: >- + Use Consul template to update NGINX load balancer configurations based on changes in Consul service discovery. +--- + +# Load Balancing with NGINX and Consul Template + +This tutorial describes how to use Consul and Consul template to automatically +update an NGINX configuration file with the latest list of backend servers using +Consul's service discovery. + +While following this tutorial you will: + +- Register an example service with Consul +- Configure Consul template +- Create an NGINX load balancer configuration template +- Run Consul template +- Scale your servers +- Test Consul's health checks + +Once you have completed this tutorial you will end up with an architecture like the +one diagramed below. NGINX will get a list of healthy servers from Consul +service discovery via Consul template and will balance internet traffic to those +servers according to its own configuration. + +![NGINX and Consul template architecture](/img/consul-nginx-template-arch.png 'Consul template gets healthy services from Consul service discovery and uses them to populate NGINX configuration') + +## Prerequisites + +To complete this tutorial you will need: + +- A Consul cluster. We recommend three Consul server nodes + +- A minimum of two application servers registered to Consul service discovery + with a Consul client agent running on the node (We assume a standard web + server listening on HTTP port 80 in the following examples) + +- A node running NGINX + +- A Consul client agent on the NGINX node + +- [Consul-template](https://github.com/hashicorp/consul-template#installation) + on the NGINX node to keep the NGINX configuration file updated + + + + The content of this tutorial also applies to Consul clusters hosted on HashiCorp Cloud (HCP). + + + +## Register your web servers to Consul + +If you haven't already registered your web servers in Consul Service Registry +create a service definition for your web service in Consul's configuration +directory `/etc/consul.d/`. + +Create a service registration file for the `web` service with the following content. 
+ + + + + +```hcl +service { + name = "web" + port = 80 + check { + args = ["curl", "localhost"] + interval = "3s" + } +} +``` + + + + + +```json +{ + "service": { + "Name": "web", + "Port": 80, + "check": { + "args": ["curl", "localhost"], + "interval": "3s" + } + } +} +``` + + + + + +-> Since this service definition contains a basic "curl" health check for a web +server, `enable_local_script_checks` must be set to `true` in the configuration +of the Consul agent where the web server is running. + +Reload the local Consul agent to read the new service definition. + +```shell-session +$ consul reload +``` + +After registering the service it will appear in Consul's Service Registry. + +![Healthy web service](/img/consul-nginx-template-web.png 'The Consul UI displaying a healthy web service') + +After repeating the registration step for all your web server instances, all +instances will appear in the instances view of the "web" service. + +## Configure Consul template + +A Consul template configuration file will specify which input template to use, +which output file to generate, and which command will reload Consul template's +target application (NGINX in this tutorial) with its new configuration. + +Create a configuration file called `consul-template-config.hcl` with the +following content. + + + +```hcl +consul { + address = "localhost:8500" + + retry { + enabled = true + attempts = 12 + backoff = "250ms" + } +} +template { + source = "/etc/nginx/conf.d/load-balancer.conf.ctmpl" + destination = "/etc/nginx/conf.d/load-balancer.conf" + perms = 0600 + command = "service nginx reload" +} +``` + + + +The `consul` stanza tells consul-template, where to find the Consul API, which +is located on localhost, as we are running a Consul client agent on the same +node as our NGINX instance. + +The `template` stanza tells Consul template: + +- Where the `source` (input) template file will be located, in this case + `/etc/nginx/conf.d/load-balancer.conf.ctmpl` + +- Where the destination (output) file should be located, in this case + `/etc/nginx/conf.d/load-balancer.conf`. (This is a default path that NGINX uses + to read its configuration. You will either use it or `/usr/local/nginx/conf/` + depending on your NGINX distribution.) + +- Which `permissions` the destination file needs + +- Which command to run after rendering the destination file. In this case, + `service nginx reload` will trigger NGINX to reload its configuration + +For all available configuration options for Consul template, please see the +[GitHub repo](https://github.com/hashicorp/consul-template). + +## Create an input template + +Now you will create the basic NGINX Load Balancer configuration template which +Consul template will use to render the final `load-balancer.conf` for your NGINX +load balancer instance. + +Create a template file called `load-balancer.conf.ctmpl` in the location you +specified as a `source` (in this example, `/etc/nginx/conf.d/`) with the +following content: + + + +```go +upstream backend { +{{- range service "web" }} + server {{ .Address }}:{{ .Port }}; +{{- end }} +} + +server { + listen 80; + + location / { + proxy_pass http://backend; + } +} +``` + + + +Instead of explicitly putting your backend server IP addresses directly in the +load balancer configuration file, you will use Consul template's templating +language to specify some variables in this file, which will automatically fetch +the final values from Consul's Service Registry and render the final load +balancer configuration file. 
+ +Specifically, the below snippet from the above template file tells Consul +template to query Consul's Service Registry for all healthy nodes of the "web" +service in the current data center, and put the IP addresses and service ports +of those endpoints in the generated output configuration file. + +```go +{{- range service "web" }} + server {{ .Address }}:{{ .Port }}; +{{- end }} +``` + +For all available options on how to build template files for use with +Consul template, please see the [GitHub +repo](https://github.com/hashicorp/consul-template). + +### Clean up NGINX default sites config + +To make sure your NGINX instance will act as a load balancer and not as a web +server delete the following file if it exists. + +```nginx +/etc/nginx/sites-enabled/default +``` + +Then reload the NGINX service. + +```shell-session +$ service nginx reload +``` + +Now your NGINX load balancer should not have any configuration and you should +not see a web page when browsing to your NGINX IP address. + +## Run Consul template + +Start Consul template with the earlier generated configuration file. + +```shell-session +$ consul-template -config=consul-template-config.hcl +``` + +This will start Consul template running in the foreground until you stop the +process. It will automatically connect to Consul's API, render the NGINX +configuration for you and reload the NGINX service. + +Your NGINX load balancer should now serve traffic and perform simple round-robin +load balancing amongst all of your registered and healthy "web" server +instances. + +The resulting load balancer configuration located at +`/etc/nginx/conf.d/load-balancer.conf` should look like this: + + + +```nginx +upstream backend { + server 192.168.43.101:80; + server 192.168.43.102:80; +} + +server { + listen 80; + + location / { + proxy_pass http://backend; + } +} +``` + + + +Notice that Consul template filled the variables from the template file with +actual IP addresses and ports of your "web" servers. + +## Verify your implementation + +Now that everything is set up and running, test out your implementation by +watching what happens when you scale or stop your services. In both cases, +Consul template should keep NGINX's configuration up to date. + +### Scale your backend services + +Consul template uses a long-lived HTTP query (blocking query) against Consul's +API and will get an immediate notification about updates to the requested +service "web". + +As soon as you scale your "web" and the new instances register themselves to the +Consul Service Registry, Consul template will see the change in your service and +trigger a new generation of the configuration file. + +After scaling your backend server from two to three instances, the resulting +load balancer configuration for your NGINX instance located at +`/etc/nginx/conf.d/load-balancer.conf` should look like this: + + + +```nginx +upstream backend { + server 192.168.43.101:80; + server 192.168.43.102:80; + server 192.168.43.103:80; +} + +server { + listen 80; + + location / { + proxy_pass http://backend; + } +} +``` + + + +### Cause an error in a web service instance + +Not only will Consul template update your backend configuration automatically +depending on available service endpoints, but it will also only use healthy +endpoints when rendering the final configuration. + +You configured Consul to perform a basic curl-based health check in your service +definition, so Consul will notice if a "web" server instance is in an unhealthy +state. 
+ +To simulate an error and see how Consul health checks are working, stop one +instance of the web process. + +```shell-session +$ service nginx stop +``` + +You will see the state of this service instance as "Unhealthy" in the Consul UI +because no service on this node is responding to requests on HTTP port 80. + +![Unhealthy web service](/img/consul-nginx-template-web-unhealthy.png 'The Consul UI displaying an unhealthy web service') + +Because of its blocking query against Consul's API, Consul template will be +notified immediately that a change in the health of one of the service endpoints +occurred and will re-render a new Load Balancer configuration file excluding the +unhealthy service instance: + + + +```nginx +upstream backend { + server 192.168.43.101:80; + server 192.168.43.103:80; +} + +server { + listen 80; + + location / { + proxy_pass http://backend; + } +} +``` + + + +Your NGINX instance will now only balance traffic between the remaining healthy +service endpoints. + +As soon as you restart your stopped "web" server instance and Consul health +check marks the service endpoint as "Healthy" again, the process automatically +starts over and the instance will be included in the load balancer backend +configuration to serve traffic: + + + +```nginx +upstream backend { + server 192.168.43.101:80; + server 192.168.43.102:80; + server 192.168.43.103:80; +} + +server { + listen 80; + + location / { + proxy_pass http://backend; + } +} +``` + + + +-> Consul health checks can be much more sophisticated. They can check CPU or +RAM utilization or other service metrics, which you are not able to monitor from +a central Load Balancing instance. You can learn more about Consul's Health +Check feature [here](/consul/tutorials/developer-discovery/service-registration-health-checks). + +## Next steps + +In this tutorial you discovered how Consul template can generate the configuration +for your NGINX load balancer based on available and healthy service endpoints +registered in Consul's Service Registry. You learned how to scale up and down +services without manually reconfiguring your NGINX load balancer every time a +new service endpoint was started or deleted. + +You learned how Consul template uses blocking queries against Consul's HTTP API +to get immediate notifications about service changes and re-renders the required +configuration files automatically for you. + +This tutorial described how to use consul template to configure NGINX, but a +similar process would apply for other load balancers as well. To use another +load balancer you will need to replace the NGINX input template, output file, +and reload command, as well as any NGINX CLI commands. + +Learn more about Consul [service registration and +discovery](/consul/tutorials/get-started-vms/virtual-machine-gs-service-discovery) and +[Consul +template](/consul/tutorials/developer-configuration/consul-template). \ No newline at end of file diff --git a/website/content/docs/discover/service/dynamic.mdx b/website/content/docs/discover/service/dynamic.mdx new file mode 100644 index 000000000000..dc8c929bdc34 --- /dev/null +++ b/website/content/docs/discover/service/dynamic.mdx @@ -0,0 +1,114 @@ +--- +layout: docs +page_title: Perform dynamic DNS service lookups with prepared queries +description: >- + Learn how to dynamically query the Consul DNS using prepared queries, which enable robust service and node lookups. 
+--- + +# Perform dynamic service lookups with prepared queries + +This topic describes how to dynamically query the Consul catalog using prepared queries. Prepared queries are configurations that let you register a complex service query and execute it on demand. For information about how to perform standard node and service lookups, refer to [Perform static DNS queries](/consul/docs/discover/service/static). + +## Introduction + +Prepared queries provide a rich set of lookup features, such as filtering by multiple tags and automatically failing over to look for services in remote datacenters if no healthy nodes are available in the local datacenter. You can also create prepared query templates that match names using a prefix match, allowing a single template to apply to potentially many services. Refer to [Consul DNS overview](/consul/docs/discover/dns) for additional information about DNS query behaviors. + +## Requirements + +Consul 0.6.4 or later is required to create prepared query templates. + +### ACLs + +If ACLs are enabled, the querying service must present a token linked to permissions that enable read access for query, service, and node resources. Refer to the following documentation for information about creating policies to enable read access to the necessary resources: + +- [Prepared query rules](/consul/docs/secure/acl/rule#prepared-query-rules) +- [Service rules](/consul/docs/secure/acl/rule#service-rules) +- [Node rules](/consul/docs/secure/acl/rule#node-rules) + +## Create prepared queries + +Refer to the [prepared query reference](/consul/api-docs/query#create-prepared-query) for usage information. + +1. Specify the prepared query options in JSON format. The following prepared query targets all instances of the `redis` service in `dc1` and `dc2`: + + + + ```json + { + "Name": "my-query", + "Session": "adf4238a-882b-9ddc-4a9d-5b6758e4159e", + "Token": "", + "Service": { + "Service": "redis", + "Failover": { + "NearestN": 3, + "Datacenters": ["dc1", "dc2"] + }, + "Near": "node1", + "OnlyPassing": false, + "Tags": ["primary", "!experimental"], + "NodeMeta": { + "instance_type": "m3.large" + }, + "ServiceMeta": { + "environment": "production" + } + }, + "DNS": { + "TTL": "10s" + } + } + ``` + + + + Refer to the [prepared query configuration reference](/consul/api-docs/query#create-prepared-query) for information about all available options. + +1. Send the query in a POST request to the [`/query` API endpoint](/consul/api-docs/query). If the request is successful, Consul prints an ID for the prepared query. + + In the following example, the prepared query configuration is stored in the `payload.json` file: + + ```shell-session + $ curl --request POST --data @payload.json http://127.0.0.1:8500/v1/query + {"ID":"014af5ff-29e6-e972-dcf8-6ee602137127"}% + ``` + +1. To run the query, send a GET request to the endpoint and specify the ID returned from the POST call. + + ```shell-session + $ curl http://127.0.0.1:8500/v1/query/14af5ff-29e6-e972-dcf8-6ee602137127/execute\?near\=_agent + ``` + +## Execute prepared queries + +You can execute a prepared query using the standard lookup format or the strict RFC 2782 SRV lookup. + +### Standard lookup + +Use the following format to execute a prepared query using the standard lookup format: + +``` +.query[.]. +``` + +Refer [Standard lookup](/consul/docs/discover/service/static#standard-lookups) for additional information about the standard lookup format in Consul. 
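For example, once the `my-query` prepared query from the previous section exists, you can resolve it through any agent's DNS interface. This sketch assumes the default DNS port and the default `consul` domain:

```shell-session
$ dig @127.0.0.1 -p 8600 -t srv my-query.query.consul +short
```

Consul executes the stored query, including any failover rules it defines, and returns the matching service instances as SRV records.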
+ +### RFC 2782 SRV lookup + +Use the following format to execute a prepared query using the RFC 2782 lookup format: + +``` +_._tcp.query[.]. +``` + +For additional information about following the RFC 2782 SRV lookup format in Consul, refer to [RFC 2782 Lookup](/consul/docs/discover/service/static#rfc-2782-lookup). For general information about the RFC 2782 specification, refer to [A DNS RR for specifying the location of services \(DNS SRV\)](https://tools.ietf.org/html/rfc2782). + +### Lookup options + +The `datacenter` subdomain is optional. By default, the lookup queries the datacenter of this Consul agent. + +The `query name` or `id` subdomain is the name or ID of an existing prepared query. + +## Query results + +To allow for simple load balancing, Consul returns the set of nodes in random order for each query. Prepared queries support A and SRV records. SRV records provide the port that a service is registered. Consul only serves SRV records if the client specifically requests them. diff --git a/website/content/docs/discover/service/static.mdx b/website/content/docs/discover/service/static.mdx new file mode 100644 index 000000000000..55d1a31ed1fa --- /dev/null +++ b/website/content/docs/discover/service/static.mdx @@ -0,0 +1,410 @@ +--- +layout: docs +page_title: Perform static DNS queries +description: >- + Learn how to use standard Consul DNS lookup formats to enable service discovery for services and nodes. +--- + +# Perform static DNS queries + +This topic describes how to query the Consul DNS to look up nodes and services registered with Consul. Refer to [Perform dynamic DNS queries](/consul/docs/discover/service/dynamic) for information about using prepared queries. + +## Introduction + +Node lookups and service lookups are the fundamental types of queries you can perform using the Consul DNS. Node lookups query the catalog for named Consul agents. Service lookups query the catalog for services registered with Consul. Refer to [DNS Usage Overview](/consul/docs/discover/dns) for additional background information. + +## Requirements + +All versions of Consul support DNS lookup features. + +### ACLs + +If ACLs are enabled, you must present a token linked with the necessary policies. We recommend using a separate token in production deployments for querying the DNS. By default, Consul agents resolve DNS requests using the preconfigured tokens in order of precedence: + +1. The agent's [`default` token](/consul/docs/reference/agent/configuration-file/acl#acl_tokens_default) +1. The built-in [`anonymous` token](/consul/docs/secure/acl/token#built-in-tokens). + +The following table describes the available DNS lookups and required policies when ACLs are enabled: + +| Lookup | Type | Description | ACLs Required | +| --- | --- | --- | --- | +| `*.node.consul` | Node | Allows Consul to resolve DNS requests for the target node. Example: `.node.consul` | `node:read` | +| `*.service.consul`
<br/>`*.connect.consul`<br/>`*.ingress.consul`<br/>`*.virtual.consul` | Service: standard | Allows Consul to resolve DNS requests for target service instances running on ACL-authorized nodes. Example: `.service.consul` | `service:read`<br/>
    `node:read` | + +## Node lookups + +Specify the name of the node, datacenter, and domain using the following FQDN syntax: + +```text +.node[..dc]. +``` + +The `datacenter` subdomain is optional. By default, the lookup queries the datacenter of the agent. + +By default, the domain is `consul`. Refer to [Configure Consul DNS behavior](/consul/docs/discover/dns/configure) for information about using alternate domains. + +### Node lookup results + +Node lookups return A and AAAA records that contain the IP address and TXT records containing the `node_meta` values of the node. + +By default, TXT record values match the node's metadata key-value pairs according to [RFC1464](https://www.ietf.org/rfc/rfc1464.txt). If the metadata key starts with `rfc1035-`, the TXT record only includes the node's metadata value. + +The following example lookup queries the `foo` node in the `default` datacenter: + +```shell-session +$ dig @127.0.0.1 -p 8600 foo.node.consul ANY +; <<>> DiG 9.8.3-P1 <<>> @127.0.0.1 -p 8600 foo.node.consul ANY +; (1 server found) +;; global options: +cmd +;; Got answer: +;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 24355 +;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 0 +;; WARNING: recursion requested but not available + +;; QUESTION SECTION: +;foo.node.consul. IN ANY + +;; ANSWER SECTION: +foo.node.consul. 0 IN A 10.1.10.12 +foo.node.consul. 0 IN TXT "meta_key=meta_value" +foo.node.consul. 0 IN TXT "value only" + +;; AUTHORITY SECTION: +consul. 0 IN SOA ns.consul. postmaster.consul. 1392836399 3600 600 86400 0 +``` + +### Node lookups for Consul Enterprise + +Consul Enterprise includes the admin partition concept, which is an abstraction that lets you define isolated administrative network areas. Refer to [Admin partitions](/consul/docs/multi-tenant/admin-partition) for additional information. + +Consul nodes reside in admin partitions within a datacenter. By default, node lookups query the same partition and datacenter of the Consul agent that received the DNS query. + +Use the following query format to specify a partition for a node lookup: + +```text +.node[..ap][..dc]. +``` + +Consul server agents are in the `default` partition. If you send a DNS query to Consul server agents, you must explicitly specify the partition of the target node if it is not `default`. + +## Service lookups + +You can query the network for service providers using either the [standard lookup](#standard-lookup) method or [strict RFC 2782 lookup](#rfc-2782-lookup) method. + +By default, all SRV records are weighted equally in service lookup responses, but you can configure the weights using the [`Weights`](/consul/docs/reference/service#weights) attribute of the service definition. Refer to [Define Services](/consul/docs/register/service/vm/define) for additional information. + +The DNS protocol limits the size of requests, even when performing DNS TCP queries, which may affect your experience querying for services. For services with more than 500 instances, you may not be able to retrieve the complete list of instances for the service. Refer to [RFC 1035, Domain Names - Implementation and Specification](https://datatracker.ietf.org/doc/html/rfc1035#section-2.3.4) for additional information. + +Consul randomizes DNS SRV records and ignores weights specified in service configurations when printing responses. If records are truncated, each client using weighted SRV responses may have partial and inconsistent views of instance weights. 
As a result, the request distribution may be skewed from the intended weights. We recommend calling the [`/catalog/nodes` API endpoint](/consul/api-docs/catalog#list-nodes) to retrieve the complete list of nodes. You can apply query parameters to API calls to sort and filter the results. + +### Standard lookups + +To perform standard service lookups, specify tags, the name of the service, datacenter, cluster peer, and domain using the following syntax to query for service providers: + +```text +[.].service[..dc][..peer][..sg]. +``` + +- The `tag` subdomain is optional. It filters responses so that only service providers containing the tag appear. + +- The `datacenter` subdomain is optional. By default, Consul interrogates the querying agent's datacenter. + +- The `cluster-peer` name is optional, and specifies the [cluster peer](/consul/docs/east-west/cluster-peering) whose [exported services](/consul/docs/reference/config-entry/exported-services) should be the target of the query. + +- The `sameness-group` name is optional, and specifies the [sameness group](/consul/docs/multi-tenant/sameness-group/vm) that should be the target of the query. When Consul receives a DNS request for a service that is tied to a sameness group, it returns service instances from the first healthy member of the sameness group. If the local partition is a member of a sameness group, its service instances take precedence over the members of its sameness group. Optionally, you can include a namespace or admin partition when performing a lookup on a sameness group. Refer to [Service lookups for Consul Enterprise](#service-lookups-for-consul-enterprise) for more information. + +By default, the lookups query in the `consul` domain. Refer to [Configure Consul DNS behavior](/consul/docs/discover/dns/configure) for information about using alternate domains. + +#### Standard lookup results + +Standard services queries return A and SRV records. SRV records include the port that the service is registered on. SRV records are only served if the client specifically requests them. + +Services that fail their health check or that fail a node system check are omitted from the results. As a load balancing measure, Consul randomizes the set of nodes returned in the response. These mechanisms help you use DNS with application-level retries as the foundation for a self-healing service-oriented architecture. + +The following example retrieves the SRV records for any `redis` service registered in Consul. + +```shell-session +$ dig @127.0.0.1 -p 8600 consul.service.consul SRV + +; <<>> DiG 9.8.3-P1 <<>> @127.0.0.1 -p 8600 consul.service.consul ANY +; (1 server found) +;; global options: +cmd +;; Got answer: +;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 50483 +;; flags: qr aa rd; QUERY: 1, ANSWER: 3, AUTHORITY: 1, ADDITIONAL: 1 +;; WARNING: recursion requested but not available + +;; QUESTION SECTION: +;consul.service.consul. IN SRV + +;; ANSWER SECTION: +consul.service.consul. 0 IN SRV 1 1 8300 foobar.node.dc1.consul. + +;; ADDITIONAL SECTION: +foobar.node.dc1.consul. 
0 IN A 10.1.10.12 +``` + +The following example command and FQDN retrieves the SRV records for the primary Postgres service in the secondary datacenter: + +```shell-session hideClipboard +$ dig @127.0.0.1 -p 8600 primary.postgresql.service.dc2.consul SRV + +; <<>> DiG 9.8.3-P1 <<>> @127.0.0.1 -p 8600 primary.postgresql.service.dc2.consul ANY +; (1 server found) +;; global options: +cmd +;; Got answer: +;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 50483 +;; flags: qr aa rd; QUERY: 1, ANSWER: 3, AUTHORITY: 1, ADDITIONAL: 1 +;; WARNING: recursion requested but not available + +;; QUESTION SECTION: +;consul.service.consul. IN SRV + +;; ANSWER SECTION: +consul.service.consul. 0 IN SRV 1 1 5432 primary.postgresql.service.dc2.consul. + +;; ADDITIONAL SECTION: +primary.postgresql.service.dc2.consul. 0 IN A 10.1.10.12 +``` + +### RFC 2782 lookup + +Per [RFC 2782](https://tools.ietf.org/html/rfc2782), SRV queries must prepend `service` and `protocol` values with an underscore (`_`) to prevent DNS collisions. Use the following syntax to perform RFC 2782 lookups: + +```text +_._[.service][.]. +``` + +You can create lookups that filter results by placing service tags in the `protocol` field. Use the following syntax to create RFC 2782 lookups that filter results based on service tags: + +```text +_._[.service][.]. +``` + +The following example queries the `rabbitmq` service tagged with `amqp`, which returns an instance at `rabbitmq.node1.dc1.consul` on port `5672`: + +```shell-session +$ dig @127.0.0.1 -p 8600 _rabbitmq._amqp.service.consul SRV +; <<>> DiG 9.8.3-P1 <<>> @127.0.0.1 -p 8600 _rabbitmq._amqp.service.consul ANY +; (1 server found) +;; global options: +cmd +;; Got answer: +;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 52838 +;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1 +;; WARNING: recursion requested but not available + +;; QUESTION SECTION: +;_rabbitmq._amqp.service.consul. IN SRV + +;; ANSWER SECTION: +_rabbitmq._amqp.service.consul. 0 IN SRV 1 1 5672 rabbitmq.node1.dc1.consul. + +;; ADDITIONAL SECTION: +rabbitmq.node1.dc1.consul. 0 IN A 10.1.11.20 +``` + +You can also perform RFC 2782 lookups that target a specific [cluster peer](/consul/docs/east-west/cluster-peering) or datacenter by including `.dc` or `.peer` in the query labels: + +```text +_._[.service][..dc][..peer]. +``` + +The following example queries the `redis` service tagged with `tcp` for the cluster peer `phx1`, which returns two instances, one at `10.1.11.83:29081` and one at `10.1.11.86:29142`: + +```shell-session +$ dig @127.0.0.1 -p 8600 _redis._tcp.service.phx1.peer.consul SRV +; <<>> DiG 9.18.15 <<>> @127.0.0.1 -p 8600 _redis._tcp.service.phx1.peer.consul SRV +;; global options: +cmd +;; Got answer: +;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 40572 +;; flags: qr aa rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 2 + +;; OPT PSEUDOSECTION: +; EDNS: version: 0, flags:; udp: 1232 +;; QUESTION SECTION: +;_redis._tcp.service.phx1.peer.consul. IN SRV + +;; ANSWER SECTION: +_redis._tcp.service.phx1.peer.consul. 0 IN SRV 1 1 29081 0a000d53.addr.consul. +_redis._tcp.service.phx1.peer.consul. 0 IN SRV 1 1 29142 0a010d56.addr.consul. + +;; ADDITIONAL SECTION: +0a000d53.addr.consul. 0 IN A 10.1.11.83 +0a010d56.addr.consul. 
0 IN A 10.1.11.86 +``` + +#### SRV responses for hosts in the .addr subdomain + +If a service registered with Consul is configured with an explicit IP address or addresses in the [`address`](/consul/docs/reference/service#address) or [`tagged_address`](/consul/docs/reference/service#tagged_address) parameter, then Consul returns the hostname in the target field of the answer section for the DNS SRV query according to the following format: + +```text +.addr..consul. +``` + +In the following example, the `rabbitmq` service is registered with an explicit IPv4 address of `192.0.2.10`. + + + +```hcl +node_name = "node1" + +services { + name = "rabbitmq" + address = "192.0.2.10" + port = 5672 +} +``` + +```json +{ + "node_name": "node1", + "services": [ + { + "name": "rabbitmq", + "address": "192.0.2.10", + "port": 5672 + } + ] +} +``` + + + +The following example SRV query response contains a single record with a hostname written as a hexadecimal value: + +```shell-session +$ dig @127.0.0.1 -p 8600 -t srv _rabbitmq._tcp.service.consul +short +1 1 5672 c000020a.addr.dc1.consul. +``` + +You can convert hex octets to decimals to reveal the IP address. The following example command converts the hostname expressed as `c000020a` into the IPv4 address specified in the service registration. + +```shell-session +$ echo -n "c000020a" | perl -ne 'printf("%vd\n", pack("H*", $_))' +192.0.2.10 +``` + +In the following example, the `rabbitmq` service is registered with an explicit IPv6 address of `2001:db8:1:2:cafe::1337`. + + + +```hcl +node_name = "node1" + +services { + name = "rabbitmq" + address = "2001:db8:1:2:cafe::1337" + port = 5672 +} +``` + +```json +{ + "node_name": "node1", + "services": [ + { + "name": "rabbitmq", + "address": "2001:db8:1:2:cafe::1337", + "port": 5672 + } + ] +} +``` + + + +The following example SRV query response contains a single record with a hostname written as a hexadecimal value. + +```shell-session +$ dig @127.0.0.1 -p 8600 -t SRV _rabbitmq._tcp.service.consul +short +1 1 5672 20010db800010002cafe000000001337.addr.dc1.consul. +``` + +The response contains the fully-expanded IPv6 address with colon separators removed. The following command re-adds the colon separators to display the fully expanded IPv6 address that was specified in the service registration. + +```shell-session +$ echo -n "20010db800010002cafe000000001337" | perl -ne 'printf join(":", unpack("(A4)*", $_))."\n"' +2001:0db8:0001:0002:cafe:0000:0000:1337 +``` + +### Service lookups for Consul Enterprise + +You can perform the following types of service lookups to query for services in another namespace, partition, and datacenter: + +- `.service` +- `.connect` +- `.virtual` +- `.ingress` + +Use the following query format to specify namespace, partition, or datacenter: +``` +[.].service[..ns][..ap][..dc] +``` + +The `namespace`, `partition`, and `datacenter` are optional. By default, all service lookups use the `default` namespace within the partition and datacenter of the Consul agent that received the DNS query. + +Consul server agents reside in the `default` partition. If DNS queries are addressed to Consul server agents, you must explicitly specify the partition of the target service when querying for services in partitions other than `default`. + +To lookup services imported from a cluster peer, refer to [Service virtual IP lookups for Consul Enterprise](#service-virtual-ip-lookups-for-consul-enterprise). 
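As an illustration, the following lookup targets the `web` service in a namespace named `billing` within the `default` partition. The namespace and partition names are hypothetical examples rather than values defined elsewhere on this page, and the sketch assumes the default DNS port and domain:

```shell-session
$ dig @127.0.0.1 -p 8600 -t srv web.service.billing.ns.default.ap.consul +short
```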
+ +#### Alternative formats for specifying namespace + +Although we recommend using the format described in [Service lookups for Consul Enterprise](#service-lookups-for-consul-enterprise) for readability, you can use the alternate query format to specify namespaces but not partitions: + +``` +[<tag>.]<service>.service.<namespace>.<datacenter>.<domain> +``` + +### Service mesh-enabled service lookups + +Add the `.connect` subdomain to query for service mesh-enabled services: + +```text +<service>.connect.<domain> +``` + +This finds all service mesh-capable endpoints for the service. A service mesh-capable endpoint may be a proxy for a service or a natively integrated service mesh application. The DNS interface does not differentiate the two. + +Many services use a proxy that handles service discovery automatically. As a result, they may not use the DNS format, which is primarily for service mesh-native applications. +This endpoint only finds services within the same datacenter and does not support tags. Refer to the [`catalog` API endpoint](/consul/api-docs/catalog) for more complex behaviors. + +### Service virtual IP lookups + +Add the `.virtual` subdomain to queries to find the unique virtual IP allocated for a service: + +```text +<service>.virtual[.<peer>].<domain> +``` + +This returns the unique virtual IP for any service mesh-capable service. Each service mesh service has a virtual IP assigned to it by Consul. Sidecar proxies use the virtual IP to enable the [transparent proxy](/consul/docs/connect/transparent-proxy) feature. + +The peer name is optional. The DNS uses it to query for the virtual IP of a service imported from the specified peer. + +Consul adds virtual IPs to the [`tagged_addresses`](/consul/docs/reference/service#tagged_addresses) field in the service definition under the `consul-virtual` tag. + +#### Service virtual IP lookups for Consul Enterprise + +By default, a service virtual IP lookup checks the `default` namespace within the partition and datacenter of the Consul agent that received the DNS query. +To look up services imported from a partition in another cluster peered to the querying cluster or open-source datacenter, specify the namespace and peer name in the lookup: + +```text +<service>.virtual[.<namespace>].<peer>.<domain> +``` + +To look up services in a cluster peer that have not been imported, refer to [Service lookups for Consul Enterprise](#service-lookups-for-consul-enterprise). + +### Ingress service lookups + +Add the `.ingress` subdomain to your DNS FQDN to find ingress-enabled services: + +```text +<service>.ingress.<domain> +``` + +This finds all ingress gateway endpoints for the service. + +This endpoint finds services within the same datacenter and does not support tags. Refer to the [`catalog` API endpoint](/consul/api-docs/catalog) for more complex behaviors. + +### UDP-based DNS queries + +When the DNS query is performed using UDP, Consul truncates the results without setting the truncate bit. This prevents a redundant lookup over TCP that generates additional load. If the lookup is done over TCP, the results are not truncated. diff --git a/website/content/docs/discover/vm.mdx b/website/content/docs/discover/vm.mdx new file mode 100644 index 000000000000..ca4b92de1954 --- /dev/null +++ b/website/content/docs/discover/vm.mdx @@ -0,0 +1,44 @@ +--- +layout: docs +page_title: Discover services on virtual machines (VMs) +description: >- + This page provides an overview of the service discovery features and operations enabled on virtual machines by Consul DNS, including application load balancing, static lookups, and prepared queries.
+--- + +# Discover services on virtual machines (VMs) + +This page provides an overview of Consul service discovery operations on virtual machines. After you register services with Consul, you can use Consul DNS to perform application load balancing and static service lookups. You can also create prepared queries for dynamic service lookups and service failover. + +## Introduction + +When a service registers with Consul, the catalog records the address of each service's node. Consul then updates an instance's catalog entry with the results of each health check it performs. Consul agents replicate catalog information between each other using the [Raft consensus protocol](/consul/docs/concept/consensus), enabling high availability service networking through any Consul agent. + +Consul's service discovery operations use [Consul DNS addresses](/consul/docs/discover/dns) to route traffic to healthy service instances and return information about service nodes registered to Consul. + +## Application load balancing + +@include 'text/descriptions/load-balancer.mdx' + +## Static lookups + +@include 'text/descriptions/static-query.mdx' + +## Prepared queries + +@include 'text/descriptions/prepared-query.mdx' + +## Tutorials + +To get started with Consul's service discovery features on VMs, refer to the following tutorials: + +- [Register your services to Consul](/consul/tutorials/get-started-vms/virtual-machine-gs-service-discovery) + +## Reference documentation + +For reference material related to Consul's service discovery functions, refer to the following pages: + +- [Consul DNS reference](/consul/docs/reference/dns) + +## Constraints, limitations, and troubleshooting + +@include 'text/limitations/discover.mdx' diff --git a/website/content/docs/docker.mdx b/website/content/docs/docker.mdx new file mode 100644 index 000000000000..b6f44b4fda06 --- /dev/null +++ b/website/content/docs/docker.mdx @@ -0,0 +1,116 @@ +--- +layout: docs +page_title: Consul on Docker +description: >- + Learn about Consul's support when running on Docker containers +--- + +# Consul on Docker + +## Access containers + +You can access a containerized Consul datacenter in several different ways. + +#### Docker exec + +You can execute Consul commands directly inside of your Consul containers using `docker exec`. + +```shell-session +$ docker exec consul members +Node Address Status Type Build Protocol DC Partition Segment +server-1 172.17.0.2:8301 alive server 1.14.3 2 dc1 default +client-1 172.17.0.3:8301 alive client 1.14.3 2 dc1 default +``` + +#### Docker exec attach + +You can also issue commands inside of your container by opening an interactive shell and using the Consul binary included in the container. + +```shell-session +$ docker exec -it /bin/sh +# consul members +Node Address Status Type Build Protocol DC Partition Segment +server-1 172.17.0.2:8301 alive server 1.14.3 2 dc1 default +client-1 172.17.0.3:8301 alive client 1.14.3 2 dc1 default +``` + +#### Local Consul binary + +If you have a local Consul binary in your PATH, you can export the `CONSUL_HTTP_ADDR` environment variable to point to the HTTP address of a Consul server container. + +```shell-session +$ export CONSUL_HTTP_ADDR=:8500 +``` + +This approach runs the Consul binary on your local machine instead of the container, but still communicates with the Consul server running in the container. 
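For example, assuming the server container is named `server-1` and is attached to Docker's default bridge network (both assumptions for illustration), you can look up the container's IP address with `docker inspect` and point the environment variable at it:

```shell-session
$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' server-1
172.17.0.2

$ export CONSUL_HTTP_ADDR=172.17.0.2:8500
```

With the address exported, local `consul` commands such as the `consul members` example below communicate with the containerized server.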
+ +```shell-session +$ consul members +Node Address Status Type Build Protocol DC Partition Segment +server-1 172.17.0.2:8301 alive server 1.21.2 2 dc1 default +client-1 172.17.0.3:8301 alive client 1.21.2 2 dc1 default +``` + +## Reload configuration + +To do an in-memory reload, send a SIGHUP to the container. + +```shell-session +$ docker kill --signal=HUP +``` + +## Consul Autopilot + +As long as there are enough servers in the datacenter to maintain consensus, Consul's [autopilot feature](/consul/docs/manage/scale/autopilot) removes servers whose containers were stopped. + +Autopilot's settings are configurable, and we recommend the following configurations when running Consul on Docker. For more information, check the [Autopilot configuration reference](/consul/docs/reference/agent/configuration-file/general#autopilot). + +- `cleanup_dead_servers` should be set to true to make sure that a stopped container is removed from the datacenter. +- `last_contact_threshold` should be reasonably small, so that dead servers are removed quickly. +- `server_stabilization_time` should be sufficiently large (on the order of several seconds) so that unstable servers are not added to the datacenter until they stabilize. + +## Backup data with snapshots + +You can back-up your Consul datacenter using the [consul snapshot](/consul/commands/snapshot) command. + +```shell-session +$ docker exec consul snapshot save backup.snap +``` + +This will leave the `backup.snap` snapshot file inside of your container. If you are not saving your snapshot to a [persistent volume](https://docs.docker.com/storage/volumes/) then you will need to use `docker cp` to move your snapshot to a location outside of your container. + +```shell-session +$ docker cp :backup.snap ./ +``` + +To save backups automatically, use the Consul Enterprise [snapshot agent](/consul/commands/snapshot/agent). Consul Enterprise's snapshot agent also can save snapshots to Amazon S3 and Azure Blob Storage. + +## Environment variables + +The Consul Docker image supports passing multiple environment variables with the `-e` flag. + +- `CONSUL_CLIENT_INTERFACE` specifies the name of the interface on which Consul exposes DNS, gRPC, and HTTP APIs. +- `CONSUL_CLIENT_ADDRESS` specifies the address Consul binds to for client traffic, such as DNS, gRPC, and HTTP APIs. + +- `CONSUL_BIND_INTERFACE` specifies the interface Consul uses for internal Consul cluster communication. +- `CONSUL_BIND_ADDRESS` specifies the address Consul binds to for internal Consul cluster communication. + +- `CONSUL_DATA_DIR` specifies the directory where Consul stores data to persist across container restarts. The default value is `/consul/data`. + +- `CONSUL_ALLOW_PRIVILEGED_PORTS` is a boolean value. When set to `true`, [`setcap`](https://man7.org/linux/man-pages/man8/setcap.8.html) runs on the Consul binary to allow binding to [privileged ports](https://www.w3.org/Daemon/User/Installation/PrivilegedPorts.html). + +- `CONSUL_CONFIG_DIR` allows you to specify a container directory for additional configuration files that the Consul agent loads. The default location is the `/consul/config` directory. +- `CONSUL_LOCAL_CONFIG` passes a JSON string of configuration options to the Consul agent. + +The following is an example how to define a multi-line configuration using `CONSUL_LOCAL_CONFIG`. 
+ +```shell-session +$ docker run -d \ + -e CONSUL_LOCAL_CONFIG='{ + "datacenter":"us_west", + "server":true, + "enable_debug":true + }' \ + hashicorp/consul:latest \ + consul agent -server -bootstrap-expect=1 -data-dir=/consul/data +``` diff --git a/website/content/docs/dynamic-app-config/kv/index.mdx b/website/content/docs/dynamic-app-config/kv/index.mdx deleted file mode 100644 index 677037321d67..000000000000 --- a/website/content/docs/dynamic-app-config/kv/index.mdx +++ /dev/null @@ -1,120 +0,0 @@ ---- -layout: docs -page_title: Key/Value (KV) Store Overview -description: >- - Consul includes a KV store for indexed objects, configuration parameters, and metadata that you can use to dynamically configure apps. Learn about accessing and using the KV store to extend Consul's functionality through watches, sessions, and Consul Template. ---- - -# Key/Value (KV) Store Overview - - -The Consul KV API, CLI, and UI are now considered feature complete and no new feature development is planned for future releases. - - -Consul KV is a core feature of Consul and is installed with the Consul agent. -Once installed with the agent, it will have reasonable defaults. Consul KV allows -users to store indexed objects, though its main uses are storing configuration -parameters and metadata. Please note that it is a simple KV store and is not -intended to be a full featured datastore (such as DynamoDB) but has some -similarities to one. - -The Consul KV datastore is located on the servers, but can be accessed by any -agent (client or server). The natively integrated [RPC -functionality](/consul/docs/architecture) allows clients to forward -requests to servers, including key/value reads and writes. Part of Consul's -core design allows data to be replicated automatically across all the servers. -Having a quorum of servers will decrease the risk of data loss if an outage -occurs. - -If you have not used Consul KV, complete this [Getting Started -tutorial](/consul/tutorials/interactive/get-started-key-value-store?utm_source=docs) on HashiCorp. - -## Accessing the KV store - -The KV store can be accessed by the [consul kv CLI -subcommands](/consul/commands/kv), [HTTP API](/consul/api-docs/kv), and Consul UI. -To restrict access, enable and configure -[ACLs](/consul/tutorials/security/access-control-setup-production). -Once the ACL system has been bootstrapped, users and services, will need a -valid token with KV [privileges](/consul/docs/security/acl/acl-rules#key-value-rules) to -access the data store, this includes even reads. We recommend creating a -token with limited privileges, for example, you could create a token with write -privileges on one key for developers to update the value related to their -application. - -The datastore itself is located on the Consul servers in the [data -directory](/consul/docs/agent/config/cli-flags#_data_dir). To ensure data is not lost in -the event of a complete outage, use the [`consul snapshot`](/consul/commands/snapshot/restore) feature to backup the data. - -## Using Consul KV - -Objects are opaque to Consul, meaning there are no restrictions on the type of -object stored in a key/value entry. The main restriction on an object is size - -the maximum is 512 KB. Due to the maximum object size and main use cases, you -should not need extra storage; the general [sizing -recommendations](/consul/docs/agent/config/config-files#kv_max_value_size) -are usually sufficient. - -Keys, like objects are not restricted by type and can include any character. 
-However, we recommend using URL-safe chars - `[a-zA-Z0-9-._~]` with the -exception of `/`, which can be used to help organize data. Note, `/` will be -treated like any other character and is not fixed to the file system. Meaning, -including `/` in a key does not fix it to a directory structure. This model is -similar to Amazon S3 buckets. However, `/` is still useful for organizing data -and when recursively searching within the data store. We also recommend that -you avoid the use of `*`, `?`, `'`, and `%` because they can cause issues when -using the API and in shell scripts. - -## Using Sentinel to apply policies for Consul KV - - - -This feature requires -HashiCorp Cloud Platform (HCP) or self-managed Consul Enterprise. - - - -You can also use Sentinel as a Policy-as-code framework for defining advanced key-value storage access control policies. Sentinel policies extend the ACL system in Consul beyond static "read", "write", -and "deny" policies to support full conditional logic and integration with -external systems. Reference the [Sentinel documentation](https://docs.hashicorp.com/sentinel/concepts) for high-level Sentinel concepts. - -To get started with Sentinel in Consul, -refer to the [Sentinel documentation](https://docs.hashicorp.com/sentinel/consul) or -[Consul documentation](/consul/docs/agent/sentinel). - - -## Extending Consul KV - -### Consul Template - -If you plan to use Consul KV as part of your configuration management process -review the [Consul -Template](/consul/tutorials/developer-configuration/consul-template?utm_source=docs) -tutorial on how to update configuration based on value updates in the KV. Consul -Template is based on Go Templates and allows for a series of scripted actions -to be initiated on value changes to a Consul key. - -### Watches - -Consul KV can also be extended with the use of watches. -[Watches](/consul/docs/dynamic-app-config/watches) are a way to monitor data for updates. When -an update is detected, an external handler is invoked. To use watches with the -KV store the [key](/consul/docs/dynamic-app-config/watches#key) watch type should be used. - -### Consul Sessions - -Consul sessions can be used to build distributed locks with Consul KV. Sessions -act as a binding layer between nodes, health checks, and key/value data. The KV -API supports an `acquire` and `release` operation. The `acquire` operation acts -like a Check-And-Set operation. On success, there is a key update and an -increment to the `LockIndex` and the session value is updated to reflect the -session holding the lock. Review the session documentation for more information -on the [integration](/consul/docs/dynamic-app-config/sessions#k-v-integration). - -Review the following tutorials to learn how to use Consul sessions for [application leader election](/consul/docs/dynamic-app-config/sessions/application-leader-election) and -to [build distributed semaphores](/consul/tutorials/developer-configuration/distributed-semaphore). - -### Vault - -If you plan to use Consul KV as a backend for Vault, please review [this -tutorial](/vault/tutorials/day-one-consul/ha-with-consul?utm_source=docs). 
diff --git a/website/content/docs/dynamic-app-config/sessions/index.mdx b/website/content/docs/dynamic-app-config/sessions/index.mdx deleted file mode 100644 index 5bbe65e8f11e..000000000000 --- a/website/content/docs/dynamic-app-config/sessions/index.mdx +++ /dev/null @@ -1,144 +0,0 @@ ---- -layout: docs -page_title: Sessions and Distributed Locks Overview -description: >- - Consul supports sessions that you can use to build distributed locks with granular locking. Learn about sessions, how they can prevent ""split-brain"" systems by ensuring consistency in deployments, and how they can integrate with the key/value (KV) store. ---- - -# Sessions and Distributed Locks Overview - -Consul provides a session mechanism which can be used to build distributed locks. -Sessions act as a binding layer between nodes, health checks, and key/value data. -They are designed to provide granular locking and are heavily inspired by -[The Chubby Lock Service for Loosely-Coupled Distributed Systems](https://research.google/pubs/the-chubby-lock-service-for-loosely-coupled-distributed-systems/). - -## Session Design - -A session in Consul represents a contract that has very specific semantics. -When a session is constructed, a node name, a list of health checks, a behavior, -a TTL, and a `lock-delay` may be provided. The newly constructed session is provided with -a named ID that can be used to identify it. This ID can be used with the KV -store to acquire locks: advisory mechanisms for mutual exclusion. - -Below is a diagram showing the relationship between these components: - -![Consul Sessions](/img/consul-sessions.png) - -The contract that Consul provides is that under any of the following -situations, the session will be _invalidated_: - -- Node is deregistered -- Any of the health checks are deregistered -- Any of the health checks go to the critical state -- Session is explicitly destroyed -- TTL expires, if applicable - -When a session is invalidated, it is destroyed and can no longer -be used. What happens to the associated locks depends on the -behavior specified at creation time. Consul supports a `release` -and `delete` behavior. The `release` behavior is the default -if none is specified. - -If the `release` behavior is being used, any of the locks held in -association with the session are released, and the `ModifyIndex` of -the key is incremented. Alternatively, if the `delete` behavior is -used, the key corresponding to any of the held locks is simply deleted. -This can be used to create ephemeral entries that are automatically -deleted by Consul. - -While this is a simple design, it enables a multitude of usage -patterns. By default, the -[gossip based failure detector](/consul/docs/architecture/gossip) -is used as the associated health check. This failure detector allows -Consul to detect when a node that is holding a lock has failed and -to automatically release the lock. This ability provides **liveness** to -Consul locks; that is, under failure the system can continue to make -progress. However, because there is no perfect failure detector, it's possible -to have a false positive (failure detected) which causes the lock to -be released even though the lock owner is still alive. This means -we are sacrificing some **safety**. - -Conversely, it is possible to create a session with no associated -health checks. This removes the possibility of a false positive -and trades liveness for safety. You can be absolutely certain Consul -will not release the lock even if the existing owner has failed. 
-Since Consul APIs allow a session to be force destroyed, this allows -systems to be built that require an operator to intervene in the -case of a failure while precluding the possibility of a split-brain. - -A third health checking mechanism is session TTLs. When creating -a session, a TTL can be specified. If the TTL interval expires without -being renewed, the session has expired and an invalidation is triggered. -This type of failure detector is also known as a heartbeat failure detector. -It is less scalable than the gossip based failure detector as it places -an increased burden on the servers but may be applicable in some cases. -The contract of a TTL is that it represents a lower bound for invalidation; -that is, Consul will not expire the session before the TTL is reached, but it -is allowed to delay the expiration past the TTL. The TTL is renewed on -session creation, on session renew, and on leader failover. When a TTL -is being used, clients should be aware of clock skew issues: namely, -time may not progress at the same rate on the client as on the Consul servers. -It is best to set conservative TTL values and to renew in advance of the TTL -to account for network delay and time skew. - -The final nuance is that sessions may provide a `lock-delay`. This -is a time duration, between 0 and 60 seconds. When a session invalidation -takes place, Consul prevents any of the previously held locks from -being re-acquired for the `lock-delay` interval; this is a safeguard -inspired by Google's Chubby. The purpose of this delay is to allow -the potentially still live leader to detect the invalidation and stop -processing requests that may lead to inconsistent state. While not a -bulletproof method, it does avoid the need to introduce sleep states -into application logic and can help mitigate many issues. While the -default is to use a 15 second delay, clients are able to disable this -mechanism by providing a zero delay value. - -## K/V Integration - -Integration between the KV store and sessions is the primary -place where sessions are used. A session must be created prior to use -and is then referred to by its ID. - -The KV API is extended to support an `acquire` and `release` operation. -The `acquire` operation acts like a Check-And-Set operation except it -can only succeed if there is no existing lock holder (the current lock holder -can re-`acquire`, see below). On success, there is a normal key update, but -there is also an increment to the `LockIndex`, and the `Session` value is -updated to reflect the session holding the lock. - -If the lock is already held by the given session during an `acquire`, then -the `LockIndex` is not incremented but the key contents are updated. This -lets the current lock holder update the key contents without having to give -up the lock and reacquire it. - -Once held, the lock can be released using a corresponding `release` operation, -providing the same session. Again, this acts like a Check-And-Set operation -since the request will fail if given an invalid session. A critical note is -that the lock can be released without being the creator of the session. -This is by design as it allows operators to intervene and force-terminate -a session if necessary. As mentioned above, a session invalidation will also -cause all held locks to be released or deleted. When a lock is released, the `LockIndex` -does not change; however, the `Session` is cleared and the `ModifyIndex` increments. 
- -These semantics (heavily borrowed from Chubby), allow the tuple of (Key, LockIndex, Session) -to act as a unique "sequencer". This `sequencer` can be passed around and used -to verify if the request belongs to the current lock holder. Because the `LockIndex` -is incremented on each `acquire`, even if the same session re-acquires a lock, -the `sequencer` will be able to detect a stale request. Similarly, if a session is -invalided, the Session corresponding to the given `LockIndex` will be blank. - -To be clear, this locking system is purely _advisory_. There is no enforcement -that clients must acquire a lock to perform any operation. Any client can -read, write, and delete a key without owning the corresponding lock. It is not -the goal of Consul to protect against misbehaving clients. - -## Leader Election - -You can use the primitives provided by sessions and the locking mechanisms of the KV -store to build client-side leader election algorithms. -These are covered in more detail in the [Leader Election guide](/consul/docs/dynamic-app-config/sessions/application-leader-election). - -## Prepared Query Integration - -Prepared queries may be attached to a session in order to automatically delete -the prepared query when the session is invalidated. diff --git a/website/content/docs/dynamic-app-config/watches.mdx b/website/content/docs/dynamic-app-config/watches.mdx deleted file mode 100644 index 8eac5888628c..000000000000 --- a/website/content/docs/dynamic-app-config/watches.mdx +++ /dev/null @@ -1,693 +0,0 @@ ---- -layout: docs -page_title: Watches Overview and Reference -description: >- - Watches monitor the key/value (KV) store, services, nodes, health checks, and events for updates. When a watch detects a change, it invokes a handler that can call an HTTP endpoint or run an executable. Learn how to configure watches to dynamically respond to changes in Consul. ---- - -# Watches Overview and Reference - -Watches are a way of specifying a view of data (e.g. list of nodes, KV pairs, health -checks) which is monitored for updates. When an update is detected, an external handler -is invoked. A handler can be any executable or HTTP endpoint. As an example, you could watch the status -of health checks and notify an external system when a check is critical. - -Watches are implemented using blocking queries in the [HTTP API](/consul/api-docs). -Agents automatically make the proper API calls to watch for changes -and inform a handler when the data view has updated. - -Watches can be configured as part of the [agent's configuration](/consul/docs/agent/config/config-files#watches), -causing them to run once the agent is initialized. Reloading the agent configuration -allows for adding or removing watches dynamically. - -Alternatively, the [watch command](/consul/commands/watch) enables a watch to be -started outside of the agent. This can be used by an operator to inspect data in Consul -or to easily pipe data into processes without being tied to the agent lifecycle. - -In either case, the `type` of the watch must be specified. Each type of watch -supports different parameters, some required and some optional. These options are specified -in a JSON body when using agent configuration or as CLI flags for the watch command. - -## Handlers - -The watch configuration specifies the view of data to be monitored. -Once that view is updated, the specified handler is invoked. Handlers can be either an -executable or an HTTP endpoint. 
A handler receives JSON formatted data -with invocation info, following a format that depends on the type of the watch. -Each watch type documents the format type. Because they map directly to an HTTP -API, handlers should expect the input to match the format of the API. A Consul -index is also given, corresponding to the responses from the -[HTTP API](/consul/api-docs). - -### Executable - -An executable handler reads the JSON invocation info from stdin. Additionally, -the `CONSUL_INDEX` environment variable will be set to the Consul index. -Anything written to stdout is logged. - -Here is an example configuration, where `handler_type` is optionally set to -`script`: - - - - -```hcl -watches = [ - { - type = "key" - key = "foo/bar/baz" - handler_type = "script" - args = ["/usr/bin/my-service-handler.sh", "-redis"] - } -] -``` - - - - - -```json -{ - "watches": [ - { - "type": "key", - "key": "foo/bar/baz", - "handler_type": "script", - "args": ["/usr/bin/my-service-handler.sh", "-redis"] - } - ] -} -``` - - - - - -Prior to Consul 1.0, watches used a single `handler` field to define the command to run, and -would always run in a shell. In Consul 1.0, the `args` array was added so that handlers can be -run without a shell. The `handler` field is deprecated, and you should include the shell in -the `args` to run under a shell, eg. `"args": ["sh", "-c", "..."]`. - -### HTTP endpoint - -An HTTP handler sends an HTTP request when a watch is invoked. The JSON invocation info is sent -as a payload along the request. The response also contains the Consul index as a header named -`X-Consul-Index`. - -The HTTP handler can be configured by setting `handler_type` to `http`. Additional handler options -are set using `http_handler_config`. The only required parameter is the `path` field which specifies -the URL to the HTTP endpoint. Consul uses `POST` as the default HTTP method, but this is also configurable. -Other optional fields are `header`, `timeout` and `tls_skip_verify`. The watch invocation data is -always sent as a JSON payload. - -Here is an example configuration: - - - - -```hcl -watches = [ - { - type = "key" - key = "foo/bar/baz" - handler_type = "http" - http_handler_config { - path = "https://localhost:8000/watch" - method = "POST" - header = { - x-foo = ["bar", "baz"] - } - timeout = "10s" - tls_skip_verify = false - } - } -] -``` - - - - -```json -{ - "watches": [ - { - "type": "key", - "key": "foo/bar/baz", - "handler_type": "http", - "http_handler_config": { - "path": "https://localhost:8000/watch", - "method": "POST", - "header": { "x-foo": ["bar", "baz"] }, - "timeout": "10s", - "tls_skip_verify": false - } - } - ] -} -``` - - - - -## Global Parameters - -In addition to the parameters supported by each option type, there -are a few global parameters that all watches support: - -- `datacenter` - Can be provided to override the agent's default datacenter. -- `token` - Can be provided to override the agent's default ACL token. -- `args` - The handler subprocess and arguments to invoke when the data view updates. -- `handler` - The handler shell command to invoke when the data view updates. - -## Watch Types - -The following types are supported. 
Detailed documentation on each is below: - -- [`key`](#key) - Watch a specific KV pair -- [`keyprefix`](#keyprefix) - Watch a prefix in the KV store -- [`services`](#services) - Watch the list of available services -- [`nodes`](#nodes) - Watch the list of nodes -- [`service`](#service)- Watch the instances of a service -- [`checks`](#checks) - Watch the value of health checks -- [`event`](#event) - Watch for custom user events - -### Type: key ((#key)) - -The "key" watch type is used to watch a specific key in the KV store. -It requires that the `key` parameter be specified. - -This maps to the `/v1/kv/` API internally. - -Here is an example configuration: - - - -```hcl -{ - type = "key" - key = "foo/bar/baz" - args = ["/usr/bin/my-service-handler.sh", "-redis"] -} -``` - -```json -{ - "type": "key", - "key": "foo/bar/baz", - "args": ["/usr/bin/my-service-handler.sh", "-redis"] -} -``` - - - -Or, using the watch command: - -```shell-session -$ consul watch -type=key -key=foo/bar/baz /usr/bin/my-key-handler.sh -``` - -An example of the output of this command: - -```json -{ - "Key": "foo/bar/baz", - "CreateIndex": 1793, - "ModifyIndex": 1793, - "LockIndex": 0, - "Flags": 0, - "Value": "aGV5", - "Session": "" -} -``` - -### Type: keyprefix ((#keyprefix)) - -The `keyprefix` watch type is used to watch a prefix of keys in the KV store. -It requires that the `prefix` parameter be specified. This watch -returns _all_ keys matching the prefix whenever _any_ key matching the prefix -changes. - -This maps to the `/v1/kv/` API internally. - -Here is an example configuration: - - - -```hcl -{ - type = "keyprefix" - prefix = "foo/" - args = ["/usr/bin/my-prefix-handler.sh", "-redis"] -} -``` - -```json -{ - "type": "keyprefix", - "prefix": "foo/", - "args": ["/usr/bin/my-prefix-handler.sh", "-redis"] -} -``` - - - -Or, using the watch command: - -```shell-session -$ consul watch -type=keyprefix -prefix=foo/ /usr/bin/my-prefix-handler.sh -``` - -An example of the output of this command: - -```json -[ - { - "Key": "foo/bar", - "CreateIndex": 1796, - "ModifyIndex": 1796, - "LockIndex": 0, - "Flags": 0, - "Value": "TU9BUg==", - "Session": "" - }, - { - "Key": "foo/baz", - "CreateIndex": 1795, - "ModifyIndex": 1795, - "LockIndex": 0, - "Flags": 0, - "Value": "YXNkZg==", - "Session": "" - }, - { - "Key": "foo/test", - "CreateIndex": 1793, - "ModifyIndex": 1793, - "LockIndex": 0, - "Flags": 0, - "Value": "aGV5", - "Session": "" - } -] -``` - -### Type: services ((#services)) - -The "services" watch type is used to watch the list of available -services. It has no parameters. - -This maps to the `/v1/catalog/services` API internally. - -Below is an example configuration: - - - -```hcl -{ - type = "services" - args = ["/usr/bin/my-services-handler.sh"] -} -``` - -```json -{ - "type": "services", - "args": ["/usr/bin/my-services-handler.sh"] -} -``` - - - -Or, using the watch command: - -```shell-session -$ consul watch -type=services /usr/bin/my-services-handler.sh -``` - -An example of the output of this command: - -```json -{ - "consul": [], - "redis": [], - "web": [] -} -``` - -### Type: nodes ((#nodes)) - -The "nodes" watch type is used to watch the list of available -nodes. It has no parameters. - -This maps to the `/v1/catalog/nodes` API internally. 
- -Below is an example configuration: - - - -```hcl -{ - type = "nodes" - args = ["/usr/bin/my-nodes-handler.sh"] -} -``` - -```json -{ - "type": "nodes", - "args": ["/usr/bin/my-nodes-handler.sh"] -} -``` - - - -Or, using the watch command: - -```shell-session -$ consul watch -type=nodes /usr/bin/my-nodes-handler.sh -``` - -An example of the output of this command: - -```json -[ - { - "ID": "8d3088b5-ce7d-0b94-f185-ae70c3445642", - "Node": "nyc1-consul-1", - "Address": "192.0.2.10", - "Datacenter": "dc1", - "TaggedAddresses": null, - "Meta": null, - "CreateIndex": 23792324, - "ModifyIndex": 23792324 - }, - { - "ID": "1edb564e-65ee-9e60-5e8a-83eae4637357", - "Node": "nyc1-worker-1", - "Address": "192.0.2.20", - "Datacenter": "dc1", - "TaggedAddresses": { - "lan": "192.0.2.20", - "lan_ipv4": "192.0.2.20", - "wan": "192.0.2.20", - "wan_ipv4": "192.0.2.20" - }, - "Meta": { - "consul-network-segment": "", - "host-ip": "192.0.2.20", - "pod-name": "hashicorp-consul-q7nth" - }, - "CreateIndex": 23792336, - "ModifyIndex": 23792338 - } -] -``` - -### Type: service ((#service)) - -The "service" watch type is used to monitor the providers -of a single service. It requires the `service` parameter -and optionally takes the parameters `tag` and -`passingonly`. The `tag` parameter will filter by one or more tags. -It may be either a single string value or a slice of strings. -The `passingonly` parameter is a boolean that will filter to only the -instances passing all health checks. - -This maps to the `/v1/health/service` API internally. - -Here is an example configuration with a single tag: - - - -```hcl -{ - type = "service" - service = "redis" - args = ["/usr/bin/my-service-handler.sh", "-redis"] - tag = "bar" -} -``` - -```json -{ - "type": "service", - "service": "redis", - "args": ["/usr/bin/my-service-handler.sh", "-redis"], - "tag": "bar" -} -``` - - - -Here is an example configuration with multiple tags: - - - -```hcl -{ - type = "service" - service = "redis" - args = ["/usr/bin/my-service-handler.sh", "-redis"] - tag = ["bar", "foo"] -} -``` - -```json -{ - "type": "service", - "service": "redis", - "args": ["/usr/bin/my-service-handler.sh", "-redis"], - "tag": ["bar", "foo"] -} -``` - - - -Or, using the watch command: - -Single tag: - -```shell-session -$ consul watch -type=service -service=redis -tag=bar /usr/bin/my-service-handler.sh -``` - -Multiple tags: - -```shell-session -$ consul watch -type=service -service=redis -tag=bar -tag=foo /usr/bin/my-service-handler.sh -``` - -An example of the output of this command: - -```json -[ - { - "Node": { - "ID": "f013522f-aaa2-8fc6-c8ac-c84cb8a56405", - "Node": "hashicorp-consul-server-1", - "Address": "192.0.2.50", - "Datacenter": "dc1", - "TaggedAddresses": null, - "Meta": null, - "CreateIndex": 23785783, - "ModifyIndex": 23785783 - }, - "Service": { - "ID": "redis", - "Service": "redis", - "Tags": [], - "Meta": null, - "Port": 6379, - "Address": "", - "Weights": { - "Passing": 1, - "Warning": 1 - }, - "EnableTagOverride": false, - "CreateIndex": 23785794, - "ModifyIndex": 23785794, - "Proxy": { - "MeshGateway": {}, - "Expose": {} - }, - "Connect": {} - }, - "Checks": [ - { - "Node": "hashicorp-consul-server-1", - "CheckID": "serfHealth", - "Name": "Serf Health Status", - "Status": "passing", - "Notes": "", - "Output": "Agent alive and reachable", - "ServiceID": "", - "ServiceName": "", - "ServiceTags": [], - "Type": "", - "Definition": { - "Interval": "0s", - "Timeout": "0s", - "DeregisterCriticalServiceAfter": "0s", - "HTTP": "", - "Header": 
null, - "Method": "", - "Body": "", - "TLSServerName": "", - "TLSSkipVerify": false, - "TCP": "", - "TCPUseTLS": false, - "GRPC": "", - "GRPCUseTLS": false - }, - "CreateIndex": 23785783, - "ModifyIndex": 23791503 - } - ] - } -] -``` - -### Type: checks ((#checks)) - -The "checks" watch type is used to monitor the checks of a given -service or those in a specific state. It optionally takes the `service` -parameter to filter to a specific service or the `state` parameter to -filter to a specific state. By default, it will watch all checks. - -This maps to the `/v1/health/state/` API if monitoring by state -or `/v1/health/checks/` if monitoring by service. - -Here is an example configuration for monitoring by state: - - - -```hcl -{ - type = "checks" - state = "passing" - args = ["/usr/bin/my-check-handler.sh", "-passing"] -} -``` - -```json -{ - "type": "checks", - "state": "passing", - "args": ["/usr/bin/my-check-handler.sh", "-passing"] -} -``` - - - -Here is an example configuration for monitoring by service: - - - -```hcl -{ - type = "checks" - service = "redis" - args = ["/usr/bin/my-check-handler.sh", "-redis"] -} -``` - -```json -{ - "type": "checks", - "service": "redis", - "args": ["/usr/bin/my-check-handler.sh", "-redis"] -} -``` - - - -Or, using the watch command: - -State: - -```shell-session -$ consul watch -type=checks -state=passing /usr/bin/my-check-handler.sh -passing -``` - -Service: - -```shell-session -$ consul watch -type=checks -service=redis /usr/bin/my-check-handler.sh -redis -``` - -An example of the output of this command: - -```json -[ - { - "Node": "foobar", - "CheckID": "service:redis", - "Name": "Service 'redis' check", - "Status": "passing", - "Notes": "", - "Output": "", - "ServiceID": "redis", - "ServiceName": "redis" - } -] -``` - -### Type: event ((#event)) - -The "event" watch type is used to monitor for custom user -events. These are fired using the [consul event](/consul/commands/event) command. -It takes only a single optional `name` parameter which restricts -the watch to only events with the given name. - -This maps to the `/v1/event/list` API internally. - -Here is an example configuration: - - - -```hcl -{ - type = "event" - name = "web-deploy" - args = ["/usr/bin/my-event-handler.sh", "-web-deploy"] -} -``` - -```json -{ - "type": "event", - "name": "web-deploy", - "args": ["/usr/bin/my-event-handler.sh", "-web-deploy"] -} -``` - - - -Or, using the watch command: - -```shell-session -$ consul watch -type=event -name=web-deploy /usr/bin/my-event-handler.sh -web-deploy -``` - -An example of the output of this command: - -```json -[ - { - "ID": "f07f3fcc-4b7d-3a7c-6d1e-cf414039fcee", - "Name": "web-deploy", - "Payload": "MTYwOTAzMA==", - "NodeFilter": "", - "ServiceFilter": "", - "TagFilter": "", - "Version": 1, - "LTime": 18 - } -] -``` - -To fire a new `web-deploy` event the following could be used: - -```shell-session -$ consul event -name=web-deploy 1609030 -``` diff --git a/website/content/docs/east-west/cluster-peering/establish/k8s.mdx b/website/content/docs/east-west/cluster-peering/establish/k8s.mdx new file mode 100644 index 000000000000..ae79587a35fc --- /dev/null +++ b/website/content/docs/east-west/cluster-peering/establish/k8s.mdx @@ -0,0 +1,442 @@ +--- +layout: docs +page_title: Establish Cluster Peering Connections on Kubernetes +description: >- + To establish a cluster peering connection on Kubernetes, generate a peering token to establish communication. Then export services and authorize requests with service intentions. 
+--- + +# Establish cluster peering connections on Kubernetes + +This page details the process for establishing a cluster peering connection between services in a Consul on Kubernetes deployment. + +The overall process for establishing a cluster peering connection consists of the following steps: + +1. Create a peering token in one cluster. +1. Use the peering token to establish peering with a second cluster. +1. Export services between clusters. +1. Create intentions to authorize services for peers. + +Cluster peering between services cannot be established until all four steps are complete. + +Cluster peering between services cannot be established until all four steps are complete. If you want to establish cluster peering connections and create sameness groups at the same time, refer to the guidance in [create sameness groups](/consul/docs/multi-tenant/sameness-group/k8s). + +For general guidance for establishing cluster peering connections, refer to [Establish cluster peering connections](/consul/docs/east-west/cluster-peering/establish/vm). + +## Prerequisites + +You must meet the following requirements to use Consul's cluster peering features with Kubernetes: + +- Consul v1.14.1 or higher +- Consul on Kubernetes v1.0.0 or higher +- At least two Kubernetes clusters + +In Consul on Kubernetes, peers identify each other using the `metadata.name` values you establish when creating the `PeeringAcceptor` and `PeeringDialer` CRDs. For additional information about requirements for cluster peering on Kubernetes deployments, refer to [Cluster peering on Kubernetes technical specifications](/consul/docs/east-west/cluster-peering/tech-specs/k8s). + +### Assign cluster IDs to environmental variables + +After you provision a Kubernetes cluster and set up your kubeconfig file to manage access to multiple Kubernetes clusters, you can assign your clusters to environmental variables for future use. + +1. Get the context names for your Kubernetes clusters using one of these methods: + + - Run the `kubectl config current-context` command to get the context for the cluster you are currently in. + - Run the `kubectl config get-contexts` command to get all configured contexts in your kubeconfig file. + +1. Use the `kubectl` command to export the Kubernetes context names and then set them to variables. For more information on how to use kubeconfig and contexts, refer to the [Kubernetes docs on configuring access to multiple clusters](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/). + + ```shell-session + $ export CLUSTER1_CONTEXT= + $ export CLUSTER2_CONTEXT= + ``` + +### Install Consul using Helm and configure peering over mesh gateways + +To use cluster peering with Consul on Kubernetes deployments, update the Helm chart with [the required values](/consul/docs/k8s/connect/cluster-peering/tech-specs#helm-requirements). After updating the Helm chart, you can use the `consul-k8s` CLI to apply `values.yaml` to each cluster. + +1. In `cluster-01`, run the following commands: + + ```shell-session + $ export HELM_RELEASE_NAME1=cluster-01 + ``` + + ```shell-session + $ helm install ${HELM_RELEASE_NAME1} hashicorp/consul --create-namespace --namespace consul --version "1.2.0" --values values.yaml --set global.datacenter=dc1 --kube-context $CLUSTER1_CONTEXT + ``` + +1. 
In `cluster-02`, run the following commands: + + ```shell-session + $ export HELM_RELEASE_NAME2=cluster-02 + ``` + + ```shell-session + $ helm install ${HELM_RELEASE_NAME2} hashicorp/consul --create-namespace --namespace consul --version "1.2.0" --values values.yaml --set global.datacenter=dc2 --kube-context $CLUSTER2_CONTEXT + ``` + +1. For both clusters apply the `Mesh` configuration entry values provided in [Mesh Gateway Specifications](/consul/docs/k8s/connect/cluster-peering/tech-specs#mesh-gateway-specifications) to allow establishing peering connections over mesh gateways. + +### Configure the mesh gateway mode for traffic between services + +In Kubernetes deployments, you can configure mesh gateways to use `local` mode so that a service dialing a service in a remote peer dials the local mesh gateway instead of the remote mesh gateway. To configure the mesh gateway mode so that this traffic always leaves through the local mesh gateway, you can use the `ProxyDefaults` CRD. + +1. In `cluster-01` apply the following `ProxyDefaults` CRD to configure the mesh gateway mode. + + + + ```yaml + apiVersion: consul.hashicorp.com/v1alpha1 + kind: ProxyDefaults + metadata: + name: global + spec: + meshGateway: + mode: local + ``` + + + + ```shell-session + $ kubectl --context $CLUSTER1_CONTEXT apply -f proxy-defaults.yaml + ``` + +1. In `cluster-02` apply the following `ProxyDefaults` CRD to configure the mesh gateway mode. + + + + ```yaml + apiVersion: consul.hashicorp.com/v1alpha1 + kind: ProxyDefaults + metadata: + name: global + spec: + meshGateway: + mode: local + ``` + + + + ```shell-session + $ kubectl --context $CLUSTER2_CONTEXT apply -f proxy-defaults.yaml + ``` + +## Create a peering token + +To begin the cluster peering process, generate a peering token in one of your clusters. The other cluster uses this token to establish the peering connection. + +Every time you generate a peering token, a single-use secret for establishing the secret is embedded in the token. Because regenerating a peering token invalidates the previously generated secret, you must use the most recently created token to establish peering connections. + +1. In `cluster-01`, create the `PeeringAcceptor` custom resource. To ensure cluster peering connections are secure, the `metadata.name` field cannot be duplicated. Refer to the peer by a specific name. + + + + ```yaml + apiVersion: consul.hashicorp.com/v1alpha1 + kind: PeeringAcceptor + metadata: + name: cluster-02 ## The name of the peer you want to connect to + spec: + peer: + secret: + name: "peering-token" + key: "data" + backend: "kubernetes" + ``` + + + +1. Apply the `PeeringAcceptor` resource to the first cluster. + + ```shell-session + $ kubectl --context $CLUSTER1_CONTEXT apply --filename acceptor.yaml + ``` + +1. Save your peering token so that you can export it to the other cluster. + + ```shell-session + $ kubectl --context $CLUSTER1_CONTEXT get secret peering-token --output yaml > peering-token.yaml + ``` + +## Establish a connection between clusters + +Next, use the peering token to establish a secure connection between the clusters. + +1. Apply the peering token to the second cluster. + + ```shell-session + $ kubectl --context $CLUSTER2_CONTEXT apply --filename peering-token.yaml + ``` + +1. In `cluster-02`, create the `PeeringDialer` custom resource. To ensure cluster peering connections are secure, the `metadata.name` field cannot be duplicated. Refer to the peer by a specific name. 
+ + + + ```yaml + apiVersion: consul.hashicorp.com/v1alpha1 + kind: PeeringDialer + metadata: + name: cluster-01 ## The name of the peer you want to connect to + spec: + peer: + secret: + name: "peering-token" + key: "data" + backend: "kubernetes" + ``` + + + +1. Apply the `PeeringDialer` resource to the second cluster. + + ```shell-session + $ kubectl --context $CLUSTER2_CONTEXT apply --filename dialer.yaml + ``` + +## Export services between clusters + +After you establish a connection between the clusters, you need to create an `exported-services` CRD that defines the services that are available to another admin partition. + +While the CRD can target admin partitions either locally or remotely, clusters peering always exports services to remote admin partitions. Refer to [exported service consumers](/consul/docs/reference/config-entry/exported-services#consumers-1) for more information. + + +1. For the service in `cluster-02` that you want to export, add the `"consul.hashicorp.com/connect-inject": "true"` annotation to your service's pods prior to deploying. The annotation allows the workload to join the mesh. It is highlighted in the following example: + + + + ```yaml + # Service to expose backend + apiVersion: v1 + kind: Service + metadata: + name: backend + spec: + selector: + app: backend + ports: + - name: http + protocol: TCP + port: 80 + targetPort: 9090 + --- + apiVersion: v1 + kind: ServiceAccount + metadata: + name: backend + --- + # Deployment for backend + apiVersion: apps/v1 + kind: Deployment + metadata: + name: backend + labels: + app: backend + spec: + replicas: 1 + selector: + matchLabels: + app: backend + template: + metadata: + labels: + app: backend + annotations: + "consul.hashicorp.com/connect-inject": "true" + spec: + serviceAccountName: backend + containers: + - name: backend + image: nicholasjackson/fake-service:v0.22.4 + ports: + - containerPort: 9090 + env: + - name: "LISTEN_ADDR" + value: "0.0.0.0:9090" + - name: "NAME" + value: "backend" + - name: "MESSAGE" + value: "Response from backend" + ``` + + + +1. Deploy the `backend` service to the second cluster. + + ```shell-session + $ kubectl --context $CLUSTER2_CONTEXT apply --filename backend.yaml + ``` + +1. In `cluster-02`, create an `ExportedServices` custom resource. The name of the peer that consumes the service should be identical to the name set in the `PeeringDialer` CRD. + + + + ```yaml + apiVersion: consul.hashicorp.com/v1alpha1 + kind: ExportedServices + metadata: + name: default ## The name of the partition containing the service + spec: + services: + - name: backend ## The name of the service you want to export + consumers: + - peer: cluster-01 ## The name of the peer that receives the service + ``` + + + +1. Apply the `ExportedServices` resource to the second cluster. + + ```shell-session + $ kubectl --context $CLUSTER2_CONTEXT apply --filename exported-service.yaml + ``` + +## Authorize services for peers + +Before you can call services from peered clusters, you must set service intentions that authorize those clusters to use specific services. Consul prevents services from being exported to unauthorized clusters. + +1. Create service intentions for the second cluster. The name of the peer should match the name set in the `PeeringDialer` CRD. 
+ + + + ```yaml + apiVersion: consul.hashicorp.com/v1alpha1 + kind: ServiceIntentions + metadata: + name: backend-deny + spec: + destination: + name: backend + sources: + - name: "*" + action: deny + - name: frontend + action: allow + peer: cluster-01 ## The peer of the source service + ``` + + + +1. Apply the intentions to the second cluster. + + + + ```shell-session + $ kubectl --context $CLUSTER2_CONTEXT apply --filename intention.yaml + ``` + + + +1. Add the `"consul.hashicorp.com/connect-inject": "true"` annotation to your service's pods before deploying the workload so that the services in `cluster-01` can dial `backend` in `cluster-02`. To dial the upstream service from an application, configure the application so that that requests are sent to the correct DNS name as specified in [Service Virtual IP Lookups](/consul/docs/services/discovery/dns-static-lookups#service-virtual-ip-lookups). In the following example, the annotation that allows the workload to join the mesh and the configuration provided to the workload that enables the workload to dial the upstream service using the correct DNS name is highlighted. [Service Virtual IP Lookups for Consul Enterprise](/consul/docs/services/discovery/dns-static-lookups#service-virtual-ip-lookups-for-consul-enterprise) details how you would similarly format a DNS name including partitions and namespaces. + + + + ```yaml + # Service to expose frontend + apiVersion: v1 + kind: Service + metadata: + name: frontend + spec: + selector: + app: frontend + ports: + - name: http + protocol: TCP + port: 9090 + targetPort: 9090 + --- + apiVersion: v1 + kind: ServiceAccount + metadata: + name: frontend + --- + apiVersion: apps/v1 + kind: Deployment + metadata: + name: frontend + labels: + app: frontend + spec: + replicas: 1 + selector: + matchLabels: + app: frontend + template: + metadata: + labels: + app: frontend + annotations: + "consul.hashicorp.com/connect-inject": "true" + spec: + serviceAccountName: frontend + containers: + - name: frontend + image: nicholasjackson/fake-service:v0.22.4 + securityContext: + capabilities: + add: ["NET_ADMIN"] + ports: + - containerPort: 9090 + env: + - name: "LISTEN_ADDR" + value: "0.0.0.0:9090" + - name: "UPSTREAM_URIS" + value: "http://backend.virtual.cluster-02.consul" + - name: "NAME" + value: "frontend" + - name: "MESSAGE" + value: "Hello World" + - name: "HTTP_CLIENT_KEEP_ALIVES" + value: "false" + ``` + + + +1. Apply the service file to the first cluster. + + ```shell-session + $ kubectl --context $CLUSTER1_CONTEXT apply --filename frontend.yaml + ``` + +1. Run the following command in `frontend` and then check the output to confirm that you peered your clusters successfully. 
+ + ```shell-session + $ kubectl --context $CLUSTER1_CONTEXT exec -it $(kubectl --context $CLUSTER1_CONTEXT get pod -l app=frontend -o name) -- curl localhost:9090 + ``` + + + + ```json + { + "name": "frontend", + "uri": "/", + "type": "HTTP", + "ip_addresses": [ + "10.16.2.11" + ], + "start_time": "2022-08-26T23:40:01.167199", + "end_time": "2022-08-26T23:40:01.226951", + "duration": "59.752279ms", + "body": "Hello World", + "upstream_calls": { + "http://backend.virtual.cluster-02.consul": { + "name": "backend", + "uri": "http://backend.virtual.cluster-02.consul", + "type": "HTTP", + "ip_addresses": [ + "10.32.2.10" + ], + "start_time": "2022-08-26T23:40:01.223503", + "end_time": "2022-08-26T23:40:01.224653", + "duration": "1.149666ms", + "headers": { + "Content-Length": "266", + "Content-Type": "text/plain; charset=utf-8", + "Date": "Fri, 26 Aug 2022 23:40:01 GMT" + }, + "body": "Response from backend", + "code": 200 + } + }, + "code": 200 + } + ``` + + \ No newline at end of file diff --git a/website/content/docs/east-west/cluster-peering/establish/vm.mdx b/website/content/docs/east-west/cluster-peering/establish/vm.mdx new file mode 100644 index 000000000000..dd033ab4f78a --- /dev/null +++ b/website/content/docs/east-west/cluster-peering/establish/vm.mdx @@ -0,0 +1,269 @@ +--- +layout: docs +page_title: Establish cluster peering connections +description: >- + Generate a peering token to establish communication, export services, and authorize requests for cluster peering connections. Learn how to establish peering connections with Consul's HTTP API, CLI or UI. +--- + +# Establish cluster peering connections + +This page details the process for establishing a cluster peering connection between services deployed to different datacenters. You can interact with Consul's cluster peering features using the CLI, the HTTP API, or the UI. The overall process for establishing a cluster peering connection consists of the following steps: + +1. Create a peering token in one cluster. +1. Use the peering token to establish peering with a second cluster. +1. Export services between clusters. +1. Create intentions to authorize services for peers. + +Cluster peering between services cannot be established until all four steps are complete. If you want to establish cluster peering connections and create sameness groups at the same time, refer to the guidance in [create sameness groups](/consul/docs/multi-tenant/sameness-group/vm). + +For Kubernetes guidance, refer to [Establish cluster peering connections on Kubernetes](/consul/docs/east-west/cluster-peering/establish/k8s). + +## Requirements + +You must meet the following requirements to use cluster peering: + +- Consul v1.14.1 or higher +- Services hosted in admin partitions on separate datacenters + +If you need to make services available to an admin partition in the same datacenter, do not use cluster peering. Instead, use the [`exported-services` configuration entry](/consul/docs/reference/config-entry/exported-services) to make service upstreams available to other admin partitions in a single datacenter. + +### Mesh gateway requirements + +Consul's default configuration supports cluster peering connections directly between clusters. In production environments, we recommend using mesh gateways to securely route service mesh traffic between partitions with cluster peering connections. 
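As a minimal sketch, peering traffic is typically routed through mesh gateways by writing a `Mesh` configuration entry similar to the following to each cluster; the exact requirements are covered in the specifications linked below:

```hcl
Kind = "mesh"

Peering {
  PeerThroughMeshGateways = true
}
```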
+
+To enable cluster peering through mesh gateways and configure mesh gateways to support cluster peering, refer to [mesh gateway specifications](/consul/docs/connect/cluster-peering/tech-specs#mesh-gateway-specifications).
+
+## Create a peering token
+
+To begin the cluster peering process, generate a peering token in one of your clusters. The other cluster uses this token to establish the peering connection.
+
+Every time you generate a peering token, a single-use secret for establishing the connection is embedded in the token. Because regenerating a peering token invalidates the previously generated secret, you must use the most recently created token to establish peering connections.
+
+
+
+
+1. In `cluster-01`, use the [`consul peering generate-token` command](/consul/commands/peering/generate-token) to issue a request for a peering token.
+
+    ```shell-session
+    $ consul peering generate-token -name cluster-02
+    ```
+
+    The CLI outputs the peering token, which is a base64-encoded string containing the token details.
+
+1. Save this value to a file or clipboard to use in the next step on `cluster-02`.
+
+
+
+
+1. In `cluster-01`, use the [`/peering/token` endpoint](/consul/api-docs/peering#generate-a-peering-token) to issue a request for a peering token.
+
+    ```shell-session
+    $ curl --request POST --data '{"Peer":"cluster-02"}' --url http://localhost:8500/v1/peering/token
+    ```
+
+    The API response includes the peering token, which is a base64-encoded string containing the token details.
+
+1. Create a JSON file that contains the first cluster's name and the peering token.
+
+
+
+    ```json
+    {
+      "Peer": "cluster-01",
+      "PeeringToken": "eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJhZG1pbiIsImF1ZCI6IlNvbHIifQ.5T7L_L1MPfQ_5FjKGa1fTPqrzwK4bNSM812nW6oyjb8"
+    }
+    ```
+
+
+
+
+
+
+To begin the cluster peering process, generate a peering token in one of your clusters. The other cluster uses this token to establish the peering connection.
+
+Every time you generate a peering token, a single-use secret for establishing the connection is embedded in the token. Because regenerating a peering token invalidates the previously generated secret, you must use the most recently created token to establish peering connections.
+
+1. In the Consul UI for the datacenter associated with `cluster-01`, click **Peers**.
+1. Click **Add peer connection**.
+1. In the **Generate token** tab, enter `cluster-02` in the **Name of peer** field.
+1. Click the **Generate token** button.
+1. Copy the token before you proceed. You cannot view it again after leaving this screen. If you lose your token, you must generate a new one.
+
+
+
+
+## Establish a connection between clusters
+
+Next, use the peering token to establish a secure connection between the clusters.
+
+
+
+
+1. In one of the client agents deployed to "cluster-02," issue the [`consul peering establish` command](/consul/commands/peering/establish) and specify the token generated in the previous step.
+
+    ```shell-session
+    $ consul peering establish -name cluster-01 -peering-token token-from-generate
+    "Successfully established peering connection with cluster-01"
+    ```
+
+When you connect server agents through cluster peering, they peer their default partitions. To establish peering connections for other partitions through server agents, you must add the `-partition` flag to the `establish` command and specify the partitions you want to peer. For additional configuration information, refer to the [`consul peering establish` command](/consul/commands/peering/establish).
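+
+For example, the following command is a sketch of how you might combine the `-partition` flag with the values used above. The `finance` partition name is illustrative, and admin partitions require Consul Enterprise:
+
+```shell-session
+$ consul peering establish -name cluster-01 -peering-token token-from-generate -partition "finance"
+```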
+
+You can run the `peering establish` command once per peering token. Peering tokens cannot be reused after being used to establish a connection. If you need to re-establish a connection, you must generate a new peering token.
+
+
+
+
+1. In one of the client agents in "cluster-02," use `peering_token.json` and the [`/peering/establish` endpoint](/consul/api-docs/peering#establish-a-peering-connection) to establish the peering connection. This endpoint does not generate an output unless there is an error.
+
+    ```shell-session
+    $ curl --request POST --data @peering_token.json http://127.0.0.1:8500/v1/peering/establish
+    ```
+
+When you connect server agents through cluster peering, their default behavior is to peer to the `default` partition. To establish peering connections for other partitions through server agents, you must add the `Partition` field to `peering_token.json` and specify the partitions you want to peer. For additional configuration information, refer to [Cluster Peering - HTTP API](/consul/api-docs/peering).
+
+You can dial the `peering/establish` endpoint once per peering token. Peering tokens cannot be reused after being used to establish a connection. If you need to re-establish a connection, you must generate a new peering token.
+
+
+
+
+
+1. In the Consul UI for the datacenter associated with `cluster-02`, click **Peers** and then **Add peer connection**.
+1. Click **Establish peering**.
+1. In the **Name of peer** field, enter `cluster-01`. Then paste the peering token in the **Token** field.
+1. Click **Add peer**.
+
+
+
+
+## Export services between clusters
+
+After you establish a connection between the clusters, you need to create an `exported-services` configuration entry that defines the services that are available for other clusters. Consul uses this configuration entry to advertise service information and support service mesh connections across clusters.
+
+An `exported-services` configuration entry makes services available to another admin partition. While it can target admin partitions either locally or remotely, cluster peers always export services to remote partitions. Refer to [exported service consumers](/consul/docs/reference/config-entry/exported-services#consumers-1) for more information.
+
+You must use the Consul CLI to complete this step. The HTTP API and the Consul UI do not support `exported-services` configuration entries.
+
+
+
+
+1. Create a configuration entry and specify the `Kind` as `"exported-services"`.
+
+
+
+    ```hcl
+    Kind = "exported-services"
+    Name = "default"
+    Services = [
+      {
+        ## The name and namespace of the service to export.
+        Name = "service-name"
+        Namespace = "default"
+
+        ## The list of peer clusters to export the service to.
+        Consumers = [
+          {
+            ## The peer name to reference in config is the one set
+            ## during the peering process.
+            Peer = "cluster-02"
+          }
+        ]
+      }
+    ]
+    ```
+
+
+
+1. Add the configuration entry to your cluster.
+
+    ```shell-session
+    $ consul config write peering-config.hcl
+    ```
+
+Before you proceed, wait for the clusters to sync and make services available to their peers. To check the peered cluster status, [read the cluster peering connection](/consul/docs/connect/cluster-peering/usage/manage-connections#read-a-peering-connection).
+
+
+
+
+## Authorize services for peers
+
+Before you can call services from peered clusters, you must set service intentions that authorize those clusters to use specific services. Consul prevents services from being exported to unauthorized clusters.
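+
+Before you create intentions, you can optionally confirm that the peering is active and that the expected services were exported by reading the peering connection. The following command is a sketch; your peer name and service counts will differ:
+
+```shell-session
+$ consul peering read -name cluster-02
+```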
+ +You must use the HTTP API or the Consul CLI to complete this step. The Consul UI supports intentions for local clusters only. + + + + +1. Create a configuration entry and specify the `Kind` as `"service-intentions"`. Declare the service on "cluster-02" that can access the service in "cluster-01." In the following example, the service intentions configuration entry authorizes the `backend-service` to communicate with the `frontend-service` that is hosted on remote peer `cluster-02`: + + + + ```hcl + Kind = "service-intentions" + Name = "backend-service" + + Sources = [ + { + Name = "frontend-service" + Peer = "cluster-02" + Action = "allow" + } + ] + ``` + + + + If the peer's name is not specified in `Peer`, then Consul assumes that the service is in the local cluster. + +1. Add the configuration entry to your cluster. + + ```shell-session + $ consul config write peering-intentions.hcl + ``` + + + + +1. Create a configuration entry and specify the `Kind` as `"service-intentions"`. Declare the service on "cluster-02" that can access the service in "cluster-01." In the following example, the service intentions configuration entry authorizes the `backend-service` to communicate with the `frontend-service` that is hosted on remote peer `cluster-02`: + + + + ```hcl + Kind = "service-intentions" + Name = "backend-service" + + Sources = [ + { + Name = "frontend-service" + Peer = "cluster-02" + Action = "allow" + } + ] + ``` + + + + If the peer's name is not specified in `Peer`, then Consul assumes that the service is in the local cluster. + +1. Add the configuration entry to your cluster. + + ```shell-session + $ curl --request PUT --data @peering-intentions.hcl http://127.0.0.1:8500/v1/config + ``` + + + + +### Authorize service reads with ACLs + +If ACLs are enabled on a Consul cluster, sidecar proxies that access exported services as an upstream must have an ACL token that grants read access. + +Read access to all imported services is granted using either of the following rules associated with an ACL token: + +- `service:write` permissions for any service in the sidecar's partition. +- `service:read` and `node:read` for all services and nodes, respectively, in sidecar's namespace and partition. + +For Consul Enterprise, the permissions apply to all imported services in the service's partition. These permissions are satisfied when using a [service identity](/consul/docs/security/acl/acl-roles#service-identities). + +Refer to [Reading services](/consul/docs/reference/config-entry/exported-services#reading-services) in the `exported-services` configuration entry documentation for example rules. + +For additional information about how to configure and use ACLs, refer to [ACLs system overview](/consul/docs/secure/acl). \ No newline at end of file diff --git a/website/content/docs/east-west/cluster-peering/index.mdx b/website/content/docs/east-west/cluster-peering/index.mdx new file mode 100644 index 000000000000..1b888570b869 --- /dev/null +++ b/website/content/docs/east-west/cluster-peering/index.mdx @@ -0,0 +1,47 @@ +--- +layout: docs +page_title: Cluster peering overview +description: >- + Cluster peering establishes communication between independent clusters in Consul, allowing services to interact across datacenters. Learn how cluster peering works, its differences with WAN federation for multi-datacenter deployments, and how to troubleshoot common issues. 
+--- + +# Cluster peering overview + +This topic provides an overview of cluster peering, which lets you connect two or more independent Consul clusters so that services deployed to different partitions or datacenters can communicate. + +Cluster peering is enabled in Consul by default. For specific information about cluster peering configuration and usage, refer to following pages. + +## Introduction + +Consul supports cluster peering connections between two [admin partitions](/consul/docs/multi-tenant/admin-partition) _in different datacenters_. Deployments without an Enterprise license can still use cluster peering because every datacenter automatically includes a default partition. Meanwhile, admin partitions _in the same datacenter_ do not require cluster peering connections because you can export services between them without generating or exchanging a peering token. + +The following diagram describes Consul's cluster peering architecture. + +![Diagram of cluster peering with admin partitions](/img/cluster-peering-diagram.png) + +In this diagram, the `default` partition in Consul DC 1 has a cluster peering connection with the `web` partition in Consul DC 2. Enforced by their respective mesh gateways, this cluster peering connection enables `Service B` to communicate with `Service C` as a service upstream. + +Cluster peering leverages several components of Consul's architecture to enforce secure communication between services: + +- A _peering token_ contains an embedded secret that securely establishes communication when shared symmetrically between datacenters. Sharing this token enables each datacenter's server agents to recognize requests from authorized peers, similar to how the [gossip encryption key secures agent LAN gossip](/consul/docs/secure/encryption#gossip-encryption). +- A _mesh gateway_ encrypts outgoing traffic, decrypts incoming traffic, and directs traffic to healthy services. Consul's service mesh features must be enabled in order to use mesh gateways. Mesh gateways support the specific admin partitions they are deployed on. Refer to [Mesh gateways](/consul/docs/east-west/mesh-gateway) for more information. +- An _exported service_ communicates with downstreams deployed in other admin partitions. They are explicitly defined in an [`exported-services` configuration entry](/consul/docs/reference/config-entry/exported-services). +- A _service intention_ secures [service-to-service communication in a service mesh](/consul/docs/secure-mesh/intention). Intentions enable identity-based access between services by exchanging TLS certificates, which the service's sidecar proxy verifies upon each request. + +### Compared with WAN federation + +WAN federation and cluster peering are different ways to connect services through mesh gateways so that they can communicate across datacenters. WAN federation connects multiple datacenters to make them function as if they were a single cluster, while cluster peering treats each datacenter as a separate cluster. As a result, WAN federation requires a primary datacenter to maintain and replicate global states such as ACLs and configuration entries, but cluster peering does not. + +WAN federation and cluster peering also treat encrypted traffic differently. While mesh gateways between WAN federated datacenters use mTLS to keep data encrypted, mesh gateways between peers terminate mTLS sessions, decrypt data to HTTP services, and then re-encrypt traffic to send to services. 
Data must be decrypted in order to evaluate and apply dynamic routing rules at the destination cluster, which reduces coupling between peers. + +Regardless of whether you connect your clusters through WAN federation or cluster peering, human and machine users can use either method to discover services in other clusters or dial them through the service mesh. + +@include 'tables/compare/east-west.mdx' + +## Guidance + +@include 'text/guidance/east-west/cluster-peering.mdx' + +## Basic troubleshooting + +@include 'text/limitations/east-west/cluster-peering.mdx' \ No newline at end of file diff --git a/website/content/docs/east-west/cluster-peering/manage/k8s.mdx b/website/content/docs/east-west/cluster-peering/manage/k8s.mdx new file mode 100644 index 000000000000..2f087fc372fd --- /dev/null +++ b/website/content/docs/east-west/cluster-peering/manage/k8s.mdx @@ -0,0 +1,121 @@ +--- +layout: docs +page_title: Manage cluster peering connections on Kubernetes +description: >- + Learn how to list, read, and delete cluster peering connections using Consul on Kubernetes. You can also reset cluster peering connections on k8s deployments. +--- + +# Manage cluster peering connections on Kubernetes + +This usage topic describes how to manage cluster peering connections on Kubernetes deployments. + +After you establish a cluster peering connection, you can get a list of all active peering connections, read a specific peering connection's information, and delete peering connections. + +For general guidance for managing cluster peering connections, refer to [Manage L7 traffic with cluster peering](/consul/docs/manage-traffic/cluster-peering/vm). + +## Reset a peering connection + +To reset the cluster peering connection, you need to generate a new peering token from the cluster where you created the `PeeringAcceptor` CRD. The only way to create or set a new peering token is to manually adjust the value of the annotation `consul.hashicorp.com/peering-version`. Creating a new token causes the previous token to expire. + +1. In the `PeeringAcceptor` CRD, add the annotation `consul.hashicorp.com/peering-version`. If the annotation already exists, update its value to a higher version. + + + + ```yaml + apiVersion: consul.hashicorp.com/v1alpha1 + kind: PeeringAcceptor + metadata: + name: cluster-02 + annotations: + consul.hashicorp.com/peering-version: "1" ## The peering version you want to set, must be in quotes + spec: + peer: + secret: + name: "peering-token" + key: "data" + backend: "kubernetes" + ``` + + + +1. After updating `PeeringAcceptor`, repeat all of the steps to [establish a new peering connection](/consul/docs/east-west/cluster-peering/establish/k8s). + +## List all peering connections + +In Consul on Kubernetes deployments, you can list all active peering connections in a cluster using the Consul CLI. + +1. If necessary, [configure your CLI to interact with the Consul cluster](/consul/tutorials/get-started-kubernetes/kubernetes-gs-deploy#configure-your-cli-to-interact-with-consul-cluster). + +1. Run the [`consul peering list` CLI command](/consul/commands/peering/list). + + ```shell-session + $ consul peering list + Name State Imported Svcs Exported Svcs Meta + cluster-02 ACTIVE 0 2 env=production + cluster-03 PENDING 0 0 + ``` + +## Read a peering connection + +In Consul on Kubernetes deployments, you can get information about individual peering connections between clusters using the Consul CLI. + +1. 
If necessary, [configure your CLI to interact with the Consul cluster](/consul/tutorials/get-started-kubernetes/kubernetes-gs-deploy#configure-your-cli-to-interact-with-consul-cluster).
+
+1. Run the [`consul peering read` CLI command](/consul/commands/peering/read).
+
+    ```shell-session
+    $ consul peering read -name cluster-02
+    Name:         cluster-02
+    ID:           3b001063-8079-b1a6-764c-738af5a39a97
+    State:        ACTIVE
+    Meta:
+        env=production
+
+    Peer ID:               e83a315c-027e-bcb1-7c0c-a46650904a05
+    Peer Server Name:      server.dc1.consul
+    Peer CA Pems:          0
+    Peer Server Addresses:
+        10.0.0.1:8300
+
+    Imported Services: 0
+    Exported Services: 2
+
+    Create Index: 89
+    Modify Index: 89
+    ```
+
+## Delete peering connections
+
+To end a peering connection in Kubernetes deployments, delete both the `PeeringAcceptor` and `PeeringDialer` resources.
+
+1. Delete the `PeeringDialer` resource from the second cluster.
+
+    ```shell-session
+    $ kubectl --context $CLUSTER2_CONTEXT delete --filename dialer.yaml
+    ```
+
+1. Delete the `PeeringAcceptor` resource from the first cluster.
+
+    ```shell-session
+    $ kubectl --context $CLUSTER1_CONTEXT delete --filename acceptor.yaml
+    ```
+
+To confirm that you deleted your peering connection in `cluster-01`, query the `/health` HTTP endpoint:
+
+1. Exec into the server pod for the first cluster.
+
+    ```shell-session
+    $ kubectl exec -it consul-server-0 --context $CLUSTER1_CONTEXT -- /bin/sh
+    ```
+
+1. If you've enabled ACLs, export an ACL token to access the `/health` HTTP endpoint for services. The bootstrap token may be used if an ACL token is not already provisioned.
+
+    ```shell-session
+    $ export CONSUL_HTTP_TOKEN=
+    ```
+
+1. Query the `/health` HTTP endpoint. Peered services with deleted connections should no longer appear.
+
+    ```shell-session
+    $ curl "localhost:8500/v1/health/connect/backend?peer=cluster-02"
+    ```
\ No newline at end of file
diff --git a/website/content/docs/east-west/cluster-peering/manage/vm.mdx b/website/content/docs/east-west/cluster-peering/manage/vm.mdx
new file mode 100644
index 000000000000..80db58940542
--- /dev/null
+++ b/website/content/docs/east-west/cluster-peering/manage/vm.mdx
@@ -0,0 +1,137 @@
+---
+layout: docs
+page_title: Manage cluster peering connections
+description: >-
+  Learn how to list, read, and delete cluster peering connections using Consul. You can use the HTTP API, the CLI, or the Consul UI to manage cluster peering connections.
+---
+
+# Manage cluster peering connections
+
+This usage topic describes how to manage cluster peering connections using the CLI, the HTTP API, and the UI.
+
+After you establish a cluster peering connection, you can get a list of all active peering connections, read a specific peering connection's information, and delete peering connections.
+
+For Kubernetes-specific guidance for managing cluster peering connections, refer to [Manage cluster peering connections on Kubernetes](/consul/docs/east-west/cluster-peering/manage/k8s).
+
+## List all peering connections
+
+You can list all active peering connections in a cluster.
+
+
+
+
+    ```shell-session
+    $ consul peering list
+    Name        State    Imported Svcs  Exported Svcs  Meta
+    cluster-02  ACTIVE   0              2              env=production
+    cluster-03  PENDING  0              0
+    ```
+
+For more information, including optional flags and parameters, refer to the [`consul peering list` CLI command reference](/consul/commands/peering/list).
+ + + + +The following example shows how to format an API request to list peering connections: + + ```shell-session + $ curl --header "X-Consul-Token: 0137db51-5895-4c25-b6cd-d9ed992f4a52" http://127.0.0.1:8500/v1/peerings + ``` + +For more information, including optional parameters and sample responses, refer to the [`/peering` endpoint reference](/consul/api-docs/peering#list-all-peerings). + + + + +In the Consul UI, click **Peers**. + +The UI lists peering connections you created for clusters in a datacenter. The name that appears in the list is the name of the cluster in a different datacenter with an established peering connection. + + + + +## Read a peering connection + +You can get information about individual peering connections between clusters. + + + + + +The following example outputs information about a peering connection locally referred to as "cluster-02": + + ```shell-session + $ consul peering read -name cluster-02 + Name: cluster-02 + ID: 3b001063-8079-b1a6-764c-738af5a39a97 + State: ACTIVE + Meta: + env=production + + Peer ID: e83a315c-027e-bcb1-7c0c-a46650904a05 + Peer Server Name: server.dc1.consul + Peer CA Pems: 0 + Peer Server Addresses: + 10.0.0.1:8300 + + Imported Services: 0 + Exported Services: 2 + + Create Index: 89 + Modify Index: 89 + ``` + +For more information, including optional flags and parameters, refer to the [`consul peering read` CLI command reference](/consul/commands/peering/read). + + + + + ```shell-session + $ curl --header "X-Consul-Token: b23b3cad-5ea1-4413-919e-c76884b9ad60" http://127.0.0.1:8500/v1/peering/cluster-02 + ``` + +For more information, including optional parameters and sample responses, refer to the [`/peering` endpoint reference](/consul/api-docs/peering#read-a-peering-connection). + + + + +1. In the Consul UI, click **Peers**. + +1. Click the name of a peered cluster to view additional details about the peering connection. + + + + +## Delete peering connections + +You can disconnect the peered clusters by deleting their connection. Deleting a peering connection stops data replication to the peer and deletes imported data, including services and CA certificates. + + + + + The following examples deletes a peering connection to a cluster locally referred to as "cluster-02": + + ```shell-session + $ consul peering delete -name cluster-02 + Successfully submitted peering connection, cluster-02, for deletion + ``` + +For more information, including optional flags and parameters, refer to the [`consul peering delete` CLI command reference](/consul/commands/peering/delete). + + + + + ```shell-session + $ curl --request DELETE --header "X-Consul-Token: b23b3cad-5ea1-4413-919e-c76884b9ad60" http://127.0.0.1:8500/v1/peering/cluster-02 + ``` + +This endpoint does not return a response. For more information, including optional parameters, refer to the [`/peering` endpoint reference](/consul/api-docs/peering#delete-a-peering-connection). + + + +1. In the Consul UI, click **Peers**. The UI lists peering connections you created for clusters in that datacenter. +1. Next to the name of the peer, click **More** (three horizontal dots) and then **Delete**. +1. Click **Delete** to confirm and remove the peering connection. 
+ + + \ No newline at end of file diff --git a/website/content/docs/east-west/cluster-peering/tech-specs/index.mdx b/website/content/docs/east-west/cluster-peering/tech-specs/index.mdx new file mode 100644 index 000000000000..e9d1eda8a55d --- /dev/null +++ b/website/content/docs/east-west/cluster-peering/tech-specs/index.mdx @@ -0,0 +1,84 @@ +--- +layout: docs +page_title: Cluster peering technical specifications +description: >- + Cluster peering connections in Consul interact with mesh gateways, sidecar proxies, exported services, and ACLs. Learn about the configuration requirements for these components. +--- + +# Cluster peering technical specifications + +This reference topic describes the technical specifications associated with using cluster peering in your deployments. These specifications include required Consul components and their configurations. To learn more about Consul's cluster peering feature, refer to [cluster peering overview](/consul/docs/east-west/cluster-peering). + +For cluster peering requirements in Kubernetes deployments, refer to [cluster peering on Kubernetes technical specifications](/consul/docs/east-west/cluster-peering/tech-specs/k8s). + +## Requirements + +Consul's default configuration supports cluster peering connections directly between clusters. In production environments, we recommend using mesh gateways to securely route service mesh traffic between partitions with cluster peering connections. + +In addition, make sure your Consul environment meets the following prerequisites: + +- Consul v1.14 or higher. +- Use [Envoy proxies](/consul/docs/reference/proxy/envoy). Envoy is the only proxy with mesh gateway capabilities in Consul. +- A local Consul agent is required to manage mesh gateway configurations. + +## Mesh gateway specifications + +To change Consul's default configuration and enable cluster peering through mesh gateways, use a mesh configuration entry to update your network's service mesh proxies globally: + +1. In a `mesh` configuration entry, set `PeerThroughMeshGateways` to `true`: + + + + ```hcl + Kind = "mesh" + Peering { + PeerThroughMeshGateways = true + } + ``` + + + +1. Write the configuration entry to Consul: + + ```shell + $ consul config write mesh-config.hcl + ``` + +When cluster peering through mesh gateways, consider the following deployment requirements: + +- A cluster requires a registered mesh gateway in order to export services to peers in other regions or cloud providers. +- The mesh gateway must also be registered in the same admin partition as the exported services and their `exported-services` configuration entry. An enterprise license is required to use multiple admin partitions with a single cluster of Consul servers. +- To use the `local` mesh gateway mode, you must register a mesh gateway in the importing cluster. +- Define the `Proxy.Config` settings using opaque parameters compatible with your proxy. Refer to the [Gateway options](/consul/docs/connect/proxies/envoy#gateway-options) and [Escape-hatch Overrides](/consul/docs/connect/proxies/envoy#escape-hatch-overrides) documentation for additional Envoy proxy configuration information. + +### Mesh gateway modes + +By default, cluster peering connections use mesh gateways in [remote mode](/consul/docs/connect/gateways/mesh-gateway/service-to-service-traffic-wan-datacenters#remote). Be aware of these additional requirements when changing a mesh gateway's mode. + +- For mesh gateways that connect peered clusters, you can set the `mode` as either `remote` or `local`. 
+- The `none` mode is invalid for mesh gateways with cluster peering connections.
+
+Refer to [mesh gateway modes](/consul/docs/connect/gateways/mesh-gateway#modes) for more information.
+
+## Sidecar proxy specifications
+
+The Envoy proxies that function as sidecars in your service mesh require configuration in order to properly route traffic to peers. Sidecar proxies are defined in the [service definition](/consul/docs/register/service/vm/define).
+
+- Configure the `proxy.upstreams` parameters to route traffic to the correct service, namespace, and peer. Refer to the [`upstreams`](/consul/docs/connect/proxies/proxy-config-reference#upstream-configuration-reference) documentation for details.
+- The `proxy.upstreams.destination_name` parameter is always required.
+- The `proxy.upstreams.destination_peer` parameter must be configured to enable cross-cluster traffic.
+- The `proxy.upstreams.destination_namespace` configuration is only necessary if the destination service is in a non-default namespace.
+
+## Exported service specifications
+
+The `exported-services` configuration entry is required in order for services to communicate across partitions with cluster peering connections. Basic guidance on using the `exported-services` configuration entry is included in [Establish cluster peering connections](/consul/docs/connect/cluster-peering/usage/establish-cluster-peering#export-services-between-clusters).
+
+Refer to the [`exported-services` configuration entry](/consul/docs/reference/config-entry/exported-services) reference for more information.
+
+## ACL specifications
+
+If ACLs are enabled, you must add tokens to grant the following permissions:
+
+- Grant `service:write` permissions to services that define mesh gateways in their server definition.
+- Grant `service:read` permissions for all services on the partition.
+- Grant `mesh:write` permissions to the mesh gateways that participate in cluster peering connections. This permission allows a leaf certificate to be issued for mesh gateways to terminate TLS sessions for HTTP requests.
\ No newline at end of file
diff --git a/website/content/docs/east-west/cluster-peering/tech-specs/k8s.mdx b/website/content/docs/east-west/cluster-peering/tech-specs/k8s.mdx
new file mode 100644
index 000000000000..2d8fccaa9fc2
--- /dev/null
+++ b/website/content/docs/east-west/cluster-peering/tech-specs/k8s.mdx
@@ -0,0 +1,161 @@
+---
+layout: docs
+page_title: Cluster Peering on Kubernetes Technical Specifications
+description: >-
+  In Kubernetes deployments, cluster peering connections interact with mesh gateways, exported services, and ACLs. Learn about requirements specific to k8s, including required Helm values and custom resource definitions (CRDs).
+---
+
+# Cluster peering on Kubernetes technical specifications
+
+This reference topic describes the technical specifications associated with using cluster peering in your Kubernetes deployments. These specifications include [required Helm values](#helm-specifications) and [required custom resource definitions (CRDs)](#crd-specifications), as well as required Consul components and their configurations. To learn more about Consul's cluster peering feature, refer to [cluster peering overview](/consul/docs/east-west/cluster-peering).
+
+For cluster peering requirements in non-Kubernetes deployments, refer to [cluster peering technical specifications](/consul/docs/east-west/cluster-peering/tech-specs).
+ +## General requirements + +Make sure your Consul environment meets the following prerequisites: + +- Consul v1.14 or higher +- Consul on Kubernetes v1.0.0 or higher +- At least two Kubernetes clusters + +You must also configure the following service mesh components in order to establish cluster peering connections: + +- [Helm](#helm-requirements) +- [Custom resource definitions (CRD)](#crd-requirements) +- [Mesh gateways](#mesh-gateway-requirements) +- [Exported services](#exported-service-requirements) +- [ACLs](#acl-requirements) + +## Helm specifications + +Consul's default configuration supports cluster peering connections directly between clusters. In production environments, we recommend using mesh gateways to securely route service mesh traffic between partitions with cluster peering connections. The following values must be set in the Helm chart to enable mesh gateways: + +- [`global.tls.enabled = true`](/consul/docs/reference/k8s/helm#v-global-tls-enabled) +- [`meshGateway.enabled = true`](/consul/docs/reference/k8s/helm#v-meshgateway-enabled) + +Refer to the following example Helm configuration: + + + +```yaml +global: + name: consul + image: "hashicorp/consul:1.16.0" + peering: + enabled: true + tls: + enabled: true +meshGateway: + enabled: true +``` + + + +After mesh gateways are enabled in the Helm chart, you can separately [configure Mesh CRDs](#mesh-gateway-configuration-for-kubernetes). + +## CRD specifications + +You must create the following CRDs in order to establish a peering connection: + +- `PeeringAcceptor`: Generates a peering token and accepts an incoming peering connection. +- `PeeringDialer`: Uses a peering token to make an outbound peering connection with the cluster that generated the token. + +Refer to the following example CRDs: + + + + + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: PeeringAcceptor +metadata: + name: cluster-02 ## The name of the peer you want to connect to +spec: + peer: + secret: + name: "peering-token" + key: "data" + backend: "kubernetes" +``` + + + + + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: PeeringDialer +metadata: + name: cluster-01 ## The name of the peer you want to connect to +spec: + peer: + secret: + name: "peering-token" + key: "data" + backend: "kubernetes" +``` + + + + +## Mesh gateway specifications + +To change Consul's default configuration and enable cluster peering through mesh gateways, use a mesh configuration entry to update your network's service mesh proxies globally: + +1. In `cluster-01` create the `Mesh` custom resource with `peeringThroughMeshGateways` set to `true`. + + + + ```yaml + apiVersion: consul.hashicorp.com/v1alpha1 + kind: Mesh + metadata: + name: mesh + spec: + peering: + peerThroughMeshGateways: true + ``` + + + +1. Apply the mesh CRD to `cluster-01`. + + ```shell-session + $ kubectl --context $CLUSTER1_CONTEXT apply -f mesh.yaml + ``` + +1. Apply the mesh CRD to `cluster-02`. + + ```shell-session + $ kubectl --context $CLUSTER2_CONTEXT apply -f mesh.yaml + ``` + + + + For help setting up the cluster context variables used in this example, refer to [assign cluster IDs to environmental variables](/consul/docs/k8s/connect/cluster-peering/usage/establish-peering#assign-cluster-ids-to-environmental-variables). + + + +When cluster peering through mesh gateways, consider the following deployment requirements: + +- A Consul cluster requires a registered mesh gateway in order to export services to peers in other regions or cloud providers. 
+- The mesh gateway must also be registered in the same admin partition as the exported services and their `exported-services` configuration entry. An enterprise license is required to use multiple admin partitions with a single cluster of Consul servers. +- To use the `local` mesh gateway mode, you must register a mesh gateway in the importing cluster. +- Define the `Proxy.Config` settings using opaque parameters compatible with your proxy. For additional Envoy proxy configuration information, refer to [Gateway options](/consul/docs/connect/proxies/envoy#gateway-options) and [Escape-hatch overrides](/consul/docs/connect/proxies/envoy#escape-hatch-overrides). + +### Mesh gateway modes + +By default, all cluster peering connections use mesh gateways in [remote mode](/consul/docs/connect/gateways/mesh-gateway/service-to-service-traffic-wan-datacenters#remote). Be aware of these additional requirements when changing a mesh gateway's mode. + +- For mesh gateways that connect peered clusters, you can set the `mode` as either `remote` or `local`. +- The `none` mode is invalid for mesh gateways with cluster peering connections. + +To learn how to change the mesh gateway mode to `local` on your Kubernetes deployment, refer to [configure the mesh gateway mode for traffic between services](/consul/docs/k8s/connect/cluster-peering/usage/establish-peering#configure-the-mesh-gateway-mode-for-traffic-between-services). + +## Exported service specifications + +The `exported-services` CRD is required in order for services to communicate across partitions with cluster peering connections. Basic guidance on using the `exported-services` configuration entry is included in [Establish cluster peering connections](/consul/docs/k8s/connect/cluster-peering/usage/establish-peering#export-services-between-clusters). + +Refer to [`exported-services` configuration entry](/consul/docs/reference/config-entry/exported-services) for more information. \ No newline at end of file diff --git a/website/content/docs/east-west/index.mdx b/website/content/docs/east-west/index.mdx new file mode 100644 index 000000000000..2af00973155d --- /dev/null +++ b/website/content/docs/east-west/index.mdx @@ -0,0 +1,45 @@ +--- +layout: docs +page_title: Link service network east/west +description: >- + This topic provides an overview of linking segments of your service mesh in an east/west direction. You can extend your service mesh across regions, runtimes, and cloud providers with cluster peering or WAN federation. +--- + +# Expand service network east/west + +This topic provides an overview of the strategies and processes for linking defined segments of your service mesh to extend east/west operations across cloud regions, runtimes, and platforms. Linking network segments into an extended service mesh enables advanced strategies for deploying and monitoring service operations in your network. + +For more information about how to divide your network, including the difference between deployment strategies enabled by methods such as WAN federation and cluster peering, refer to [manage multi-tenancy](/consul/docs/multi-tenant). + +## Introduction + +Consul supports two general strategies for extending east/west service mesh traffic across your network: + +- Cluster peering +- Wide Area Network (WAN) federation + +Consul community edition supports basic cluster peering and federation scenarios. 
Implementing advanced scenarios such as federated network areas and cluster peering between multiple admin partitions in datacenters require Consul Enterprise. Refer to [Consul Enterprise](/consul/docs/enterprise) for more information. + +## Cluster peering + +@include 'text/descriptions/cluster-peering.mdx' + +## WAN federation + +@include 'text/descriptions/wan-federation.mdx' + +## Federated network areas + +@include 'text/descriptions/network-area.mdx' + +## Secure communication with mesh gateways + +@include 'text/descriptions/mesh-gateway.mdx' + +## Guidance + +@include 'text/guidance/east-west.mdx' + +## Constraints, limitations, and troubleshooting + +@include 'text/limitations/east-west.mdx' \ No newline at end of file diff --git a/website/content/docs/east-west/k8s.mdx b/website/content/docs/east-west/k8s.mdx new file mode 100644 index 000000000000..53068660a054 --- /dev/null +++ b/website/content/docs/east-west/k8s.mdx @@ -0,0 +1,53 @@ +--- +layout: docs +page_title: Link service network east/west on Kubernetes +description: >- + This topic provides an overview of linking segments of your service mesh in an east/west direction on Kubernetes. You can extend your service mesh across regions and cloud providers with cluster peering or WAN federation. +--- + +# Link service network east/west on Kubernetes + +This topic provides an overview of the strategies and processes for linking defined segments of your service mesh to extend east/west operations across cloud regions and platforms when running Consul on Kubernetes. Linking network segments into an extended service mesh enables advanced strategies for deploying and monitoring service operations in your network. + +## Introduction + +Consul supports two general strategies for extending east/west service mesh traffic across your network: + +- Cluster peering +- Wide Area Network (WAN) federation + +Consul community edition supports basic cluster peering and federation scenarios. Implementing advanced scenarios such as federated network areas and cluster peering between multiple admin partitions in datacenters require Consul Enterprise. Refer to [Consul Enterprise](/consul/docs/enterprise) for more information. 
+ +## Cluster peering + +@include 'text/descriptions/cluster-peering.mdx' + +Refer to the following pages for guidance about using cluster peering with Consul on Kubernetes: + +- [Establish cluster peering connections on Kubernetes](/consul/docs/east-west/cluster-peering/establish/k8s) +- [Manage cluster peering connections on Kubernetes](/consul/docs/east-west/cluster-peering/manage/k8s) + +## WAN federation + +@include 'text/descriptions/wan-federation.mdx' + +Refer to the following pages for guidance about using WAN federation with Consul on Kubernetes: + +- [WAN federation between multiple Kubernetes clusters](/consul/docs/east-west/wan-federation/k8s) +- [WAN federation between virtual machines and Kubernetes clusters](/consul/docs/east-west/wan-federation/k8s-vm) + +## Secure communication with mesh gateways + +@include 'text/descriptions/mesh-gateway.mdx' + +## Reference documentation + +For reference material related to the processes for extending your service mesh by linking segments of your network, refer to the following pages: + +- [Proxy defaults configuration reference](/consul/docs/reference/config-entry/proxy-defaults) +- [Helm chart reference](/consul/docs/reference/k8s/helm) +- [Upstream annotations reference](/consul/docs/reference/k8s/annotation-label) + +## Constraints, limitations, and troubleshooting + +@include 'text/limitations/east-west.mdx' \ No newline at end of file diff --git a/website/content/docs/east-west/mesh-gateway/admin-partition.mdx b/website/content/docs/east-west/mesh-gateway/admin-partition.mdx new file mode 100644 index 000000000000..47d7fd67f4cb --- /dev/null +++ b/website/content/docs/east-west/mesh-gateway/admin-partition.mdx @@ -0,0 +1,292 @@ +--- +layout: docs +page_title: Enable service-to-service traffic across admin partitions +description: >- + Mesh gateways are specialized proxies that route data between services that cannot communicate directly with upstreams. Learn how to enable service-to-service traffic across admin partitions and review example configuration entries. +--- + +# Enable service-to-service traffic across admin partitions + +-> **Consul Enterprise 1.11.0+:** Admin partitions are supported in Consul Enterprise versions 1.11.0 and newer. + +Mesh gateways enable you to route service mesh traffic between different Consul [admin partitions](/consul/docs/multi-tenant/admin-partition). +Partitions can reside in different clouds or runtime environments where general interconnectivity between all services +in all partitions isn't feasible. + +Mesh gateways operate by sniffing and extracting the server name indication (SNI) header from the service mesh session and routing the connection to the appropriate destination based on the server name requested. The gateway does not decrypt the data within the mTLS session. + +## Prerequisites + +Ensure that your Consul environment meets the following requirements. + +### Consul + +* Consul Enterprise version 1.11.0 or newer. +* A local Consul agent is required to manage its configuration. +* Consul service mesh must be enabled in all partitions. Refer to the [`connect` documentation](/consul/docs/reference/agent/configuration-file/service-mesh#connect) for details. +* Each partition must have a unique name. Refer to the [admin partitions documentation](/consul/docs/multi-tenant/admin-partition) for details. 
+* If you want to [enable gateways globally](/consul/docs/connect/gateways/mesh-gateway#enabling-gateways-globally) you must enable [centralized configuration](/consul/docs/reference/agent/configuration-file/general#enable_central_service_config). + +### Proxy + +Envoy is the only proxy with mesh gateway capabilities in Consul. + +Mesh gateway proxies receive their configuration through Consul, which automatically generates it based on the proxy's registration. +Consul can only translate mesh gateway registration information into Envoy configuration. + +Sidecar proxies that send traffic to an upstream service through a gateway need to know the location of that gateway. They discover the gateway based on their sidecar proxy registrations. Consul can only translate the gateway registration information into Envoy configuration. + +Sidecar proxies that do not send upstream traffic through a gateway are not affected when you deploy gateways. If you are using Consul's built-in proxy as a service mesh sidecar it will continue to work for intra-datacenter traffic and will receive incoming traffic even if that traffic has passed through a gateway. + +## Configuration + +Configure the following settings to register the mesh gateway as a service in Consul. + +* Specify `mesh-gateway` in the `kind` field to register the gateway with Consul. +* Configure the `proxy.upstreams` parameters to route traffic to the correct service, namespace, and partition. Refer to the [`upstreams` documentation](/consul/docs/connect/proxies/proxy-config-reference#upstream-configuration-reference) for details. The service `proxy.upstreams.destination_name` is always required. The `proxy.upstreams.destination_partition` must be configured to enable cross-partition traffic. The `proxy.upstreams.destination_namespace` configuration is only necessary if the destination service is in a different namespace. +* Configure the `exported-services` configuration entry to enable Consul to export services contained in an admin partition to one or more additional partitions. Refer to the [Exported Services documentation](/consul/docs/reference/config-entry/exported-services) for details. +* Define the `Proxy.Config` settings using opaque parameters compatible with your proxy, i.e., Envoy. For Envoy, refer to the [Gateway Options](/consul/docs/connect/proxies/envoy#gateway-options) and [Escape-hatch Overrides](/consul/docs/connect/proxies/envoy#escape-hatch-overrides) documentation for additional configuration information. +* If ACLs are enabled, a token granting `service:write` for the gateway's service name and `service:read` for all services in the datacenter or partition must be added to the gateway's service definition. These permissions authorize the token to route communications for other Consul service mesh services, but does not allow decrypting any of their communications. + +### Modes + +Each upstream associated with a service mesh proxy can be configured so that it is routed through a mesh gateway. +Depending on your network, the proxy's connection to the gateway can operate in one of the following modes: + +* `none` - (Default) No gateway is used and a service mesh connect proxy makes its outbound connections directly + to the destination services. + +* `local` - The service mesh connect proxy makes an outbound connection to a gateway running in the same datacenter. The gateway at the outbound connection is responsible for ensuring that the data is forwarded to gateways in the destination partition. 
+ +* `remote` - The service mesh connect proxy makes an outbound connection to a gateway running in the destination datacenter. + The gateway forwards the data to the final destination service. + +### Service Mesh Proxy Configuration + +Set the proxy to the preferred [mode](#modes) to configure the service mesh proxy. You can specify the mode globally or within child configurations to control proxy behaviors at a lower level. Consul recognizes the following order of precedence if the gateway mode is configured in multiple locations the order of precedence: + +1. Upstream definition (highest priority) +2. Service instance definition +3. Centralized `service-defaults` configuration entry +4. Centralized `proxy-defaults` configuration entry + +## Example Configurations + +Use the following example configurations to help you understand some of the common scenarios. + +### Enabling Gateways Globally + +The following `proxy-defaults` configuration will enable gateways for all mesh services in the `local` mode. + + + +```hcl +Kind = "proxy-defaults" +Name = "global" +MeshGateway { + Mode = "local" +} +``` + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: ProxyDefaults +metadata: + name: global +spec: + meshGateway: + mode: local +``` + +```json +{ + "Kind": "proxy-defaults", + "Name": "global", + "MeshGateway": { + "Mode": "local" + } +} +``` + + + +### Enabling Gateways Per Service + +The following `service-defaults` configuration will enable gateways for all mesh services with the name `web`. + + + +```hcl +Kind = "service-defaults" +Name = "web" +MeshGateway { + Mode = "local" +} +``` + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: ServiceDefaults +metadata: + name: web +spec: + meshGateway: + mode: local +``` + +```json +{ + "Kind": "service-defaults", + "Name": "web", + "MeshGateway": { + "Mode": "local" + } +} +``` + + + +### Enabling Gateways for a Service Instance + +The following [proxy service configuration](/consul/docs/connect/proxy/mesh) +enables gateways for `web` service instances in the `finance` partition. + + + +```hcl +service { + name = "web-sidecar-proxy" + kind = "connect-proxy" + port = 8181 + proxy { + destination_service_name = "web" + mesh_gateway { + mode = "local" + } + upstreams = [ + { + destination_partition = "finance" + destination_namespace = "default" + destination_type = "service" + destination_name = "billing" + local_bind_port = 9090 + } + ] + } +} +``` + +```json +{ + "service": { + "kind": "connect-proxy", + "name": "web-sidecar-proxy", + "port": 8181, + "proxy": { + "destination_service_name": "web", + "mesh_gateway": { + "mode": "local" + }, + "upstreams": [ + { + "destination_name": "billing", + "destination_namespace": "default", + "destination_partition": "finance", + "destination_type": "service", + "local_bind_port": 9090 + } + ] + } + } +} +``` + + +### Enabling Gateways for a Proxy Upstream + +The following service definition will enable gateways in `local` mode for three different partitions. Note that each service exists in the same namespace, but are separated by admin partition. 
+ + + +```hcl +service { + name = "web-sidecar-proxy" + kind = "connect-proxy" + port = 8181 + proxy { + destination_service_name = "web" + upstreams = [ + { + destination_name = "api" + destination_namespace = "dev" + destination_partition = "api" + local_bind_port = 10000 + mesh_gateway { + mode = "local" + } + }, + { + destination_name = "db" + destination_namespace = "dev" + destination_partition = "db" + local_bind_port = 10001 + mesh_gateway { + mode = "local" + } + }, + { + destination_name = "logging" + destination_namespace = "dev" + destination_partition = "logging" + local_bind_port = 10002 + mesh_gateway { + mode = "local" + } + }, + ] + } +} +``` + +```json +{ + "service": { + "kind": "connect-proxy", + "name": "web-sidecar-proxy", + "port": 8181, + "proxy": { + "destination_service_name": "web", + "upstreams": [ + { + "destination_name": "api", + "destination_namespace": "dev", + "destination_partition": "api", + "local_bind_port": 10000, + "mesh_gateway": { + "mode": "local" + } + }, + { + "destination_name": "db", + "destination_namespace": "dev", + "destination_partition": "db", + "local_bind_port": 10001, + "mesh_gateway": { + "mode": "local" + } + }, + { + "destination_name": "logging", + "destination_namespace": "dev", + "destination_partition": "logging", + "local_bind_port": 10002, + "mesh_gateway": { + "mode": "local" + } + } + ] + } + } +} +``` + diff --git a/website/content/docs/east-west/mesh-gateway/cluster-peer.mdx b/website/content/docs/east-west/mesh-gateway/cluster-peer.mdx new file mode 100644 index 000000000000..db194347428b --- /dev/null +++ b/website/content/docs/east-west/mesh-gateway/cluster-peer.mdx @@ -0,0 +1,139 @@ +--- +layout: docs +page_title: Enable control plane traffic between cluster peers +description: >- + Mesh gateways are specialized proxies that route data between services that cannot communicate directly. Learn how to enable traffic across clusters in different datacenters or admin partitions that have an established peering connection. +--- + +# Enable control plane traffic between cluster peers + +This topic describes how to configure a mesh gateway to route control plane traffic between Consul clusters that share a peer connection. For information about routing service traffic between cluster peers through a mesh gateway, refer to [Enabling Service-to-service Traffic Across Admin Partitions](/consul/docs/east-west/mesh-gateway/admin-partition). + +Control plane traffic between cluster peers includes +the initial secret handshake and the bi-directional stream replicating peering data. +This data is not decrypted by the mesh gateway(s). +Instead, it is transmitted end-to-end using the accepting cluster’s auto-generated TLS certificate on the gRPC TLS port. + + + + +[![Cluster peering with mesh gateways](/img/consul-connect/mesh-gateway/cluster-peering-connectivity-with-mesh-gateways.png)](/img/consul-connect/mesh-gateway/cluster-peering-connectivity-with-mesh-gateways.png) + + + + + +[![Cluster peering without mesh gateways](/img/consul-connect/mesh-gateway/cluster-peering-connectivity-without-mesh-gateways.png)](/img/consul-connect/mesh-gateway/cluster-peering-connectivity-without-mesh-gateways.png) + + + + +## Prerequisites + +To configure mesh gateways for cluster peering control plane traffic, make sure your Consul environment meets the following requirements: + +- Consul version 1.14.0 or newer. +- A local Consul agent in both clusters is required to manage mesh gateway configuration. 
+
+- Use [Envoy proxies](/consul/docs/reference/proxy/envoy). Envoy is the only proxy with mesh gateway capabilities in Consul.
+
+## Configuration
+
+Configure the following settings to register and use the mesh gateway as a service in Consul.
+
+### Gateway registration
+
+Register a mesh gateway in each cluster that will be peered.
+
+- Specify `mesh-gateway` in the `kind` field to register the gateway with Consul.
+- Define the `Proxy.Config` settings using opaque parameters compatible with your proxy. For Envoy, refer to the [Gateway Options](/consul/docs/connect/proxies/envoy#gateway-options) and [Escape-hatch Overrides](/consul/docs/connect/proxies/envoy#escape-hatch-overrides) documentation for additional configuration information.
+- Apply a [Mesh config entry](/consul/docs/reference/config-entry/mesh#peer-through-mesh-gateways) with `PeerThroughMeshGateways = true`. See [modes](#modes) for a discussion of when to apply this.
+
+Alternatively, you can also use the CLI to spin up and register a gateway in Consul. For additional information, refer to the [`consul connect envoy` command](/consul/commands/connect/envoy#mesh-gateways).
+
+For Consul Enterprise clusters, mesh gateways must be registered in the "default" partition because this is implicitly where Consul servers are assigned.
+
+### ACL configuration
+
+
+
+
+In addition to the [ACL Configuration](/consul/docs/connect/cluster-peering/tech-specs#acl-specifications) necessary for service-to-service traffic, mesh gateways that route peering control plane traffic must be granted `peering:read` access to all peerings.
+
+This access allows the mesh gateway to list all peerings in a Consul cluster and generate unique routing per peered datacenter.
+
+
+
+```hcl
+peering = "read"
+```
+
+```json
+{
+  "peering": "read"
+}
+```
+
+
+
+
+
+
+
+In addition to the [ACL Configuration](/consul/docs/connect/cluster-peering/tech-specs#acl-specifications) necessary for service-to-service traffic, mesh gateways that route peering control plane traffic must be granted `peering:read` access to all peerings in all partitions.
+
+This access allows the mesh gateway to list all peerings in a Consul cluster and generate unique routing per peered partition.
+
+
+
+```hcl
+partition_prefix "" {
+  peering = "read"
+}
+```
+
+```json
+{
+  "partition_prefix": {
+    "": {
+      "peering": "read"
+    }
+  }
+}
+```
+
+
+
+
+
+
+### Modes
+
+Connect proxy configuration [Modes](/consul/docs/connect/gateways/mesh-gateway#modes) are not applicable to peering control plane traffic.
+The flow of control plane traffic through the gateway is implied by the presence of a [Mesh config entry](/consul/docs/reference/config-entry/mesh#peer-through-mesh-gateways) with `PeerThroughMeshGateways = true`.
+
+
+
+```hcl
+Kind = "mesh"
+Peering {
+  PeerThroughMeshGateways = true
+}
+```
+
+```yaml
+apiVersion: consul.hashicorp.com/v1alpha1
+kind: Mesh
+metadata:
+  name: mesh
+spec:
+  peering:
+    peerThroughMeshGateways: true
+```
+
+
+By setting this mesh config on a cluster before [creating a peering token](/consul/docs/connect/cluster-peering/usage/establish-cluster-peering#create-a-peering-token), inbound control plane traffic will be sent through the mesh gateway registered in this cluster, also known as the accepting cluster.
+As mesh gateway instances are registered at the accepting cluster, their addresses will be exposed to the dialing cluster over the bi-directional peering stream.
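+
+You can confirm that the entry is active on a cluster before you generate or exchange a peering token. The following command is a sketch; the fields returned depend on your Consul version:
+
+```shell-session
+$ consul config read -kind mesh -name mesh
+```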
+ +Setting this mesh config on a cluster before [establishing a connection](/consul/docs/connect/cluster-peering/usage/establish-cluster-peering#establish-a-connection-between-clusters) will cause the outbound control plane traffic to flow through the mesh gateway. + +To route all peering control plane traffic though mesh gateways, both the accepting and dialing cluster must have the mesh config entry applied. \ No newline at end of file diff --git a/website/content/docs/east-west/mesh-gateway/enable.mdx b/website/content/docs/east-west/mesh-gateway/enable.mdx new file mode 100644 index 000000000000..c16fb9fdecb1 --- /dev/null +++ b/website/content/docs/east-west/mesh-gateway/enable.mdx @@ -0,0 +1,200 @@ +--- +layout: docs +page_title: Enable WAN federation control plane traffic +description: >- + You can use mesh gateways to simplify the networking requirements for WAN federated Consul datacenters. Mesh gateways reduce cross-datacenter connection paths, ports, and communication protocols. +--- + +# Enable WAN federation control plane traffic + +-> **1.8.0+:** This feature is available in Consul versions 1.8.0 and higher + +~> This topic requires familiarity with [mesh gateways](/consul/docs/east-west/mesh-gateway/federation). + +WAN federation via mesh gateways allows for Consul servers in different datacenters +to be federated exclusively through mesh gateways. + +When setting up a +[multi-datacenter](/consul/docs/east-west/wan-federation/create) +Consul cluster, operators must ensure that all Consul servers in every +datacenter must be directly connectable over their WAN-advertised network +address from each other. + +[![WAN federation without mesh gateways](/img/wan-federation-connectivity-traditional.png)](/img/wan-federation-connectivity-traditional.png) + +This requires that operators setting up the virtual machines or containers +hosting the servers take additional steps to ensure the necessary routing and +firewall rules are in place to allow the servers to speak to each other over +the WAN. + +Sometimes this prerequisite is difficult or undesirable to meet: + +- **Difficult:** The datacenters may exist in multiple Kubernetes clusters that + unfortunately have overlapping pod IP subnets, or may exist in different + cloud provider VPCs that have overlapping subnets. + +- **Undesirable:** Network security teams may not approve of granting so many + firewall rules. When using platform autoscaling, keeping rules up to date becomes untenable. + +Operators looking to simplify their WAN deployment and minimize the exposed +security surface area can elect to join these datacenters together using [mesh +gateways](/consul/docs/east-west/mesh-gateway/federation) to do so. + +[![WAN federation with mesh gateways](/img/wan-federation-connectivity-mesh-gateways.png)](/img/wan-federation-connectivity-mesh-gateways.png) + +## Architecture + +There are two main kinds of communication that occur over the WAN link spanning +the gulf between disparate Consul datacenters: + +- **WAN gossip:** We leverage the serf and memberlist libraries to gossip + around failure detector knowledge about Consul servers in each datacenter. + By default this operates point to point between servers over `8302/udp` with + a fallback to `8302/tcp` (which logs a warning indicating the network is + misconfigured). + +- **Cross-datacenter RPCs:** Consul servers expose a special multiplexed port + over `8300/tcp`. 
Several distinct kinds of messages can be received on this
+  port, such as RPC requests forwarded from servers in other datacenters.
+
+In this network topology, individual Consul client agents on a LAN in one
+datacenter never need to directly dial servers in other datacenters. This
+means you could introduce a set of firewall rules prohibiting `10.0.0.0/24`
+from sending any traffic at all to `10.1.2.0/24` for security isolation.
+
+You may already have configured [mesh
+gateways](/consul/tutorials/developer-mesh/service-mesh-gateways)
+to allow services in the service mesh to freely connect between datacenters
+regardless of the lateral connectivity of the nodes hosting the Consul client
+agents.
+
+By activating WAN federation via mesh gateways, the servers
+can similarly use the existing mesh gateways to reach each other without
+themselves being directly reachable.
+
+## Configuration
+
+### TLS
+
+All Consul servers in all datacenters should have TLS configured with certificates containing
+these SAN fields:
+
+    server.<datacenter>.<domain> (normal)
+    <node_name>.server.<datacenter>.<domain> (needed for wan federation)
+
+This can be achieved using any number of tools, including `consul tls cert create` with the `-node` flag.
+
+### Mesh Gateways
+
+There needs to be at least one mesh gateway configured to opt in to exposing
+the servers in its configuration. When using the `consul connect envoy` CLI,
+this is done with the `-expose-servers` flag. All this does is register
+the mesh gateway into the catalog with the additional piece of service metadata
+of `{"consul-wan-federation":"1"}`. If you are registering the mesh gateways
+into the catalog out of band, you may simply add this to your existing
+registration payload.
+
+!> Before activating the feature on an existing cluster, you should ensure that
+there is at least one mesh gateway prepared to expose the servers registered in
+each datacenter; otherwise, the WAN will become only partly connected.
+
+### Consul Server Options
+
+There are a few necessary additional pieces of configuration beyond those
+required for standing up a
+[multi-datacenter](/consul/docs/east-west/wan-federation/create)
+Consul cluster.
+
+Consul servers in the _primary_ datacenter should add this snippet to the
+configuration file:
+
+```hcl
+connect {
+  enabled = true
+  enable_mesh_gateway_wan_federation = true
+}
+```
+
+Consul servers in all _secondary_ datacenters should add this snippet to the
+configuration file:
+
+```hcl
+primary_gateways = [ "<primary-mesh-gateway-ip>:<primary-mesh-gateway-port>", ... ]
+connect {
+  enabled = true
+  enable_mesh_gateway_wan_federation = true
+}
+```
+
+The [`retry_join_wan`](/consul/docs/reference/agent/configuration-file/join#retry_join_wan) addresses are
+only used for the [traditional federation process](/consul/docs/east-west/wan-federation#traditional-wan-federation).
+They must be omitted when federating Consul servers via gateways.
+
+-> The `primary_gateways` configuration can also use `go-discover` syntax just
+like `retry_join_wan`.
+
+### Bootstrapping
+
+For ease of debugging (such as avoiding a flurry of misleading error messages),
+when intending to activate WAN federation via mesh gateways it is best to
+follow this general procedure:
+
+### New secondary
+
+1. Upgrade to the desired version of the consul binary for all servers,
+   clients, and CLI.
+2. Start all consul servers and clients on the new version in the primary
+   datacenter.
+3. Ensure the primary datacenter has at least one running, registered mesh gateway with
+   the service metadata key of `{"consul-wan-federation":"1"}` set.
+4.
Ensure you are _prepared_ to launch corresponding mesh gateways in all + secondaries. When ACLs are enabled actually registering these requires + upstream connectivity to the primary datacenter to authorize catalog + registration. +5. Ensure all servers in the primary datacenter have updated configuration and + restart. +6. Ensure all servers in the secondary datacenter have updated configuration. +7. Start all consul servers and clients on the new version in the secondary + datacenter. +8. When ACLs are enabled, shortly afterwards it should become possible to + resolve ACL tokens from the secondary, at which time it should be possible + to launch the mesh gateways in the secondary datacenter. + +### Existing secondary + +1. Upgrade to the desired version of the consul binary for all servers, + clients, and CLI. +2. Restart all consul servers and clients on the new version. +3. Ensure each datacenter has at least one running, registered mesh gateway with the + service metadata key of `{"consul-wan-federation":"1"}` set. +4. Ensure all servers in the primary datacenter have updated configuration and + restart. +5. Ensure all servers in the secondary datacenter have updated configuration and + restart. + +### Verification + +From any two datacenters joined together double check the following give you an +expected result: + +- Check that `consul members -wan` lists all servers in all datacenters with + their _local_ ip addresses and are listed as `alive`. + +- Ensure any API request that activates datacenter request forwarding. such as + [`/v1/catalog/services?dc=`](/consul/api-docs/catalog#dc-1) + succeeds. + +### Upgrading the primary gateways + +Once federation is established, secondary datacenters will continuously request +updated mesh gateway addresses from the primary datacenter. Consul routes the requests + through the primary datacenter's mesh gateways. This is because +secondary datacenters cannot directly dial the primary datacenter's Consul servers. +If the primary gateways are upgraded, and their previous instances are decommissioned +before the updates are propagated, then the primary datacenter will become unreachable. + +To safely upgrade primary gateways, we recommend that you apply one of the following policies: +- Avoid decommissioning primary gateway IP addresses. This is because the [primary_gateways](/consul/docs/reference/agent/configuration-file/general#primary_gateways) addresses configured on the secondary servers act as a fallback mechanism for re-establishing connectivity to the primary. + +- Verify that addresses of the new mesh gateways in the primary were propagated +to the secondary datacenters before decommissioning the old mesh gateways in the primary. diff --git a/website/content/docs/east-west/mesh-gateway/federation.mdx b/website/content/docs/east-west/mesh-gateway/federation.mdx new file mode 100644 index 000000000000..3580ec15ad7d --- /dev/null +++ b/website/content/docs/east-west/mesh-gateway/federation.mdx @@ -0,0 +1,313 @@ +--- +layout: docs +page_title: Enable service-to-service traffic across WAN-federated datacenters +description: >- + Mesh gateways are specialized proxies that route data between services that cannot communicate directly. Learn how to enable service-to-service traffic across wan-federated datacenters and review example configuration entries. +--- + +# Enable service-to-service traffic across WAN-federated datacenters + +-> **1.6.0+:** This feature is available in Consul versions 1.6.0 and newer. 
+
+Mesh gateways enable service mesh traffic to be routed between different Consul datacenters.
+Datacenters can reside in different clouds or runtime environments where general interconnectivity between all services
+in all datacenters isn't feasible.
+
+Mesh gateways operate by sniffing and extracting the server name indication (SNI) header from the service mesh session and routing the connection to the appropriate destination based on the server name requested. The gateway does not decrypt the data within the mTLS session.
+
+The following diagram describes the architecture for using mesh gateways for cross-datacenter communication:
+
+![Mesh Gateway Architecture](/img/mesh-gateways.png)
+
+-> **Mesh Gateway Tutorial**: Follow the [mesh gateway tutorial](/consul/tutorials/developer-mesh/service-mesh-gateways) to learn important concepts associated with using mesh gateways for connecting services across datacenters.
+
+## Prerequisites
+
+Ensure that your Consul environment meets the following requirements.
+
+### Consul
+
+* Consul version 1.6.0 or newer.
+* A local Consul agent is required to manage its configuration.
+* Consul [service mesh](/consul/docs/reference/agent/configuration-file/service-mesh#connect) must be enabled in both datacenters.
+* Each [datacenter](/consul/docs/reference/agent/configuration-file/general#datacenter) must have a unique name.
+* Each datacenter must be [WAN joined](/consul/tutorials/networking/federation-gossip-wan).
+* The [primary datacenter](/consul/docs/reference/agent/configuration-file/general#primary_datacenter) must be set to the same value in both datacenters. This specifies which datacenter is the authority for service mesh certificates and is required for services in all datacenters to establish mutual TLS with each other.
+* [gRPC](/consul/docs/reference/agent/configuration-file/general#grpc_port) must be enabled.
+* If you want to [enable gateways globally](/consul/docs/connect/gateways/mesh-gateway/service-to-service-traffic-wan-datacenters#enabling-gateways-globally), you must enable [centralized configuration](/consul/docs/reference/agent/configuration-file/general#enable_central_service_config).
+
+### Network
+
+* General network connectivity to all services within its local Consul datacenter.
+* General network connectivity to all mesh gateways within remote Consul datacenters.
+
+### Proxy
+
+Envoy is the only proxy with mesh gateway capabilities in Consul.
+
+Mesh gateway proxies receive their configuration through Consul, which automatically generates it based on the proxy's registration.
+Consul can only translate mesh gateway registration information into Envoy configuration.
+
+Sidecar proxies that send traffic to an upstream service through a gateway need to know the location of that gateway. They discover the gateway based on their sidecar proxy registrations. Consul can only translate the gateway registration information into Envoy configuration.
+
+Sidecar proxies that do not send upstream traffic through a gateway are not affected when you deploy gateways. If you are using Consul's built-in proxy as a service mesh sidecar, it will continue to work for intra-datacenter traffic and will receive incoming traffic even if that traffic has passed through a gateway.
+
+## Configuration
+
+Configure the following settings to register the mesh gateway as a service in Consul.
+
+* Specify `mesh-gateway` in the `kind` field to register the gateway with Consul.
+* Configure the `proxy.upstreams` parameters to route traffic to the correct service, namespace, and datacenter. Refer to the [`upstreams` documentation](/consul/docs/connect/proxies/proxy-config-reference#upstream-configuration-reference) for details. The service `proxy.upstreams.destination_name` is always required. The `proxy.upstreams.datacenter` must be configured to enable cross-datacenter traffic. The `proxy.upstreams.destination_namespace` configuration is only necessary if the destination service is in a different namespace.
+* Define the `Proxy.Config` settings using opaque parameters compatible with your proxy (i.e., Envoy). For Envoy, refer to the [Gateway Options](/consul/docs/connect/proxies/envoy#gateway-options) and [Escape-hatch Overrides](/consul/docs/connect/proxies/envoy#escape-hatch-overrides) documentation for additional configuration information.
+* If ACLs are enabled, a token granting `service:write` for the gateway's service name and `service:read` for all services in the datacenter or partition must be added to the gateway's service definition. These permissions authorize the token to route communications for other Consul service mesh services, but do not allow decrypting any of their communications.
+
+### Modes
+
+Each upstream associated with a service mesh proxy can be configured so that it is routed through a mesh gateway.
+Depending on your network, the proxy's connection to the gateway can operate in one of the following modes (refer to the [mesh-architecture-diagram](#mesh-architecture-diagram)):
+
+* `none` - (Default) No gateway is used and a service mesh sidecar proxy makes its outbound connections directly
+  to the destination services.
+
+* `local` - The service mesh sidecar proxy makes an outbound connection to a gateway running in the
+  same datacenter. That gateway is responsible for ensuring that the data is forwarded to gateways in the destination datacenter.
+  Refer to the flow labeled `local` in the [mesh-architecture-diagram](#mesh-architecture-diagram).
+
+* `remote` - The service mesh sidecar proxy makes an outbound connection to a gateway running in the destination datacenter.
+  The gateway forwards the data to the final destination service.
+  Refer to the flow labeled `remote` in the [mesh-architecture-diagram](#mesh-architecture-diagram).
+
+### Service Mesh Proxy Configuration
+
+Set the proxy to the preferred [mode](#modes) to configure the service mesh proxy. You can specify the mode globally or within child configurations to control proxy behaviors at a lower level. If the gateway mode is configured in multiple locations, Consul uses the following order of precedence:
+
+1. Upstream definition (highest priority)
+2. Service instance definition
+3. Centralized `service-defaults` configuration entry
+4. Centralized `proxy-defaults` configuration entry
+
+## Example Configurations
+
+Use the following example configurations to help you understand some of the common scenarios.
+
+### Enabling Gateways Globally
+
+The following `proxy-defaults` configuration will enable gateways for all mesh services in the `local` mode.
+ + + +```hcl +Kind = "proxy-defaults" +Name = "global" +MeshGateway { + Mode = "local" +} +``` + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: ProxyDefaults +metadata: + name: global +spec: + meshGateway: + mode: local +``` + +```json +{ + "Kind": "proxy-defaults", + "Name": "global", + "MeshGateway": { + "Mode": "local" + } +} +``` + + +### Enabling Gateways Per Service + +The following `service-defaults` configuration will enable gateways for all mesh services with the name `web`. + + + +```hcl +Kind = "service-defaults" +Name = "web" +MeshGateway { + Mode = "local" +} +``` + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: ServiceDefaults +metadata: + name: web +spec: + meshGateway: + mode: local +``` + +```json +{ + "Kind": "service-defaults", + "Name": "web", + "MeshGateway": { + "Mode": "local" + } +} + + + +### Enabling Gateways for a Service Instance + +The following [proxy service configuration](/consul/docs/connect/proxy/mesh) +enables gateways for the service instance in the `remote` mode. + + + +```hcl +service { + name = "web-sidecar-proxy" + kind = "connect-proxy" + port = 8181 + proxy { + destination_service_name = "web" + mesh_gateway { + mode = "remote" + } + upstreams = [ + { + destination_name = "api" + datacenter = "secondary" + local_bind_port = 10000 + } + ] + } +} + +# Or alternatively inline with the service definition: + +service { + name = "web" + port = 8181 + connect { + sidecar_service { + proxy { + mesh_gateway { + mode = "remote" + } + upstreams = [ + { + destination_name = "api" + datacenter = "secondary" + local_bind_port = 10000 + } + ] + } + } + } +} +``` + +```json +{ + "service": { + "kind": "connect-proxy", + "name": "web-sidecar-proxy", + "port": 8181, + "proxy": { + "destination_service_name": "web", + "mesh_gateway": { + "mode": "remote" + }, + "upstreams": [ + { + "destination_name": "api", + "datacenter": "secondary", + "local_bind_port": 10000 + } + ] + } + } +} +``` + + + +### Enabling Gateways for a Proxy Upstream + +The following service definition will enable gateways in the `local` mode for one upstream, the `remote` mode for a second upstream and will disable gateways for a third upstream. 
+ + + +```hcl +service { + name = "web-sidecar-proxy" + kind = "connect-proxy" + port = 8181 + proxy { + destination_service_name = "web" + upstreams = [ + { + destination_name = "api" + local_bind_port = 10000 + mesh_gateway { + mode = "remote" + } + }, + { + destination_name = "db" + local_bind_port = 10001 + mesh_gateway { + mode = "local" + } + }, + { + destination_name = "logging" + local_bind_port = 10002 + mesh_gateway { + mode = "none" + } + }, + ] + } +} +``` +```json +{ + "service": { + "kind": "connect-proxy", + "name": "web-sidecar-proxy", + "port": 8181, + "proxy": { + "destination_service_name": "web", + "upstreams": [ + { + "destination_name": "api", + "local_bind_port": 10000, + "mesh_gateway": { + "mode": "remote" + } + }, + { + "destination_name": "db", + "local_bind_port": 10001, + "mesh_gateway": { + "mode": "local" + } + }, + { + "destination_name": "logging", + "local_bind_port": 10002, + "mesh_gateway": { + "mode": "none" + } + } + ] + } + } +} +``` + diff --git a/website/content/docs/east-west/mesh-gateway/index.mdx b/website/content/docs/east-west/mesh-gateway/index.mdx new file mode 100644 index 000000000000..7604b984f721 --- /dev/null +++ b/website/content/docs/east-west/mesh-gateway/index.mdx @@ -0,0 +1,317 @@ +--- +layout: docs +page_title: Mesh gateways overview +description: >- + Mesh gateways are specialized proxies that route data between services that cannot communicate directly. Learn how mesh gateways are used in different Consul configurations. +--- + +# Mesh gateways overview + +Mesh gateways enable service mesh traffic to be routed between different Consul datacenters. +Datacenters can reside in different clouds or runtime environments where general interconnectivity between all services in all datacenters isn't feasible. + +## Prerequisites + +Mesh gateways can be used with any of the following Consul configurations for managing separate datacenters or partitions. + +1. WAN Federation + * [Mesh gateways can be used to route service-to-service traffic between datacenters](/consul/docs/east-west/mesh-gateway/federation) + * [Mesh gateways can be used to route all WAN traffic, including from Consul servers](/consul/docs/east-west/mesh-gateway/enable) +2. Cluster Peering + * [Mesh gateways can be used to route service-to-service traffic between datacenters](/consul/docs/east-west/cluster-peering/establish/vm) + * [Mesh gateways can be used to route control-plane traffic from Consul servers](/consul/docs/east-west/mesh-gateway/cluster-peer) +3. Admin Partitions + * [Mesh gateways can be used to route service-to-service traffic between admin partitions in the same Consul datacenter](/consul/docs/east-west/mesh-gateway/admin-partition) + +### Consul + +Review the [specific guide](#prerequisites) for your use case to determine the required version of Consul. + +### Network + +* General network connectivity to all services within its local Consul datacenter. +* General network connectivity to all mesh gateways within remote Consul datacenters. + +### Proxy + +Envoy is the only proxy with mesh gateway capabilities in Consul. + +Mesh gateway proxies receive their configuration through Consul, which automatically generates it based on the proxy's registration. +Consul can only translate mesh gateway registration information into Envoy configuration. + +Sidecar proxies that send traffic to an upstream service through a gateway need to know the location of that gateway. They discover the gateway based on their sidecar proxy registrations. 
Consul can only translate the gateway registration information into Envoy configuration.
+
+Sidecar proxies that do not send upstream traffic through a gateway are not affected when you deploy gateways. If you are using Consul's built-in proxy as a Connect sidecar, it will continue to work for intra-datacenter traffic and will receive incoming traffic even if that traffic has passed through a gateway.
+
+## Configuration
+
+Configure the following settings to register the mesh gateway as a service in Consul.
+
+* Specify `mesh-gateway` in the `kind` field to register the gateway with Consul.
+* Configure the `proxy.upstreams` parameters to route traffic to the correct service, namespace, and datacenter. Refer to the [`upstreams` documentation](/consul/docs/reference/proxy/connect-proxy#upstream-configuration-reference) for details. The service `proxy.upstreams.destination_name` is always required. The `proxy.upstreams.datacenter` must be configured to enable cross-datacenter traffic. The `proxy.upstreams.destination_namespace` configuration is only necessary if the destination service is in a different namespace.
+* Define the `Proxy.Config` settings using opaque parameters compatible with your proxy (i.e., Envoy). For Envoy, refer to the [Gateway Options](/consul/docs/reference/proxy/envoy#gateway-options) and [Escape-hatch Overrides](/consul/docs/reference/proxy/envoy#escape-hatch-overrides) documentation for additional configuration information.
+* If ACLs are enabled, a token granting `service:write` for the gateway's service name and `service:read` for all services in the datacenter or partition must be added to the gateway's service definition. These permissions authorize the token to route communications for other Consul service mesh services, but do not allow decrypting any of their communications.
+
+### Modes
+
+Each upstream associated with a service mesh proxy can be configured so that it is routed through a mesh gateway.
+Depending on your network, the proxy's connection to the gateway can operate in one of the following modes:
+
+* `none` - No gateway is used and a service mesh sidecar proxy makes its outbound connections directly
+  to the destination services. This is the default for WAN federation. This setting is invalid for peered clusters
+  and will be treated as `remote` instead.
+
+* `local` - The service mesh sidecar proxy makes an outbound connection to a gateway running in the
+  same datacenter. That gateway is responsible for ensuring that the data is forwarded to gateways in the destination datacenter.
+
+* `remote` - The service mesh sidecar proxy makes an outbound connection to a gateway running in the destination datacenter.
+  The gateway forwards the data to the final destination service. This is the default for peered clusters.
+
+### Service Mesh Proxy Configuration
+
+Set the proxy to the preferred [mode](#modes) to configure the service mesh proxy. You can specify the mode globally or within child configurations to control proxy behaviors at a lower level. If the gateway mode is configured in multiple locations, Consul uses the following order of precedence:
+
+1. Upstream definition (highest priority)
+2. Service instance definition
+3. Centralized `service-defaults` configuration entry
+4. Centralized `proxy-defaults` configuration entry
+
+## Example Configurations
+
+Use the following example configurations to help you understand some of the common scenarios.
+ +### Enabling Gateways Globally + +The following `proxy-defaults` configuration will enable gateways for all mesh services in the `local` mode. + + + +```hcl +Kind = "proxy-defaults" +Name = "global" +MeshGateway { + Mode = "local" +} +``` + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: ProxyDefaults +metadata: + name: global +spec: + meshGateway: + mode: local +``` + +```json +{ + "Kind": "proxy-defaults", + "Name": "global", + "MeshGateway": { + "Mode": "local" + } +} +``` + + + +### Enabling Gateways Per Service + +The following `service-defaults` configuration will enable gateways for all mesh services with the name `web`. + + + +```hcl +Kind = "service-defaults" +Name = "web" +MeshGateway { + Mode = "local" +} +``` + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: ServiceDefaults +metadata: + name: web +spec: + meshGateway: + mode: local +``` + +```json +{ + "Kind": "service-defaults", + "Name": "web", + "MeshGateway": { + "Mode": "local" + } +} +``` + + + +### Enabling Gateways for a Service Instance + +The following [proxy service configuration](/consul/docs/connect/proxy/mesh) + enables gateways for the service instance in the `remote` mode. + + + +```hcl +service { + name = "web-sidecar-proxy" + kind = "connect-proxy" + port = 8181 + proxy { + destination_service_name = "web" + mesh_gateway { + mode = "remote" + } + upstreams = [ + { + destination_name = "api" + datacenter = "secondary" + local_bind_port = 10000 + } + ] + } +} + +# Or alternatively inline with the service definition: + +service { + name = "web" + port = 8181 + connect { + sidecar_service { + proxy { + mesh_gateway { + mode = "remote" + } + upstreams = [ + { + destination_name = "api" + datacenter = "secondary" + local_bind_port = 10000 + } + ] + } + } + } +} +``` + +```json +{ + "service": { + "kind": "connect-proxy", + "name": "web-sidecar-proxy", + "port": 8181, + "proxy": { + "destination_service_name": "web", + "mesh_gateway": { + "mode": "remote" + }, + "upstreams": [ + { + "destination_name": "api", + "datacenter": "secondary", + "local_bind_port": 10000 + } + ] + } + } +} +``` + + + +### Enabling Gateways for a Proxy Upstream + +The following service definition will enable gateways in the `local` mode for one upstream, the `remote` mode for a second upstream and will disable gateways for a third upstream. 
+ + + +```hcl +service { + name = "web-sidecar-proxy" + kind = "connect-proxy" + port = 8181 + proxy { + destination_service_name = "web" + upstreams = [ + { + destination_name = "api" + destination_peer = "cluster-01" + local_bind_port = 10000 + mesh_gateway { + mode = "remote" + } + }, + { + destination_name = "db" + datacenter = "secondary" + local_bind_port = 10001 + mesh_gateway { + mode = "local" + } + }, + { + destination_name = "logging" + datacenter = "secondary" + local_bind_port = 10002 + mesh_gateway { + mode = "none" + } + }, + ] + } +} +``` +```json +{ + "service": { + "kind": "connect-proxy", + "name": "web-sidecar-proxy", + "port": 8181, + "proxy": { + "destination_service_name": "web", + "upstreams": [ + { + "destination_name": "api", + "local_bind_port": 10000, + "mesh_gateway": { + "mode": "remote" + } + }, + { + "destination_name": "db", + "local_bind_port": 10001, + "mesh_gateway": { + "mode": "local" + } + }, + { + "destination_name": "logging", + "local_bind_port": 10002, + "mesh_gateway": { + "mode": "none" + } + } + ] + } + } +} +``` + + + +## Guidance + +The following usage documentation is available to help you use mesh gateways in different deployment strategies: + +- [Enable service-to-service traffic across admin partitions](/consul/docs/east-west/mesh-gateway/federation) +- [Enable service-to-service traffic across cluster peers](/consul/docs/east-west/mesh-gateway/federation) +- [Enable service-to-service traffic across WAN-federated datacenters](/consul/docs/east-west/mesh-gateway/federation) \ No newline at end of file diff --git a/website/content/docs/east-west/network-area.mdx b/website/content/docs/east-west/network-area.mdx new file mode 100644 index 000000000000..48d2c3746965 --- /dev/null +++ b/website/content/docs/east-west/network-area.mdx @@ -0,0 +1,547 @@ +--- +layout: docs +page_title: Federate multiple datacenters with network areas +description: >- + Use network areas for advanced datacenter federation. Network areas specify a relationship between a pair of Consul datacenters. +--- + +# Federate multiple datacenters with network areas + +This topic covers how to federate Consul datacenters using the Consul Enterprise network areas feature. Refer to the [WAN federation](/consul/docs/east-west/wan-federation/vms) documentation to learn how to federate Consul community edition datacenters. + + + +The network area feature documented here requires [Consul Enterprise](https://www.hashicorp.com/products/consul/pricing/). If you have purchased or wish to try out Consul Enterprise, refer to [how to access Consul Enterprise](/consul/docs/enterprise#access-consul-enterprise). + + + +## Network areas introduction + +One of the key features of Consul is its support for multiple datacenters. The architecture of Consul is designed to promote a low coupling of datacenters so that connectivity issues or failure of any datacenter does not impact the availability of Consul in other datacenters. This means each datacenter runs independently, each having a dedicated group of servers and a private LAN gossip pool. + +Network areas specify a relationship between a pair of Consul datacenters. Operators create reciprocal areas on each side of the relationship and then join them together, so a given Consul datacenter can participate in many areas, even when some of the peer areas cannot contact each other. This allows for more flexible relationships between Consul datacenters, such as hub/spoke or more general tree structures. 
+
+
+
+Currently, Consul will only route RPC requests to datacenters it is immediately adjacent to via an area, but future versions of Consul may add routing support.
+
+
+
+### Network areas federation vs. WAN federation
+
+Consul's WAN federation relies on all Consul servers in all datacenters having full connectivity via server RPC (`8300/tcp`) and Serf WAN (`8302/tcp` and `8302/udp`). Securing this setup requires TLS and managing a gossip keyring. With massive Consul deployments, it becomes tricky to support full connectivity between all Consul servers and manage the keyring.
+
+Network areas let you specify relationships between Consul datacenters. You can create network areas on each datacenter and then join them together, so a given Consul datacenter can participate in many areas, even when some of the peer areas cannot contact each other. This allows for more flexible relationships between Consul datacenters, such as hub/spoke or more general tree structures.
+
+Consul routes traffic between network areas via server RPC (`8300/tcp`), so it can be secured with just TLS. Also, network areas do not require full connectivity across all servers in all datacenters, but only between the servers in the two datacenters that are being federated.
+
+We recommend network areas when your architecture layout does not permit connectivity between all Consul servers across all datacenters, or when the data in one datacenter only needs to be accessible from one or a few other datacenters.
+
+You can use network areas alongside Consul's WAN federation and the WAN gossip pool. This lets you migrate seamlessly between the two federation solutions. You can connect a peer datacenter via the WAN gossip pool and a network area at the same time. Consul will forward the RPC requests as long as servers are available in either federation solution.
+
+Due to the relaxed connectivity constraints, some Consul features might not be fully compatible with network areas. If you want to set up [ACL replication](/consul/docs/security/acl/acl-federated-datacenters) or to enable Consul service mesh with CA replication, we recommend that you use WAN gossip federation to leverage all of Consul's latest functionality.
+
+### Network requirements
+
+There are a few networking requirements that must be satisfied for network areas to work.
+
+- All server nodes in the two areas being federated must be able to talk to each other via their server RPC ports (`8300/tcp`).
+- If service discovery is to be used across datacenters, the network must be able to route traffic between IP addresses across regions as well.
+
+Usually, this means that all datacenters must be connected using a VPN or other tunneling mechanism. Consul does not handle VPN or NAT traversal for you. For RPC forwarding to work, the bind address must be accessible from remote nodes.
+
+
+## Configure advanced federation
+
+To get started, follow the [Deployment Guide](/consul/tutorials/production-deploy/deployment-guide) to start each datacenter. After bootstrapping, you should have two datacenters (`dc1` and `dc2`). Note that datacenter names are opaque to Consul. They are labels that help human operators reason about the Consul datacenters.
+
+Create a network area in each datacenter.
+
+
+
+
+Use the [`consul operator area create`](/consul/commands/operator/area#create) command to create the network areas.
+
+Create a network area from the `dc1` datacenter, listing `dc2` as the peer datacenter.
+ + + +```shell-session +$ consul operator area create -peer-datacenter=dc2 +Created area "1c8cd5e6-562c-e0de-e369-a13c6205ffe8" with peer datacenter "dc2"! +``` + + + +Create a network area from the `dc2` datacenter, listing `dc1` as the peer datacenter. + + + +```shell-session +$ consul operator area create -peer-datacenter=dc1 +Created area "5bcb95db-5ae8-5265-3dfa-b2cb3452b093" with peer datacenter "dc1"! +``` + + + + + + +Use the [`/operator/area`](/consul/api-docs/operator/area#create-network-area) endpoint to create the network areas. + +Create the network area on a server in datacenter `dc1`, using `dc2` as the peer datacenter. + + + +```shell-session +$ curl -i -s --request POST \ + -H "Content-Type: application/json" \ + --data '{ "PeerDatacenter": "dc2" }' \ + http://localhost:8500/v1/operator/area +``` + + + +That will output the ID of the created area. + + + +```json +{ + "ID":"1c8cd5e6-562c-e0de-e369-a13c6205ffe8" +} +``` + + + +Create the network area on a server in datacenter `dc2`, using `dc1` as the peer datacenter. + + + +```shell-session +$ curl -i -s --request POST \ + -H "Content-Type: application/json" \ + --data '{ "PeerDatacenter": "dc1" }' \ + http://localhost:8500/v1/operator/area +``` + + + +That will output the ID of the created area. + + + +```json +{ + "ID":"5bcb95db-5ae8-5265-3dfa-b2cb3452b093" +} +``` + + + + + + + +You can now query for the area members. + + + + +Use the [`consul operator area members`](/consul/commands/operator/area#members) command to show Consul server nodes present in network areas. + + + +```shell-session +$ consul operator area members +Area Node Address Status Build Protocol DC RTT +1c8cd5e6-562c-e0de-e369-a13c6205ffe8 consul-server-0.dc1 172.18.0.5:8300 alive 1.20.5+ent 2 dc1 0s +``` + + + +The command will only show local servers until the servers join in a network area. + + + + +Use the [`/operator/area/:uuid/members`](/consul/api-docs/operator/area#list-network-area-members) to list the network area members. + + + +```shell-session +$ curl -s http://127.0.0.1:8500/v1/operator/area/1c8cd5e6-562c-e0de-e369-a13c6205ffe8/members +``` + + + +The query will only show local servers until the servers join in a network area. + + + +```json +[ + { + "ID": "6e4910c1-0259-f737-1ae9-8807b2816d0b", + "Name": "consul-server-0.dc1", + "Addr": "172.18.0.5", + "Port": 8300, + "Datacenter": "dc1", + "Role": "server", + "Build": "1.20.5+ent", + "Protocol": 2, + "Status": "alive", + "RTT": 0 + } +] +``` + + + + + + + +## Join servers + +Consul will automatically make sure that all servers within the datacenter where the area was created are joined to the area using the LAN information. You need to join with at least one Consul server in the other datacenter to complete the area: + + + + +Use the [`consul operator area join`](/consul/commands/operator/area#join) command to join the Consul server in `dc2` into the network area. + + + +```shell-session +$ consul operator area join -peer-datacenter=dc2 172.18.0.3 +Address Joined Error +172.18.0.3 true (none) +``` + + + + + + +Use the [`/operator/area/:uuid/join`](/consul/api-docs/operator/area#join-network-area) endpoint to join an existing network area. + + + +```shell-session +$ curl -s \ + -H "Content-Type: application/json" \ + --request PUT \ + --data '[ "172.18.0.3" ]' \ + http://127.0.0.1:8500/v1/operator/area/1c8cd5e6-562c-e0de-e369-a13c6205ffe8/join + +``` + + + +That will output a list of the servers federated with the join. 
+ + + +```json +[ + { + "Address": "172.18.0.3", + "Joined": true, + "Error": "" + } +] +``` + + + + + + +After the join, the remote Consul servers are now listed as part of the area's members. + + + + +Use the [`consul operator area members`](/consul/commands/operator/area#members) command to show Consul server nodes present in network areas. + + + +```shell-session +$ consul operator area members +Area Node Address Status Build Protocol DC RTT +1c8cd5e6-562c-e0de-e369-a13c6205ffe8 consul-server-0.dc1 172.18.0.5:8300 alive 1.20.5+ent 2 dc1 0s +1c8cd5e6-562c-e0de-e369-a13c6205ffe8 consul-server-1.dc2 172.18.0.3:8300 alive 1.20.5+ent 2 dc2 2.964369ms +``` + + + + + + +Use the [`/operator/area/:uuid/members`](/consul/api-docs/operator/area#list-network-area-members) to list the network area members. + + + +```shell-session +$ curl -s http://127.0.0.1:8500/v1/operator/area/1c8cd5e6-562c-e0de-e369-a13c6205ffe8/members +``` + + + +The command will output all the servers present in the network area. + + + +```json +[ + { + "ID": "6e4910c1-0259-f737-1ae9-8807b2816d0b", + "Name": "consul-server-0.dc1", + "Addr": "172.18.0.5", + "Port": 8300, + "Datacenter": "dc1", + "Role": "server", + "Build": "1.20.5+ent", + "Protocol": 2, + "Status": "alive", + "RTT": 0 + }, + { + "ID": "55e61b05-7648-d0d7-e868-6e56e878e371", + "Name": "consul-server-1.dc2", + "Addr": "172.18.0.3", + "Port": 8300, + "Datacenter": "dc2", + "Role": "server", + "Build": "1.20.5+ent", + "Protocol": 2, + "Status": "alive", + "RTT": 2595381 + } +] +``` + + + + + + +Once the two datacenters are federated, the UI shows a dropdown that lists the known datacenters. + +![Consul UI showing service page with WAN dropdown](/img/east-west/network-area/consul-ui-services-multi_dc_selector-dark.png#dark-theme-only) +![Consul UI showing service page with WAN dropdown](/img/east-west/network-area/consul-ui-services-multi_dc_selector.png#light-theme-only) + + + + +## Route RPCs + +With network area enabled, you can route RPC commands in both directions. The following command will set a KV entry in `dc2` from `dc1`. + + + + +Use the `-datacenter` option to specify the datacenter to use for the command. + + + +```shell-session +$ consul kv put -datacenter=dc2 hello_from_dc1 world +Success! Data written to: hello_from_dc1 +``` + + + +Similarly, you can use the parameter to retrieve data from the other datacenter. + + + +```shell-session +$ consul kv get -datacenter=dc2 hello_from_dc1 +world +``` + + + + + + +Use the `dc` query parameter to specify the datacenter to use for the API call. + + + +```shell-session +$ curl --silent \ + --request PUT \ + --data 'world' \ + http://127.0.0.1:8500/v1/kv/hello_from_dc1?dc=dc2 +``` + + + +The command will output `true` on success. Similarly, you can use the parameter to retrieve data from the other datacenter. + + + +```shell-session +$ curl --silent http://127.0.0.1:8500/v1/kv/hello_from_dc1?dc=dc2 +``` + + + +Notice the response returns the base64 encoded value. + + + +```json +[ + { + "LockIndex": 0, + "Key": "hello_from_dc1", + "Flags": 0, + "Value": "d29ybGQ=", + "Partition": "default", + "Namespace": "default", + "CreateIndex": 3817, + "ModifyIndex": 3817 + } +] +``` + + + + + + +## DNS lookups + +The DNS interface supports federation as well. 
+ + + +```shell-session +$ dig @127.0.0.1 -p 8600 consul.service.dc2.consul + +; <<>> DiG 9.18.28-1~deb12u2-Debian <<>> @127.0.0.1 -p 8600 consul.service.dc2.consul +; (1 server found) +;; global options: +cmd +;; Got answer: +;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 14610 +;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1 + +;; OPT PSEUDOSECTION: +; EDNS: version: 0, flags:; udp: 1232 +;; QUESTION SECTION: +;consul.service.dc2.consul. IN A + +;; ANSWER SECTION: +consul.service.dc2.consul. 0 IN A 172.18.0.3 + +;; Query time: 17 msec +;; SERVER: 127.0.0.1#8600(127.0.0.1) (UDP) +;; WHEN: Thu Apr 17 08:30:40 UTC 2025 +;; MSG SIZE rcvd: 70 +``` + + + +## Delete network areas + +Consul does not provide a command to leave a previously joined network area. To remove the federation between two datacenters, we recommend removing the network area from both datacenters. + + + + +Use the [`consul operator area delete`](/consul/commands/operator/area#delete) command to delete network areas. + + + +```shell-session +$ consul operator area delete -id=5bcb95db-5ae8-5265-3dfa-b2cb3452b093 +Deleted area "5bcb95db-5ae8-5265-3dfa-b2cb3452b093"! +``` + + + +Once you delete the network area from one of the datacenters, Consul removes the federation and the servers are shown as `left` from the other datacenters. + + + +```shell-session +$ consul operator area members +Area Node Address Status Build Protocol DC RTT +1c8cd5e6-562c-e0de-e369-a13c6205ffe8 consul-server-0.dc1 172.18.0.5:8300 alive 1.20.5+ent 2 dc1 0s +1c8cd5e6-562c-e0de-e369-a13c6205ffe8 consul-server-1.dc2 172.18.0.3:8300 left 1.20.5+ent 2 dc2 2.962233ms +``` + + + + + + +Use the [`/operator/area/:uuid`](/consul/api-docs/operator/area#delete-network-area) endpoint to remove the network area. + + + +```shell-session +$ curl --silent \ + --request DELETE \ + http://127.0.0.1:8500/v1/operator/area/5bcb95db-5ae8-5265-3dfa-b2cb3452b093 +``` + + + +Once you delete the network area from one of the datacenters, Consul removes the federation and the servers are shown as `left` from the other datacenters. + + + +```shell-session +$ curl -s http://127.0.0.1:8500/v1/operator/area/1c8cd5e6-562c-e0de-e369-a13c6205ffe8/members +``` + + + +The server is now shown as `left` from the other datacenters. + + + +```json +[ + { + "ID": "55e61b05-7648-d0d7-e868-6e56e878e371", + "Name": "consul-server-1.dc2", + "Addr": "172.18.0.3", + "Port": 8300, + "Datacenter": "dc2", + "Role": "server", + "Build": "1.20.5+ent", + "Protocol": 2, + "Status": "left", + "RTT": 2962233 + }, + { + "ID": "6e4910c1-0259-f737-1ae9-8807b2816d0b", + "Name": "consul-server-0.dc1", + "Addr": "172.18.0.5", + "Port": 8300, + "Datacenter": "dc1", + "Role": "server", + "Build": "1.20.5+ent", + "Protocol": 2, + "Status": "alive", + "RTT": 0 + } +] +``` + + + + + + +## Data replication + +In general, Consul does not replicate data between different Consul datacenters. When a request is made for a resource in another datacenter, the local Consul servers forward an RPC request to the remote Consul servers for that resource and return the results. If the remote datacenter is not available, those resources will also not be available. However, this will not affect the local datacenter. + +There are some special situations where a limited subset of data can be replicated, such as with Consul's built-in ACL replication capability, or external tools like [`consul-replicate`](https://github.com/hashicorp/consul-replicate/). 
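+
+If you do rely on Consul's built-in ACL replication alongside network areas, a quick way to confirm it is running is to query the [`/v1/acl/replication`](/consul/api-docs/acl#check-acl-replication) endpoint on a server in the downstream datacenter. The following sketch assumes the HTTP API is reachable locally on the default port:
+
+```shell-session
+$ curl --silent http://127.0.0.1:8500/v1/acl/replication
+```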
diff --git a/website/content/docs/east-west/vm.mdx b/website/content/docs/east-west/vm.mdx new file mode 100644 index 000000000000..be751a8886db --- /dev/null +++ b/website/content/docs/east-west/vm.mdx @@ -0,0 +1,66 @@ +--- +layout: docs +page_title: Link service network east/west on virtual machines (VMs) +description: >- + This topic provides an overview of linking segments of your service mesh in an east/west direction on virtual machines (VMs). You can extend your service mesh across regions, runtimes, and cloud providers with cluster peering or WAN federation. +--- + +# Link service network east/west on virtual machines (VMs) + +This topic provides an overview of the strategies and processes for linking defined segments of your service mesh to extend east/west operations across cloud regions, runtimes, and platforms when running the Consul binary on VMs. Linking network segments into an extended service mesh enables advanced strategies for deploying and monitoring service operations in your network. + +## Introduction + +Consul supports two general strategies for extending east/west service mesh traffic across your network: + +- Cluster peering +- Wide Area Network (WAN) federation + +Consul community edition supports basic cluster peering and federation scenarios. Implementing advanced scenarios such as federated network areas and cluster peering between multiple admin partitions in datacenters require Consul Enterprise. Refer to [Consul Enterprise](/consul/docs/enterprise) for more information. + +## Cluster peering + +@include 'text/descriptions/cluster-peering.mdx' + +Refer to the following pages for guidance about using cluster peering with Consul on VMs: + +- [Establish cluster peering connections on VMs](/consul/docs/east-west/cluster-peering/establish/vm) +- [Manage cluster peering connections on VMs](/consul/docs/east-west/cluster-peering/manage/vm) + +## WAN federation + +@include 'text/descriptions/wan-federation.mdx' + +Refer to the following pages for guidance about using WAN federation with Consul on VMs: + +- [WAN federation between VMs](/consul/docs/east-west/wan-federation/create) +- [WAN federation between virtual machines and Kubernetes clusters](/consul/docs/east-west/wan-federation/k8s-vm) + +## Federated network areas + +@include 'text/descriptions/network-area.mdx' + +## Secure communication with mesh gateways + +@include 'text/descriptions/mesh-gateway.mdx' + +Refer to the following pages for guidance about using mesh gateways with Consul: + +- [WAN federation with mesh gateways](/consul/docs/east-west/mesh-gateway/federation) +- [Cluster peering with mesh gateways](/consul/docs/east-west/mesh-gateway/cluster-peer) + +## Reference documentation + +For reference material related to the processes for extending your service mesh by linking segments of your network, refer to the following pages: + +- [CLI reference: `consul join` command](/consul/commands/join) +- [CLI reference: `consul operator area` command](/consul/commands/operator/area) +- [CLI reference: `peering` command](/consul/commands/peering) +- [HTTP API reference: `/operator/area` endpoint](/consul/api-docs/operator/area) +- [HTTP API reference: `/peering` endpoint](/consul/api-docs/peering) +- [Mesh gateway configuration reference](/consul/docs/reference/proxy/connect-proxy) +- [Proxy defaults configuration reference](/consul/docs/reference/config-entry/proxy-defaults) + +## Constraints, limitations, and troubleshooting + +@include 'text/limitations/east-west.mdx' \ No newline at end of file diff 
--git a/website/content/docs/east-west/wan-federation/index.mdx b/website/content/docs/east-west/wan-federation/index.mdx new file mode 100644 index 000000000000..57e11b22088b --- /dev/null +++ b/website/content/docs/east-west/wan-federation/index.mdx @@ -0,0 +1,87 @@ +--- +layout: docs +page_title: WAN Federation Overview +description: >- + Federating Consul datacenters enables agents to engage in WAN communication across runtimes and cloud providers. Learn about multi-cluster federation and its network requirements for Consul. +--- + +# WAN Federation Overview + +In Consul, federation is the act of joining two or more Consul datacenters. +When datacenters are joined, Consul servers in each datacenter can communicate +with one another. This enables the following features: + +- Services on all clusters can make calls to each other through Consul Service Mesh. +- [Intentions](/consul/docs/secure-mesh/intention) can be used to enforce rules about which services can communicate across all clusters. +- [L7 Routing Rules](/consul/docs/manage-traffic) can enable multi-cluster failover + and traffic splitting. +- The Consul UI has a drop-down menu that lets you navigate between datacenters. + +## Traditional WAN Federation vs. WAN Federation Via Mesh Gateways + +Consul provides two mechanisms for WAN (Wide Area Network) federation: + +1. Traditional WAN Federation +1. WAN Federation Via Mesh Gateways (newly available in Consul 1.8.0) + +### Traditional WAN Federation + +With traditional WAN federation, all Consul servers must be exposed on the wide area +network. In the Kubernetes context this is often difficult to set up. It would require that +each Consul server pod is running on a Kubernetes node with an IP address that is routable from +all other Kubernetes clusters. Often Kubernetes clusters are deployed into private +subnets that other clusters cannot route to without additional network devices and configuration. + +The Kubernetes solution to the problem of exposing pods is load balancer services but these can't be used +with traditional WAN federation because it requires proxying both UDP and TCP and Kubernetes load balancers only proxy TCP. +In addition, each Consul server would need its own load balancer because each +server needs a unique address. This would increase cost and complexity. + +![Traditional WAN Federation](/img/traditional-wan-federation.png 'Traditional WAN Federation') + +### WAN Federation Via Mesh Gateways + +To solve the problems that occurred with traditional WAN federation, +Consul 1.8.0 now supports WAN federation **via mesh gateways**. This mechanism +only requires that mesh gateways are exposed with routable addresses, not Consul servers. We can front +the mesh gateway pods with a single Kubernetes service and all traffic flows between +datacenters through the mesh gateways. + +![WAN Federation Via Mesh Gateway](/img/mesh-gateway-wan-federation.png 'WAN Federation Via Mesh Gateway') + +## Network Requirements + +Clusters/datacenters can be federated even if they have overlapping pod IP spaces or if they're +on different cloud providers or platforms. Kubernetes clusters can even be +federated with Consul datacenters running on virtual machines (and vice versa). +Because the communication between clusters is end-to-end encrypted, mesh gateways +can even be exposed on the public internet. + +There are three networking requirements: +1. 
When Consul servers in secondary datacenters first start up, they must be able to make calls directly to the + primary datacenter's mesh gateways. +1. Once the Consul servers in secondary datacenters have made that initial call to the primary datacenter's mesh + gateways, the mesh gateways in the secondary datacenter will be able to start. From this point onwards, all + communication between servers will flow first to the local mesh gateways, and then to the remote mesh gateways. + This means all mesh gateways across datacenters must be able to route to one another. + + For example, if using a load balancer service in front of each cluster's mesh gateway pods, the load balancer IP + must be routable from the other mesh gateway pods. + If using a public load balancer, this is guaranteed. If using a private load balancer + then you'll need to make sure that its IP/DNS address is routable from your other clusters. +1. If ACLs are enabled, primary clusters must be able to make requests to the Kubernetes API URLs of secondary clusters. + +## Next Steps + +Now that you have an overview of federation, proceed to either the +[Federation Between Kubernetes Clusters](/consul/docs/east-west/wan-federation/k8s) +or [Federation Between VMs and Kubernetes](/consul/docs/east-west/wan-federation/k8s-vm) +pages depending on your use case. + +## Guidance + +The following usage documentation is available to help you use WAN federation: + +- [WAN federation between VMs](/consul/docs/east-west/wan-federation/vms) +- [WAN federation between virtual machines and Kubernetes clusters](/consul/docs/east-west/wan-federation/k8s-vm) +- [WAN federation between multiple Kubernetes clusters](/consul/docs/east-west/wan-federation/k8s) \ No newline at end of file diff --git a/website/content/docs/east-west/wan-federation/k8s-vm.mdx b/website/content/docs/east-west/wan-federation/k8s-vm.mdx new file mode 100644 index 000000000000..5467dbee6c7b --- /dev/null +++ b/website/content/docs/east-west/wan-federation/k8s-vm.mdx @@ -0,0 +1,403 @@ +--- +layout: docs +page_title: WAN Federation Through Mesh Gateways - VMs and Kubernetes +description: >- + WAN federation through mesh gateways extends service mesh deployments by enabling Consul on Kubernetes to securely communicate with instances on VMs. Learn how to configure multi-cluster federation with k8s as either the primary or secondary datacenter. +--- + +# WAN Federation Between VMs and Kubernetes Through Mesh Gateways + +-> **1.8.0+:** This feature is available in Consul versions 1.8.0 and higher + +~> This topic requires familiarity with [Mesh Gateways](/consul/docs/east-west/mesh-gateway/federation) and [WAN Federation Via Mesh Gateways](/consul/docs/east-west/mesh-gateway/enable). + +This page describes how to federate Consul clusters separately deployed in VM and Kubernetes runtimes. Refer to [Multi-Cluster Overview](/consul/docs/east-west/wan-federation) +for more information, including [Kubernetes networking requirements](/consul/docs/east-west/wan-federation#network-requirements). + +Consul datacenters running on non-kubernetes platforms like VMs or bare metal can +be federated with Kubernetes datacenters. + +## Kubernetes as the Primary + +One Consul datacenter must be the [primary](/consul/docs/east-west/wan-federation/k8s#primary-datacenter). If your primary datacenter is running on Kubernetes, use the Helm config from the [Primary Datacenter](/consul/docs/east-west/wan-federation/k8s#primary-datacenter) section to install Consul. 
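+
+As a rough sketch only (the authoritative values are in the linked section; the settings below are illustrative and assume the official `hashicorp/consul` Helm chart), a federation-capable primary typically enables TLS, mesh gateways, and creation of the federation secret:
+
+```yaml
+global:
+  name: consul
+  datacenter: dc1
+  tls:
+    enabled: true
+  federation:
+    enabled: true
+    createFederationSecret: true
+connectInject:
+  enabled: true
+meshGateway:
+  enabled: true
+```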
+ +Once installed on Kubernetes, and with the `ProxyDefaults` [resource created](/consul/docs/east-west/wan-federation/k8s#proxydefaults), +you'll need to export the following information from the primary Kubernetes cluster: + +- Certificate authority cert and key (in order to create SSL certs for VMs) +- External addresses of Kubernetes mesh gateways +- Replication ACL token +- Gossip encryption key + +The following sections detail how to export this data. + +### Certificates + +1. Retrieve the certificate authority cert: + + ```sh + kubectl get secrets/consul-ca-cert --namespace consul --template='{{index .data "tls.crt" | base64decode }}' > consul-agent-ca.pem + ``` + +1. And the certificate authority signing key: + + ```sh + kubectl get secrets/consul-ca-key --namespace consul --template='{{index .data "tls.key" | base64decode }}' > consul-agent-ca-key.pem + ``` + +1. With the `consul-agent-ca.pem` and `consul-agent-ca-key.pem` files you can + create certificates for your servers and clients running on VMs that share the + same certificate authority as your Kubernetes servers. + + You can use the `consul tls` commands to generate those certificates: + + ```sh + # NOTE: consul-agent-ca.pem and consul-agent-ca-key.pem must be in the current + # directory. + $ consul tls cert create -server -dc=vm-dc -node + ==> WARNING: Server Certificates grants authority to become a + server and access all state in the cluster including root keys + and all ACL tokens. Do not distribute them to production hosts + that are not server nodes. Store them as securely as CA keys. + ==> Using consul-agent-ca.pem and consul-agent-ca-key.pem + ==> Saved vm-dc-server-consul-0.pem + ==> Saved vm-dc-server-consul-0-key.pem + ``` + + -> Note the `-node` option in the above command. This should be same as the node name of the [Consul Agent](/consul/docs/agent#running-an-agent). This is a [requirement](/consul/docs/east-west/mesh-gateway/enable#tls) for Consul Federation to work. Alternatively, if you plan to use the same certificate and key pair on all your Consul server nodes, or you don't know the nodename in advance, use `-node "*"` instead. + Not satisfying this requirement would result in the following error in the Consul Server logs: + `[ERROR] agent.server.rpc: TLS handshake failed: conn=from= error="remote error: tls: bad certificate"` + + See the help for output of `consul tls cert create -h` to see more options + for generating server certificates. + +1. These certificates can be used in your server config file: + + + + ```hcl + tls { + defaults { + cert_file = "vm-dc-server-consul-0.pem" + key_file = "vm-dc-server-consul-0-key.pem" + ca_file = "consul-agent-ca.pem" + } + } + ``` + + + +1. For clients, you can generate TLS certs with: + + ```shell-session + $ consul tls cert create -client + ==> Using consul-agent-ca.pem and consul-agent-ca-key.pem + ==> Saved dc1-client-consul-0.pem + ==> Saved dc1-client-consul-0-key.pem + ``` + + Or use the [auto_encrypt](/consul/docs/reference/agent/configuration-file/encryption#auto_encrypt) feature. 
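+
+As a sketch (the file names come from the example output above; substitute the certificates you actually generated), a client agent on a VM can reference these files in its configuration, or instead enable `auto_encrypt` so the servers distribute client certificates automatically:
+
+```hcl
+tls {
+  defaults {
+    ca_file   = "consul-agent-ca.pem"
+    cert_file = "dc1-client-consul-0.pem"
+    key_file  = "dc1-client-consul-0-key.pem"
+  }
+}
+
+# Alternatively, on client agents only:
+# auto_encrypt {
+#   tls = true
+# }
+```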
+ +### Mesh Gateway Addresses + +Retrieve the WAN addresses of the mesh gateways: + +```shell-session +$ kubectl exec statefulset/consul-server --namespace consul -- sh -c \ + 'curl --silent --insecure https://localhost:8501/v1/catalog/service/mesh-gateway | jq ".[].ServiceTaggedAddresses.wan"' +{ + "Address": "1.2.3.4", + "Port": 443 +} +{ + "Address": "1.2.3.4", + "Port": 443 +} +``` + +In this example, the addresses are the same because both mesh gateway pods are +fronted by the same Kubernetes load balancer. + +These addresses will be used in the server config for the `primary_gateways` +setting: + +```hcl +primary_gateways = ["1.2.3.4:443"] +``` + +### Replication ACL Token + +If ACLs are enabled, you'll also need the replication ACL token: + +```shell-session +$ kubectl get secrets/consul-acl-replication-acl-token --namespace consul --template='{{.data.token | base64decode}}' +e7924dd1-dc3f-f644-da54-81a73ba0a178 +``` + +This token will be used in the server config for the replication token. + +```hcl +acls { + tokens { + replication = "e7924dd1-dc3f-f644-da54-81a73ba0a178" + } +} +``` + +You need to set up additional ACL tokens as needed by the ACL system. Refer to the [ACLs](/consul/docs/secure/acl) docs +for more information. + +### Gossip Encryption Key + +If gossip encryption is enabled, you'll need the key as well. The command +to retrieve the key will depend on which Kubernetes secret you've stored it in. + +This key will be used in server and client configs for the `encrypt` setting: + +```hcl +encrypt = "uF+GsbI66cuWU21kiXLze5JLEX5j4iDFlDTb0ZWNpDI=" +``` + +### Final Configuration + +A final example server config file might look like: + +```hcl +# From above +tls { + defaults { + cert_file = "vm-dc-server-consul-0.pem" + key_file = "vm-dc-server-consul-0-key.pem" + ca_file = "consul-agent-ca.pem" + } + + internal_rpc { + verify_incoming = true + verify_outgoing = true + verify_server_hostname = true + } +} +primary_gateways = ["1.2.3.4:443"] +acl { + enabled = true + default_policy = "deny" + down_policy = "extend-cache" + tokens { + agent = "e7924dd1-dc3f-f644-da54-81a73ba0a178" + replication = "e7924dd1-dc3f-f644-da54-81a73ba0a178" + } +} +encrypt = "uF+GsbI66cuWU21kiXLze5JLEX5j4iDFlDTb0ZWNpDI=" + +# Other server settings +server = true +datacenter = "vm-dc" +data_dir = "/opt/consul" +enable_central_service_config = true +primary_datacenter = "dc1" +connect { + enabled = true + enable_mesh_gateway_wan_federation = true +} +ports { + https = 8501 + http = -1 + grpc = 8502 +} +``` + +## Kubernetes as the Secondary + +If you're running your primary datacenter on VMs then you'll need to manually +construct the [Federation Secret](/consul/docs/east-west/wan-federation/k8s#federation-secret) in order to federate +Kubernetes clusters as secondaries. In addition, primary clusters must be able to make requests to the Kubernetes API URLs of secondary clusters when ACLs are enabled. + +-> Your VM cluster must be running mesh gateways, and have mesh gateway WAN +federation enabled. See [WAN Federation via Mesh Gateways](/consul/docs/east-west/mesh-gateway/enable). + +You'll need: + +1. The root certificate authority cert placed in `consul-agent-ca.pem`. +1. The root certificate authority key placed in `consul-agent-ca-key.pem`. +1. The IP addresses of the mesh gateways running in your VM datacenter. These must + be routable from the Kubernetes cluster. +1. 
If ACLs are enabled you must create an ACL replication token with the following rules: + + ```hcl + acl = "write" + operator = "write" + agent_prefix "" { + policy = "read" + } + node_prefix "" { + policy = "write" + } + service_prefix "" { + policy = "read" + intentions = "read" + } + ``` + + This token is used for ACL replication and for automatic ACL management in Kubernetes. + + If you're running Consul Enterprise you'll need the rules: + + ```hcl + operator = "write" + agent_prefix "" { + policy = "read" + } + node_prefix "" { + policy = "write" + } + namespace_prefix "" { + acl = "write" + service_prefix "" { + policy = "read" + intentions = "read" + } + } + ``` +1. If ACLs are enabled you must also modify the [anonymous token](/consul/docs/secure/acl/token#anonymous-token) policy to have the following permissions: + + ```hcl + node_prefix "" { + policy = "read" + } + service_prefix "" { + policy = "read" + } + ``` + + With Consul Enterprise, use: + + ```hcl + partition_prefix "" { + namespace_prefix "" { + node_prefix "" { + policy = "read" + } + service_prefix "" { + policy = "read" + } + } + } + ``` + + These permissions are needed to allow cross-datacenter requests. To make a cross-dc request the sidecar proxy in the originating DC needs to know about the + services running in the remote DC. To do so, it needs an ACL token that allows it to look up the services in the remote DC. The way tokens are created in + Kubernetes, the sidecar proxies have local ACL tokens–i.e tokens that are only valid in the local DC. When a request goes from one DC to another, if the + request has a local token, it is stripped from the request because the remote DC won't be able to validate it. When the request lands in the other DC, + it has no ACL token and so will be subject to the anonymous token policy. This is why the anonymous token policy must be configured to allow read access + to all services. When the Kubernetes DC is the primary, this is handled automatically, but when the primary DC is on VMs, this must be configured manually. + + To configure the anonymous token policy, first create a policy with the above rules, then attach it to the anonymous token. For example using the CLI: + + ```sh + echo 'node_prefix "" { + policy = "read" + } + service_prefix "" { + policy = "read" + }' | consul acl policy create -name anonymous -rules - + + consul acl token update -id 00000000-0000-0000-0000-000000000002 -policy-name anonymous + ``` + +1. If gossip encryption is enabled, you'll need the key. + +With that data ready, you can create the Kubernetes federation secret: + +```sh +kubectl create secret generic consul-federation \ + --from-literal=caCert=$(cat consul-agent-ca.pem) \ + --from-literal=caKey=$(cat consul-agent-ca-key.pem) + # If ACLs are enabled uncomment. + # --from-literal=replicationToken="" \ + # If using gossip encryption uncomment. + # --from-literal=gossipEncryptionKey="" +``` + +If ACLs are enabled, you must next determine the Kubernetes API URL for the secondary cluster. The API URL of the +must be specified in the config files for all secondary clusters because secondary clusters need +to create global Consul ACL tokens (tokens that are valid in all datacenters) and these tokens can only be created +by the primary datacenter. 
By setting the API URL, the secondary cluster will configure a [Consul auth method](/consul/docs/secure/acl/auth-method) +in the primary cluster so that components in the secondary cluster can use their Kubernetes ServiceAccount tokens +to retrieve global Consul ACL tokens from the primary. + +To determine the Kubernetes API URL, first get the cluster name in your kubeconfig: + +```shell-session +$ export CLUSTER=$(kubectl config view -o jsonpath="{.contexts[?(@.name == \"$(kubectl config current-context)\")].context.cluster}") +``` + +Then get the API URL: + +```shell-session +$ kubectl config view -o jsonpath="{.clusters[?(@.name == \"$CLUSTER\")].cluster.server}" +https:// +``` + +You'll use this URL when setting `global.federation.k8sAuthMethodHost`. + +Then use the following Helm config file: + +```yaml +global: + name: consul + datacenter: dc2 + tls: + enabled: true + caCert: + secretName: consul-federation + secretKey: caCert + caKey: + secretName: consul-federation + secretKey: caKey + + # Delete this acls section if ACLs are disabled. + acls: + manageSystemACLs: true + replicationToken: + secretName: consul-federation + secretKey: replicationToken + + federation: + enabled: true + k8sAuthMethodHost: + primaryDatacenter: dc1 + + # Delete this gossipEncryption section if gossip encryption is disabled. + gossipEncryption: + secretName: consul-federation + secretKey: gossipEncryptionKey + +connectInject: + enabled: true +meshGateway: + enabled: true +server: + extraConfig: | + { + "primary_gateways": ["", "", ...] + } +``` + +Notes: + +1. You must fill out the `server.extraConfig` section with the IPs of your mesh +gateways running on VMs. +1. Set `global.federation.k8sAuthMethodHost` to the Kubernetes API URL of this cluster (including `https://`). +1. `global.federation.primaryDatacenter` should be set to the name of your primary datacenter. + +With your config file ready to go, follow our [Installation Guide](/consul/docs/deploy/server/k8s/helm) +to install Consul on your secondary cluster(s). + +After installation, if you're using consul-helm 0.30.0+, [create the +`ProxyDefaults` resource](/consul/docs/east-west/wan-federation/k8s#proxydefaults) +to allow traffic between datacenters. + +## Next Steps + +In both cases (Kubernetes as primary or secondary), after installation, follow the [Verifying Federation](/consul/docs/east-west/wan-federation/k8s#verifying-federation) +section to verify that federation is working as expected. diff --git a/website/content/docs/east-west/wan-federation/k8s.mdx b/website/content/docs/east-west/wan-federation/k8s.mdx new file mode 100644 index 000000000000..6b4fbad32ef8 --- /dev/null +++ b/website/content/docs/east-west/wan-federation/k8s.mdx @@ -0,0 +1,378 @@ +--- +layout: docs +page_title: WAN Federation Through Mesh Gateways - Multiple Kubernetes Clusters +description: >- + WAN federation through mesh gateways enables federating multiple Kubernetes clusters in Consul. Learn how to configure primary and secondary datacenters, export a federation secret, get the k8s API URL, and verify federation. +--- + +# WAN Federation Between Multiple Kubernetes Clusters Through Mesh Gateways + +-> **1.8.0+:** This feature is available in Consul versions 1.8.0 and higher + +~> This topic requires familiarity with [Mesh Gateways](/consul/docs/east-west/mesh-gateway/federation) and [WAN Federation Via Mesh Gateways](/consul/docs/east-west/mesh-gateway/enable). + +-> Looking for a step-by-step guide? 
Complete the [Secure and Route Service Mesh Communication Across Kubernetes](/consul/tutorials/kubernetes/kubernetes-mesh-gateways?utm_source=docs) tutorial to learn more. + +This page describes how to federate multiple Kubernetes clusters. Refer to [Multi-Cluster Overview](/consul/docs/east-west/wan-federation) for more information, including [networking requirements](/consul/docs/east-west/wan-federation#network-requirements). + +## Primary Datacenter + +Consul treats each Kubernetes cluster as a separate Consul datacenter. In order to federate clusters, one cluster must be designated the primary datacenter. This datacenter will be responsible for creating the certificate authority that signs the TLS certificates that Consul service mesh uses to encrypt and authorize traffic. It also handles validating global ACL tokens. All other clusters that are federated are considered secondaries. + +#### First Time Installation + +If you haven't installed Consul on your cluster, continue reading below. If you've already installed Consul on a cluster and want to upgrade it to support federation, see [Upgrading An Existing Cluster](#upgrading-an-existing-cluster). + +You will need to use the following `values.yaml` file for your primary cluster, with the possible modifications listed below. + + + +```yaml +global: + name: consul + datacenter: dc1 + + # TLS configures whether Consul components use TLS. + tls: + # TLS must be enabled for federation in Kubernetes. + enabled: true + + federation: + enabled: true + # This will cause a Kubernetes secret to be created that + # can be imported by secondary datacenters to configure them + # for federation. + createFederationSecret: true + + acls: + manageSystemACLs: true + # If ACLs are enabled, we must create a token for secondary + # datacenters to replicate ACLs. + createReplicationToken: true + + # Gossip encryption secures the protocol Consul uses to quickly + # discover new nodes and detect failure. + gossipEncryption: + autoGenerate: true + +connectInject: + # Consul Connect service mesh must be enabled for federation. + enabled: true + +meshGateway: + # Mesh gateways are gateways between datacenters. They must be enabled + # for federation in Kubernetes since the communication between datacenters + # goes through the mesh gateways. + enabled: true +``` + + + +Modifications: + +1. The Consul datacenter name is `dc1`. The datacenter name in each federated cluster **must be unique**. +1. ACLs are enabled in the template configuration. When ACLs are enabled, primary clusters must be able to make requests to the Kubernetes API URLs of secondary clusters. To disable ACLs for testing purposes, change the following settings: + + ```yaml + global: + acls: + manageSystemACLs: false + createReplicationToken: false + ``` + + ACLs secure Consul by requiring every API call to present an ACL token that is validated to ensure it has the proper permissions. +1. Gossip encryption is enabled in the above config file. To disable it, comment out or delete the `gossipEncryption` key: + + ```yaml + global: + # gossipEncryption: + # autoGenerate: true + ``` + + Gossip encryption encrypts the communication layer used to discover other nodes in the cluster and report on failure. If you are only testing Consul, this is not required. + +1. The default mesh gateway configuration creates a Kubernetes Load Balancer service. 
If you wish to customize the mesh gateway, for example using a Node Port service or a custom DNS entry, see the [Helm reference](/consul/docs/reference/k8s/helm#v-meshgateway) for that setting. + +With your `values.yaml` ready to go, follow our [Installation Guide](/consul/docs/deploy/server/k8s/helm) to install Consul on your primary cluster. + +-> **NOTE:** You must be using consul-helm 0.21.0+. To update, run `helm repo update`. + +#### Upgrading An Existing Cluster + +If you have an existing cluster, you will need to upgrade it to ensure it has the following config: + + + +```yaml +global: + tls: + enabled: true + federation: + enabled: true + createFederationSecret: true + acls: + manageSystemACLs: true + createReplicationToken: true +meshGateway: + enabled: true +``` + + + +1. `global.tls.enabled` must be `true`. See [Configuring TLS on an Existing Cluster](/consul/docs/secure-mesh/certificate/existing) for more information on safely upgrading a cluster to use TLS. + +If you've set `enableAutoEncrypt: true`, this is also supported. + +1. `global.federation.enabled` must be set to `true`. This is a new config setting. +1. If using ACLs, you'll already have `global.acls.manageSystemACLs: true`. For the primary cluster, you'll also need to set `global.acls.createReplicationToken: true`. This ensures that an ACL token is created that secondary clusters can use to authenticate with the primary. +1. Mesh Gateways are enabled with the default configuration. The default configuration creates a Kubernetes Load Balancer service. If you wish to customize the mesh gateway, see the [Helm reference](/consul/docs/reference/k8s/helm#v-meshgateway) for that setting. + +With the above settings added to your existing config, follow the [Upgrading](/consul/docs/upgrade/k8s) guide to upgrade your cluster and then come back to the [Federation Secret](#federation-secret) section. + +-> **NOTE:** You must be using consul-helm 0.21.0+. + +#### ProxyDefaults + +If you are using consul-helm 0.30.0+ you must also create a [`ProxyDefaults`](/consul/docs/reference/config-entry/proxy-defaults) resource to configure Consul to use the mesh gateways for service mesh traffic. + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: ProxyDefaults +metadata: + name: global +spec: + meshGateway: + mode: 'local' +``` + +The `spec.meshGateway.mode` can be set to `local` or `remote`. If set to `local`, traffic from one datacenter to another will egress through the local mesh gateway. This may be useful if you prefer all your cross-cluster network traffic to egress from the same locations. If set to `remote`, traffic will be routed directly from the pod to the remote mesh gateway (resulting in one less hop). + +Verify that the resource was synced to Consul: + +```shell-session +$ kubectl get proxydefaults global +NAME SYNCED AGE +global True 1s +``` + +Its `SYNCED` status should be `True`. + +-> **NOTE:** The `ProxyDefaults` resource can be created in any namespace, but we recommend creating it in the same namespace that Consul is installed in. Its name must be `global`. + +## Federation Secret + +The federation secret is a Kubernetes secret containing information needed for secondary datacenters/clusters to federate with the primary. 
This secret is created automatically by setting: + + + +```yaml +global: + federation: + createFederationSecret: true +``` + + + +After the installation into your primary cluster you will need to export this secret: + +```shell-session +$ kubectl get secret consul-federation --namespace consul --output yaml > consul-federation-secret.yaml +``` + +!> **Security note:** The federation secret makes it possible to gain full admin privileges in Consul. This secret must be kept securely, i.e. it should be deleted from your filesystem after importing it to your secondary cluster and you should use RBAC permissions to ensure only administrators can read it from Kubernetes. + +~> **Secret doesn't exist?** If you haven't set `global.name` to `consul` then the name of the secret will be your Helm release name suffixed with `-consul-federation` e.g. `helm-release-consul-federation`. Also ensure you're using the namespace Consul was installed into. + +Now you're ready to import the secret into your secondary cluster(s). + +Switch `kubectl` context to your secondary Kubernetes cluster. In this example our context for our secondary cluster is `dc2`: + +```shell-session +$ kubectl config use-context dc2 +Switched to context "dc2". +``` + +And import the secret: + +```shell-session +$ kubectl apply --filename consul-federation-secret.yaml +secret/consul-federation configured +``` + +#### Federation Secret Contents + +The automatically generated federation secret contains: + +- **Server certificate authority certificate** - This is the certificate authority used to sign Consul server-to-server communication. This is required by secondary clusters because they must communicate with the Consul servers in the primary cluster. +- **Server certificate authority key** - This is the signing key for the server certificate authority. This is required by secondary clusters because they need to create server certificates for each Consul server using the same certificate authority as the primary. + + !> **Security note:** The certificate authority key would enable an attacker to compromise Consul, it should be kept securely. + +- **Consul server config** - This is a JSON snippet that must be used as part of the server config for secondary datacenters. It sets: + + - [`primary_datacenter`](/consul/docs/reference/agent/configuration-file/general#primary_datacenter) to the name of the primary datacenter. + - [`primary_gateways`](/consul/docs/reference/agent/configuration-file/general#primary_gateways) to an array of IPs or hostnames for the mesh gateways in the primary datacenter. These are the addresses that Consul servers in secondary clusters will use to communicate with the primary datacenter. + + Even if there are multiple secondary datacenters, only the primary gateways need to be configured. Upon first connection with a primary datacenter, the addresses for other secondary datacenters will be discovered. + +- **ACL replication token** - If ACLs are enabled, secondary datacenters need an ACL token in order to authenticate with the primary datacenter. This ACL token is also used to replicate ACLs from the primary datacenter so that components in each datacenter can authenticate with one another. +- **Gossip encryption key** - If gossip encryption is enabled, secondary datacenters need the gossip encryption key in order to be part of the gossip pool. Gossip is the method by which Consul discovers the addresses and health of other nodes. 
+ + !> **Security note:** This gossip encryption key would enable an attacker to compromise Consul, it should be kept securely. + +## Kubernetes API URL + +If ACLs are enabled, you must next determine the Kubernetes API URL for each secondary cluster. The API URL of the secondary cluster must be specified in the config files for each secondary cluster because they need to create global Consul ACL tokens (tokens that are valid in all datacenters) and these tokens can only be created by the primary datacenter. By setting the API URL, the secondary cluster will configure a [Consul auth method](/consul/docs/secure/acl/auth-method) in the primary cluster so that components in the secondary cluster can use their Kubernetes ServiceAccount tokens to retrieve global Consul ACL tokens from the primary. + +To determine the Kubernetes API URL, first get the cluster name in your kubeconfig for your secondary: + +```shell-session +$ export CLUSTER=$(kubectl config view -o jsonpath="{.contexts[?(@.name == \"$(kubectl config current-context)\")].context.cluster}") +``` + +Then get the API URL: + +```shell-session +$ kubectl config view -o jsonpath="{.clusters[?(@.name == \"$CLUSTER\")].cluster.server}" +https:// +``` + +Keep track of this URL, you'll need it in the next section. + +## Secondary Cluster(s) + +With the primary cluster up and running, and the [federation secret](#federation-secret) imported into the secondary cluster, we can now install Consul into the secondary cluster. + +You will need to use the following `values.yaml` file for your secondary cluster(s), with the modifications listed below. + +-> **NOTE: ** You must use a separate Helm config file for each cluster (primary and secondaries) since their settings are different. + + + +```yaml +global: + name: consul + datacenter: dc2 + tls: + enabled: true + + # Here we're using the shared certificate authority from the primary + # datacenter that was exported via the federation secret. + caCert: + secretName: consul-federation + secretKey: caCert + caKey: + secretName: consul-federation + secretKey: caKey + + acls: + manageSystemACLs: true + + # Here we're importing the replication token that was + # exported from the primary via the federation secret. + replicationToken: + secretName: consul-federation + secretKey: replicationToken + + federation: + enabled: true + k8sAuthMethodHost: + primaryDatacenter: dc1 + gossipEncryption: + secretName: consul-federation + secretKey: gossipEncryptionKey +connectInject: + enabled: true +meshGateway: + enabled: true +server: + # Here we're including the server config exported from the primary + # via the federation secret. This config includes the addresses of + # the primary datacenter's mesh gateways so Consul can begin federation. + extraVolumes: + - type: secret + name: consul-federation + items: + - key: serverConfigJSON + path: config.json + load: true +``` + + + +Modifications: + +1. If ACLs are enabled, change the value of `global.federation.k8sAuthMethodHost` to the full URL (including `https://`) of the secondary cluster's Kubernetes API. +1. `global.federation.primaryDatacenter` must be set to the name of the primary datacenter. +1. The Consul datacenter name for the datacenter in this example is `dc2`. The datacenter name in **each** federated cluster **must be unique**. +1. ACLs are enabled in the above config file. 
They can be disabled by removing the whole `acls` block: + + ```yaml + acls: + manageSystemACLs: false + replicationToken: + secretName: consul-federation + secretKey: replicationToken + ``` + + If ACLs are enabled in one datacenter, they must be enabled in all datacenters because in order to communicate with that one datacenter ACL tokens are required. + +1. Gossip encryption is enabled in the above config file. To disable it, don't set the `gossipEncryption` key: + + ```yaml + global: + # gossipEncryption: + # secretName: consul-federation + # secretKey: gossipEncryptionKey + ``` + + If gossip encryption is enabled in one datacenter, it must be enabled in all datacenters because in order to communicate with that one datacenter the encryption key is required. + +1. The default mesh gateway configuration creates a Kubernetes Load Balancer service. If you wish to customize the mesh gateway, for example using a Node Port service or a custom DNS entry, see the [Helm reference](/consul/docs/reference/k8s/helm#v-meshgateway) for that setting. + +With your `values.yaml` ready to go, follow our [Installation Guide](/consul/docs/deploy/server/k8s/helm) to install Consul on your secondary cluster(s). + +## Verifying Federation + +To verify that both datacenters are federated, run the `consul members -wan` command on one of the Consul server pods: + +```shell-session +$ kubectl exec statefulset/consul-server --namespace consul -- consul members -wan +Node Address Status Type Build Protocol DC Segment +consul-server-0.dc1 10.32.4.216:8302 alive server 1.8.0 2 dc1 +consul-server-0.dc2 192.168.2.173:8302 alive server 1.8.0 2 dc2 +consul-server-1.dc1 10.32.5.161:8302 alive server 1.8.0 2 dc1 +consul-server-1.dc2 192.168.88.64:8302 alive server 1.8.0 2 dc2 +consul-server-2.dc1 10.32.1.175:8302 alive server 1.8.0 2 dc1 +consul-server-2.dc2 192.168.35.174:8302 alive server 1.8.0 2 dc2 +``` + +In this example (run from `dc1`), you can see that this datacenter knows about the servers in dc2 and that they have status `alive`. + +You can also use the `consul catalog services` command with the `-datacenter` flag to ensure each datacenter can read each other's services. In this example, our `kubectl` context is `dc1` and we're querying for the list of services in `dc2`: + +```shell-session +$ kubectl exec statefulset/consul-server --namespace consul -- consul catalog services -datacenter dc2 +consul +mesh-gateway +``` + +You can switch kubectl contexts and run the same command in `dc2` with the flag `-datacenter dc1` to ensure `dc2` can communicate with `dc1`. + +### Consul UI + +We can also use the Consul UI to verify federation. See [Viewing the Consul UI](/consul/docs/k8s/installation/install#viewing-the-consul-ui) for instructions on how to view the UI. + +~> NOTE: If ACLs are enabled, your kubectl context must be in the primary datacenter to retrieve the bootstrap token mentioned in the UI documentation. + +With the UI open, you'll be able to switch between datacenters via the dropdown in the top left: + +![Consul Datacenter Dropdown](/img/data-center-dropdown.png 'Consul Datacenter Dropdown') + +## Next Steps + +With your Kubernetes clusters federated, complete the [Secure and Route Service Mesh Communication Across Kubernetes](/consul/tutorials/kubernetes/kubernetes-mesh-gateways?utm_source=docs#deploy-microservices) tutorial to learn how to use Consul service mesh to route between services deployed on each cluster. 
+
+You can also read our in-depth documentation on [Consul Service Mesh In Kubernetes](/consul/docs/k8s/connect).
+
+If you are still considering a move to Kubernetes, or to Consul on Kubernetes specifically, our [Migrate to Microservices with Consul Service Mesh on Kubernetes](/consul/tutorials/microservices?utm_source=docs) collection uses an example application written by a fictional company to illustrate why and how organizations can migrate from monolith to microservices using Consul service mesh on Kubernetes. The case study in this collection provides valuable information for understanding how to develop services that leverage Consul during any stage of your microservices journey.
diff --git a/website/content/docs/east-west/wan-federation/vault-backend.mdx b/website/content/docs/east-west/wan-federation/vault-backend.mdx
new file mode 100644
index 000000000000..4debaa41d3b0
--- /dev/null
+++ b/website/content/docs/east-west/wan-federation/vault-backend.mdx
@@ -0,0 +1,692 @@
+---
+layout: docs
+page_title: Vault as secrets backend — WAN federation
+description: >-
+  Federating multiple Kubernetes clusters using Vault as secrets backend.
+---
+
+# Vault as secrets backend — WAN federation
+
+This page describes how you can federate multiple Kubernetes clusters using Vault as the secrets backend. See the [Multi-Cluster Overview](/consul/docs/east-west/wan-federation) for more information on use cases and how it works.
+
+## Differences Between Using Kubernetes Secrets vs. Vault
+
+The [Federation Between Kubernetes Clusters](/consul/docs/east-west/wan-federation/k8s) page provides an overview of WAN Federation using Mesh Gateways with Kubernetes secrets as the secret backend. When using Vault as the secrets backend, the systems and data integration configuration are different, as explained in the [Usage](#usage) section of this page. The other main difference is that when using Vault, there is no need for you to export and import a [Federation Secret](/consul/docs/east-west/wan-federation/k8s#federation-secret) in each datacenter.
+
+## Usage
+
+The expected use case is to create WAN Federation on Kubernetes clusters. The following procedure results in a WAN Federation with Vault as the secrets backend between two clusters, dc1 and dc2. dc1 acts as the primary Consul cluster and also contains the Vault server installation. dc2 is the secondary Consul cluster.
+
+![Consul on Kubernetes with Vault as the Secrets Backend](/img/k8s/consul-vault-wan-federation-topology.svg 'Consul on Kubernetes with Vault as the Secrets Backend')
+
+The Vault Injectors in each cluster ensure that every pod in the cluster has a Vault agent injected into it.
+
+![Vault Injectors inject Vault agents into pods](/img/k8s/consul-vault-wan-federation-vault-injector.svg 'Vault Injectors inject Vault agents into pods')
+
+The Vault Agents on each Consul pod communicate directly with Vault on its externally accessible endpoint. Consul pods are also configured with Vault annotations that specify the secrets that the pod needs, as well as the path where the Vault agent should locally store those secrets.
+
+![Vault agent and server communication](/img/k8s/consul-vault-wan-federation-vault-communication.svg 'Vault agent and server communication')
+
+The two datacenters are federated using mesh gateways. This communication topology is also described in the [WAN Federation Via Mesh Gateways](/consul/docs/east-west/wan-federation#wan-federation-via-mesh-gateways) section of [Multi-Cluster Federation Overview](/consul/docs/east-west/wan-federation).
+
+![Mesh Federation via Mesh Gateways](/img/k8s/consul-vault-wan-federation-mesh-communication.svg 'Mesh Federation via Mesh Gateways')
+
+### Install Vault
+
+In this setup, you will deploy a Vault server in the primary datacenter (dc1) Kubernetes cluster, which is also the primary Consul datacenter. You will configure your Vault Helm installation in the secondary datacenter (dc2) Kubernetes cluster to use it as an external server. This way, a single Vault server cluster is used by both Consul datacenters.
+
+~> **Note**: For demonstration purposes, the following example deploys a Vault server in dev mode. Do not use dev mode for production installations. Refer to the [Vault Deployment Guide](/vault/tutorials/day-one-raft/raft-deployment-guide) for guidance on how to install Vault in a production setting.
+
+1. Change your current Kubernetes context to target the primary datacenter (dc1).
+
+   ```shell-session
+   $ kubectl config use-context
+   ```
+
+1. Now, use the values file below for your Helm install.
+
+   ```yaml
+   server:
+     dev:
+       enabled: true
+     service:
+       enabled: true
+       type: LoadBalancer
+   ui:
+     enabled: true
+   ```
+
+   ```shell-session
+   $ helm install vault-dc1 --values vault-dc1.yaml hashicorp/vault --wait
+   ```
+
+### Configuring your local environment
+
+1. Install Consul locally so that you can generate the gossip key. Please see the [Precompiled Binaries](/consul/docs/install#precompiled-binaries) section of the [Install Consul page](/consul/docs/install#precompiled-binaries).
+
+1. Set the VAULT_TOKEN environment variable with a default value.
+
+   ```shell-session
+   $ export VAULT_TOKEN=root
+   ```
+
+1. Get the external IP or DNS name of the Vault server's load balancer.
+
+   On EKS, you can get the hostname of the Vault server's load balancer with the following command:
+
+   ```shell-session
+   $ export VAULT_SERVER_HOST=$(kubectl get svc vault-dc1 -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
+   ```
+
+   On GKE, you can get the IP address of the Vault server's load balancer with the following command:
+
+   ```shell-session
+   $ export VAULT_SERVER_HOST=$(kubectl get svc vault-dc1 -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+   ```
+
+   On AKS, you can get the IP address of the Vault server's load balancer with the following command:
+
+   ```shell-session
+   $ export VAULT_SERVER_HOST=$(kubectl get svc vault-dc1 --output jsonpath='{.status.loadBalancer.ingress[0].ip}')
+   ```
+
+1. Set the VAULT_ADDR environment variable.
+
+   ```shell-session
+   $ export VAULT_ADDR=http://${VAULT_SERVER_HOST}:8200
+   ```
+
+## Systems Integration
+There are two main procedures to enable Vault as the service mesh certificate provider in Kubernetes.
+
+Complete the following steps once:
+  1. Enabling Vault KV Secrets Engine - Version 2.
+  1. Enabling Vault PKI Engine.
+
+Repeat the following steps for each datacenter in the cluster:
+  1. Installing the Vault Injector within the Consul datacenter installation
+  1. Configuring a Kubernetes Auth Method in Vault to authenticate and authorize operations from the Consul datacenter
+  1. Enable Vault as the Secrets Backend in the Consul datacenter
+
+### Configure Vault Secrets engines
+1.
Enable [Vault KV secrets engine - Version 2](/vault/docs/secrets/kv/kv-v2) in order to store the [Gossip Encryption Key](/consul/docs/reference/k8s/helm#v-global-acls-replicationtoken) and the ACL Replication token ([`global.acls.replicationToken`](/consul/docs/reference/k8s/helm#v-global-acls-replicationtoken)). + + ```shell-session + $ vault secrets enable -path=consul kv-v2 + ``` + +1. Enable Vault PKI Engine in order to leverage Vault for issuing Consul Server TLS certificates. + + ```shell-session + $ vault secrets enable pki + ``` + + ```shell-session + $ vault secrets tune -max-lease-ttl=87600h pki + ``` + +### Primary Datacenter (dc1) +1. Install the Vault Injector in your Consul Kubernetes cluster (dc1), which is used for accessing secrets from Vault. + + -> **Note**: In the primary datacenter (dc1), you will not have to configure `injector.externalvaultaddr` value because the Vault server is in the same primary datacenter (dc1) cluster. + + + + ```yaml + server: + dev: + enabled: true + service: + enabled: true + type: LoadBalancer + injector: + enabled: true + authPath: auth/kubernetes-dc1 + ui: + enabled: true + ``` + + + Next, install Vault in the Kubernetes cluster. + + ```shell-session + $ helm upgrade vault-dc1 --values vault-dc1.yaml hashicorp/vault --wait + ``` + +1. Configure the Kubernetes Auth Method in Vault for the primary datacenter (dc1). + + ```shell-session + $ vault auth enable -path=kubernetes-dc1 kubernetes + ``` + + Because Consul is in the same datacenter cluster as Vault, the Vault Auth Method can use its own CA Cert and JWT to authenticate Consul dc1 service account requests. Therefore, you do not need to set `token_reviewer` and `kubernetes_ca_cert` on the dc1 Kubernetes Auth Method. + +1. Configure Auth Method with Kubernetes API host + + ```shell-session + $ vault write auth/kubernetes-dc1/config kubernetes_host=https://kubernetes.default.svc + ``` + +1. Enable Vault as the secrets backend in the primary datacenter (dc1). However, you will not yet apply the Helm install command. You will issue the Helm upgrade command after the [Data Integration](#setup-per-consul-datacenter-1) section. + + + + ```yaml + global: + secretsBackend: + vault: + enabled: true + ``` + + + + +### Secondary Datacenter (dc2) +1. Install the Vault Injector in the secondary datacenter (dc2). + + In the secondary datacenter (dc2), you will configure the `externalvaultaddr` value point to the external address of the Vault server in the primary datacenter (dc1). + + Change your Kubernetes context to target the secondary datacenter (dc2): + + ```shell-session + $ kubectl config use-context + ``` + + + + ```yaml + server: + enabled: false + injector: + enabled: true + externalVaultAddr: ${VAULT_ADDR} + authPath: auth/kubernetes-dc2 + ``` + + + + Next, install Vault in the Kubernetes cluster. + ```shell-session + $ helm install vault-dc2 --values vault-dc2.yaml hashicorp/vault --wait + ``` + +1. Configure the Kubernetes Auth Method in Vault for the datacenter + + ```shell-session + $ vault auth enable -path=kubernetes-dc2 kubernetes + ``` + +1. Create a service account with access to the Kubernetes API in the secondary datacenter (dc2). For the secondary datacenter (dc2) auth method, you first need to create a service account that allows the Vault server in the primary datacenter (dc1) cluster to talk to the Kubernetes API in the secondary datacenter (dc2) cluster. 
+
+   ```shell-session
+   $ cat <<EOF >> auth-method-serviceaccount.yaml
+   # auth-method.yaml
+   apiVersion: rbac.authorization.k8s.io/v1
+   kind: ClusterRoleBinding
+   metadata:
+     name: vault-dc2-auth-method
+   roleRef:
+     apiGroup: rbac.authorization.k8s.io
+     kind: ClusterRole
+     name: system:auth-delegator
+   subjects:
+   - kind: ServiceAccount
+     name: vault-dc2-auth-method
+     namespace: default
+   ---
+   apiVersion: v1
+   kind: ServiceAccount
+   metadata:
+     name: vault-dc2-auth-method
+     namespace: default
+   EOF
+   ```
+
+   ```shell-session
+   $ kubectl apply --filename auth-method-serviceaccount.yaml
+   ```
+
+1. Next, you will need to get the token and CA cert from that service account secret.
+
+   ```shell-session
+   $ export K8S_DC2_CA_CERT="$(kubectl get secret `kubectl get serviceaccounts vault-dc2-auth-method --output jsonpath='{.secrets[0].name}'` --output jsonpath='{.data.ca\.crt}' | base64 --decode)"
+   ```
+
+   ```shell-session
+   $ export K8S_DC2_JWT_TOKEN="$(kubectl get secret `kubectl get serviceaccounts vault-dc2-auth-method --output jsonpath='{.secrets[0].name}'` --output jsonpath='{.data.token}' | base64 --decode)"
+   ```
+
+1. Configure the auth method with the JWT token of the service account. First, get the externally reachable address of the secondary Consul datacenter (dc2) in the secondary Kubernetes cluster. Then set `kubernetes_host` in the auth method configuration.
+
+   ```shell-session
+   $ export KUBE_API_URL_DC2=$(kubectl config view --output jsonpath="{.clusters[?(@.name == \"$(kubectl config current-context)\")].cluster.server}")
+   ```
+
+   ```shell-session
+   $ vault write auth/kubernetes-dc2/config \
+       kubernetes_host="${KUBE_API_URL_DC2}" \
+       token_reviewer_jwt="${K8S_DC2_JWT_TOKEN}" \
+       kubernetes_ca_cert="${K8S_DC2_CA_CERT}"
+   ```
+
+1. Enable Vault as the secrets backend in the secondary Consul datacenter (dc2). However, you will not yet apply the Helm install command. You will issue the Helm upgrade command after the [Data Integration](#setup-per-consul-datacenter-1) section.
+
+   ```yaml
+   global:
+     secretsBackend:
+       vault:
+         enabled: true
+   ```
+
+## Data Integration
+There are two main procedures for using Vault as the service mesh certificate provider in Kubernetes.
+
+Complete the following steps once:
+  1. Store the secrets in Vault.
+  1. Create a Vault policy that authorizes the desired level of access to the secrets.
+
+Repeat the following steps for each datacenter in the cluster:
+  1. Create Vault Kubernetes auth roles that link the policy to each Consul on Kubernetes service account that requires access.
+  1. Update the Consul on Kubernetes helm chart.
+
+### Secrets and Policies
+1. Store the ACL bootstrap and replication tokens, gossip encryption key, and root CA certificate secrets in Vault.
+
+   ```shell-session
+   $ vault kv put consul/secret/gossip key="$(consul keygen)"
+   ```
+
+   ```shell-session
+   $ vault kv put consul/secret/bootstrap token="$(uuidgen | tr '[:upper:]' '[:lower:]')"
+   ```
+
+   ```shell-session
+   $ vault kv put consul/secret/replication token="$(uuidgen | tr '[:upper:]' '[:lower:]')"
+   ```
+
+   ```shell-session
+   $ vault write pki/root/generate/internal common_name="Consul CA" ttl=87600h
+   ```
+
+1. Create Vault policies that authorize the desired level of access to the secrets.
+
+   ```shell-session
+   $ vault policy write gossip - <
+   ```
+### Primary Datacenter (dc1)
+1.
Create Server TLS and Service Mesh Cert Policies + + ```shell-session + $ vault policy write consul-cert-dc1 - < + + ```yaml + global: + datacenter: "dc1" + name: consul + secretsBackend: + vault: + enabled: true + consulServerRole: consul-server + consulClientRole: consul-client + consulCARole: consul-ca + manageSystemACLsRole: server-acl-init + connectCA: + address: http://vault-dc1.default:8200 + rootPKIPath: connect_root/ + intermediatePKIPath: dc1/connect_inter/ + authMethodPath: kubernetes-dc1 + tls: + enabled: true + enableAutoEncrypt: true + caCert: + secretName: pki/cert/ca + federation: + enabled: true + createFederationSecret: false + acls: + manageSystemACLs: true + createReplicationToken: true + bootstrapToken: + secretName: consul/data/secret/bootstrap + secretKey: token + replicationToken: + secretName: consul/data/secret/replication + secretKey: token + gossipEncryption: + secretName: consul/data/secret/gossip + secretKey: key + server: + replicas: 1 + serverCert: + secretName: "pki/issue/consul-cert-dc1" + connectInject: + replicas: 1 + enabled: true + meshGateway: + enabled: true + replicas: 1 + ``` + + + + Next, install Consul in the primary Kubernetes cluster (dc1). + ```shell-session + $ helm install consul-dc1 --values consul-dc1.yaml hashicorp/consul + ``` + +### Pre-installation for Secondary Datacenter (dc2) +1. Update the Consul on Kubernetes Helm chart. For secondary datacenter (dc2), you need to get the address of the mesh gateway from the _primary datacenter (dc1)_ cluster. + + Keep your Kubernetes context targeting dc1 and set the `MESH_GW_HOST` environment variable that you will use in the Consul Helm chart for secondary datacenter (dc2). + + ```shell-session + $ kubectl config use-context + ``` + + Next, get mesh gateway address: + + + + + ```shell-session + $ export MESH_GW_HOST=$(kubectl get svc consul-mesh-gateway --output jsonpath='{.status.loadBalancer.ingress[0].hostname}') + ``` + + + + + + ```shell-session + $ export MESH_GW_HOST=$(kubectl get svc consul-mesh-gateway --output jsonpath='{.status.loadBalancer.ingress[0].ip}') + ``` + + + + + + ```shell-session + $ export MESH_GW_HOST=$(kubectl get svc consul-mesh-gateway --output jsonpath='{.status.loadBalancer.ingress[0].ip}') + ``` + + + + +1. Change your Kubernetes context to target the primary datacenter (dc2): + ```shell-session + $ kubectl config use-context + ``` +### Secondary Datacenter (dc2) + +1. Create Server TLS and Service Mesh Cert Policies + + ```shell-session + $ vault policy write consul-cert-dc2 - < **Note**: To configure Vault as the service mesh (connect) CA in secondary datacenters, you need to make sure that the Root CA path is the same. The intermediate path is different for each datacenter. In the `connectCA` Helm configuration for a secondary datacenter, you can specify a `intermediatePKIPath` that is, for example, prefixed with the datacenter for which this configuration is intended (e.g. `dc2/connect-intermediate`). 
+ + + + ```yaml + global: + datacenter: "dc2" + name: consul + secretsBackend: + vault: + enabled: true + consulServerRole: consul-server + consulClientRole: consul-client + consulCARole: consul-ca + manageSystemACLsRole: server-acl-init + connectCA: + address: ${VAULT_ADDR} + rootPKIPath: connect_root/ + intermediatePKIPath: dc2/connect_inter/ + authMethodPath: kubernetes-dc2 + tls: + enabled: true + enableAutoEncrypt: true + caCert: + secretName: "pki/cert/ca" + federation: + enabled: true + primaryDatacenter: dc1 + k8sAuthMethodHost: ${KUBE_API_URL_DC2} + primaryGateways: + - ${MESH_GW_HOST}:443 + acls: + manageSystemACLs: true + replicationToken: + secretName: consul/data/secret/replication + secretKey: token + gossipEncryption: + secretName: consul/data/secret/gossip + secretKey: key + server: + replicas: 1 + serverCert: + secretName: "pki/issue/consul-cert-dc2" + connectInject: + replicas: 1 + enabled: true + controller: + enabled: true + meshGateway: + enabled: true + replicas: 1 + ``` + + + + Next, install Consul in the consul Kubernetes cluster (dc2). + + ```shell-session + $ helm install consul-dc2 -f consul-dc2.yaml hashicorp/consul + ``` + +## Next steps +You have completed the process of federating the secondary datacenter (dc2) with the primary datacenter (dc1) using Vault as the Secrets backend. To validate that everything is configured properly, please confirm that all pods within both datacenters are in a running state. + +For additional information about specific Consul secrets that you can store in Vault, refer to [Data Integration](/consul/docs/deploy/server/k8s/vault/data) in the [Vault as a Secrets Backend](/consul/docs/deploy/server/k8s/vault) documentation. diff --git a/website/content/docs/east-west/wan-federation/vms.mdx b/website/content/docs/east-west/wan-federation/vms.mdx new file mode 100644 index 000000000000..c0114429fdb2 --- /dev/null +++ b/website/content/docs/east-west/wan-federation/vms.mdx @@ -0,0 +1,262 @@ +--- +layout: docs +page_title: Federate Consul datacenters on VMs +description: >- + Federating Consul datacenters enables agents to engage in WAN communication across runtimes and cloud providers. Learn about multi-cluster federation and its network requirements for Consul on virtual machines. +--- + +# Federate Consul datacenters on VMs + +This topic covers how to federate Consul datacenters using a single WAN gossip pool. + +Consul Enterprise version 0.8.0 added support for an advanced multiple datacenter capability. Refer to the [Federate multiple datacenters with network areas](/consul/docs/east-west/network-area) documentation for more details. + +### WAN gossip pool + +One of the key features of Consul is its support for multiple datacenters. The architecture of Consul is designed to promote a low coupling of datacenters so that connectivity issues or failure of any datacenter does not impact the availability of Consul in other datacenters. This means each datacenter runs independently, each having a dedicated group of servers and a private LAN gossip pool. + +![Traditional WAN Federation](/img/traditional-wan-federation.png 'Traditional WAN Federation') + +## Network requirements + +There are a few networking requirements that must be satisfied for WAN federation to work. + +- All server nodes must be able to talk to each other. Otherwise, the gossip protocol and RPC forwarding will not work. +- If you need to use service discovery across datacenters, the network must be able to route traffic between IP addresses across regions. 
+ +Usually, this means that all datacenters must be connected using a VPN or other tunneling mechanism. Consul does not handle VPN or NAT traversal for you. For RPC forwarding to work, the bind address must be accessible from remote nodes. + +## Setup two datacenters + +To get started, follow the [Deployment guide](/consul/tutorials/production-deploy/deployment-guide) to start each datacenter. If ACLs are enabled or you are using Consul Enterprise, you must set the [`primary_datacenter`](/consul/docs/reference/agent/configuration-file/general#primary_datacenter) in the server's configuration in both datacenters. + +After bootstrapping, you should have two datacenters, `dc1` and `dc2`. Note that datacenter names are opaque to Consul. They are labels that help human operators reason about the Consul datacenters. + +To query the known WAN nodes, use the `members` command with the `-wan` parameter on either datacenter. + + + + +```shell-session +$ consul members -wan +Node Address Status Type Build Protocol DC Partition Segment +consul-server-0.dc1 172.18.0.5:8302 alive server 1.20.2 2 dc1 default +``` + + + + +```shell-session +$ consul members -wan +Node Address Status Type Build Protocol DC Partition Segment +consul-server-1.dc2 172.18.0.7:8302 alive server 1.20.2 2 dc2 default +``` + + + + +The command provides a list of all known members in the WAN gossip pool. In this case, the servers are not yet connected so each datacenter is only aware of local servers. + + + +The `consul members -wan` command only contains server nodes. Client nodes send requests to the local datacenter server, so they do not participate in WAN gossip. Local servers forward client requests to a server in the target datacenter as necessary. + + + +## Join the servers manually + +To federate two datacenters, use the `join` command. + +```shell-session +$ consul join -wan ... +``` + +The `join` command is used with the `-wan` flag to indicate you are attempting to join a server in the WAN gossip pool. As with LAN gossip, you only need to join a single existing member. Consul will use the gossip protocol to exchange information about all known members. + +### Example + +The example uses two server nodes, `consul-server-0` in `dc1` and `consul-server-1` in `dc2`. Run the following command on the first server to connect `dc1` to `dc2`. + +```shell-session +$ consul join -wan consul-server-1 +Successfully joined cluster by contacting 1 nodes. +``` + +During the joining procedure, Consul records the following lines on each server logs. + + + + + + +```log +##... 
+[INFO] agent: (WAN) joining: wan_addresses=["consul-server-1"] +[DEBUG] agent.server.memberlist.wan: memberlist: Initiating push/pull sync with: 172.18.0.7:8302 +[INFO] agent.server.serf.wan: serf: EventMemberJoin: consul-server-1.dc2 172.18.0.7 +[INFO] agent: (WAN) joined: number_of_nodes=1 +[DEBUG] agent.http: Request finished: method=PUT url=/v1/agent/join/consul-server-1?wan=1 from=127.0.0.1:59484 latency=1.282667ms +[DEBUG] agent: warning: request content-type is not supported: request-path=/v1/agent/join/consul-server-1?wan=1 +[DEBUG] agent: warning: response content-type header not explicitly set.: request-path=/v1/agent/join/consul-server-1?wan=1 +[INFO] agent.server: Handled event for server in area: event=member-join server=consul-server-1.dc2 area=wan +[DEBUG] agent.server.serf.wan: serf: messageJoinType: consul-server-0.dc1 +[DEBUG] agent.server.memberlist.wan: memberlist: Stream connection from=172.18.0.7:44500 +[DEBUG] agent.server.serf.wan: serf: messageJoinType: consul-server-0.dc1 +[DEBUG] agent.server.serf.wan: serf: messageJoinType: consul-server-0.dc1 +[DEBUG] agent.server.serf.wan: serf: messageJoinType: consul-server-0.dc1 +[DEBUG] agent.server.memberlist.wan: memberlist: Initiating push/pull sync with: consul-server-1.dc2 172.18.0.7:8302 +##... +``` + + + + + + + + +```log +##... +[INFO] agent.server.serf.wan: serf: EventMemberJoin: consul-server-0.dc1 172.18.0.5 +[INFO] agent.server: Handled event for server in area: event=member-join server=consul-server-0.dc1 area=wan +[DEBUG] agent.server.serf.wan: serf: messageJoinType: consul-server-0.dc1 +[DEBUG] agent.server.memberlist.wan: memberlist: Initiating push/pull sync with: consul-server-0.dc1 172.18.0.5:8302 +[DEBUG] agent.server.serf.wan: serf: messageJoinType: consul-server-0.dc1 +##... +``` + + + + + + +## Join the servers at startup + +In a production environment, we recommend you setup WAN federation in Consul's configuration files. This way, Consul applies the configuration immediately at startup and it is persisted across reboots. Consul provides the [`retry_join_wan`](/consul/docs/reference/agent/configuration-file/join#retry_join_wan) parameter to configure the addresses of the other servers to WAN join at startup. + + + +```hcl +retry_join_wan = ["dc2-server-1", "dc2-server-2"] +``` + +```json +{ + "retry_join_wan": ["dc2-server-1", "dc2-server-2"] +} +``` + + + +## Verify datacenter configuration + +Once the join is complete, Consul will show servers from both datacenters in the WAN gossip pool. + + + + +Use the `members` command to verify that all server nodes gossiping over WAN. + +```shell-session +$ consul members -wan +Node Address Status Type Build Protocol DC Partition Segment +consul-server-0.dc1 172.18.0.5:8302 alive server 1.20.2 2 dc1 default +consul-server-1.dc2 172.18.0.7:8302 alive server 1.20.2 2 dc2 default +``` + + + + +Use the [HTTP Catalog API](/consul/api-docs/catalog#list-datacenters) to get a list of all known datacenters. + +```shell-session +$ curl http://localhost:8500/v1/catalog/datacenters +``` + +In this example, the two datacenters known to Consul are `dc1` and `dc2`. + + + +```json +[ + "dc2", + "dc1" +] +``` + + + +You can use the `dc` selector to retrieve information for servers in other datacenters. + +```shell-session +$ curl http://localhost:8500/v1/catalog/nodes?dc=dc2 +``` + +This will return all the known nodes in the `dc2` datacenter. 
+ + + +```json +[ + { + "ID": "a3727eaa-da9d-8dda-07ff-97b7b451415d", + "Node": "consul-server-1", + "Address": "172.18.0.7", + "Datacenter": "dc2", + "TaggedAddresses": { + "lan": "172.18.0.7", + "lan_ipv4": "172.18.0.7", + "wan": "172.18.0.7", + "wan_ipv4": "172.18.0.7" + }, + "Meta": { + "consul-network-segment": "", + "consul-version": "1.20.2" + }, + "CreateIndex": 13, + "ModifyIndex": 15 + } +] +``` + + + + + + +Federated datacenters show a dropdown on the UI that allows to select the datacenter for which to show services and nodes. + +![Consul UI showing service page with WAN dropdown](/img/east-west/wan-federation/consul-ui-services-multi_dc_selector-dark.png#dark-theme-only) +![Consul UI showing service page with WAN dropdown](/img/east-west/wan-federation/consul-ui-services-multi_dc_selector.png#light-theme-only) + + + + +## Troubleshooting WAN federation + +Configuring `serf_wan`, `advertise_addr_wan` and `translate_wan_addrs` can lead to a situation where `consul members -wan` lists remote nodes but RPC operations fail with one of the following errors: + +- `No path to datacenter` +- `rpc error getting client: failed to get conn: dial tcp :0->:: i/o timeout` + +The most likely cause of these errors is that `bind_addr` is set to a private address preventing the RPC server from accepting connections across the WAN. Setting `bind_addr` to a public address (or one that can be routed across the WAN) will resolve this issue. Be aware that exposing the RPC server on a public port should only be done **after** firewall rules have been established. + +The [`translate_wan_addrs`](/consul/docs/reference/agent/configuration-file/general#translate_wan_addrs) configuration provides a basic address rewriting capability. + +## Data replication + +In general, data is not replicated between different Consul datacenters. When a request is made for a resource in another datacenter, the local Consul servers forward an RPC request to the remote Consul servers for that resource and return the results. If the remote datacenter is not available, then those resources will also not be available. However, this will not otherwise affect the local datacenter. + +There are some special situations where a limited subset of data can be replicated, such as with Consul's built-in ACL replication capability, or external tools like [consul-replicate](https://github.com/hashicorp/consul-replicate/). + +## Next steps + +This topic provides you with the commands required to setup WAN gossip across two VMs datacenters to create basic federation. + +If you want to secure your Consul datacenter with ACLs, refer to [Enable ACLs in WAN-federated datacenters](/consul/docs/secure/acl/token/federation). + +To learn how to setup federation across two Kubernetes cluster, refer to [WAN Federation Between Multiple Kubernetes Clusters Through Mesh Gateways](/consul/docs/east-west/wan-federation/k8s). + +Consul lets you federate datacenters running on different platforms. Refer to [WAN Federation Between VMs and Kubernetes through mesh gateways](/consul/docs/east-west/wan-federation/k8s-vm) to learn how to federate your Kubernetes cluster with a VM-based Consul datacenter. 
+ + diff --git a/website/content/docs/ecs.mdx b/website/content/docs/ecs.mdx new file mode 100644 index 000000000000..8278d55fee51 --- /dev/null +++ b/website/content/docs/ecs.mdx @@ -0,0 +1,53 @@ +--- +layout: docs +page_title: Consul on AWS Elastic Container Service (ECS) Overview +description: >- + You can deploy Consul service mesh applications to Amazon Web Services ECS by running each task with an application container, a client agent, and an Envoy proxy. Learn how Consul service mesh works on ECS and find getting started tutorials for several scenarios. +--- + +# Consul on AWS Elastic Container Service (ECS) overview + +This overview provides information about connecting your workloads managed by [AWS Elastic Container Service (ECS)](https://aws.amazon.com/ecs/) to a Consul service mesh. A Consul service mesh automates service-to-service authorization and encryption across your Consul services. You can use a service mesh in ECS networks to secure communication between ECS tasks and communication between tasks and external services. + +## Workflow + +You can install Consul on ECS with the [HashiCorp Terraform modules](/consul/docs/register/service/ecs/) or by [manually configuring the task definition](/consul/docs/register/service/ecs/manual). We strongly recommend using the Terraform modules and resources because Terraform automatically builds and enables Consul service mesh containers for your workloads. The Terraform module installation method also allows you to add your existing ECS task definitions to the Consul service mesh without additional configuration. + +### Terraform module installation + +1. Create and run a Terraform configuration that includes the ECS task, modules, and resources. +1. Configure routes between ECS tasks in your cluster. Once the service mesh is built, you must define paths for traffic between services. +1. Configure the ECS bind address. Binding to the loopback address allows the sidecar proxy running in the same task to only make requests within the service mesh. + +### Manual installation + +To manually install Consul, you must create definitions for each container that operates in the ECS cluster. Refer to [Architecture](/consul/docs/reference/architecture/ecs) for information about the Consul containers you must deploy. Note that there is no manual process for creating gateway task containers. Gateways enable you to connect multiple datacenters or admin partitions. You must use Terraform if you want to deploy gateways to your network. + +## Guidance + +Refer to the following documentation and tutorials for additional guidance. + +### Tutorials + +- [Integrate your AWS ECS services into Consul service mesh](/consul/tutorials/cloud-integrations/consul-ecs): Shows how to use Terraform to run Consul service mesh applications on ECS with self-managed Enterprise or HCP Consul Dedicated. 
+
+You can also refer to the following example configurations:
+
+- [Examples on GitHub](https://github.com/hashicorp/terraform-aws-consul-ecs/tree/main/examples)
+- [Consul with dev server on ECS using the Fargate launch type](https://registry.terraform.io/modules/hashicorp/consul-ecs/aws/latest/examples/dev-server-fargate)
+- [Consul with dev server on ECS using the EC2 launch type](https://registry.terraform.io/modules/hashicorp/consul-ecs/aws/latest/examples/dev-server-ec2)
+
+### Documentation
+
+- [Install Consul on ECS with Terraform](/consul/docs/register/service/ecs)
+- [Configure routes between ECS tasks](/consul/docs/connect/ecs)
+- [Configure the ECS task bind address](/consul/docs/register/service/ecs/task-bind-address)
+- [Install Consul on ECS manually](/consul/docs/register/service/ecs/manual)
+
+### Reference
+
+- [Architecture](/consul/docs/reference/architecture/ecs)
+- [Technical specifications](/consul/docs/reference/ecs/tech-specs)
+- [Task configuration reference](/consul/docs/reference/ecs)
+- [Cross-compatibility reference](/consul/docs/upgrade/ecs)
+- [Consul server JSON schema reference](/consul/docs/reference/ecs/server-json)
\ No newline at end of file
diff --git a/website/content/docs/ecs/architecture.mdx b/website/content/docs/ecs/architecture.mdx
deleted file mode 100644
index 96823ae1bd6a..000000000000
--- a/website/content/docs/ecs/architecture.mdx
+++ /dev/null
@@ -1,98 +0,0 @@
----
-layout: docs
-page_title: Consul on AWS Elastic Container Service (ECS) architecture
-description: >-
-  Learn about the Consul architecture on Amazon Web Services ECS deployments. Learn about how the two work together, including the order tasks and containers startup and shutdown, as well as requirements for the AWS IAM auth method, the ACL controller and tokens, and health check syncing.
----
-
-# Consul on AWS Elastic Container Service (ECS) architecture
-
-This topic provides reference information about the Consul's deployment architecture on AWS ECS. The following diagram shows the main components of the Consul architecture when deployed to an ECS cluster.
-
-![Diagram that provides an overview of the Consul Architecture on ECS](/img/ecs/consul-on-ecs-architecture-dataplanes.png#light-theme-only)
-![Diagram that provides an overview of the Consul Architecture on ECS ](/img/ecs/consul-on-ecs-architecture-dataplanes-dark.png#dark-theme-only)
-
-## Components
-
-Consul starts several components and containers inside the ECS cluster. Using a combination of short-lived containers (`mesh-init`) and long-lived containers (`health-sync`) ensures that any long running containers do not have root access to Consul. Refer to [Startup sequence](#startup-sequence) for details about the order of the startup procedure.
- -### `mesh-init` container - -The `mesh-init` container is a short-lived container that performs the following actions: - -- Logs into Consul servers -- Communicates directly with Consul server -- Registers proxies and services -- Creates a bootstrap configuration file for Consul dataplane container and stores it in a shared volume -- Invokes the `iptables` SDK to configure traffic redirection rules - -### `health-sync` container - -The `health-sync` container is a long-lived container that performs the following actions: - -- Synchronizes ECS health checks -- Watches the Consul server for changes - -When you stop the ECS task, it performs the following actions: - -- Deregisters service and proxy instances on receiving SIGTERM to support graceful shutdown -- Performs logout from [ACL auth method](/consul/docs/security/acl/auth-methods) - -### `dataplane` container - -The dataplane process runs in the same container as the Envoy proxy and performs the following actions: - -- Consumes and configures itself according to the bootstrap configuration written by the `mesh-init` container. -- Contains and starts up the Envoy sidecar. - -### ECS controller container - -One ECS task in the cluster contains the controller container, which performs the following actions: - -- Creates AWS IAM auth methods -- Creates ACL policies and roles -- Maintains ACL state -- Removes tokens when services exit -- Deregisters services if the ECS task exits without deregistering them -- Registers a _synthetic node_ that enables Consul to register services to the catalog - -## Startup sequence - -Deploying Consul to ECS starts the following process to build the architecture: - -1. The `mesh-init` container starts and logs in to Consul. -1. The `mesh-init` container registers services and proxies with the Consul servers. -1. The `mesh-init` container writes the bootstrap configuration for the Consul dataplane process and stores it in a shared volume. -1. The `mesh-init` container configures Consul DNS and modifies traffic redirection rules. -1. The `dataplane` container starts and configures itself using the bootstrap configuration generated by the `mesh-init` container. -1. The `dataplane` container starts the Envoy sidecar proxy. -1. The `health-sync` container starts listening for ECS health checks. -1. When the ECS task indicates that the application instance is healthy, the `health-sync` container marks the service as healthy and allows traffic to flow. - -## Consul security components - -Consul leverages AWS components to facilitate its own security features. - -### Auth methods - -Consul on ECS uses the AWS IAM auth method so that ECS tasks can automatically obtain Consul ACL tokens during startup. - -When ACLs are enabled, the Terraform modules for Consul on ECS support AWS IAM auth methods by default. The ECS controller sets up the auth method on the Consul servers. The `mesh-task` module configures the ECS task definition to be compatible with the auth method. - -A unique task IAM role is required for each ECS task family. A task family represents only one Consul service and the task IAM role must encode the Consul service name. As a result, task IAM roles must not be shared by different task families. - -By default, the mesh-task module creates and configures the task IAM role for you. - -To pass an existing IAM role to the mesh-task module using the `task_role` input variable, configure the IAM role as described in ECS Task Role Configuration to be compatible with the AWS IAM auth method. 
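As an illustration of passing an existing role, the following Terraform sketch references a pre-existing IAM role and wires it into the `mesh-task` module through the `task_role` input. The role name, service name, and the exact shape that `task_role` accepts are assumptions here; confirm them against the `mesh-task` module reference for the module version you use.

```hcl
# Illustrative only: reference an existing IAM role instead of letting the
# module create one. The role must be configured for the AWS IAM auth method
# as described in the ECS task roles section below.
data "aws_iam_role" "existing_task_role" {
  name = "example-app-task-role" # hypothetical role name
}

module "example_app" {
  source  = "hashicorp/consul-ecs/aws//modules/mesh-task"
  version = "<version>" # replace with the module version you use

  family    = "example-app"
  task_role = data.aws_iam_role.existing_task_role # assumed input shape; check the module reference
  # ... container_definitions, port, and other required inputs
}
```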
- -### ECS task roles - -The [ECS task role](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html) is an IAM role associated with an ECS task. - -When an ECS task starts up, it runs a `consul login` command. The command obtains credentials for the task role from AWS and then uses those credentials to sign the login request to the AWS IAM auth method. The credentials prove the ECS task's identity to the Consul servers. - -You must configure the task role with the following details for it to be compatible with the AWS IAM auth method: - -- An `iam:GetRole` permission to fetch itself. Refer to [IAM Policies](/consul/docs/security/acl/auth-methods/aws-iam#iam-policies) for additional information. -- A `consul.hashicorp.com.service-name` tag on the task role which contains the Consul service name for the application in this task. -- When using Consul Enterprise, add a `consul.hashicorp.com.namespace` tag on the task role indicating the Consul Enterprise namespace where this service is registered. \ No newline at end of file diff --git a/website/content/docs/ecs/deploy/bind-addresses.mdx b/website/content/docs/ecs/deploy/bind-addresses.mdx deleted file mode 100644 index acee9209ea70..000000000000 --- a/website/content/docs/ecs/deploy/bind-addresses.mdx +++ /dev/null @@ -1,47 +0,0 @@ ---- -layout: docs -page_title: Configure the ECS task bind address -description: >- - Learn how to bind workloads to the loopback address, also called localhost and 127.0.0.1, so that your applications only send and receive traffic through Consul service mesh. ---- - -# Configure the ECS task bind address - -This topic describes how to configure the bind address for your workloads to the loopback address. - -## Introduction - -Binding workloads to the loopback address ensures that your application only receives traffic through the dataplane container running in the same task, which limits requests to the service mesh. The loopback address is also called `localhost`, `lo`, and `127.0.0.1`. If your application is listening on all interfaces, such as `0.0.0.0`, then other applications can bypass the proxy and call it directly. - -## Requirements - -Consul service mesh must be deployed to ECS before you can bind a network address. For more information, refer to the following topics: - -- [Deploy Consul to ECS using the Terraform module](/consul/docs/ecs/deploy/terraform) -- [Deploy Consul to ECS manually](/consul/docs/ecs/deploy/manual) - -## Change the listening address - -Changing the listening address is specific to your language and framework, but binding the loopback address to a dynamic value, such as an environment variable, is a best practice: - -```bash -$ export BIND_ADDRESS="127.0.0.1:8080" -``` - -The following examples demonstrate how to bind the loopback address to an environment variable in golang and Django (Python): - - - -```go -s := &http.Server{ - Addr: os.Getenv("BIND_ADDRESS"), - ... -} -log.Fatal(s.ListenAndServe()) -``` - -```bash -python manage.py runserver "$BIND_ADDRESS" -``` - - diff --git a/website/content/docs/ecs/deploy/configure-routes.mdx b/website/content/docs/ecs/deploy/configure-routes.mdx deleted file mode 100644 index ed9b99610488..000000000000 --- a/website/content/docs/ecs/deploy/configure-routes.mdx +++ /dev/null @@ -1,79 +0,0 @@ ---- -layout: docs -page_title: Configure routes between ECS tasks -description: >- - Learn how to configure routes between tasks after deploying Consul service mesh to your ECS workloads. 
---- - -# Configure routes between ECS tasks - -This topic describes how to configure routes between tasks after registering the tasks to Consul service mesh. - -## Overview - -To enable tasks to call through the service mesh, complete the following steps: - -1. Configure the sidecar proxy to listen on a different port for each upstream service your application needs to call. -1. Modify your application to make requests to the sidecar proxy on the specified port. - -## Requirements - -Consul service mesh must be deployed to ECS before you can bind a network address. For more information, refer to the following topics: - -- [Deploy Consul to ECS using the Terraform module](/consul/docs/ecs/deploy/terraform) -- [Deploy Consul to ECS manually](/consul/docs/ecs/deploy/manual) - -## Configure the sidecar proxy - -Add the `upstreams` block to your application configuration and specify the following fields: - -- `destinationName`: Specifies the name of the upstream service as it is registered in the Consul service catalog. -- `localBindPort`: Specifies the port that the proxy forwards requests to. You must specify an unused port but it does not need to match the upstream service port. - -In the following example, the route from an application named `web` to an application named `backend` goes through port `8080`: - -```hcl -module "web" { - family = "web" - upstreams = [ - { - destinationName = "backend" - localBindPort = 8080 - } - ] -} -``` - -You must include all upstream services in the `upstream` configuration. - -## Configure your application - -Use an appropriate environment variable in your container definition to configure your application to call the upstream service at the loopback address. - -In the following example, the `web` application calls the `backend` service by sending requests to the -`BACKEND_URL` environment variable: - -```hcl -module "web" { - family = "web" - upstreams = [ - { - destinationName = "backend" - localBindPort = 8080 - } - ] - container_definitions = [ - { - name = "web" - environment = [ - { - name = "BACKEND_URL" - value = "http://localhost:8080" - } - ] - ... - } - ] - ... -} -``` diff --git a/website/content/docs/ecs/deploy/manual.mdx b/website/content/docs/ecs/deploy/manual.mdx deleted file mode 100644 index 9b54aa05792c..000000000000 --- a/website/content/docs/ecs/deploy/manual.mdx +++ /dev/null @@ -1,343 +0,0 @@ ---- -layout: docs -page_title: Deploy Consul to ECS manually -description: >- - Manually install Consul on Amazon Web Services ECS by using the Docker `consul-ecs` image to create task definitions that include required containers. Learn how to configure task definitions with example configurations. ---- - -# Deploy Consul to ECS manually - -The following instructions describe how to use the `consul-ecs` Docker image to manually create the ECS task definition without Terraform. If you intend to peer the service mesh to multiple Consul datacenters or partitions, you must use the Consul ECS Terraform module to install your service mesh on ECS. There is no manual process for deploying a mesh gateway to an ECS cluster. - -## Requirements - -You should have some familiarity with AWS ECS. Refer to [What is Amazon Elastic Container Service](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html) for details. 
- -### Secure configuration requirements - -You must meet the following requirements and prerequisites to enable security features in Consul service mesh: - -- Enable [TLS encryption](https://developer.hashicorp.com/consul/docs/security/encryption#rpc-encryption-with-tls) on your Consul servers so that they can communicate security with Consul dataplane containers over gRPC. -- Enable [access control lists (ACLs)](/consul/docs/security/acl) on your Consul servers. ACLs provide authentication and authorization for access to Consul servers on the mesh. -- You should be familiar with specifying sensitive data on ECS. Refer to [Passing sensitive data to a container](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data.html) in the AWS documentation for additional information. - -You should be familiar with configuring Consul's secure features, including how to create ACL tokens and policies. Refer to the following resources: -- [Create a service token](/consul/docs/security/acl/tokens/create/create-a-service-token) -- [Day 1: Security tutorial](https://developer.hashicorp.com/consul/tutorials/security) for additional information. - -Consul requires a unique IAM role for each ECS task family. Task IAM roles cannot be shared by different task families because the task family is unique to each Consul service. - -## Configure ECS task definition file - -Create a JSON file for the task definition. The task definition is the ECS blueprint for your software services on AWS. Refer to the [ECS task definitions in the AWS documentation](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definitions.html) for additional information. - -In addition to your application container, add configurations to your task definition that creates the following Consul containers: - -- Dataplane container -- Control-plane container -- ECS controller container - -## Top-level fields - -The following table describes the top-level fields you must include in the task definition: - -| Field name | Description | Type | -| --- | --- | --- | -| `family` | The task family name. This is used as the Consul service name by default. | string | -| `networkMode` | Must be `awsvpc`, which is the only network mode supported by Consul on ECS. | string | -| `volumes` | Volumes on the host for sharing configuration between containers for initial task setup. You must define a `consul_data` and `consul_binary` bind mount. Bind mounts can be mounted into one or more containers in order to share files among containers. For Consul on ECS, certain binaries and configuration are shared among containers during task startup. | list | -| `containerDefinitions` | Defines the application container that runs in the task. Refer to [Define your application container](#define-your-application-container). | list | - -The following example shows the top-level fields: - -```json -{ - "family": "my-example-client-app", - "networkMode": "awsvpc", - "volumes": [ - { - "name": "consul_data" - }, - { - "name": "consul_binary" - } - ], - "containerDefinitions": [...], - "tags": [ - { - "key": "consul.hashicorp.com/mesh", - "value": "true" - }, - { - "key": "consul.hashicorp.com/service-name", - "value": "example-client-app" - } - ] -} -``` - -## Configure task tags - -The `tags` list must include the following tags if you are using the ECS controller in a [secure configuration](/consul/docs/ecs/deploy/manual#secure-configuration-requirements). 
-Without these tags, the ACL controller is unable to provision a service token for the task. - -| Tag | Description | Type | Default | -| --- | --- | --- | --- | -| `consul.hashicorp.com/mesh` | Enables the ECS controller. Set to `false` to disable the ECS controller. | String | `true` | -| `consul.hashicorp.com/service-name` | Specifies the name of the Consul service associated with this task. Required if the service name is different than the task `family`. | String | None | -| `consul.hashicorp.com/partition` | Specifies the Consul admin partition associated with this task. | String | `default` | -| `consul.hashicorp.com/namespace` | Specifies the name of the Consul namespace associated with this task. | String | `default` | - -## Define your application container - -Specify your application container configurations in the `containerDefinitions` field. The following table describes all `containerDefinitions` fields: - -| Field name | Description | Type | -| --- | --- | --- | -| `name` | The name of your application container. | string | -| `image` | The container image used to run your application. | string | -| `essential` | Must be `true` to ensure the health of your application container affects the health status of the task. | boolean | -| `dependsOn` | Specifies container dependencies that ensure your application container starts after service mesh setup is complete. Refer to [Application container dependency configuration](#application-container-dependency-configuration) for details. | list | - -Refer to the [ECS Task Definition](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html) documentation for a complete reference. - -### Application container dependency configuration - -The control-plane and the dataplane containers are dependencies that enforce a specific startup order. The settings ensure that your application container starts after the control-plane container finishes setting up the task and after the dataplane is ready to proxy traffic between this task and the service mesh. - -The `dependsOn` list must include the following maps: - -```json -{ - { - "containerName": "consul-ecs-control-plane", - "condition": "SUCCESS" - }, - { - "containerName": "consul-dataplane", - "condition": "HEALTHY" - } -} -``` - -## Configure the dataplane container - -The dataplane container runs Envoy proxy for Consul service mesh. Specify the fields described in the following table to declare a dataplane container: - -| Field name | Description | Type | -| --- | --- | --- | -| `name` | Specifies the name of the container. This must be `consul-dataplane`. | string | -| `image` | Specifies the Envoy image. This must be a [supported version of Envoy](/consul/docs/connect/proxies/envoy#supported-versions). | string | -| `dependsOn` | Specifies container dependencies that ensure the dataplane container starts after the control-pane container has written the Envoy bootstrap configuration file. Refer to [Dataplane container dependency configuration](#dataplane-container-dependency-configuration) for details. | list | -| `healthCheck` | Must be set as shown above to monitor the health of Envoy's primary listener port, which ties into container dependencies and startup ordering. | map | -| `mountPoints` | Specifies a shared volume so that the dataplane container can access and consume the Envoy configuration file that the control-plane container generates. 
The keys and values in this configuration must be defined as described in [Dataplane container dependency configuration](#dataplane-container-dependency-configuration). | list | -| `ulimits` | The nofile ulimit must be raised to a sufficiently high value so that Envoy does not fail to open sockets. | list | -| `entrypoint` | Must be set to the custom Envoy entrypoint `consul-ecs envoy-entrypoint` to facilitate graceful shutdown. | list | -| `command` | Specifies the startup command to pass the bootstrap configuration to Envoy. | list | - -### Dataplane container dependency configuration - -The `dependsOn` configuration ensures that the dataplane container starts after the control-plane container successfully writes the Envoy bootstrap configuration file to the shared volume. The `dependsOn` list must include the following map: - -```json -[ - { - "containerName": "consul-ecs-control-plane", - "condition": "SUCCESS" - } -] -``` - -### Dataplane container volume mount configuration - -The `mountPoints` configuration defines a volume and path where dataplane container can consume the Envoy bootstrap configuration file generated by the control-plane container. You must specify the following keys and values: - -```json -{ - "mountPoints": [ - { - "readOnly": true, - "containerPath": "/consul", - "sourceVolume": "consul_data" - } - ], -} -``` - -## Configure the control-plane container - -The control-plane container is the first Consul container to start and set up the instance for Consul service mesh. It registers the service and proxy for this task with Consul and writes the Envoy bootstrap configuration to a shared volume. - -Specify the fields described in the following table to declare the control-plane container: - -| Field name | Description | Type | -| --- | --- | --- | -| `name` | Specifies the name of the container. This must be `control-plane`. | string | -| `image` | Specifies the `consul-ecs` image. Specify the following public AWS registry to avoid rate limits: `public.ecr.aws/hashicorp/consul-ecs` | string | -| `mountPoints` | Specifies a shared volume to store the Envoy bootstrap configuration file that the dataplane container can access and consume. The keys and values in this configuration must be defined as described in [Control-plane shared volume configuration](#control-plane-shared-volume-configuration). | list | -| `command` | Set to `["control-plane"]` so that the container runs the `control-plane` command. | list | -| `environment` | Specifies the `CONSUL_ECS_CONFIG_JSON` environment variable, which configures the container to connect to the Consul servers. Refer to [Control-plane to Consul servers configuration](#control-plane-to-Consul-servers-configuration) for details. | list | - -### Control-plane shared volume configuration - -The `mountPoints` configuration defines a volume and path where the control-plane container stores the Envoy bootstrap configuration file required to start Envoy. You must specify the following keys and values: - -```json -"mountPoints": [ - { - "readOnly": false, - "containerPath": "/consul", - "sourceVolume": "consul_data" - }, - { - "readOnly": true, - "containerPath": "/bin/consul-inject", - "sourceVolume": "consul_binary" - } -], -``` - -### Control-plane to Consul servers configuration - -Provide Consul server connection settings to the mesh task module so that the module can configure the control-plane and ECS controller containers to connect to the servers. - -1. 
In your `variables.tf` file, define variables for the host URL and TLS settings for gRPC and HTTP traffic. Refer to the [mesh task module reference](https://registry.terraform.io/modules/hashicorp/consul-ecs/aws/latest/submodules/mesh-task?tab=inputs) for information about the variables you can define. In the following example, the Consul server address is defined in the `consul_server_hosts` variable: - - ```hcl - variable "consul_server_hosts" { - description = "Address of Consul servers." - type = string - } - - ``` -1. Add an `environment` block to the control-plane and ECS controller containers definition. -1. Set the `environment.name` field to the `CONSUL_ECS_CONFIG_JSON` environment variable and the value to `local.encoded_config`. - - ```hcl - environment = [ - { - name = "CONSUL_ECS_CONFIG_JSON", - value = local.encoded_config - } - ] - ``` - When you apply the configuration, the mesh task module interpolates the server configuration variables, builds a `config.tf` file, and injects the settings into the appropriate containers. - -For additional information about the `config.tf` file, refer to the [JSON schema reference documentation](/consul/docs/ecs/reference/config-json-schema). - -## Register the task definition configuration - -Register the task definition with your ECS cluster using the AWS Console, AWS CLI, or another method supported by AWS. You must also create an ECS Service to start tasks using the task definition. Refer to the following ECS documentation for information on how to register the task definition: - -- [Creating a task definition using the classic console](https://docs.aws.amazon.com/AmazonECS/latest/userguide/create-task-definition-classic.html) -- [Register task definition CLI](https://docs.aws.amazon.com/cli/latest/reference/ecs/register-task-definition.html) - -## Deploy the controller container - -The controller container runs in a separate ECS task and is responsible for Consul security features. The controller uses the AWS IAM auth method to enable ECS tasks to automatically obtain Consul ACL tokens when the task starts up. Refer to [Consul security components](/consul/docs/ecs/architecture#consul-security-components) for additional information. - -Verify that you have completed the prerequisites described in [Secure configuration requirements](#secure-configuration-requirements) and configure the following components to enable Consul security features. - -- ACL policy -- ECS task role -- Auth method for service tokens -### Create an ACL policy - -On the Consul server, create a policy that grants the following access for the controller: - -- `acl:write` -- `operator:write` -- `node:write` -- `service:write` - -The policy allows Consul to generate a token linked to the policy. Refer to [Create a service token](/consul/docs/security/acl/tokens/create/create-a-service-token) for instructions. - -### ECS task role - -1. Create an ECS task role and configure an `iam:GetRole` permission so that it can fetch itself. Refer to [IAM Policies](/consul/docs/security/acl/auth-methods/aws-iam#iam-policies) for instructions. -1. Add an `consul.hashicorp.com.service-name` tag to the task role that contains the Consul service name for the application in the task. -1. When using Consul Enterprise, you must also include the `consul.hashicorp.com.namespace` tag to specify the namespace to register the service in. 
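The following sketch shows how these steps might look if the task role is managed with Terraform. The role name, service name, and namespace value are illustrative assumptions; adapt the trust policy and tags to your environment.

```hcl
# Illustrative only: a task role that can fetch itself and carries the
# tags that the AWS IAM auth method reads during login.
resource "aws_iam_role" "example_client_app" {
  name = "example-client-app-task-role" # hypothetical name

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ecs-tasks.amazonaws.com" }
    }]
  })

  tags = {
    "consul.hashicorp.com.service-name" = "example-client-app"
    # Consul Enterprise only:
    # "consul.hashicorp.com.namespace" = "default"
  }
}

# Allow the role to fetch itself so that its details and tags are
# available to the auth method during `consul login`.
resource "aws_iam_role_policy" "get_self" {
  name = "get-role-self"
  role = aws_iam_role.example_client_app.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = "iam:GetRole"
      Resource = aws_iam_role.example_client_app.arn
    }]
  })
}
```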
- -### Configure the auth method for service tokens - -Run the `consul acl auth-method create` command on a Consul server to create an instance of the auth method for service tokens. - -The following example command configures the auth method to associate a service identity -to each token created during login to this auth method instance. - -```shell-session -$ consul acl auth-method create \ - -type aws-iam \ - -name iam-ecs-service-token \ - -description="AWS IAM auth method for ECS service tokens" \ - -config '{ - "BoundIAMPrincipalArns": ["arn:aws:iam:::role/consul-ecs/*"], - "EnableIAMEntityDetails": true, - "IAMEntityTags": [ - "consul.hashicorp.com.service-name" - ] -}' -``` - -If you want to create these resources in a particular partition, include the `-partition ` option when creating Consul ACL roles, policies, auth methods, and binding rules with the Consul CLI. - -You must specify the following flags in the command: - -| Flag | Description | Type | -| --- | --- | --- | -| `-type` | Must be `aws-iam`. | String | -| `-name` | Specify a name for the auth method. Must be unique among all auth methods. | String | -| `-description` | Specify a description for the auth method. | String | -| `-config` | A JSON string containing the configuration for the auth method. Refer to [Auth method `-config` parameter](#auth-method–config-parameter) for details. | String | -| `-partition` | Specifies an admin partition that the auth method is valid for. | String | - -#### Auth method `-config` parameter - -You must specify the following configuration in the `-config` flag: - -| Flag | Description | Type | -| --- | --- | --- | -| `BoundIAMPrincipalArns` | Specifies a list of trusted IAM roles. We recommend using a wildcard to trust IAM roles at a particular path. | List | -| `EnableIAMEntityDetails` | Must be `true` so that the auth method can retrieve IAM role details, such as the role path and role tags. | Boolean | -| `IAMEntityTags` | Specifies a list of IAM role tags to make available to binding rules. Must include the service name tag. | List | - -Refer to the [auth method configuration parameters documentation](/consul/docs/security/acl/auth-methods/aws-iam#config-parameters) for additional information. - -### Create the binding rule - -Run the `consul acl binding-rule create` command on a Consul server to create a binding rule. The rule associates a service identity with each token created on successful login to this instance of the auth method. - -In the following example, Consul takes the service identity name from the `consul.hashicorp.com.service-name` tag specified for authenticating IAM role identity. - -```shell-session -$ consul acl binding-rule create \ - -method iam-ecs-service-token \ - -description 'Bind a service identity from IAM role tags for ECS service tokens' \ - -bind-type service \ - -bind-name '${entity_tags.consul.hashicorp.com.service-name}' -``` - -Note that you must include the `-partition ` option to the Consul CLI when creating Consul ACL roles, policies, auth methods, and binding rules, in order to create these resources in a particular partition. - -### Configure storage for secrets - -Secure and store Consul Server CA certificates so that they are available to ECS tasks. You may require more than one certificate for different Consul protocols. 
Refer to the following documentation for instructions on how to store and pass secrets to ECS tasks: - -- [AWS Systems Manager Parameter Store](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data-parameters.html) -- [AWS Secrets Manager](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data-secrets.html) - -You can reference stored secrets using their ARN. The examples show ARNs for secrets stored in AWS Secrets Manager: - -- **Consul Server CA Cert for RPC**: `arn:aws:secretsmanager:us-west-2:000000000000:secret:my-consul-ca-cert` -- **Consul Server CA Cert for HTTPS**: `arn:aws:secretsmanager:us-west-2:000000000000:secret:my-consul-https-ca-cert` - -### Configure audit logging - -You can configure Consul servers connected to your ECS workloads to capture a log of authenticated events. Refer to [Audit Logging](/consul/docs/enterprise/audit-logging) for details. - -## Next steps - -After deploying the Consul service mesh infrastructure, you must still define routes between service instances as well as configure the bind address for your applications so that they only receive traffic through the mesh. Refer to the following topics: - -- [Configure routes between ECS tasks](/consul/docs/ecs/deploy/configure-routes) -- [Configure the ECS task bind address](/consul/docs/ecs/deploy/bind-addresses) diff --git a/website/content/docs/ecs/deploy/migrate-existing-tasks.mdx b/website/content/docs/ecs/deploy/migrate-existing-tasks.mdx deleted file mode 100644 index cb405e518afc..000000000000 --- a/website/content/docs/ecs/deploy/migrate-existing-tasks.mdx +++ /dev/null @@ -1,99 +0,0 @@ ---- -layout: docs -page_title: Migrate existing tasks to Consul service mesh -description: >- - You can migrate tasks in existing Amazon Web Services ECS deployments to a service mesh deployed with Terraform. Learn how to convert a task specified as an ECS task definition into a `mesh-task` Terraform module. ---- - -# Migrate existing tasks to Consul on ECS with Terraform - -To migrate existing tasks to Consul, rewrite the existing Terraform code for your tasks so that the container definitions include the [`mesh-task` Terraform module](https://registry.terraform.io/modules/hashicorp/consul-ecs/aws/latest/submodules/mesh-task). - -Your tasks must already be defined in Terraform using the `ecs_task_definition` resource so that they can then be converted to use the `mesh-task` module. 
- -## Example - -The following example shows an existing task definition configured in Terraform: - -```hcl -resource "aws_ecs_task_definition" "my_task" { - family = "my_task" - requires_compatibilities = ["FARGATE"] - network_mode = "awsvpc" - cpu = 256 - memory = 512 - execution_role_arn = "arn:aws:iam::111111111111:role/execution-role" - task_role_arn = "arn:aws:iam::111111111111:role/task-role" - container_definitions = jsonencode( - [{ - name = "example-client-app" - image = "docker.io/org/my_task:v0.0.1" - essential = true - portMappings = [ - { - containerPort = 9090 - hostPort = 9090 - protocol = "tcp" - } - ] - cpu = 0 - mountPoints = [] - volumesFrom = [] - }] - ) -} - -resource "aws_ecs_service" "my_task" { - name = "my_task" - cluster = "arn:aws:ecs:us-east-1:111111111111:cluster/my-cluster" - task_definition = aws_ecs_task_definition.my_task.arn - desired_count = 1 - network_configuration { - subnets = ["subnet-abc123"] - } - launch_type = "FARGATE" -} -``` - - -Replace the `aws_ecs_task_definition` resource with the `mesh-task` module so that Consul adds the necessary dataplane containers that enable your task to join the mesh. The `mesh-task` module uses inputs similar to your old ECS task definition but creates a new version of the task definition with additional containers. - -The following Terraform configuration uses the `mesh-task` module to replace the previous example's task definition: - -```hcl -module "my_task" { - source = "hashicorp/consul-ecs/aws//modules/mesh-task" - version = "" - - family = "my_task" - container_definitions = [ - { - name = "example-client-app" - image = "docker.io/org/my_task:v0.0.1" - essential = true - portMappings = [ - { - containerPort = 9090 - hostPort = 9090 - protocol = "tcp" - } - ] - cpu = 0 - mountPoints = [] - volumesFrom = [] - } - ] - - port = 9090 - consul_server_hosts = "
    " -} - -``` - -Note the following differences: - -- The `execution_role_arn` and `task_role_arn` fields are removed. The `mesh-task` module creates the task and execution roles by default. If you need to use existing IAM roles, set the `task_role` and `execution_role` fields to pass in existing roles. -- The `port` field specifies the port that your application listens on. If your application has no listening port, set `outbound_only = true` and remove the `port` field. -- The `jsonencode()` function is removed from the `container_definitions` field. - -The `mesh-task` module creates a new version of your task definition with the necessary dataplane containers so you can delete your existing `aws_ecs_task_definition` resource. diff --git a/website/content/docs/ecs/deploy/terraform.mdx b/website/content/docs/ecs/deploy/terraform.mdx deleted file mode 100644 index 5d5574a967f3..000000000000 --- a/website/content/docs/ecs/deploy/terraform.mdx +++ /dev/null @@ -1,479 +0,0 @@ ---- -layout: docs -page_title: Deploy Consul to ECS using the Terraform module -description: >- - Terraform modules simplify the process to install Consul on Amazon Web Services ECS. Learn how to create task definitions, schedule tasks for your service mesh, and configure routes with example configurations so that you can Deploy Consul to ECS using Terraform. ---- - -# Deploy Consul to ECS using the Terraform module - -This topic describes how to create a Terraform configuration that deploys Consul service mesh to your ECS cluster workloads. Consul server agents do not run on ECS and must be deployed to another runtime, such as EKS, and connected to your ECS workloads. Refer [Consul on AWS Elastic Container Service overview](/consul/docs/ecs) for additional information. - -## Overview - -Create a Terraform configuration file that includes the ECS task definition and Terraform modules that build the Consul service mesh components. The task definition is the ECS blueprint for your software services on AWS. Refer to the [ECS task definitions documentation](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definitions.html) for additional information. - -You can add the following modules and resources to your Terraform configuration: - -- `mesh-task` module: Adds the Consul ECS control-plane and Consul dataplane containers to the task definition along with your application container. Envoy runs as a subprocess within the Consul dataplane container. -- `aws_ecs_service` resource: Adds an ECS service to run and maintain your task instance. -- `gateway-task` module: Adds mesh gateway containers to the cluster. Mesh gateways enable service-to-service communication across different types of network areas. - -To enable Consul security features for your production workloads, you must also deploy the `controller` module, which provisions ACL tokens for service mesh tasks. - -After defining your Terraform configuration, use `terraform apply` to deploy Consul to your ECS cluster. - -## Requirements - -- You should be familiar with creating Terraform configuration files. Refer to the [Terraform documentation](/terraform/docs) for information about how to get started with Terraform. -- You should be familiar with AWS ECS. Refer to [What is Amazon Elastic Container Service](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html) in the Amazon AWS documentation for additional information. -- If you intend to use the `gateway-task` module to deploy mesh gateways, you must enable TLS. 
Refer to [Configure the ECS controller](#configure-the-ecs-controller) for additional information. - -### Secure configuration requirements - -You must meet the following requirements and prerequisites to enable security features in Consul service mesh: - -- Enable [TLS encryption](/consul/docs/security/encryption#rpc-encryption-with-tls) on your Consul servers so that they can communicate securely with Consul containers over gRPC. -- Enable [access control lists (ACLs)](/consul/docs/security/acl) on your Consul servers. ACLs provide authentication and authorization for access to Consul servers on the mesh. -- You should be familiar with specifying sensitive data on ECS. Refer to [Passing sensitive data to a container](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data.html) in the AWS documentation for additional information. - -Additionally, Consul requires a unique IAM role for each ECS task family. Task IAM roles cannot be shared by different task families because the task family is unique to each Consul service. - -You should be familiar with configuring Consul's secure features, including how to create ACL tokens and policies. Refer to the following resources for additional information: - -- [Create a service token](/consul/docs/security/acl/tokens/create/create-a-service-token) -- [Day 1: Security tutorial](https://developer.hashicorp.com/consul/tutorials/security) - -## Create the task definition - -Create a Terraform configuration file and add your ECS task definition. The task definition includes your application containers, Consul control-plane container, dataplane container, and controller container. If you intend to peer the service mesh to multiple Consul datacenters or partitions, [add the gateway-task module](#configure-the-gateway-task-module), which deploys gateway containers that enable connectivity between network areas in your network. - -## Configure the mesh task module - -Add a `module` block to your Terraform configuration and specify the following fields: - -- `source`: Specifies the location of the `mesh-task` module. This field must be set to `hashicorp/consul-ecs/aws//modules/mesh-task`. The `mesh-task` module automatically adds the Consul service mesh infrastructure when you apply the Terraform configuration. -- `version`: Specifies the version of the `mesh-task` module to use. -- `family`: Specifies the [ECS task definition family](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#family). Consul also uses the `family` value as the Consul service name by default. -- `container_definitions`: Specifies a list of [container definitions](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#container_definitions) for the task definition. This field is where you include your application containers. - -Refer to the [`mesh-task` module reference documentation in the Terraform registry](https://registry.terraform.io/modules/hashicorp/consul-ecs/aws/latest/submodules/mesh-task?tab=inputs) for information about all options you can configure. - -You should also configure the ACL and encryption settings if they are enabled on your Consul servers. Refer to [Enable secure deployment](#enable-secure-deployment) for additional information. 
- -In the following example, the Terraform configuration file `mesh-task.tf` creates a task definition with an application container called `example-client-app`: - - - -```hcl -module "my_task" { - source = "hashicorp/consul-ecs/aws//modules/mesh-task" - version = "" - - family = "my_task" - container_definitions = [ - { - name = "example-client-app" - image = "docker.io/org/my_task:v0.0.1" - essential = true - portMappings = [ - { - containerPort = 9090 - hostPort = 9090 - protocol = "tcp" - } - ] - cpu = 0 - mountPoints = [] - volumesFrom = [] - } - ] - - port = 9090 -} -``` - - - -The following fields are required. Refer to the [module reference documentation](https://registry.terraform.io/modules/hashicorp/consul-ecs/aws/latest/submodules/mesh-task?tab=inputs) for a complete reference. - -## Configure Consul server settings - -Provide Consul server connection settings to the mesh task module so that the module can configure the control-plane and ECS controller containers to connect to the servers. - -1. In your `variables.tf` file, define variables for the host URL and the TLS settings for gRPC and HTTP traffic. Refer to the [mesh task module reference](https://registry.terraform.io/modules/hashicorp/consul-ecs/aws/latest/submodules/gateway-task?tab=inputs) for information about the variables you can define. In the following example, the Consul server address is defined in the `consul_server_hosts` variable: - - ```hcl - variable "consul_server_hosts" { - description = "Address of Consul servers." - type = string - } - ``` -1. Add an `environment` block to the control-plane and ECS controller containers definition. -1. Set the `environment.name` field to the `CONSUL_ECS_CONFIG_JSON` environment variable and the value to `local.encoded_config`. - - ```hcl - environment = [ - { - name = "CONSUL_ECS_CONFIG_JSON", - value = local.encoded_config - } - ] - ``` - - When you apply the configuration, the mesh task module interpolates the server configuration variables, builds a `config.tf` file, and injects the settings into the appropriate containers. For additional information about the `config.tf` file, refer to the [JSON schema reference documentation](/consul/docs/ecs/reference/consul-server-json). - -## Configure an ECS service to run your task instances - -To start a task using the task definition, add the `aws_ecs_service` resource to your configuration to create an ECS service. [ECS services](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_services.html) are one of the most common ways to start tasks using a task definition. - -Reference the `mesh-task` module's `task_definition_arn` output value in your `aws_ecs_service` resource. The following example adds an ECS service for a task definition referenced in as `module.my_task.task_definition_arn`: - - - -```hcl -module "my_task" { - source = "hashicorp/consul-ecs/aws//modules/mesh-task" - ... -} - -resource "aws_ecs_service" "my_task" { - name = "my_task_service" - task_definition = module.my_task.task_definition_arn - launch_type = "FARGATE" - propagate_tags = "TASK_DEFINITION" - ... -} -``` - - - -Refer to [`aws_ecs_service` in the Terraform registry](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ecs_service) for a complete configuration reference. - -If you are deploying a test instance of your ECS application, you can apply your configuration in Terraform. Refer to [Run your configuration](#run-your-configuration) for instructions. 
To configure your deployment for a production environment, you must also deploy the ECS controller module. Refer to [Configure the ECS controller](#configure-the-ecs-controller) for instructions. - -If you intend to leverage multi-datacenter Consul features, such as WAN federation and cluster peering, then you must add the `gateway-task` module for each Consul datacenter in your network. Refer to [Configure the gateway task module](#configure-the-gateway-task-module) for instructions. - -## Configure the gateway task module - -The `gateway-task` module deploys a mesh gateway, which enables service-to-service communication across network areas. Mesh gateways detect the server name indication (SNI) header from the service mesh session and route the connection to the appropriate destination. - -Refer to the following documentation for additional information: - -- [WAN Federation via Mesh Gateways](/consul/docs/connect/gateways/mesh-gateway/wan-federation-via-mesh-gateways) -- [Service-to-service Traffic Across Datacenters](/consul/docs/connect/gateways/mesh-gateway/service-to-service-traffic-wan-datacenters) - -To use mesh gateways, TLS must be enabled in your cluster. Refer to the [requirements section](#requirements) for additional information. - -1. Add a `module` block to your Terraform configuration file and specify a label. The label is a unique identifier for the gateway. -1. Add a `source` to the `module` and specify the location of the `gateway-task`. The value must be `hashicorp/consul-ecs/aws//modules/gateway-task`. -1. Specify the following required inputs: - - `ecs_cluster_arn`: The ARN of the ECS cluster for the gateway. - - `family`: Specifies a name for multiple versions of the task. Refer to the [AWS documentation](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html#family) for details. - - `kind`: Set to `mesh-gateway` - - `subnets`: Specifies a list of subnet IDs where the gateway task should be deployed. -1. If you are deploying to a production environment, you must also add the `acl` and `tls` configurations. Refer to [Configure the ECS controller](#configure-the-ecs-controller) for details. -1. Configure any additional parameters necessary for your environment. Refer to the [module reference documentation](https://registry.terraform.io/modules/hashicorp/consul-ecs/aws/latest/submodules/gateway-task?tab=inputs) for information about all parameters. - -The following example defines a mesh gateway task called `my-gateway`: - - - -```hcl -module "my_mesh_gateway" { - source = "hashicorp/consul-ecs/aws//modules/gateway-task" - version = "" - kind = "mesh-gateway" - - family = "my-gateway" - ecs_cluster_arn = "" - subnets = [""] - consul_server_hosts = "
    " - tls = true - consul_ca_cert_arn = "" -} -``` - - - -Refer to [gateway-task module in the Terraform registry](https://registry.terraform.io/modules/hashicorp/consul-ecs/aws/latest/submodules/gateway-task?tab=inputs) for a complete reference. - -Refer to the [gateway task configuration examples](#gateway-task-configuration-examples) for additional example configurations. - -## Configure the ECS controller - -Deploy the ECS controller container to its own ECS task in the cluster. Refer to [ECS controller container](/consul/docs/ecs/reference/architecture#ecs-controller) for details about the container. - -Verify that you have completed the prerequisites described in [Secure configuration requirements](#secure-configuration-requirements) and complete the following steps to configure the controller container. - -### Create a an ACL token for the controller - -1. On the Consul server, create a policy that grants the following access for the controller: - - - `acl:write` - - `operator:write` - - `node:write` - - `service:write` - - The policy allows Consul to generate a token linked to the policy. Refer to [Create a service token](/consul/docs/security/acl/tokens/create/create-a-service-token) for instructions. -1. Create a token and link it to the ACL controller policy. Refer to the [ACL tokens documentation](/consul/docs/security/acl/tokens) for instructions. - -### Configure an AWS secrets manager secret - -Add the `aws_secretsmanager_secret` resource to your Terraform configuration and specify values for retrieving the CA and TLS certificates. The resource enables services to communicate over TLS and present ACL tokens. The ECS controller also uses the secret manager to retrieve the value of the bootstrap token. - -In the following example, Terraform creates the CA certificates for gRPC and HTTPS in the secrets manager. Consul retrieves the CA certificate PEM file from the secret manager so that the mesh task can use TLS for HTTP and gRPC traffic: - - - -```hcl -resource "tls_private_key" "ca" { - algorithm = "ECDSA" - ecdsa_curve = "P384" -} - -resource "tls_self_signed_cert" "ca" { - private_key_pem = tls_private_key.ca.private_key_pem - - subject { - common_name = "Consul Agent CA" - organization = "HashiCorp Inc." - } - - // 5 years. - validity_period_hours = 43800 - - is_ca_certificate = true - set_subject_key_id = true - - allowed_uses = [ - "digital_signature", - "cert_signing", - "crl_signing", - ] -} - -resource "aws_secretsmanager_secret" "ca_key" { - name = "${var.name}-${var.datacenter}-ca-key" - recovery_window_in_days = 0 -} - -resource "aws_secretsmanager_secret_version" "ca_key" { - secret_id = aws_secretsmanager_secret.ca_key.id - secret_string = tls_private_key.ca.private_key_pem -} - -resource "aws_secretsmanager_secret" "ca_cert" { - name = "${var.name}-${var.datacenter}-ca-cert" - recovery_window_in_days = 0 -} - -resource "aws_secretsmanager_secret_version" "ca_cert" { - secret_id = aws_secretsmanager_secret.ca_cert.id - secret_string = tls_self_signed_cert.ca.cert_pem -} -``` - - - -Note that you could use a single `CERT PEM` for both variables. The `consul_ca_cert_arn` is the default ARN applicable to both the protocols. You can also use protocol-specific certificate PEMs with the `consul_https_ca_cert_arn` and `consul_grpc_ca_cert_arn` variables. - -The following Terraform configuration passes the generated CA certificate ARN to the `mesh-task` module and ensures that the CA certificate and PEM variable are set for both HTTPS and gRPC communication. 
- - - -```hcl -module "my_task" { - source = "hashicorp/consul-ecs/aws//modules/mesh-task" - version = "" - - ... - - tls = true - consul_ca_cert_arn = aws_secretsmanager_secret.ca_cert.arn -} -``` - - - -### Enable secure deployment - -To enable secure deployment, add the following configuration to the task module. - - - -```hcl -module "my_task" { - source = "hashicorp/consul-ecs/aws//modules/mesh-task" - version = "" - - ... - - tls = true - consul_grpc_ca_cert_arn = aws_secretsmanager_secret.ca_cert.arn - - acls = true - consul_server_hosts = "https://consul-server.example.com" - consul_https_ca_cert_arn = aws_secretsmanager_secret.ca_cert.arn -} - -``` - - - -### Complete configuration examples - -The [terraform-aws-consul-ecs GitHub repository](https://github.com/hashicorp/terraform-aws-consul-ecs/tree/main/examples) contains examples that you can reference to help you deploy Consul service mesh to your ECS workloads. - -## Apply your Terraform configuration - -Run Terraform to create the task definition. - -Save the Terraform configuration for the task definition to a file, such as `mesh-task.tf`. -You should place this file in a directory alongside other Terraform configuration files for your project. - -The `mesh-task` module requires the AWS Terraform provider. The following example shows how to include -and configure the AWS provider in a file called `provider.tf`. Refer to [AWS provider in the Terraform registry](https://registry.terraform.io/providers/hashicorp/aws/latest/docs) -for additional documentation and specifications. - - - -```hcl -terraform { - required_providers { - aws = { - source = "hashicorp/aws" - version = "" - } - } -} - -provider "aws" { - region = "" - ... -} -``` - - - -Specify any additional AWS resources for your project in Terraform configuration files -in the same directory. The following example shows a basic project directory: - -```shell-session -$ ls -mesh-task.tf -provider.tf -... -``` - -Issue the following commands to run the configuration: - -1. `terraform init`: This command downloads dependencies, such as Terraform providers. -1. `terraform apply`: This command directs Terraform to create the AWS resources, such as the task definition from the `mesh-task` module. - -Terraform reads all files in the current directory that have a `.tf` file extension. -Refer to the [Terraform documentation](/terraform/docs) for more information and Terraform best practices. - -## Next steps - -After deploying the Consul service mesh infrastructure, you must still define routes between service instances as well as configure the bind address for your applications so that they only receive traffic through the mesh. Refer to the following topics: - -- [Configure routes between ECS tasks](/consul/docs/ecs/deploy/configure-routes) -- [Configure the ECS task bind address](/consul/docs/ecs/deploy/bind-addresses) - -## Gateway task configuration examples - -The following examples illustrate how to configure the `gateway-task` for different use cases. - -### Ingress - -Mesh gateways need to be reachable over the WAN to route traffic between datacenters. Configure the following options in the `gateway-task` module to enable ingress through the mesh gateway. - -| Input variable | Type | Description | -| --- | --- | --- | -| `lb_enabled` | Boolean | Set to `true` to automatically deploy and configure a network load balancer for ingress to the mesh gateway. | -| `lb_vpc_id` | string | Specifies the VPC to launch the load balancer in. 
| -| `lb_subnets` | list of strings | Specifies one or more public subnets to associate with the load balancer. | - - - -```hcl -module "my_mesh_gateway" { - ... - - lb_enabled = true - lb_vpc_id = "" - lb_subnets = [""] -} -``` - - - -Alternatively, you can manually configure ingress to the mesh gateway and provide the `wan_address` and `wan_port` inputs to the `gateway-task` module. The `wan_port` field is optional. Port `8443` is used by default. - - - -```hcl -module "my_mesh_gateway" { - ... - - wan_address = "" - wan_port = -} -``` - - - -Mesh gateways route L4 TCP connections and do not terminate mTLS sessions. If you manually configure [AWS Elastic Load Balancing](https://aws.amazon.com/elasticloadbalancing/) for ingress to a mesh gateway, you must use an [AWS Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html) or a [Classic Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/introduction.html). - -### ACLs - -When ACLs are enabled, configure the following options in the `gateway-task` module. - -| Option | Type | Description | -| --- | --- | --- | -| `acl` | Boolean | Set to `true` when ACLs are enabled. | -| `consul_server_hosts` | string | Specifies the HTTP `address` of the Consul server. Required for the mesh gateway task to log into Consul using the IAM auth method so that it can obtain its client and service tokens. | -| `consul_https_ca_cert_arn` | string | Specifies ARN of the Secrets Manager secret that contains the certificate for the Consul HTTPS API. | - - - -```hcl -module "my_mesh_gateway" { - ... - - acls = true - consul_server_hosts = "" - tls = true - consul_https_ca_cert_arn = "" -} -``` - - - -### WAN federation - -Configure the following options in the `gateway-task` to enable [WAN federation through mesh gateways](/consul/docs/connect/gateways/mesh-gateway/wan-federation-via-mesh-gateways). - -| Option | Type | Description | -| --- | --- | --- | -| `consul_datacenter` | string | Specifies the name of the local Consul datacenter. | -| `consul_primary_datacenter` | string | Specifies the name of the primary Consul datacenter. | -| `enable_mesh_gateway_wan_federation` | Boolean | Set to `true` to enable WAN federation. | -| `enable_acl_token_replication` | Boolean | Set to `true` to enable ACL token replication and allow the creation of local tokens secondary datacenters. | - -The following example shows how to configure the `gateway-task` module. - - - -```hcl -module "my_mesh_gateway" { - ... - - enable_mesh_gateway_wan_federation = true -} -``` - - - -When federating Consul datacenters over the WAN with ACLs enabled, [ACL Token replication](/consul/docs/security/acl/acl-federated-datacenters) must be enabled on all server and client agents in all datacenters. diff --git a/website/content/docs/ecs/enterprise.mdx b/website/content/docs/ecs/enterprise.mdx deleted file mode 100644 index c07a41c58d60..000000000000 --- a/website/content/docs/ecs/enterprise.mdx +++ /dev/null @@ -1,150 +0,0 @@ ---- -layout: docs -page_title: Consul Enterprise on AWS Elastic Container Service (ECS) -description: >- - You can deploy Consul Enterprise on Amazon Web Services ECS with an official Docker image. Learn about supported Enterprise features, including requirements for admin partitions, namespaces, and audit logging. 
---- - -# Consul Enterprise on AWS Elastic Container Service (ECS) - -You can run Consul Enterprise on ECS by specifying the Consul Enterprise Docker image in the Terraform module parameters. - -## Specify the Consul image - -When you set up an instance of the [`mesh-task`](https://registry.terraform.io/modules/hashicorp/consul-ecs/aws/latest/submodules/mesh-task) or [`gateway-task`](https://registry.terraform.io/modules/hashicorp/consul-ecs/aws/latest/submodules/gateway-task) module, -set the parameter `consul_image` to a Consul Enterprise image. The following example instructs the `mesh-task` module to import Consul Enterprise version 1.12.0: - -```hcl -module "my_task" { - source = "hashicorp/consul-ecs/aws//modules/mesh-task" - version = "" - - consul_image = "hashicorp/consul-enterprise:1.12.0-ent" - ... -} -``` - -## Licensing - -!> **Warning:** Consul Enterprise is currently only fully supported when [ACLs are enabled](/consul/docs/ecs/deploy/terraform#secure-configuration-requirements). - -Consul Enterprise [requires a license](/consul/docs/enterprise/license/overview). If running -Consul on ECS with ACLs enabled, the license will be automatically pulled down from Consul servers. - -Currently there is no capability for specifying the license when ACLs are disabled so if you wish to -run Consul Enterprise clients then you must enable ACLs. - -## Running Consul Community Edition clients - -You can operate Consul Enterprise servers with Consul CE (community edition) clients as long as the features you are using do not require Consul Enterprise client support. Admin partitions and namespaces, for example, require Consul Enterprise clients and are not supported with Consul CE. - -## Feature Support - -Consul on ECS supports the following Consul Enterprise features. -If you are only using features that run on Consul servers, then you can use a CE client in your service mesh tasks on ECS. -If client support is required for any of the features, then you must use a Consul Enterprise client in your `mesh-tasks`. - -| Feature | Supported | Description | -|-----------------------------------|---------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| Automated Backups/Snapshot Agent | Yes\* | Running the snapshot agent on ECS is not currently supported but you are able to run the snapshot agent alongside your Consul servers on VMs. | -| Automated Upgrades | Yes (servers) | This feature runs on Consul servers. | -| Enhanced Read Scalability | Yes (servers) | This feature runs on Consul servers. | -| Single Sign-On/OIDC | Yes (servers) | This feature runs on Consul servers. | -| Redundancy Zones | Yes (servers) | This feature runs on Consul servers. | -| Advanced Federation/Network Areas | Yes (servers) | This feature runs on Consul servers. | -| Sentinel | Yes (servers) | This feature runs on Consul servers. | -| Network Segments | No | Currently there is no capability to configure the network segment Consul clients on ECS run in. | -| Namespaces | Yes | This feature requires Consul Enterprise servers. CE clients can register into the `default` namespace. Registration into a non-default namespace requires a Consul Enterprise client. | -| Admin Partitions | Yes | This feature requires Consul Enterprise servers. CE clients can register into the `default` admin partition. Registration into a non-default partition requires a Consul Enterprise client. 
| -| Audit Logging | Yes | This feature requires Consul Enterprise clients. | - -### Admin Partitions and Namespaces - -Consul on ECS supports [admin partitions](/consul/docs/enterprise/admin-partitions) and [namespaces](/consul/docs/enterprise/namespaces) when Consul Enterprise servers and clients are used. These features have the following requirements: - -- ACLs must be enabled. -- ACL controller must run in the ECS cluster. -- `mesh-task` must use a Consul Enterprise client image. -- `gateway-task` must use a Consul Enterprise client image. - -The ACL controller manages configuration of the AWS IAM auth method on the Consul servers. It -ensures unused tokens created by tasks are cleaned up. It also creates admin partitions and -namespaces if they do not already exist. - -~> **Note:** The ACL controller does not delete admin partitions or namespaces once they are created. - -Each ACL controller manages a single admin partition. Consul on ECS supports one ACL controller per ECS cluster; -therefore, the administrative boundary for admin partitions is one admin partition per ECS cluster. - -The following example demonstrates how to configure the ACL controller to enable admin partitions -and manage an admin partition named `my-partition`. The `consul_partition` field is optional and if it -is not provided when `consul_partitions_enabled = true`, will default to the `default` admin partition. - - - -```hcl -module "acl_controller" { - source = "hashicorp/consul-ecs/aws//modules/acl-controller" - - ... - - consul_partitions_enabled = true - consul_partition = "my-partition" -} -``` - - - -Services are assigned to admin partitions and namespaces through the use of [task tags](/consul/docs/ecs/deploy/manual#configure-task-tags). -The `mesh-task` module automatically adds the necessary tags to the task definition. -If the ACL controller is configured for admin partitions, services on the mesh will -always be assigned to an admin partition and namespace. If the `mesh-task` does not define -the partition it will default to the `default` admin partition. Similarly, if a `mesh-task` does -not define the namespace it will default to the `default` namespace. - -The following example demonstrates how to create a `mesh-task` assigned to the admin partition named -`my-partition`, in the `my-namespace` namespace. - - - -```hcl -module "my_task" { - source = "hashicorp/consul-ecs/aws//modules/mesh-task" - family = "my_task" - - ... - - consul_image = "hashicorp/consul-enterprise:-ent" - consul_partition = "my-partition" - consul_namespace = "my-namespace" -} -``` - - - -### Audit Logging - -Consul on ECS supports [audit logging](/consul/docs/enterprise/audit-logging) when using Consul Enterprise clients. -This feature has the following requirements: - -- ACLs must be enabled. -- `mesh-task` must use a Consul Enterprise image. -- `gateway-task` must use a Consul Enterprise image. - -To enable audit logging, set `audit_logging = true` when configuring the client. - - - -```hcl -module "my_task" { - source = "hashicorp/consul-ecs/aws//modules/mesh-task" - family = "my_task" - - ... 
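  # Audit logging requires ACLs to be enabled and a Consul Enterprise
  # image for the task (see the requirements listed above).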
- - consul_image = "hashicorp/consul-enterprise:-ent" - audit_logging = true -} -``` - - diff --git a/website/content/docs/ecs/index.mdx b/website/content/docs/ecs/index.mdx deleted file mode 100644 index 992d7eb3df7b..000000000000 --- a/website/content/docs/ecs/index.mdx +++ /dev/null @@ -1,54 +0,0 @@ ---- -layout: docs -page_title: Consul on AWS Elastic Container Service (ECS) Overview -description: >- - You can deploy Consul service mesh applications to Amazon Web Services ECS by running each task with an application container, a client agent, and an Envoy proxy. Learn how Consul service mesh works on ECS and find getting started tutorials for several scenarios. ---- - -# Consul on AWS Elastic Container Service (ECS) overview -This overview provides information about connecting your workloads managed by [AWS Elastic Container Service (ECS)](https://aws.amazon.com/ecs/) to a Consul service mesh. A Consul service mesh automates service-to-service authorization and encryption across your Consul services. You can use a service mesh in ECS networks to secure communication between ECS tasks and communication between tasks and external services. - - -## Workflow - -You can install Consul on ECS with the [HashiCorp Terraform modules](/consul/docs/ecs/deploy/terraform) or by [manually configuring the task definition](/consul/docs/ecs/deploy/manual). We strongly recommend using the Terraform modules and resources because Terraform automatically builds and enables Consul service mesh containers for your workloads. The Terraform module installation method also allows you to add your existing ECS task definitions to the Consul service mesh without additional configuration. - -### Terraform module installation - -1. Create and run a Terraform configuration that includes the ECS task, modules, and resources. -1. Configure routes between ECS tasks in your cluster. Once the service mesh is built, you must define paths for traffic between services. -1. Configure the ECS bind address. Binding to the loopback address allows the sidecar proxy running in the same task to only make requests within the service mesh. - - -### Manual installation - -To manually install Consul, you must create definitions for each container that operates in the ECS cluster. Refer to [Architecture](/consul/docs/ecs/architecture) for information about the Consul containers you must deploy. Note that there is no manual process for creating gateway task containers. Gateways enable you to connect multiple datacenters or admin partitions. You must use Terraform if you want to deploy gateways to your network. - -## Guidance - -Refer to the following documentation and tutorials for additional guidance. - -### Tutorials - -- [Integrate your AWS ECS services into Consul service mesh](/consul/tutorials/cloud-integrations/consul-ecs): Shows how to use Terraform to run Consul service mesh applications on ECS with self-managed Enterprise or HCP Consul Dedicated. 
- -You can also refer to the following example configurations: - -- [Examples on GitHub](https://github.com/hashicorp/terraform-aws-consul-ecs/tree/main/examples) -- [Consul with dev server on ECS using the Fargate launch type](https://registry.terraform.io/modules/hashicorp/consul-ecs/aws/latest/examples/dev-server-fargate) -- [Consul with dev server onn ECS using the EC2 launch type](https://registry.terraform.io/modules/hashicorp/consul-ecs/aws/latest/examples/dev-server-ec2) - -### Documentation - -- [Install Consul on ECS with Terraform](/consul/docs/ecs/deploy/terraform) -- [Configure routes between ECS tasks](/consul/docs/ecs/deploy/configure-routes) -- [Configure the ECS task bind address](/consul/docs/ecs/deploy/bind-addresses) -- [Install Consul on ECS manually](/consul/docs/ecs/deploy/manual) - -### Reference - -- [Architecture](/consul/docs/ecs/architecture) -- [Technical specifications](/consul/docs/ecs/tech-specs) -- [Task configuration reference](/consul/docs/ecs/reference/configuration-reference) -- [Cross-compatibility reference](/consul/docs/ecs/reference/compatibility) -- [Consul server JSON schema reference](/consul/docs/ecs/reference/consul-server-json) \ No newline at end of file diff --git a/website/content/docs/ecs/reference/compatibility.mdx b/website/content/docs/ecs/reference/compatibility.mdx deleted file mode 100644 index cc99996866b4..000000000000 --- a/website/content/docs/ecs/reference/compatibility.mdx +++ /dev/null @@ -1,26 +0,0 @@ ---- -layout: docs -page_title: Consul on AWS Elastic Container Service (ECS) Compatibility Matrix -description: >- - The binary for Consul on Amazon Web Services ECS and the Terraform modules for automating deployments are tightly coupled and have specific version requirements. Review compatibility information for versions of Consul and `consul-ecs` to help you choose compatible versions. ---- - -# Consul on AWS Elastic Container Service (ECS) compatibility matrix - -For every release of Consul on ECS, the `consul-ecs` binary and `consul-ecs` Terraform module are updated. The versions of the Terraform module and binary are tightly coupled. For example, `consul-ecs` 0.5.2 binary must use the `consul-ecs` 0.5.2 Terraform module. - -## Supported Consul versions - -| `consul` version | Compatible `consul-ecs` version | -|----------------------- | ----------------------------- | -| 1.18.x | 0.8.x | -| 1.16.x 1.17.x | 0.7.x | -| 1.15.x, 1.14.x, 1.13.x | 0.6.x | -| 1.14.x, 1.13.x, 1.12.x | 0.5.2+ | -| 1.11.x | 0.3.0, 0.4.x | -| 1.10.x | 0.2.x | - - -## Supported Envoy versions - -Refer to [Envoy - Supported Versions](/consul/docs/connect/proxies/envoy#supported-versions) for information about which versions of Envoy are supported for each version of Consul. As a best practice, we recommend using the default version of Envoy that is provided in the Terraform module. This is because we test the default versions with Consul ECS binaries for the given version. diff --git a/website/content/docs/ecs/reference/configuration-reference.mdx b/website/content/docs/ecs/reference/configuration-reference.mdx deleted file mode 100644 index 3e283e6f12bc..000000000000 --- a/website/content/docs/ecs/reference/configuration-reference.mdx +++ /dev/null @@ -1,197 +0,0 @@ ---- -layout: docs -page_title: Consul on AWS Elastic Container Service (ECS) Configuration Reference -description: >- - Use the `consul-ecs` reference guide to manually configure Consul for deployment on Amazon Web Services ECS. 
Learn how the configuration values correspond to Terraform module input variables and review JSON configuration models for `consulLogin`, `gateway`, `proxy`, and `service` fields. ---- - -# Consul on AWS Elastic Container Service (ECS) Configuration Reference - -This topic describes configuration options for the JSON configuration format used by the `consul-ecs` binary. This configuration is passed to the `consul-ecs` binary as a string using the `CONSUL_ECS_CONFIG_JSON` environment variable. - -This configuration format follows a [JSON schema](https://github.com/hashicorp/consul-ecs/blob/main/config/schema.json) that can be used for validation. - -## Terraform `mesh-task` module configuration - -The `mesh-task` Terraform module provides input variables for commonly used fields. The following table shows which Terraform input variables correspond to each field of the Consul ECS configuration. Refer to the [Terraform registry documentation](https://registry.terraform.io/modules/hashicorp/consul-ecs/aws/latest/submodules/mesh-task?tab=inputs) for a complete reference of supported input variables for the `mesh-task` module. - -| Terraform Input Variable | Consul ECS Config Field | -| ------------------------ | ------------------------------------- | -| `upstreams` | [`proxy.upstreams`](#proxy-upstreams) | -| `checks` | [`service.checks`](#service-checks) | -| `consul_service_name` | [`service.name`](#service) | -| `consul_service_tags` | [`service.tags`](#service) | -| `consul_service_meta` | [`service.meta`](#service) | -| `consul_namespace` | [`service.namespace`](#service) | -| `consul_partition` | [`service.partition`](#service) | - -Each of these Terraform input variables follows the Consul ECS configuration schema. The remaining fields of the Consul ECS configuration that are not listed in this table can be passed using the `consul_ecs_config` input variable. - -# Top-level fields - -These are the top-level fields for the Consul ECS configuration format. - -| Field | Type | Required | Description | -| ----- | ---- | -------- | ----------- | -| `bootstrapDir` | `string` | required | The directory at which to mount the shared volume where Envoy bootstrap configuration is written by `consul-ecs mesh-init`. | -| `consulCACertFile` | `string` | optional | The file path of the Consul server CA certificate. | -| `consulHTTPAddr` | `string` | optional | The HTTP(S) URL of the Consul server. Required when `authMethod.enabled` is set | -| [`consulLogin`](#consullogin) | `object` | optional | Configuration for logging into the AWS IAM auth method. | -| [`gateway`](#gateway) | `object` | optional | Configuration for the gateway proxy registration. | -| `healthSyncContainers` | `array` | optional | The names of containers that will have health check status synced from ECS into Consul. Cannot be specified with `service.checks`. | -| `logLevel` | `string` | optional | Sets the log level for the `consul-ecs mesh-init` and `consul-ecs health-sync` commands. Defaults to `INFO`. Must be one of `TRACE`, `DEBUG`, `INFO`, `WARN`, `ERROR`, or `null`. | -| [`proxy`](#proxy) | `object` | optional | Configuration for the sidecar proxy registration with Consul. | -| [`service`](#service) | `object` | optional | Configuration for Consul service registration. | - -# `consulLogin` - -Configuration for logging into the AWS IAM auth method. 
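The fields are listed in the table below. As a non-authoritative sketch, the following shows how a `consulLogin` block could be supplied through the `consul_ecs_config` pass-through input described earlier in this topic. The `method` value shown is the documented default; every other value is illustrative only.

```hcl
module "my_task" {
  source  = "hashicorp/consul-ecs/aws//modules/mesh-task"
  version = ""

  ...

  # Fields without dedicated module inputs are passed through consul_ecs_config.
  consul_ecs_config = {
    consulLogin = {
      enabled = true
      method  = "iam-ecs-service-token" # documented default auth method name
    }
  }
}
```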
- -| Field | Type | Required | Description | -| ----- | ---- | -------- | ----------- | -| `enabled` | `boolean` | optional | Enables logging into Consul's AWS IAM auth method to obtain an ACL token. The auth method must be configured on the Consul server and the ECS task role must be trusted by the auth method. After logging in, the token is written to the file `/service-token`. | -| `extraLoginFlags` | `array` | optional | Additional CLI flags to pass to the `consul login` command. These are appended to the command `consul login -type aws -method -token-sink-file -aws-auto-bearer-token -aws-include-identity`. | -| `includeEntity` | `boolean` | optional | Adds the `-aws-include-entity` flag to the `consul login` command. Defaults to `true`. Set to `false` to remove the flag from the command. The `-aws-include-entity` flag should only be passed if the Consul AWS IAM auth method is configured with `EnableIAMEntityDetails=true`. | -| `method` | `string` | optional | The name of Consul auth method. This is passed as the `-method` option to the `consul login` command. Defaults to `iam-ecs-service-token`. | - -# `gateway` - -Configuration for the gateway proxy registration. - -| Field | Type | Required | Description | -| ----- | ---- | -------- | ----------- | -| `kind` | `string` | required | Specifies the type of gateway to register. Must be `mesh-gateway`. | -| [`lanAddress`](#gateway-lanaddress) | `object` | optional | LAN address and port for the gateway. If not specified, defaults to the task/node address. | -| `meta` | `object` | optional | Key-value pairs of metadata to include for the gateway. | -| `name` | `string` | optional | The name the gateway will be registered as in Consul. Defaults to the Task family name. | -| `namespace` | `string` | optional | Consul namespace in which the gateway will be registered. | -| `partition` | `string` | optional | Consul admin partition in which the gateway will be registered. | -| [`proxy`](#gateway-proxy) | `object` | optional | Object that contains the proxy parameters. | -| `tags` | `array` | optional | List of string values that can be used to add labels to the gateway. | -| [`wanAddress`](#gateway-wanaddress) | `object` | optional | WAN address and port for the gateway. If not specified, defaults to the task/node address. | - -# `gateway.lanAddress` - -LAN address and port for the gateway. If not specified, defaults to the task/node address. - -| Field | Type | Required | Description | -| ----- | ---- | -------- | ----------- | -| `address` | `string` | optional | | -| `port` | `integer` | optional | | - -# `gateway.proxy` - -Object that contains the proxy parameters. - -| Field | Type | Required | Description | -| ----- | ---- | -------- | ----------- | -| `config` | `object` | optional | | - -# `gateway.wanAddress` - -WAN address and port for the gateway. If not specified, defaults to the task/node address. - -| Field | Type | Required | Description | -| ----- | ---- | -------- | ----------- | -| `address` | `string` | optional | | -| `port` | `integer` | optional | | - -# `proxy` - -Configuration for the sidecar proxy registration with Consul. - -| Field | Type | Required | Description | -| ----- | ---- | -------- | ----------- | -| `config` | `object` | optional | Object value that specifies an opaque JSON configuration. The JSON is stored and returned along with the service instance when called from the API. | -| [`meshGateway`](#proxy-meshgateway) | `object` | optional | Specifies the mesh gateway configuration for the proxy. 
| -| [`upstreams`](#proxy-upstreams) | `array` | optional | The list of the upstream services that the proxy should create listeners for. | - -# `proxy.meshGateway` - -Specifies the mesh gateway configuration for the proxy. - -| Field | Type | Required | Description | -| ----- | ---- | -------- | ----------- | -| `mode` | `string` | required | Specifies how upstreams with a remote destination datacenter are resolved. Must be one of `none`, `local`, or `remote`. | - -# `proxy.upstreams` - -The list of the upstream services that the proxy should create listeners for. Each `upstream` object may contain the following fields. - -| Field | Type | Required | Description | -| ----- | ---- | -------- | ----------- | -| `config` | `object` | optional | Specifies opaque configuration options that will be provided to the proxy instance for the upstream. | -| `datacenter` | `string` | optional | Specifies the datacenter to issue the discovery query to. | -| `destinationName` | `string` | required | Specifies the name of the upstream service or prepared query to route the service mesh to. | -| `destinationNamespace` | `string` | optional | Specifies the namespace containing the upstream service. | -| `destinationPartition` | `string` | optional | Specifies the name of the admin partition containing the upstream service. | -| `destinationType` | `string` | optional | Specifies the type of discovery query the proxy should use for finding service mesh instances. Must be one of `service`, `prepared_query`, or `null`. | -| `localBindAddress` | `string` | optional | Specifies the address to bind a local listener to. | -| `localBindPort` | `integer` | required | Specifies the port to bind a local listener to. The application will make outbound connections to the upstream from the local port. | -| [`meshGateway`](#proxy-upstreams-meshgateway) | `object` | optional | Specifies the mesh gateway configuration for the proxy for this upstream. | - -## `proxy.upstreams.meshGateway` - -Specifies the mesh gateway configuration for the proxy for this upstream. - -| Field | Type | Required | Description | -| ----- | ---- | -------- | ----------- | -| `mode` | `string` | required | Specifies how the upstream with a remote destination datacenter gets resolved. Must be one of `none`, `local`, or `remote`. | - -# `service` - -Configuration for Consul service registration. - -| Field | Type | Required | Description | -| ----- | ---- | -------- | ----------- | -| [`checks`](#service-checks) | `array` | optional | The list of Consul checks for the service. Cannot be specified with `healthSyncContainers`. | -| `enableTagOverride` | `boolean` | optional | Determines if the anti-entropy feature for the service is enabled | -| `meta` | `object` | optional | Key-value pairs of metadata to include for the Consul service. | -| `name` | `string` | optional | The name the service will be registered as in Consul. Defaults to the Task family name if empty or null. | -| `namespace` | `string` | optional | The Consul namespace where the service will be registered. | -| `partition` | `string` | optional | The Consul admin partition where the service will be registered. | -| `port` | `integer` | required | Port the application listens on, if any. | -| `tags` | `array` | optional | List of string values that can be used to add service-level labels. | -| [`weights`](#service-weights) | `object` | optional | Configures the weight of the service in terms of its DNS service (SRV) response. 
| - -# `service.checks` - -Defines the Consul checks for the service. Each `check` object may contain the following fields. - -| Field | Type | Required | Description | -| ----- | ---- | -------- | ----------- | -| `aliasNode` | `string` | optional | Specifies the ID of the node for an alias check. | -| `aliasService` | `string` | optional | Specifies the ID of a service for an alias check. | -| `args` | `array` | optional | Command arguments to run to update the status of the check. | -| `body` | `string` | optional | Specifies a body that should be sent with `HTTP` checks. | -| `checkId` | `string` | optional | The unique ID for this check on the node. Defaults to the check `name`. | -| `failuresBeforeCritical` | `integer` | optional | Specifies the number of consecutive unsuccessful results required before check status transitions to critical. | -| `grpc` | `string` | optional | Specifies a `gRPC` check. Must be an endpoint that supports the [standard gRPC health checking protocol](https://github.com/grpc/grpc/blob/master/doc/health-checking.md). The endpoint will be probed every `interval`. | -| `grpcUseTls` | `boolean` | optional | Specifies whether to use TLS for this gRPC health check. | -| `h2ping` | `string` | optional | Specifies this is an h2ping check. Must be an address, which will be pinged every `interval`. | -| `h2pingUseTls` | `boolean` | optional | Specifies whether TLS is used for an h2ping check. | -| `header` | `object` | optional | Specifies a set of headers that should be set for HTTP checks. Each header can have multiple values. | -| `http` | `string` | optional | Specifies this is an HTTP check. Must be a URL against which request is performed every `interval`. | -| `interval` | `string` | optional | Specifies the frequency at which to run this check. Required for HTTP, TCP, and UDP checks. | -| `method` | `string` | optional | Specifies the HTTP method to be used for an HTTP check. When no value is specified, `GET` is used. | -| `name` | `string` | optional | The name of the check. | -| `notes` | `string` | optional | Specifies arbitrary information for humans. This is not used by Consul internally. | -| `os_service` | `string` | optional | Specifies the name of a service on which to perform an [OS service check](/consul/docs/services/usage/checks#osservice-check). The check runs according the frequency specified in the `interval` parameter. | -| `status` | `string` | optional | Specifies the initial status the health check. Must be one of `passing`, `warning`, `critical`, `maintenance`, or `null`. | -| `successBeforePassing` | `integer` | optional | Specifies the number of consecutive successful results required before check status transitions to passing. | -| `tcp` | `string` | optional | Specifies this is a TCP check. Must be an IP/hostname plus port to which a TCP connection is made every `interval`. | -| `tcpUseTls` | `boolean` | optional | Specifies whether to use TLS for this `TCP` health check. If TLS is enabled, then by default, a valid TLS certificate is expected. Certificate verification can be disabled by setting `TLSSkipVerify` to `true`. | -| `timeout` | `string` | optional | Specifies a timeout for outgoing connections. Applies to script, HTTP, TCP, UDP, and gRPC checks. Must be a duration string, such as `10s` or `5m`. | -| `tlsServerName` | `string` | optional | Specifies an optional string used to set the SNI host when connecting via TLS. 
| -| `tlsSkipVerify` | `boolean` | optional | Specifies if the check should verify the chain and hostname of the certificate presented by the server being checked. Set to `true` to disable verification. We recommend setting to `false` for production use. Default is `false`. Supported check types: `HTTP`, `H2Ping`, `gRPC`, and `TCP`| -| `ttl` | `string` | optional | Specifies this is a TTL check. Must be a duration string, such as `10s` or `5m`. | -| `udp` | `string` | optional | Specifies this is a UDP check. Must be an IP/hostname plus port to which UDP datagrams are sent every `interval`. | - -# `service.weights` - -Configures the weight of the service in terms of its DNS service (SRV) response. - -| Field | Type | Required | Description | -| ----- | ---- | -------- | ----------- | -| `passing` | `integer` | required | Weight for the service when its health checks are passing. | -| `warning` | `integer` | required | Weight for the service when it has health checks in `warning` status. | diff --git a/website/content/docs/ecs/reference/consul-server-json.mdx b/website/content/docs/ecs/reference/consul-server-json.mdx deleted file mode 100644 index 87401bbb2d3f..000000000000 --- a/website/content/docs/ecs/reference/consul-server-json.mdx +++ /dev/null @@ -1,120 +0,0 @@ ---- -layout: docs -page_title: Consul server configuration JSON schema reference -description: Learn about the fields available in the JSON scheme for configuring ECS task connections to Consul servers. ---- - -# Consul server configuration JSON schema reference - -This topic provides reference information about the JSON schema used to build the `config.tf` file. Refer to [Configure Consul server settings](/consul/docs/ecs/deploy/terraform#configure-consul-server-settings) for information about how Consul on ECS uses the JSON schema. - -## Configuration model - -The following list describes the attributes, data types, and default values, if any, in the `config.tf` file. Click on a value to learn more about the attribute. - -- [`consulServers`](#consulservers): map - - [`hosts`](#consulservers-hosts): string - - [`skipServerWatch`](#consulservers-hosts): boolean | `false` - - [`defaults`](#consulservers-defaults): map - - [`caCertFile`](#consulservers-defaults): string - - [`tlsServerName`](#consulservers-defaults): string - - [`tls`](#consulservers-defaults): boolean | `false` - - [`grpc`](#consulservers-grpc): map - - [`port`](#consulservers-grpc): number - - [`caCertFile`](#consulservers-grpc): string - - [`tlsServerName`](#consulservers-grpc): string - - [`tls`](#consulservers-grpc): boolean | `false` - - [`http`](#consulservers-http): map - - [`https`](#consulservers-http): boolean | `false` - - [`port`](#consulservers-http): number - - [`caCertFile`](#consulservers-http): string - - [`tlsServerName`](#consulservers-http): string - - [`tls`](#consulservers-http): boolean | `false` - -## Specification - -This section provides details about the attributes in the `config.tf` file. - -### `consulServers` - -Parent-level attribute containing all of the server configurations. All other configurations in the file are children of the `consulServers` attribute. - -#### Values - -- Default: None -- Data type: Map - - -### `consulServers.hosts` - -Map that contains the `skipServerWatch` configuration for Consul server hosts. - -#### Values - -- Default: None -- Data type: Map - -### `consulServers.hosts.skipServerWatch` - -Boolean that disables watches on the Consul server. 
Set to `true` if the Consul server is already behind a load balancer. - -#### Values - -- Default: `false` -- Data type: Boolean - -### `consulServers.defaults` - -Map of default server configurations. Defaults apply to gRPC and HTTP traffic. - -#### Values - -- Default: None -- Data type: Map - -The following table describes the attributes available in the `defaults` configuration: - -| Attribute | Description | Data type | Default | -| --- | --- | --- | --- | -| `caCertFile` | Specifies the path to the certificate .pem file. | String | None | -| `tlsServerName` | Specifies the name of the TLS server. | String | None | -| `tls` | Enables TLS on the server. | Boolean | `false` | - - -### `consulServers.grpc` - -Map of server configuration for gRPC traffic that override attributes defined in `consulServers.defaults`. - -#### Values - -- Default: None -- Data type: Map - -The following table describes the attributes available in the `grpc` configuration: - -| Attribute | Description | Data type | Default | -| --- | --- | --- | --- | -| `port` | Specifies the port number for gRPC communication. | Number | None | -| `caCertFile` | Specifies the path to the certificate .pem file. | String | None | -| `tlsServerName` | Specifies the name of the TLS server. | String | None | -| `tls` | Enables TLS for gRPC traffic on the server. | Boolean | `false` | - -### `consulServers.http` - -Map of server configuration for HTTP traffic that override attributes defined in `consulServers.defaults`. - -#### Values - -- Default: None -- Data type: Map - -The following table describes the attributes available in the `grpc` configuration: - -| Attribute | Description | Data type | Default | -| --- | --- | --- | --- | -| `https` | Enables HTTPS. | Boolean | `false` | -| `port` | Specifies the port number for HTTPS communication. | Number | None | -| `caCertFile` | Specifies the path to the certificate .pem file. | String | None | -| `tlsServerName` | Specifies the name of the TLS server. | String | None | -| `tls` | Enables TLS for HTTPS traffic on the server. | Boolean | `false` | - diff --git a/website/content/docs/ecs/tech-specs.mdx b/website/content/docs/ecs/tech-specs.mdx deleted file mode 100644 index d5fa5b399f58..000000000000 --- a/website/content/docs/ecs/tech-specs.mdx +++ /dev/null @@ -1,58 +0,0 @@ ---- -layout: docs -page_title: Technical specifications for Consul on AWS Elastic Container Service (ECS) -description: >- - Consul has requirements to install and run on Amazon Web Services ECS. Learn about Consul's requirements for Fargate and EC2, including network mode and subnet information, as well as server, routing, and ACL controller considerations. ---- - -# Technical specifications for Consul on ECS - -This topic describes the supported runtimes and environments for using Consul service mesh for your ECS workloads. - -For requirements associated with using the Terraform `mesh-task` module to deploy Consul service mesh, refer [Deploy Consul with the Terraform module](/consul/docs/ecs/deploy/terraform). For requirements associated with manually deploying Consul service mesh to your ECS cluster, refer [Deploy Consul manually](/consul/docs/ecs/deploy/manual). - -## Supported environments, runtimes, and capabilities - -Consul on ECS supports the following environments, runtimes, and capabilities: - -- **Launch Types:** Fargate and EC2 -- **Network Modes:** `awsvpc` -- **Subnets:** Private and public subnets. 
Tasks must have network access to Amazon ECR or other public container registries to pull images. -- **Consul servers:** You can use your own Consul servers running on virtual machines or [use HCP Consul Dedicated to host the servers for you](/hcp/docs/consul/dedicated). -- **ECS controller:** The ECS controller assists with reconciling state back to Consul and facilitates Consul security features. -- **Admin partitions:** Enable ACLs and configure the ECS controller to use admin partitions. You must deploy one controller for each admin partition. -- **Namespaces:** Enable ACLs and configure the ECS controller to use namespaces. -- **Dataplane containers:** To manage proxies using Consul dataplane, you must use the Terraform `mesh-task` module to install Consul service mesh. -- **Transparent proxy:** Consul on ECS 0.8.x supports transparent proxy for ECS on EC2 tasks. Transparent proxy in ECS requires the host to have `NET_ADMIN` capabilities, which ECS Fargate does not currently support. You can enable transparent proxy with the `enable_transparent_proxy` parameter in the `mesh-task` Terraform module or through `ecs_config_json`. The `enable_transparent_proxy` parameter has precedence over `ecs_config_json`. - - Refer to the [`terraform-aws-consul-ecs`](https://github.com/hashicorp/terraform-aws-consul-ecs/tree/main/examples/dev-server-ec2-transparent-proxy) for an example. -- **API Gateway:** Consul on ECS 0.8.x supports API gateway. Refer to the [`terraform-aws-consul-ecs`](https://github.com/hashicorp/terraform-aws-consul-ecs/tree/main/examples/api-gateway) for an example. - - Refer to the [Consul ECS GitHub repository](https://github.com/hashicorp/terraform-aws-consul-ecs/tree/main/examples/dev-server-ec2-transparent-proxy) for examples of how to use transparent proxy with Consul on ECS. - -## Resource usage - -We used the following procedure to measure resource usage: - -- Executed performance tests while deploying clusters of various sizes. We - ensured that deployment conditions stressed Consul on ESC components. -- After each performance test session, we recorded resource usage for each - component to determine worst-case scenario resource usage in a production - environment. -- We used Fargate's minimum allowed CPU (256 shares) and memory settings (512 - MB) on ECS during performance testing to demonstrate that Consul on ECS along - with application containers can run on the smallest ECS tasks. - -The following table describes the maximum resource usage we observed for each container under these testing conditions: - -| Container | CPU | Memory | -| -------------- | --- | ------ | -| ECS controller | 5% | 43 MB | -| Control plane | 6% | 35 MB | -| Dataplane | 10% | 87 MB | - -The containers added by Consul on ECS consume resources well below the minimum CPU and -memory limits for an ECS task. Use the `memory` and `cpu` settings for the task definition -if additional resources are necessary for your application task. - -Refer to [Architecture](/consul/docs/ecs/architecture) for details about each component. 
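As a minimal sketch of the `memory` and `cpu` settings mentioned above, the following raises the task-level limits on a `mesh-task` instance. It assumes the module version in use exposes `cpu` and `memory` inputs that map to the ECS task definition; adjust the values for your application.

```hcl
module "my_task" {
  source  = "hashicorp/consul-ecs/aws//modules/mesh-task"
  version = ""

  family = "my_task"

  ...

  # Task-level limits shared by the application and the Consul containers.
  # These values exceed the Fargate minimums (256 CPU shares / 512 MB)
  # used in the resource-usage tests above.
  cpu    = 512
  memory = 1024
}
```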
\ No newline at end of file diff --git a/website/content/docs/ecs/upgrade-to-dataplanes.mdx b/website/content/docs/ecs/upgrade-to-dataplanes.mdx deleted file mode 100644 index c0a788dc666d..000000000000 --- a/website/content/docs/ecs/upgrade-to-dataplanes.mdx +++ /dev/null @@ -1,68 +0,0 @@ ---- -layout: docs -page_title: Upgrade to Consul dataplane architecture -description: Learn how to upgrade your existing Consul service mesh on ECS workloads to the agentless dataplanes architecture. ---- - -# Upgrade to Consul dataplane architecture - -This topic describes how to manually upgrade a live installation of Consul on ECS to the dataplane-based architecture with zero downtime. Since v0.7.0, Consul service mesh on ECS uses [Consul dataplanes](/consul/docs/connect/dataplane), which are lightweight processes for managing Envoy proxies in containerized networks. Refer to the [release notes](/consul/docs/release-notes/consul-ecs/v0_7_x) for additional information about the switch to Consul dataplanes. - -## Requirements - -Before you upgrading to the dataplane-based architecture, you must upgrade your Consul servers to a version compatible with Consul ECS: - -- Consul 1.14.x and later -- Consul dataplane 1.3.x and later - -## Deploy the latest version of the ECS controller module - -In an ACL enabled cluster, deploy the latest version of the ECS controller module in `hashicorp/terraform-aws-consul-ecs` along with the older version of the ACL controller. Note that both the controllers should coexist until the upgrade is complete. The new version of the controller only tracks tasks that use dataplanes. - -## Upgrade workloads - -For application tasks, upgrade the individual task definitions to `v0.7.0` or later of the `mesh-task` module. You must upgrade each task one at a time. - -```hcl -module "my_task" { - source = "hashicorp/consul-ecs/aws//modules/mesh-task" - version = "v0.7.0" -} -``` - -For gateway tasks, upgrade the individual task definitions to `v0.7.0` or later of the `gateway-task` module. You must upgrade each task one by one independently. ECS creates new versions of tasks before shutting down the older tasks to support zero downtime deployments. - -```hcl -module "my_task" { - source = "hashicorp/consul-ecs/aws//modules/gateway-task" - version = "v0.7.0" -} -``` - -## Delete previous tasks - -After upgrading all tasks, you can destroy the `acl-controller` containers, which are replaced by the ECS controller. You can manually remove any artifacts related to the old architecture, including Consul clients and ACL controllers, by executing the following commands: - -1. Run `consul acl policy delete` to delete the client policy. You can pass either the ID of the policy or the name of the policy, for example: - - ```shell-session - $ consul acl policy delete -name="consul-ecs-client-policy" - ``` - - Refer to the [`consul acl policy delete`](/consul/commands/acl/policy/delete) documentation for additional information. - -1. Run the `consul acl role delete` command to delete the client role. You can pass either the ID of the role or the name of the role, for example: - - ```shell-session - $ consul acl role delete -name="consul-ecs-client-role" - ``` - - Refer to the [`consul acl role delete`](/consul/commands/acl/role/delete) documentation for additional information. - -1. Run the `consul acl auth-method delete` command and specify the auth method name to delete. 
- - ```shell-session - $ consul acl auth-method delete -name="iam-ecs-client-token" - ``` - - Refer to the [`consul acl auth-method delete`](/consul/commands/acl/auth-method/delete) documentation for additional information. \ No newline at end of file diff --git a/website/content/docs/enterprise/admin-partitions.mdx b/website/content/docs/enterprise/admin-partitions.mdx deleted file mode 100644 index f9dc2f6b3509..000000000000 --- a/website/content/docs/enterprise/admin-partitions.mdx +++ /dev/null @@ -1,332 +0,0 @@ ---- -layout: docs -page_title: Admin Partitions (Enterprise) -description: >- - Admin partitions define boundaries between services managed by separate teams, enabling a service mesh across k8s clusters controlled by a single Consul server. Learn about their requirements and how to deploy admin partitions on Kubernetes. ---- - -# Consul Enterprise Admin Partitions - - - -This feature requires version 1.11.0+ of -HashiCorp Cloud Platform (HCP) or self-managed Consul Enterprise. -Refer to the [enterprise feature matrix](/consul/docs/enterprise#consul-enterprise-feature-availability) for additional information. - - - -This topic provides and overview of admin partitions, which are entities that define one or more administrative boundaries for single Consul deployments. - -## Introduction - -Admin partitions exist a level above namespaces in the identity hierarchy. They contain one or more namespaces and allow multiple independent tenants to share a Consul server cluster. As a result, admin partitions enable you to define administrative and communication boundaries between services managed by separate teams or belonging to separate stakeholders. They can also segment production and non-production services within the Consul deployment. - -As of Consul v1.11, every _datacenter_ contains a single administrative partition named `default` when created. With Consul Enterprise, operators have the option of creating multiple partitions within a single datacenter. - --> **Preexisting nodes**: Admin partitions were introduced in Consul 1.11. Nodes existed in global scope prior to 1.11. After upgrading to Consul 1.11 or later, all nodes will be scoped to an admin partition, which will be the `default` partition when initially upgrading an existing deployment or for CE versions. - -There are tutorials available to help you get started with admin partitions. - -- [Multi-Tenancy with Administrative Partitions](/consul/tutorials/enterprise/consul-admin-partitions?utm_source=docs) -- [Multi Cluster Applications with Consul Enterprise Admin Partitions](/consul/tutorials/kubernetes/kubernetes-admin-partitions?utm_source=docs) - -### Default Admin Partition - -Each Consul cluster will have a default admin partition named `default`. The `default` partition must contain the Consul servers. The `default` admin partition is different from other partitions that may be created because the namespaces and resources in this partition are replicated between datacenters when they are federated. - -Any resource created without specifying an admin partition will inherit the partition of the ACL token used to create the resource. - --> **Preexisting resources and the `default` partition**: Admin partitions were introduced in Consul 1.11. After upgrading to Consul 1.11 or later, the `default` partition will contain all resources created in previous versions. - -### Naming Admin Partitions - -Only characters that are valid in DNS names can be used to name admin partitions. 
-Names must also begin with a lowercase letter. - -### Namespaces - -When an admin partition is created, it will include the `default` namespace. You can create additional namespaces within the partition. Resources created within a namespace are not shared across partitions. - -### Cross-datacenter Replication - -Only resources in the `default` admin partition will be replicated to secondary datacenters (also see [Known Limitations](#known-limitations)). - -### DNS Queries - -When queried, the DNS interface returns results for a single admin partition. -The query may explicitly specify the admin partition to use in the lookup. -If you do not specify an admin partition in the query, -the lookup uses the admin partition of the Consul agent that received the query. -Server agents always exist within the `default` admin partition. -Client agents are configured to operate within a specific admin partition. - -By default, Consul on Kubernetes uses [Consul dataplanes](/consul/docs/connect/dataplane) instead of client agents to manage communication between service instances. But to use the Consul DNS for service discovery, you must start a Consul client in client admin partitions. - -### Service Mesh Configurations - -The partition in which [`proxy-defaults`](/consul/docs/connect/config-entries/proxy-defaults) and [`mesh`](/consul/docs/connect/config-entries/mesh) configurations are created define the scope of the configurations. Services registered in a partition will use the `proxy-defaults` and `mesh` configurations that have been created in the partition. - -### Cross-partition Networking - -You can configure services to be discoverable by downstream services in any partition within the datacenter. Specify the upstream services that you want to be available for discovery by configuring the `exported-services` configuration entry in the partition where the services are registered. Refer to the [`exported-services` documentation](/consul/docs/connect/config-entries/exported-services) for details. Additionally, the requests made by downstream applications must have the correct DNS name for the Virtual IP Service lookup to occur. Service Virtual IP lookups allow for communications across Admin Partitions when using Transparent Proxy. Refer to the [Service Virtual IP Lookups for Consul Enterprise](/consul/docs/services/discovery/dns-static-lookups#service-virtual-ip-lookups-for-consul-enterprise) for additional information. - --> **Export mesh gateway **: When ACL is enabled in Consul-k8s and `meshgateway.mode` is set to `local`, the `mesh-gateway` service must be exported to their consumers for cross-partition traffic. - -### Cluster Peering - -You can use [cluster peering](/consul/docs/connect/cluster-peering/) between two admin partitions to connect clusters owned by different operators. Without Consul Enterprise, cluster peering is limited to the `default` partitions in each datacenter. Enterprise users can [establish cluster peering connections](/consul/docs/connect/cluster-peering/usage/establish-cluster-peering) between any two admin partitions as long as the partitions are in separate datacenters. It is not possible to establish cluster peering connections between two partitions in a single datacenter. - -## Requirements - -Your Consul configuration must meet the following requirements to use admin partitions. - -### Versions - -- Consul 1.11.1 and newer - -### General Networking Requirements - -All Consul clients must be able to initiate Gossip, HTTPS, and RPC connections to the servers. 
All servers must also be able to initiate Gossip connections to the clients. - -For Consul on Kubernetes, a dedicated `partition` Kubernetes `LoadBalancer` service is deployed to allow communication from clients to servers for admin partitions support (refer to [Kubernetes Requirements](#kubernetes-requirements) for additional information). - -For other runtimes, refer to the documentation for your infrastructure environment for instructions on how to allow communication on the following ports: -- 443 (HTTPS API requests) -- 8300 (RPC) -- 8301 (Gossip) -- 8502 (gRPC from [Consul Dataplane](/consul/docs/connect/dataplane/consul-dataplane)) - -### Security Configurations - -- The agent token used by the client agent must allow `node:write` in the admin partition. -- The `write` permission for `proxy-defaults` requires `mesh:write`. See [Admin Partition Rules](/consul/docs/security/acl/acl-rules#admin-partition-rules) for additional information. -- The `write` permissions for ingress and terminating gateways require `mesh:write` privileges. -- Wildcards (`*`) are not supported for the partition field when creating intentions for admin partitions. The partition name must be explicitly specified. -- With the exception of the `default` admin partition, ACL rules configured for admin partitions are isolated, so policies defined in partitions outside of the `default` partition can only reference their local partition. - -### Agent Configurations - -- The admin partition name should be specified in client agent configurations: - - ```hcl - partition = "" - ``` - -- The anti-entropy sync will use the configured admin partition name when registering the node. - -### Kubernetes Requirements - -One of the primary use cases for admin partitions is for enabling a service mesh across multiple Kubernetes clusters. The following requirements must be met to create admin partitions on Kubernetes: - -- If you are deploying Consul servers on Kubernetes, then ensure that the Consul servers are deployed within the same Kubernetes cluster. Consul servers may be deployed external to Kubernetes and configured using the `externalServers` stanza. -- Workloads deployed on the same Kubernetes cluster as the Consul Servers must use the `default` partition. If the workloads are required to run on a non-default partition, then the clients must be deployed in a separate Kubernetes cluster. -- A Consul Enterprise license must be installed on each Kubernetes cluster. -- The helm chart for consul-k8s v0.39.0 or greater. -- Consul 1.11.1-ent or greater. -- A designated Kubernetes `LoadBalancer` service must be exposed on the Consul server cluster. This enable the following communication channels to the Consul servers: - - RPC on port 8300 - - Gossip on port 8301 - - HTTPS API requests on port 443 API requests -- Mesh gateways must be deployed as a Kubernetes `LoadBalancer` service on port 443 across all Kubernetes clusters. -- Cross-partition networking must be implemented as described in [Cross-Partition Networking](#cross-partition-networking). - -## Usage - -This section describes how to deploy Consul admin partitions to Kubernetes clusters. Refer to the [admin partition CLI documentation](/consul/commands/partition) for information about command line usage. - -### Deploying Consul with Admin Partitions on Kubernetes - -The expected use case is to create admin partitions on Kubernetes clusters. 
This is because many organizations prefer to use cloud-managed Kubernetes offerings to provision separate Kubernetes clusters for individual teams, business units, or environments. This is opposed to deploying a single, large Kubernetes cluster. Organizations encounter problems, however, when they attempt to use a service mesh to enable multi-cluster use cases, such as administration tasks and communication between nodes. - -The following procedure will result in an admin partition in each Kubernetes cluster. The Consul clients running in the cluster with servers will be in the `default` partition. Another partition called `clients` will also be created. - -#### Prepare to install Consul across multiple Kubernetes clusters - -Verify that your Consul deployment meets the [Kubernetes Requirements](#kubernetes-requirements) before proceeding. - -1. Verify that your VPC is configured to enable connectivity between the pods running workloads and Consul servers. Refer to your virtual cloud provider's documentation for instructions on configuring network connectivity. -1. Set environment variables to use with shell commands. - - ```shell-session - $ export HELM_RELEASE_SERVER=server - $ export HELM_RELEASE_CLIENT=client - $ export SERVER_CONTEXT= - $ export CLIENT_CONTEXT= - ``` - -1. Create the license secret in server cluster. - - ```shell-session - $ kubectl create --context ${SERVER_CONTEXT} namespace consul - $ kubectl create secret --context ${SERVER_CONTEXT} --namespace consul generic license --from-file=key=./path/to/license.hclic - ``` - -1. Create the license secret in the non-default partition cluster for your workloads. This step must be repeated for every additional non-default partition cluster. - - ```shell-session - $ kubectl create --context ${CLIENT_CONTEXT} namespace consul - $ kubectl create secret --context ${CLIENT_CONTEXT} --namespace consul generic license --from-file=key=./path/to/license.hclic - ``` - -#### Install the Consul server cluster - -1. Set your context to the server cluster. - - ```shell-session - $ kubectl config use-context ${SERVER_CONTEXT} - ``` - -1. Create a server configuration values file to override the default Consul Helm chart settings: - - - - - - ```yaml - global: - enableConsulNamespaces: true - tls: - enabled: true - image: hashicorp/consul-enterprise:1.16.3-ent - adminPartitions: - enabled: true - acls: - manageSystemACLs: true - enterpriseLicense: - secretName: license - secretKey: key - meshGateway: - enabled: true - ``` - - - - - Refer to the [Helm Chart Configuration reference](/consul/docs/k8s/helm) for details about the parameters you can specify in the file. - -1. Install the Consul server(s) using the values file created in the previous step: - - ```shell-session - $ helm install ${HELM_RELEASE_SERVER} hashicorp/consul --version "1.0.0" --create-namespace --namespace consul --values server.yaml - ``` - -1. After the server starts, get the external IP address for partition service so that it can be added to the client configuration (`externalServers.hosts`). The IP address is used to bootstrap connectivity between servers and workload pods on the non-default partition cluster. - - ```shell-session - $ kubectl get services --selector="app=consul,component=server" --namespace consul --output jsonpath="{range .items[*]}{@.status.loadBalancer.ingress[*].ip}{end}" - 34.135.103.67 - ``` - -1. 
Get the Kubernetes authentication method URL for the non-default partition cluster running your workloads: - - ```shell-session - $ kubectl config view --output "jsonpath={.clusters[?(@.name=='${CLIENT_CONTEXT}')].cluster.server}" - ``` - - Use the IP address printed to the console to configure the `externalServers.k8sAuthMethodHost` parameter in the workload configuration file for your non-default partition cluster running your workloads. - -1. Copy the server certificate to the non-default partition cluster running your workloads. - - ```shell-session - $ kubectl get secret ${HELM_RELEASE_SERVER}-consul-ca-cert --context ${SERVER_CONTEXT} -n consul --output yaml | kubectl apply --namespace consul --context ${CLIENT_CONTEXT} --filename - - ``` - -1. Copy the server key to the non-default partition cluster running your workloads: - - ```shell-session - $ kubectl get secret ${HELM_RELEASE_SERVER}-consul-ca-key --context ${SERVER_CONTEXT} --namespace consul --output yaml | kubectl apply --namespace consul --context ${CLIENT_CONTEXT} --filename - - ``` - -1. If ACLs were enabled in the server configuration values file, copy the token to the non-default partition cluster running your workloads: - - ```shell-session - $ kubectl get secret ${HELM_RELEASE_SERVER}-consul-partitions-acl-token --context ${SERVER_CONTEXT} --namespace consul --output yaml | kubectl apply --namespace consul --context ${CLIENT_CONTEXT} --filename - - ``` - -#### Install on the non-default partition clusters running workloads - -1. Switch to the workload non-default partition clusters running your workloads: - - ```shell-session - $ kubectl config use-context ${CLIENT_CONTEXT} - ``` - -1. Create a configuration for each non-default admin partition. - - - - - - ```yaml - global: - name: consul - enabled: false - enableConsulNamespaces: true - image: hashicorp/consul-enterprise:1.16.3-ent - adminPartitions: - enabled: true - name: clients - tls: - enabled: true - caCert: - secretName: server-consul-ca-cert # See step 6 from `Install Consul server cluster` - secretKey: tls.crt - caKey: - secretName: server-consul-ca-key # See step 7 from `Install Consul server cluster` - secretKey: tls.key - acls: - manageSystemACLs: true - bootstrapToken: - secretName: server-consul-partitions-acl-token # See step 8 from `Install Consul server cluster` - secretKey: token - enterpriseLicense: - secretName: license - secretKey: key - externalServers: - enabled: true - hosts: [34.135.103.67] # See step 4 from `Install Consul server cluster` - tlsServerName: server.dc1.consul - k8sAuthMethodHost: https://104.154.156.146 # See step 5 from `Install Consul server cluster` - meshGateway: - enabled: true - ``` - - - - -1. Install the non-default partition clusters running your workloads: - - ```shell-session - $ helm install ${HELM_RELEASE_CLIENT} hashicorp/consul --version "1.0.0" --create-namespace --namespace consul --values client.yaml - ``` - -### Verifying the Deployment - -You can log into the Consul UI to verify that the partitions appear as expected. - -1. Set your context to the server cluster. - - ```shell-session - $ kubectl config use-context ${SERVER_CONTEXT} - ``` - -1. If ACLs are enabled, you will need the partitions ACL token, which can be read from the Kubernetes secret. 
The token is an encoded string that must be decoded in base64, e.g.: - - ```shell-session - $ kubectl get secret --namespace consul --context ${SERVER_CONTEXT} --template "{{ .data.token | base64decode }}" ${HELM_RELEASE_SERVER}-consul-bootstrap-acl-token - ``` - - The example command gets the secret from the default partition cluster, decodes the secret, and prints the token to the console. - -1. Open the Consul UI in a browser using the external IP address and port number described in a previous step (see [step 4](#get-external-ip-address)). - -1. Click **Log in** and enter the decoded token when prompted. - -You will see the `default` and `clients` partitions available in the **Admin Partition** drop-down menu. - -![Partitions will appear in the Admin Partitions drop-down menu within the Consul UI.](/img/admin-partitions/consul-admin-partitions-verify-in-ui.png) - -## Known Limitations - -- Only the `default` admin partition is supported when federating multiple Consul datacenters in a WAN. -- Admin partitions have no theoretical limit. We intend to conduct a large-scale test to identify a recommended max in the future. diff --git a/website/content/docs/enterprise/audit-logging.mdx b/website/content/docs/enterprise/audit-logging.mdx deleted file mode 100644 index 75d3b33a4108..000000000000 --- a/website/content/docs/enterprise/audit-logging.mdx +++ /dev/null @@ -1,234 +0,0 @@ ---- -layout: docs -page_title: Audit Logging (Enterprise) -description: >- - Audit logging secures Consul by capturing a record of HTTP API access and usage. Learn how to format agent configuration files to enable audit logs and specify the path to save logs to. ---- - -# Audit Logging - - - -This feature requires -HashiCorp Cloud Platform (HCP) or self-managed Consul Enterprise. -Refer to the [enterprise feature matrix](/consul/docs/enterprise#consul-enterprise-feature-availability) for additional information. - - - -With Consul Enterprise v1.8.0+, audit logging can be used to capture a clear and -actionable log of authenticated events (both attempted and committed) that Consul -processes via its HTTP API. These events are then compiled into a JSON format for easy export -and contain a timestamp, the operation performed, and the user who initiated the action. - -Audit logging enables security and compliance teams within an organization to get -greater insight into Consul access and usage patterns. - -Complete the [Capture Consul Events with Audit Logging](/consul/tutorials/datacenter-operations/audit-logging) tutorial to learn more about Consul's audit logging functionality, - -For detailed configuration information on configuring the Consul Enterprise's audit -logging, review the Consul [Audit Log](/consul/docs/agent/config/config-files#audit) -documentation. - -## Example Configuration - -Audit logging must be enabled on every agent in order to accurately capture all -operations performed through the HTTP API. To enable logging, add -the [`audit`](/consul/docs/agent/config/config-files#audit) stanza to the agent's configuration. - --> **Note**: Consul only logs operations which are initiated via the HTTP API. -The audit log does not record operations that take place over the internal RPC -communication channel used for agent communication. - - - - -The following example configures a destination called "My Sink". Since rotation is enabled, -audit events will be stored at files named: `/tmp/audit-.json`. 
The log file will -be rotated either every 24 hours, or when the log file size is greater than 25165824 bytes -(24 megabytes). - - - -```hcl -audit { - enabled = true - sink "My sink" { - type = "file" - format = "json" - path = "/tmp/audit.json" - delivery_guarantee = "best-effort" - rotate_duration = "24h" - rotate_max_files = 15 - rotate_bytes = 25165824 - } -} -``` - -```json -{ - "audit": { - "enabled": true, - "sink": { - "My sink": { - "type": "file", - "format": "json", - "path": "/tmp/audit.json", - "delivery_guarantee": "best-effort", - "rotate_duration": "24h", - "rotate_max_files": 15, - "rotate_bytes": 25165824 - } - } - } -} -``` - -```yaml -server: - auditLogs: - enabled: true - sinks: - - name: My Sink - type: file - format: json - path: /tmp/audit.json - delivery_guarantee: best-effort - rotate_duration: 24h - rotate_max_files: 15 - rotate_bytes: 25165824 -``` - - - - - - - -The following example configures a destination called "My Sink" which emits audit -logs to standard out. - - - -```hcl -audit { - enabled = true - sink "My sink" { - type = "file" - format = "json" - path = "/dev/stdout" - delivery_guarantee = "best-effort" - } -} -``` - -```json -{ - "audit": { - "enabled": true, - "sink": { - "My sink": { - "type": "file", - "format": "json", - "path": "/dev/stdout", - "delivery_guarantee": "best-effort" - } - } - } -} -``` - -```yaml -server: - auditLogs: - enabled: true - sinks: - - name: My Sink - type: file - format: json - path: /dev/stdout - delivery_guarantee: best-effort -``` - - - - - - -## Example Audit Log - -In this example a client has issued an HTTP GET request to look up the `ssh` -service in the `/v1/catalog/service/` endpoint. - -Details from the HTTP request are recorded in the audit log. The `stage` field -is set to `OperationStart` which indicates the agent has begun processing the -request. - -The value of the `payload.auth.accessor_id` field is the accessor ID of the -[ACL token](/consul/docs/security/acl#tokens) which issued the request. - - - -```json -{ - "created_at": "2020-12-08T12:30:29.196365-05:00", - "event_type": "audit", - "payload": { - "id": "e4a20aec-d250-72c4-2aea-454fe8ae8051", - "version": "1", - "type": "HTTPEvent", - "timestamp": "2020-12-08T12:30:29.196206-05:00", - "auth": { - "accessor_id": "08f05787-3609-8001-65b4-922e5d52e84c", - "description": "Bootstrap Token (Global Management)", - "create_time": "2020-12-01T11:01:51.652566-05:00" - }, - "request": { - "operation": "GET", - "endpoint": "/v1/catalog/service/ssh", - "remote_addr": "127.0.0.1:64015", - "user_agent": "curl/7.54.0", - "host": "127.0.0.1:8500" - }, - "stage": "OperationStart" - } -} -``` - - - -After the request is processed, a corresponding log entry is written for the HTTP -response. The `stage` field is set to `OperationComplete` which indicates the agent -has completed processing the request. 
- - - -```json -{ - "created_at": "2020-12-08T12:30:29.202935-05:00", - "event_type": "audit", - "payload": { - "id": "1f85053f-badb-4567-d239-abc0ecee1570", - "version": "1", - "type": "HTTPEvent", - "timestamp": "2020-12-08T12:30:29.202863-05:00", - "auth": { - "accessor_id": "08f05787-3609-8001-65b4-922e5d52e84c", - "description": "Bootstrap Token (Global Management)", - "create_time": "2020-12-01T11:01:51.652566-05:00" - }, - "request": { - "operation": "GET", - "endpoint": "/v1/catalog/service/ssh", - "remote_addr": "127.0.0.1:64015", - "user_agent": "curl/7.54.0", - "host": "127.0.0.1:8500" - }, - "response": { - "status": "200" - }, - "stage": "OperationComplete" - } -} -``` - - diff --git a/website/content/docs/enterprise/backups.mdx b/website/content/docs/enterprise/backups.mdx deleted file mode 100644 index ba00952566c0..000000000000 --- a/website/content/docs/enterprise/backups.mdx +++ /dev/null @@ -1,38 +0,0 @@ ---- -layout: docs -page_title: Automated Backups (Enterprise) -description: >- - Learn about launching the snapshot agent to automatically backup files to a cloud storage provider so that you can restore Consul servers. Supported providers include Amazon S3, Google Cloud Storage, and Azure Blob Storage. ---- - -# Automated Backups - - - -This feature requires -HashiCorp Cloud Platform (HCP) or self-managed Consul Enterprise. -Refer to the [enterprise feature matrix](/consul/docs/enterprise#consul-enterprise-feature-availability) for additional information. - - - -Consul Enterprise enables you to run -the snapshot agent within your environment as a service (Systemd as an example) -or scheduled through other means. Once running, the snapshot agent service operates as a highly -available process that integrates with the snapshot API to automatically manage -taking snapshots, backup rotation, and sending backup files offsite to Amazon S3 -(or another S3-compatible endpoint), Google Cloud Storage, or Azure Blob Storage. - -This capability provides an enterprise solution for backup and restoring the state of Consul servers -within an environment in an automated manner. These snapshots are atomic and point-in-time. Consul -datacenter backups include (but are not limited to): - -- Key/Value Store Entries -- Service Catalog Registrations -- Prepared Queries -- Sessions -- Access Control Lists (ACLs) -- Namespaces - -For more experience leveraging Consul's snapshot functionality, complete the [Datacenter Backups in Consul](/consul/tutorials/production-deploy/backup-and-restore?utm_source=docs) tutorial. -For detailed configuration information on configuring the Consul Enterprise's snapshot agent, review the -[Consul Snapshot Agent documentation](/consul/commands/snapshot/agent). diff --git a/website/content/docs/enterprise/cts.mdx b/website/content/docs/enterprise/cts.mdx new file mode 100644 index 000000000000..9b8ea0563226 --- /dev/null +++ b/website/content/docs/enterprise/cts.mdx @@ -0,0 +1,30 @@ +--- +layout: docs +page_title: Consul-Terraform-Sync Enterprise +description: >- + Consul-Terraform-Sync Enterprise +--- + +# Consul-Terraform-Sync Enterprise + +Consul-Terraform-Sync (CTS) Enterprise is available with [Consul Enterprise](https://www.hashicorp.com/products/consul) and requires a Consul [license](/consul/docs/enterprise/license/cts) to be applied. + +Enterprise features of CTS address organization complexities of collaboration, operations, scale, and governance. 
CTS Enterprise supports an official integration with [HCP Terraform](https://cloud.hashicorp.com/products/terraform) and [Terraform Enterprise](/terraform/enterprise), the self-hosted distribution, to extend insight into dynamic updates of your network infrastructure. + +| Features | Community Edition | Enterprise | +|----------|-------------|------------| +| Consul Namespace | Default namespace only | Filter task triggers by any namespace | +| Automation Driver | Terraform Community Edition | Terraform Community Edition, HCP Terraform, or Terraform Enterprise | +| Terraform Workspaces | Local | Local workspaces with the Terraform driver or [remote workspaces](/terraform/cloud-docs/workspaces) with the HCP Terraform driver | +| Terraform Backend Options | [azurerm](/terraform/language/settings/backends/azurerm), [consul](/terraform/language/settings/backends/consul), [cos](/terraform/language/settings/backends/cos), [gcs](/terraform/language/settings/backends/gcs), [kubernetes](/terraform/language/settings/backends/kubernetes), [local](/terraform/language/settings/backends/local), [manta](/terraform/language/v1.2.x/settings/backends/manta), [pg](/terraform/language/settings/backends/pg), and [s3](/terraform/language/settings/backends/s3) with the Terraform driver | The supported backends for CTS with the Terraform driver or HCP Terraform with the HCP Terraform driver | +| Terraform Version | One Terraform version for all tasks | Optional Terraform version per task when using the HCP Terraform driver | +| Terraform Run Output | CTS logs | CTS logs or Terraform output organized by HCP Terraform remote workspaces | +| Credentials and secrets | On disk as `.tfvars` files or in shell environment | Secured variables stored in remote workspace | +| Audit | | Terraform audit logs ([HCP Terraform](/terraform/cloud-docs/api-docs/audit-trails) or [Terraform Enterprise](/terraform/enterprise/admin/infrastructure/logging)) | +| Collaboration | | Run [history](/terraform/cloud-docs/run/manage), [triggers](/terraform/cloud-docs/workspaces/settings/run-triggers), and [notifications](/terraform/cloud-docs/workspaces/settings/notifications) supported on HCP Terraform | +| Governance | | [Sentinel](/terraform/cloud-docs/policy-enforcement) to enforce governance policies as code | + +The [HCP Terraform driver](/consul/docs/nia/configuration#terraform-cloud-driver) enables CTS Enterprise to integrate with HCP Terraform or Terraform Enterprise. The [HCP Terraform driver](/consul/docs/nia/network-drivers/terraform-cloud) page provides an overview of how the integration works within CTS. + +## Consul Admin Partition Support +CTS subscribes to a Consul agent. Depending on the admin partition the Consul agent is a part of and the services within the admin partition, CTS will be able to subscribe to those services and support the automation workflow. As such, admin partitions are not relevant to the CTS workflow. We recommend deploying a single CTS instance that subscribes to services/KV within a single partition and using a different CTS instance (or instances) to subscribe to services/KV in another partition. 
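For example, the recommended pattern of one CTS instance per admin partition might look like the following minimal sketch. This is an illustration only: the agent address, task name, and Terraform module path are hypothetical, and the partition a given CTS instance watches is determined by the Consul agent its `consul` block points at.

```hcl
# cts-partition-a.hcl -- one CTS instance, subscribed to services in a single partition
consul {
  # Hypothetical address of a Consul agent that belongs to the desired admin partition.
  address = "consul-agent-partition-a.example.internal:8500"
}

task {
  name   = "firewall-rules-partition-a"
  module = "./modules/firewall-rules" # hypothetical local Terraform module

  condition "services" {
    # Services registered in the partition served by the agent above.
    names = ["web", "api"]
  }
}
```

A second CTS instance, with its own configuration file pointing at an agent in the other partition, would handle that partition's services in the same way.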
\ No newline at end of file diff --git a/website/content/docs/enterprise/downgrade.mdx b/website/content/docs/enterprise/downgrade.mdx new file mode 100644 index 000000000000..d9c77b383ee1 --- /dev/null +++ b/website/content/docs/enterprise/downgrade.mdx @@ -0,0 +1,208 @@ +--- +layout: docs +page_title: Downgrade from Consul Enterprise to the community edition +description: >- + Learn how to downgrade your installation of Consul Enterprise to the free Consul community edition (CE) +--- + +# Downgrade from Consul Enterprise to the community edition + +This document describes how to downgrade from Consul Enterprise to Consul community edition (CE). + +## Overview + +You can downgrade Consul editions if you no longer require Consul Enterprise features. Complete the following steps to downgrade to Consul CE: + +1. Download the CE binary. +1. Back up your Consul data and set appropriate log levels. +1. Complete the downgrade procedures. + +### Request-handling details + +During the downgrade process, the Consul CE server handles the raft replication logs in one of the following ways: + +- Drops the request. +- Filters out data from requests sent from non-default namespaces or partitions. +- Panics and stops the downgrade. + +The following examples describe scenarios where the server may drop the requests: + +- Registration requests in non-default namespaces. +- Services or health checks in non-default namespaces or partitions. + +- Write requests to peered clusters if the local partition connecting to the peer is non-default. + +The following examples describe scenarios where the server may filter out data from requests: + +- Intention sources that target non-default namespaces or partitions are filtered out of the configuration entry. +- Exports of services within non-default namespaces or partitions are filtered out of the configuration entry. + +The server may panic and stop the downgrade when Consul cannot safely filter configuration entries that route traffic. This is because Consul is unable to determine if the filtered configuration entries send traffic to services that are able to handle the traffic. Consul CE panics in order to prevent harm to existing service mesh routes. + +In these situations, you must first remove references to services within non-default namespaces or partitions from those configuration entries. + +The server may panic in the following cases: + +- Service splitter, service resolver, and service router configuration entry requests that have references to services located in non-default namespaces or partitions cause the server to panic. + +## Requirements + +You can only downgrade Consul editions for v1.18 and later. + +## Download the CE binary version + +First, download the binary for CE. + + + + +All current and past versions of the CE and Enterprise releases are +available on the [HashiCorp releases page](https://releases.hashicorp.com/consul) + +Example: +```shell-session +$ export VERSION=1.18.0 +$ curl https://releases.hashicorp.com/consul/${VERSION}/consul_${VERSION}_linux_amd64.zip +``` + + + + +To downgrade Consul edition on Kubernetes, modify the image version in your Helm chart and follow the upgrade process described in [Upgrade Consul version](/consul/docs/k8s/upgrade#upgrade-consul-version). + + + + + +## Prepare for the downgrade to CE + +1. Take a snapshot of the existing Consul state so that you have a safe fallback if an error occurs. + + ```shell-session + $ consul snapshot save backup.snap + ``` + +1.
Run the following command to verify that the snapshot was successfully saved: + + ```shell-session + $ consul snapshot inspect backup.snap + ``` + + Example output: + + ``` + ID 2-1182-1542056499724 + Size 4115 + Index 1182 + Term 2 + Version 1 + ``` + +1. Store the snapshot in a safe location. Refer to the following documentation for additional information about using snapshots: + + - [Consul snapshot](/consul/commands/snapshot) + - [Backup Consul Data and State tutorial](/consul/tutorials/production-deploy/backup-and-restore) + +1. Temporarily modify your Consul configuration so that its [log_level](/consul/commands/agent#_log_level) + is set to `debug`. This enables Consul to report detailed information if an error occurs. +1. Issue the following command on your servers to reload the configuration: + + ```shell-session + $ consul reload + ``` + +1. If applicable, modify the following configuration entries: + - [Service resolver](/consul/docs/reference/config-entry/service-resolver): + 1. Remove services configured as failovers in non-default namespaces or services that belong to a sameness group. + 1. Remove services configured as redirects that belong to non-default namespaces or partitions. + - [Service splitter](/consul/docs/reference/config-entry/service-splitter): + 1. Remove services configured as splits that belong to non-default namespaces or partitions. + - [Service router](/consul/docs/reference/config-entry/service-router): + 1. Remove services configured as a destination that belong to non-default namespaces or partitions. + +## Perform the downgrade + +1. Restart or redeploy all Consul clients with a CE version of the binary. You can use a service management system, such as `systemd` or `upstart`, to restart the Consul service. If you are not using a service management system, you must restart the agent manually. The following example uses `systemctl` to restart the Consul service: + + ```shell-session + $ sudo systemctl restart consul + ``` + +1. Issue the following command to discover which server is currently the leader: + + ```shell-session + $ consul operator raft list-peers + ``` + + Consul prints the raft information. The format and content may differ based on your version of Consul: + + ```shell-session + Node ID Address State Voter RaftProtocol + dc1-node1 ae15858f-7f5f-4dcb-b7d5-710fdcdd2745 10.11.0.2:8300 leader true 3 + dc1-node2 20e6be1b-f1cb-4aab-929f-f7d2d43d9a96 10.11.0.3:8300 follower true 3 + dc1-node3 658c343b-8769-431f-a71a-236f9dbb17b3 10.11.0.4:8300 follower true 3 + ``` + +1. Make a note of the leader so that you can perform the downgrade on agents in the proper order. + +1. Update the server binaries to use the CE version. Complete the following steps in order for each server agent in the `follower` state. Then, repeat the steps for the `leader` agent. + + 1. Set an environment variable named `CONSUL_ENTERPRISE_DOWNGRADE_TO_CE` to `true`. The following example sets variable using `systemd`: + 1. Edit the Consul systemd service unit file: + ```shell-session + $ sudo vi /etc/systemd/system/consul.service + ``` + 1. Add the environment variables you want to set for Consul under the `[Service]` section of the unit file and save the changes: + ```shell-session + [Service] + Environment=CONSUL_ENTERPRISE_DOWNGRADE_TO_CE=true + ``` + 1. Reload `systemd`: + ```shell-session + $ sudo systemctl daemon-reload + ``` + 1. Restart the Consul service. You can use a service management system, such as `systemd` or `upstart`. 
If you are not using a service management system, you must restart the agent manually. The following example restarts Consul using `systemctl`: + ```shell-session + $ sudo systemctl restart consul + ``` + 1. To validate that the agent has rejoined the cluster and is in sync with the leader, issue the following command: + ```shell-session + $ consul info + ``` + Verify that the `commit_index` and `last_log_index` fields have the same value. Differing values may be result of an unexpected leadership election due to loss of quorum. + +1. Run the following command to verify that all servers appear in the cluster as expected and are on the correct version: + + ```shell-session + $ consul members + ``` + + Consul prints cluster membership information. The exact format and content depends on your Consul version: + + ```shell-session + Node Address Status Type Build Protocol DC + dc1-node1 10.11.0.2:8301 alive server 1.18.0 2 dc1 + dc1-node2 10.11.0.3:8301 alive server 1.18.0 2 dc1 + dc1-node3 10.11.0.4:8301 alive server 1.18.0 2 dc1 + ``` + +1. Verify the raft state to make sure there is a leader and sufficient voters: + + ```shell-session + $ consul operator raft list-peers + ``` + + Consul prints the raft information. The format and content may differ based on your version of Consul: + + ```shell-session + Node ID Address State Voter RaftProtocol + dc1-node1 ae15858f-7f5f-4dcb-b7d5-710fdcdd2745 10.11.0.2:8300 leader true 3 + dc1-node2 20e6be1b-f1cb-4aab-929f-f7d2d43d9a96 10.11.0.3:8300 follower true 3 + dc1-node3 658c343b-8769-431f-a71a-236f9dbb17b3 10.11.0.4:8300 follower true 3 + ``` + +1. Set your `log_level` back to its original value and issue the following command on your servers to reload the configuration: + + ```shell-session + $ consul reload + ``` \ No newline at end of file diff --git a/website/content/docs/enterprise/ecs.mdx b/website/content/docs/enterprise/ecs.mdx new file mode 100644 index 000000000000..1df0c3c4a8fa --- /dev/null +++ b/website/content/docs/enterprise/ecs.mdx @@ -0,0 +1,150 @@ +--- +layout: docs +page_title: Consul Enterprise on AWS Elastic Container Service (ECS) +description: >- + You can deploy Consul Enterprise on Amazon Web Services ECS with an official Docker image. Learn about supported Enterprise features, including requirements for admin partitions, namespaces, and audit logging. +--- + +# Consul Enterprise on AWS Elastic Container Service (ECS) + +You can run Consul Enterprise on ECS by specifying the Consul Enterprise Docker image in the Terraform module parameters. + +## Specify the Consul image + +When you set up an instance of the [`mesh-task`](https://registry.terraform.io/modules/hashicorp/consul-ecs/aws/latest/submodules/mesh-task) or [`gateway-task`](https://registry.terraform.io/modules/hashicorp/consul-ecs/aws/latest/submodules/gateway-task) module, +set the parameter `consul_image` to a Consul Enterprise image. The following example instructs the `mesh-task` module to import Consul Enterprise version 1.12.0: + +```hcl +module "my_task" { + source = "hashicorp/consul-ecs/aws//modules/mesh-task" + version = "" + + consul_image = "hashicorp/consul-enterprise:1.12.0-ent" + ... +} +``` + +## Licensing + +!> **Warning:** Consul Enterprise is currently only fully supported when [ACLs are enabled](/consul/docs/ecs/deploy/terraform#secure-configuration-requirements). + +Consul Enterprise [requires a license](/consul/docs/enterprise/license). If running +Consul on ECS with ACLs enabled, the license will be automatically pulled down from Consul servers. 
+ +Currently there is no capability for specifying the license when ACLs are disabled so if you wish to +run Consul Enterprise clients then you must enable ACLs. + +## Running Consul Community Edition clients + +You can operate Consul Enterprise servers with Consul CE (community edition) clients as long as the features you are using do not require Consul Enterprise client support. Admin partitions and namespaces, for example, require Consul Enterprise clients and are not supported with Consul CE. + +## Feature Support + +Consul on ECS supports the following Consul Enterprise features. +If you are only using features that run on Consul servers, then you can use a CE client in your service mesh tasks on ECS. +If client support is required for any of the features, then you must use a Consul Enterprise client in your `mesh-tasks`. + +| Feature | Supported | Description | +|-----------------------------------|---------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| Automated Backups/Snapshot Agent | Yes\* | Running the snapshot agent on ECS is not currently supported but you are able to run the snapshot agent alongside your Consul servers on VMs. | +| Automated Upgrades | Yes (servers) | This feature runs on Consul servers. | +| Enhanced Read Scalability | Yes (servers) | This feature runs on Consul servers. | +| Single Sign-On/OIDC | Yes (servers) | This feature runs on Consul servers. | +| Redundancy Zones | Yes (servers) | This feature runs on Consul servers. | +| Advanced Federation/Network Areas | Yes (servers) | This feature runs on Consul servers. | +| Sentinel | Yes (servers) | This feature runs on Consul servers. | +| Network Segments | No | Currently there is no capability to configure the network segment Consul clients on ECS run in. | +| Namespaces | Yes | This feature requires Consul Enterprise servers. CE clients can register into the `default` namespace. Registration into a non-default namespace requires a Consul Enterprise client. | +| Admin Partitions | Yes | This feature requires Consul Enterprise servers. CE clients can register into the `default` admin partition. Registration into a non-default partition requires a Consul Enterprise client. | +| Audit Logging | Yes | This feature requires Consul Enterprise clients. | + +### Admin Partitions and Namespaces + +Consul on ECS supports [admin partitions](/consul/docs/multi-tenant/admin-partition) and [namespaces](/consul/docs/multi-tenant/namespace) when Consul Enterprise servers and clients are used. These features have the following requirements: + +- ACLs must be enabled. +- ACL controller must run in the ECS cluster. +- `mesh-task` must use a Consul Enterprise client image. +- `gateway-task` must use a Consul Enterprise client image. + +The ACL controller manages configuration of the AWS IAM auth method on the Consul servers. It +ensures unused tokens created by tasks are cleaned up. It also creates admin partitions and +namespaces if they do not already exist. + +~> **Note:** The ACL controller does not delete admin partitions or namespaces once they are created. + +Each ACL controller manages a single admin partition. Consul on ECS supports one ACL controller per ECS cluster; +therefore, the administrative boundary for admin partitions is one admin partition per ECS cluster. 
+ +The following example demonstrates how to configure the ACL controller to enable admin partitions +and manage an admin partition named `my-partition`. The `consul_partition` field is optional and if it +is not provided when `consul_partitions_enabled = true`, will default to the `default` admin partition. + + + +```hcl +module "acl_controller" { + source = "hashicorp/consul-ecs/aws//modules/acl-controller" + + ... + + consul_partitions_enabled = true + consul_partition = "my-partition" +} +``` + + + +Services are assigned to admin partitions and namespaces through the use of [task tags](/consul/docs/ecs/deploy/manual#configure-task-tags). +The `mesh-task` module automatically adds the necessary tags to the task definition. +If the ACL controller is configured for admin partitions, services on the mesh will +always be assigned to an admin partition and namespace. If the `mesh-task` does not define +the partition it will default to the `default` admin partition. Similarly, if a `mesh-task` does +not define the namespace it will default to the `default` namespace. + +The following example demonstrates how to create a `mesh-task` assigned to the admin partition named +`my-partition`, in the `my-namespace` namespace. + + + +```hcl +module "my_task" { + source = "hashicorp/consul-ecs/aws//modules/mesh-task" + family = "my_task" + + ... + + consul_image = "hashicorp/consul-enterprise:-ent" + consul_partition = "my-partition" + consul_namespace = "my-namespace" +} +``` + + + +### Audit Logging + +Consul on ECS supports [audit logging](/consul/docs/monitor/log/audit) when using Consul Enterprise clients. +This feature has the following requirements: + +- ACLs must be enabled. +- `mesh-task` must use a Consul Enterprise image. +- `gateway-task` must use a Consul Enterprise image. + +To enable audit logging, set `audit_logging = true` when configuring the client. + + + +```hcl +module "my_task" { + source = "hashicorp/consul-ecs/aws//modules/mesh-task" + family = "my_task" + + ... + + consul_image = "hashicorp/consul-enterprise:-ent" + audit_logging = true +} +``` + + diff --git a/website/content/docs/enterprise/ent-to-ce-downgrades.mdx b/website/content/docs/enterprise/ent-to-ce-downgrades.mdx deleted file mode 100644 index a5b6061f40a6..000000000000 --- a/website/content/docs/enterprise/ent-to-ce-downgrades.mdx +++ /dev/null @@ -1,208 +0,0 @@ ---- -layout: docs -page_title: Downgrade from Consul Enterprise to the community edition -description: >- - Learn how to downgrade your installation of Consul Enterprise to the free Consul community edition (CE) ---- - -# Downgrade from Consul Enterprise to the community edition - -This document describes how to downgrade from Consul Enterprise to Consul community edition (CE). - -## Overview - -You can downgrade Consul editions if you no longer require Consul Enterprise features. Complete the following steps to downgrade to Consul CE: - -1. Download the CE binary. -1. Backup your Consul data and set appropriate log levels. -1. Complete the downgrade procedures. - -### Request-handling details - -During the downgrade process, the Consul CE server handles the raft replication logs in one of the following ways: - -- Drops the request. -- Filters out data from requests sent from non-default namespaces or partitions. -- Panics and stops the downgrade. - -The following examples describe scenarios where the server may drop the requests: - -- Registration requests in non-default namespace -- Services or health checks in non-default namespaces or partitions. 
- -- Write requests to peered clusters if the local partition connecting to the peer is non-default. - -The following examples describe scenarios where the server may filter out data from requests: - -- Intention sources that target non-default namespaces or partitions are filtered out of the configuration entry. -- Exports of services within non-default namespaces or partitions are filtered out of the configuration entry. - -The server may panic and stop the downgrade when Consul cannot safely filter configuration entries that route traffic. This is because Consul is unable to determine if the filtered configuration entries send traffic to services that are able to handle the traffic. Consul CE panics in order to prevent harm to existing service mesh routes. - -In these situations, you must first remove references to services within non-default namespaces or partitions from those configuration entries. - -The server may panic in the following cases: - -- Service splitter, service resolver, and service router configuration entry requests that have references to services located in non-default namespaces or partitions cause the server to panic. - -## Requirements - -You can only downgrade Consul editions for v1.18 and later. - -## Download the CE binary version - -First, download the binary for CE. - - - - -All current and past versions of the CE and Enterprise releases are -available on the [HashiCorp releases page](https://releases.hashicorp.com/consul) - -Example: -```shell-session -$ export VERSION=1.18.0 -$ curl https://releases.hashicorp.com/consul/${VERSION}/consul_${VERSION} _linux_amd64.zip -``` - - - - -To downgrade Consul edition on Kubernetes, modify the image version in your Helm chart and follow the upgrade process described in [Upgrade Consul version](/consul/docs/k8s/upgrade#upgrade-consul-version). - - - - - -## Prepare for the downgrade to CE - -1. Take a snapshot of the existing Consul state so that you have a safe fallback if an error occurs. - - ```shell-session - $ consul snapshot save backup.snap - ``` - -1. Run the following command to verify that the snapshot was successfully saved: - - ```shell-session - $ consul snapshot inspect backup.snap - ``` - - Example output: - - ``` - ID 2-1182-1542056499724 - Size 4115 - Index 1182 - Term 2 - Version 1 - ``` - -1. Store the snapshot in a safe location. Refer to the following documentation for additional information about using snapshots: - - - [Consul snapshot](/consul/commands/snapshot) - - [Backup Consul Data and State tutorial](/consul/tutorials/production-deploy/backup-and-restore) - -1. Temporarily modify your Consul configuration so that its [log_level](/consul/docs/agent/config/cli-flags#_log_level) - is set to `debug`. This enables Consul to report detailed information if an error occurs. -1. Issue the following command on your servers to reload the configuration: - - ```shell-session - $ consul reload - ``` - -1. If applicable, modify the following configuration entries: - - [Service resolver](/consul/docs/connect/config-entries/service-resolver): - 1. Remove services configured as failovers in non-default namespaces or services that belong to a sameness group. - 1. Remove services configured as redirects that belong to non-default namespaces or partitions. - - [Service splitter](/consul/docs/connect/config-entries/service-splitter): - 1. Remove services configured as splits that belong to non-default namespaces or partitions. - - [Service router](/consul/docs/connect/config-entries/service-router): - 1. 
Remove services configured as a destination that belong to non-default namespaces or partitions. - -## Perform the downgrade - -1. Restart or redeploy all Consul clients with a CE version of the binary. You can use a service management system, such as `systemd` or `upstart`, to restart the Consul service. If you are not using a service management system, you must restart the agent manually. The following example uses `systemctl` to restart the Consul service: - - ```shell-session - $ sudo systemctl restart consul - ``` - -1. Issue the following command to discover which server is currently the leader: - - ```shell-session - $ consul operator raft list-peers - ``` - - Consul prints the raft information. The format and content may differ based on your version of Consul: - - ```shell-session - Node ID Address State Voter RaftProtocol - dc1-node1 ae15858f-7f5f-4dcb-b7d5-710fdcdd2745 10.11.0.2:8300 leader true 3 - dc1-node2 20e6be1b-f1cb-4aab-929f-f7d2d43d9a96 10.11.0.3:8300 follower true 3 - dc1-node3 658c343b-8769-431f-a71a-236f9dbb17b3 10.11.0.4:8300 follower true 3 - ``` - -1. Make a note of the leader so that you can perform the downgrade on agents in the proper order. - -1. Update the server binaries to use the CE version. Complete the following steps in order for each server agent in the `follower` state. Then, repeat the steps for the `leader` agent. - - 1. Set an environment variable named `CONSUL_ENTERPRISE_DOWNGRADE_TO_CE` to `true`. The following example sets variable using `systemd`: - 1. Edit the Consul systemd service unit file: - ```shell-session - $ sudo vi /etc/systemd/system/consul.service - ``` - 1. Add the environment variables you want to set for Consul under the `[Service]` section of the unit file and save the changes: - ```shell-session - [Service] - Environment=CONSUL_ENTERPRISE_DOWNGRADE_TO_CE=true - ``` - 1. Reload `systemd`: - ```shell-session - $ sudo systemctl daemon-reload - ``` - 1. Restart the Consul service. You can use a service management system, such as `systemd` or `upstart`. If you are not using a service management system, you must restart the agent manually. The following example restarts Consul using `systemctl`: - ```shell-session - $ sudo systemctl restart consul - ``` - 1. To validate that the agent has rejoined the cluster and is in sync with the leader, issue the following command: - ```shell-session - $ consul info - ``` - Verify that the `commit_index` and `last_log_index` fields have the same value. Differing values may be result of an unexpected leadership election due to loss of quorum. - -1. Run the following command to verify that all servers appear in the cluster as expected and are on the correct version: - - ```shell-session - $ consul members - ``` - - Consul prints cluster membership information. The exact format and content depends on your Consul version: - - ```shell-session - Node Address Status Type Build Protocol DC - dc1-node1 10.11.0.2:8301 alive server 1.18.0 2 dc1 - dc1-node2 10.11.0.3:8301 alive server 1.18.0 2 dc1 - dc1-node3 10.11.0.4:8301 alive server 1.18.0 2 dc1 - ``` - -1. Verify the raft state to make sure there is a leader and sufficient voters: - - ```shell-session - $ consul operator raft list-peers - ``` - - Consul prints the raft information. 
The format and content may differ based on your version of Consul: - - ```shell-session - Node ID Address State Voter RaftProtocol - dc1-node1 ae15858f-7f5f-4dcb-b7d5-710fdcdd2745 10.11.0.2:8300 leader true 3 - dc1-node2 20e6be1b-f1cb-4aab-929f-f7d2d43d9a96 10.11.0.3:8300 follower true 3 - dc1-node3 658c343b-8769-431f-a71a-236f9dbb17b3 10.11.0.4:8300 follower true 3 - ``` - -1. Set your `log_level` back to its original value and issue the following command on your servers to reload the configuration: - - ```shell-session - $ consul reload - ``` \ No newline at end of file diff --git a/website/content/docs/enterprise/federation.mdx b/website/content/docs/enterprise/federation.mdx deleted file mode 100644 index 9f84ef5d3789..000000000000 --- a/website/content/docs/enterprise/federation.mdx +++ /dev/null @@ -1,33 +0,0 @@ ---- -layout: docs -page_title: Federated Network Areas (Enterprise) -description: >- - Network areas connect individual datacenters in a WAN federation, providing an alternative to connecting every datacenter. Learn how to support hub-and-spoke network topologies in a WAN federated Consul deployment. ---- - -# Consul Enterprise Advanced Federation - - - -This feature requires -self-managed Consul Enterprise. -Refer to the [enterprise feature matrix](/consul/docs/enterprise#consul-enterprise-feature-availability) for additional information. - - - -Consul's core federation capability uses the same gossip mechanism that is used -for a single datacenter. This requires that every server from every datacenter -be in a fully connected mesh with an open gossip port (8302/tcp and 8302/udp) -and an open server RPC port (8300/tcp). For organizations with large numbers of -datacenters, it becomes difficult to support a fully connected mesh. It is often -desirable to have topologies like hub-and-spoke with central management -datacenters and "spoke" datacenters that can't interact with each other. - -[Consul Enterprise](https://www.hashicorp.com/consul) offers a [network -area mechanism](/consul/tutorials/datacenter-operations/federation-network-areas) that allows operators to -federate Consul datacenters together on a pairwise basis, enabling -partially-connected network topologies. Once a link is created, Consul agents -can make queries to the remote datacenter in service of both API and DNS -requests for remote resources (in spite of the partially-connected nature of the -topology as a whole). Consul datacenters can simultaneously participate in both -network areas and the existing WAN pool, which eases migration. diff --git a/website/content/docs/enterprise/index.mdx b/website/content/docs/enterprise/index.mdx index b77df6969232..421809a30e3a 100644 --- a/website/content/docs/enterprise/index.mdx +++ b/website/content/docs/enterprise/index.mdx @@ -18,43 +18,43 @@ The following features are [available in several forms of Consul Enterprise](#co ### Multi-Tenancy -- [Admin Partitions](/consul/docs/enterprise/admin-partitions): Define administrative boundaries between tenants within a single Consul datacenter. -- [Namespaces](/consul/docs/enterprise/namespaces): Define resource boundaries within a single admin partition for further organizational flexibility. -- [Sameness Groups](/consul/docs/connect/config-entries/sameness-group): Define partitions and cluster peers as members of a group with identical services. +- [Admin Partitions](/consul/docs/multi-tenant/admin-partition): Define administrative boundaries between tenants within a single Consul datacenter. 
+- [Namespaces](/consul/docs/multi-tenant/namespace): Define resource boundaries within a single admin partition for further organizational flexibility. +- [Sameness Groups](/consul/docs/reference/config-entry/sameness-group): Define partitions and cluster peers as members of a group with identical services. ### Resiliency -- [Automated Backups](/consul/docs/enterprise/backups): Configure the automatic backup of Consul state. -- [Redundancy Zones](/consul/docs/enterprise/redundancy): Deploy backup voting Consul servers to efficiently improve Consul fault tolerance -- [Server request rate limits per source IP](/consul/docs/agent/limits/usage/limit-request-rates-from-ips): Limit gRPC and RPC traffic to servers for source IP addresses. -- [Traffic rate limiting for services](/consul/docs/connect/manage-traffic/limit-request-rates): Limit the rate of HTTP requests a service receives per service instance. -- [Locality-aware routing](/consul/docs/connect/manage-traffic/route-to-local-upstreams): Prioritize upstream services in the same region and zone as the downstream service. -- [Fault injection](/consul/docs/connect/manage-traffic/fault-injection): Explore the resiliency of downstream services in response to problems with an upstream service, such as errors, latency, or response rate limits. +- [Automated Backups](/consul/docs/manage/scale/automated-backup): Configure the automatic backup of Consul state. +- [Redundancy Zones](/consul/docs/manage/scale/redundancy-zone): Deploy backup voting Consul servers to efficiently improve Consul fault tolerance +- [Server request rate limits per source IP](/consul/docs/manage/rate-limit/source): Limit gRPC and RPC traffic to servers for source IP addresses. +- [Traffic rate limiting for services](/consul/docs/manage-traffic/rate-limit): Limit the rate of HTTP requests a service receives per service instance. +- [Locality-aware routing](/consul/docs/manage-traffic/route-local): Prioritize upstream services in the same region and zone as the downstream service. +- [Fault injection](/consul/docs/troubleshoot/fault-injection): Explore the resiliency of downstream services in response to problems with an upstream service, such as errors, latency, or response rate limits. ### Scalability -- [Read Replicas](/consul/docs/enterprise/read-scale): Deploy non-voting Consul servers to enhance the scalability of read requests. +- [Read Replicas](/consul/docs/manage/scale/read-replica): Deploy non-voting Consul servers to enhance the scalability of read requests. ### Operational simplification -- [Long Term Support (LTS)](/consul/docs/enterprise/long-term-support): Reduce operational overhead and risk by using LTS releases that are maintained for longer than standard releases. -- [Automated Upgrades](/consul/docs/enterprise/upgrades): Ease upgrades by automating the transition from existing to newly deployed Consul servers. -- [Consul-Terraform-Sync Enterprise](/consul/docs/nia/enterprise): Leverage the enhanced network infrastructure automation capabilities of the enterprise version of Consul-Terraform-Sync. +- [Long Term Support (LTS)](/consul/docs/upgrade/lts): Reduce operational overhead and risk by using LTS releases that are maintained for longer than standard releases. +- [Automated Upgrades](/consul/docs/upgrade/automated): Ease upgrades by automating the transition from existing to newly deployed Consul servers. 
+- [Consul-Terraform-Sync Enterprise](/consul/docs/enterprise/cts): Leverage the enhanced network infrastructure automation capabilities of the enterprise version of Consul-Terraform-Sync. ### Complex network topology support -- [Network Areas](/consul/docs/enterprise/federation): Support complex network topologies between federated Consul datacenters with pairwise federation rather than full mesh federation. -- [Network Segments](/consul/docs/enterprise/network-segments/network-segments-overview): Support complex network topologies within a Consul datacenter by enforcing boundaries in Consul client gossip traffic. +- [Network Areas](/consul/docs/east-west/network-area): Support complex network topologies between federated Consul datacenters with pairwise federation rather than full mesh federation. +- [Network Segments](/consul/docs/multi-tenant/network-segment): Support complex network topologies within a Consul datacenter by enforcing boundaries in Consul client gossip traffic. ### Governance -- [OIDC Auth Method](/consul/docs/security/acl/auth-methods/oidc): Manage user access to Consul through an OIDC identity provider instead of Consul ACL tokens directly. -- [Audit Logging](/consul/docs/enterprise/audit-logging): Understand Consul access and usage patterns by reviewing access to the Consul HTTP API. -- JWT authentication and authorization for API gateway: Prevent unverified traffic at the API gateway using JWTs for authentication and authorization on [VMs](/consul/docs/connect/gateways/api-gateway/secure-traffic/verify-jwts-vms) and on [Kubernetes](/consul/docs/connect/gateways/api-gateway/secure-traffic/verify-jwts-k8s). +- [OIDC Auth Method](/consul/docs/secure/acl/auth-method/oidc): Manage user access to Consul through an OIDC identity provider instead of Consul ACL tokens directly. +- [Audit Logging](/consul/docs/monitor/log/audit): Understand Consul access and usage patterns by reviewing access to the Consul HTTP API. +- JWT authentication and authorization for API gateway: Prevent unverified traffic at the API gateway using JWTs for authentication and authorization on [VMs](/consul/docs/north-south/api-gateway/secure-traffic/jwt/vm) and on [Kubernetes](/consul/docs/north-south/api-gateway/secure-traffic/jwt/k8s). ### Regulatory compliance -- [FIPS 140-2 Compliance](/consul/docs/enterprise/fips): Leverage FIPS builds of Consul Enterprise to ensure your Consul deployments are secured with BoringCrypto and CNGCrypto, and compliant with FIPS 140-2. +- [FIPS 140-2 Compliance](/consul/docs/deploy/server/fips): Leverage FIPS builds of Consul Enterprise to ensure your Consul deployments are secured with BoringCrypto and CNGCrypto, and compliant with FIPS 140-2. @@ -78,7 +78,7 @@ You can try out HCP Consul Dedicated for free. Refer to the ### Self-managed Consul Enterprise To access Consul Enterprise in a self-managed installation, -[apply a purchased license](/consul/docs/enterprise/license/overview) +[apply a purchased license](/consul/docs/enterprise/license) to the Consul Enterprise binary that grants access to the desired features. Contact your [HashiCorp Support contact](https://support.hashicorp.com/) for a development license. 
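As a point of reference, one common way to supply that license to a self-managed server is through an environment variable that the agent reads at startup. The following is a minimal sketch; the configuration directory and license file path are hypothetical.

```shell-session
$ export CONSUL_LICENSE_PATH=/etc/consul.d/consul.hclic
$ consul agent -config-dir=/etc/consul.d/
```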
@@ -94,25 +94,25 @@ Available Enterprise features per Consul form and license include: | Feature | [HashiCorp Cloud Platform (HCP) Consul] | [Consul Enterprise] | Legacy Consul Enterprise (module-based) | | ------------------------------------------------------------------------------------------------------------- | --------------------------------------- | ------------------- | ------------------------------------------------- | | Consul servers as a managed service | Yes | No (self-managed) | No (self-managed) | -| [Admin Partitions](/consul/docs/enterprise/admin-partitions) | All tiers | Yes | With Governance and Policy module | -| [Audit Logging](/consul/docs/enterprise/audit-logging) | Standard tier and above | Yes | With Governance and Policy module | -| [Automated Server Backups](/consul/docs/enterprise/backups) | All tiers | Yes | Yes | -| [Automated Server Upgrades](/consul/docs/enterprise/upgrades) | All tiers | Yes | Yes | -| [Consul-Terraform-Sync Enterprise](/consul/docs/nia/enterprise) | All tiers | Yes | Yes | -| [Enhanced Read Scalability](/consul/docs/enterprise/read-scale) | No | Yes | With Global Visibility, Routing, and Scale module | -| [Fault injection](/consul/docs/connect/manage-traffic/fault-injection) | Yes | Yes | No | -| [FIPS 140-2 Compliance](/consul/docs/enterprise/fips) | No | Yes | No | -| [JWT verification for API gateways](/consul/docs/connect/gateways/api-gateway/secure-traffic/verify-jwts-vms) | Yes | Yes | Yes | -| [Locality-aware routing](/consul/docs/connect/manage-traffic/route-to-local-upstreams) | Yes | Yes | Yes | -| [Long Term Support (LTS)](/consul/docs/enterprise/long-term-support) | Not applicable | Yes | Not applicable | -| [Namespaces](/consul/docs/enterprise/namespaces) | All tiers | Yes | With Governance and Policy module | -| [Network Areas](/consul/docs/enterprise/federation) | No | Yes | With Global Visibility, Routing, and Scale module | -| [Network Segments](/consul/docs/enterprise/network-segments/network-segments-overview) | No | Yes | With Global Visibility, Routing, and Scale module | -| [OIDC Auth Method](/consul/docs/security/acl/auth-methods/oidc) | No | Yes | Yes | -| [Redundancy Zones](/consul/docs/enterprise/redundancy) | Not applicable | Yes | With Global Visibility, Routing, and Scale module | -| [Sameness Groups](/consul/docs/connect/config-entries/sameness-group) | No | Yes | Not applicable | -| [Server request rate limits per source IP](/consul/docs/agent/limits/usage/limit-request-rates-from-ips) | All tiers | Yes | With Governance and Policy module | -| [Traffic rate limiting for services](/consul/docs/connect/manage-traffic/limit-request-rates) | Yes | Yes | Yes | +| [Admin Partitions](/consul/docs/multi-tenant/admin-partition) | All tiers | Yes | With Governance and Policy module | +| [Audit Logging](/consul/docs/monitor/log/audit) | Standard tier and above | Yes | With Governance and Policy module | +| [Automated Server Backups](/consul/docs/manage/scale/automated-backup) | All tiers | Yes | Yes | +| [Automated Server Upgrades](/consul/docs/upgrade/automated) | All tiers | Yes | Yes | +| [Consul-Terraform-Sync Enterprise](/consul/docs/enterprise/cts) | All tiers | Yes | Yes | +| [Enhanced Read Scalability](/consul/docs/manage/scale/read-replica) | No | Yes | With Global Visibility, Routing, and Scale module | +| [Fault injection](/consul/docs/troubleshoot/fault-injection) | Yes | Yes | No | +| [FIPS 140-2 Compliance](/consul/docs/deploy/server/fips) | No | Yes | No | +| [JWT verification for API 
gateways](/consul/docs/north-south/api-gateway/secure-traffic/jwt/vm) | Yes | Yes | Yes | +| [Locality-aware routing](/consul/docs/manage-traffic/route-local) | Yes | Yes | Yes | +| [Long Term Support (LTS)](/consul/docs/upgrade/lts) | Not applicable | Yes | Not applicable | +| [Namespaces](/consul/docs/multi-tenant/namespace) | All tiers | Yes | With Governance and Policy module | +| [Network Areas](/consul/docs/east-west/network-area) | No | Yes | With Global Visibility, Routing, and Scale module | +| [Network Segments](/consul/docs/multi-tenant/network-segment) | No | Yes | With Global Visibility, Routing, and Scale module | +| [OIDC Auth Method](/consul/docs/secure/acl/auth-method/oidc) | No | Yes | Yes | +| [Redundancy Zones](/consul/docs/manage/scale/redundancy-zone) | Not applicable | Yes | With Global Visibility, Routing, and Scale module | +| [Sameness Groups](/consul/docs/reference/config-entry/sameness-group) | No | Yes | Not applicable | +| [Server request rate limits per source IP](/consul/docs/manage/rate-limit/source) | All tiers | Yes | With Governance and Policy module | +| [Traffic rate limiting for services](/consul/docs/manage-traffic/rate-limit) | Yes | Yes | Yes | [HashiCorp Cloud Platform (HCP) Consul]: https://cloud.hashicorp.com/products/consul [Consul Enterprise]: https://www.hashicorp.com/products/consul/ @@ -127,24 +127,24 @@ Consul Enterprise feature availability can change depending on your server and c | Enterprise Feature | VM Client | K8s Client | ECS Client | |-------------------------------------------------------------------------------------------------------------- | :-------: | :--------: | :--------: | -| [Admin Partitions](/consul/docs/enterprise/admin-partitions) | ✅ | ✅ | ✅ | -| [Audit Logging](/consul/docs/enterprise/audit-logging) | ✅ | ✅ | ✅ | -| [Automated Server Backups](/consul/docs/enterprise/backups) | ✅ | ✅ | ✅ | -| [Automated Server Upgrades](/consul/docs/enterprise/upgrades) | ✅ | ✅ | ✅ | -| [Enhanced Read Scalability](/consul/docs/enterprise/read-scale) | ✅ | ✅ | ✅ | -| [Fault injection](/consul/docs/connect/manage-traffic/fault-injection) | ✅ | ✅ | ✅ | -| [FIPS 140-2 Compliance](/consul/docs/enterprise/fips) | ✅ | ✅ | ✅ | -| [JWT verification for API gateways](/consul/docs/connect/gateways/api-gateway/secure-traffic/verify-jwts-vms) | ✅ | ✅ | ❌ | -| [Locality-aware routing](/consul/docs/connect/manage-traffic/route-to-local-upstreams) | ✅ | ✅ | ✅ | -| [Long Term Support (LTS)](/consul/docs/enterprise/long-term-support) | ✅ | ✅ | ❌ | -| [Namespaces](/consul/docs/enterprise/namespaces) | ✅ | ✅ | ✅ | -| [Network Areas](/consul/docs/enterprise/federation) | ✅ | ✅ | ✅ | -| [Network Segments](/consul/docs/enterprise/network-segments/network-segments-overview) | ✅ | ✅ | ❌ | -| [OIDC Auth Method](/consul/docs/security/acl/auth-methods/oidc) | ✅ | ✅ | ✅ | -| [Redundancy Zones](/consul/docs/enterprise/redundancy) | ✅ | ✅ | ✅ | -| [Sameness Groups](/consul/docs/connect/config-entries/sameness-group) | ✅ | ✅ | ✅ | -| [Server request rate limits per source IP](/consul/docs/agent/limits/usage/limit-request-rates-from-ips) | ✅ | ✅ | ✅ | -| [Traffic rate limiting for services](/consul/docs/connect/manage-traffic/limit-request-rates) | ✅ | ✅ | ✅ | +| [Admin Partitions](/consul/docs/multi-tenant/admin-partition) | ✅ | ✅ | ✅ | +| [Audit Logging](/consul/docs/monitor/log/audit) | ✅ | ✅ | ✅ | +| [Automated Server Backups](/consul/docs/manage/scale/automated-backup) | ✅ | ✅ | ✅ | +| [Automated Server Upgrades](/consul/docs/upgrade/automated) | ✅ | ✅ | ✅ | 
+| [Enhanced Read Scalability](/consul/docs/manage/scale/read-replica) | ✅ | ✅ | ✅ | +| [Fault injection](/consul/docs/troubleshoot/fault-injection) | ✅ | ✅ | ✅ | +| [FIPS 140-2 Compliance](/consul/docs/deploy/server/fips) | ✅ | ✅ | ✅ | +| [JWT verification for API gateways](/consul/docs/north-south/api-gateway/secure-traffic/jwt/vm) | ✅ | ✅ | ❌ | +| [Locality-aware routing](/consul/docs/manage-traffic/route-local) | ✅ | ✅ | ✅ | +| [Long Term Support (LTS)](/consul/docs/upgrade/lts) | ✅ | ✅ | ❌ | +| [Namespaces](/consul/docs/multi-tenant/namespace) | ✅ | ✅ | ✅ | +| [Network Areas](/consul/docs/east-west/network-area) | ✅ | ✅ | ✅ | +| [Network Segments](/consul/docs/multi-tenant/network-segment) | ✅ | ✅ | ❌ | +| [OIDC Auth Method](/consul/docs/secure/acl/auth-method/oidc) | ✅ | ✅ | ✅ | +| [Redundancy Zones](/consul/docs/manage/scale/redundancy-zone) | ✅ | ✅ | ✅ | +| [Sameness Groups](/consul/docs/reference/config-entry/sameness-group) | ✅ | ✅ | ✅ | +| [Server request rate limits per source IP](/consul/docs/manage/rate-limit/source) | ✅ | ✅ | ✅ | +| [Traffic rate limiting for services](/consul/docs/manage-traffic/rate-limit) | ✅ | ✅ | ✅ | @@ -152,48 +152,48 @@ Consul Enterprise feature availability can change depending on your server and c | Enterprise Feature | VM Client | K8s Client | ECS Client | |-------------------------------------------------------------------------------------------------------------- | :-------: | :--------: | :--------: | -| [Admin Partitions](/consul/docs/enterprise/admin-partitions) | ✅ | ✅ | ✅ | -| [Audit Logging](/consul/docs/enterprise/audit-logging) | ✅ | ✅ | ✅ | -| [Automated Server Backups](/consul/docs/enterprise/backups) | ✅ | ✅ | ✅ | -| [Automated Server Upgrades](/consul/docs/enterprise/upgrades) | ❌ | ❌ | ❌ | -| [Enhanced Read Scalability](/consul/docs/enterprise/read-scale) | ❌ | ❌ | ❌ | -| [Fault injection](/consul/docs/connect/manage-traffic/fault-injection) | ✅ | ✅ | ✅ | -| [FIPS 140-2 Compliance](/consul/docs/enterprise/fips) | ✅ | ✅ | ✅ | -| [JWT verification for API gateways](/consul/docs/connect/gateways/api-gateway/secure-traffic/verify-jwts-k8s) | ✅ | ✅ | ❌ | -| [Locality-aware routing](/consul/docs/connect/manage-traffic/route-to-local-upstreams) | ✅ | ✅ | ✅ | -| [Long Term Support (LTS)](/consul/docs/enterprise/long-term-support) | ✅ | ✅ | ❌ | -| [Namespaces](/consul/docs/enterprise/namespaces) | ✅ | ✅ | ✅ | -| [Network Areas](/consul/docs/enterprise/federation) | ✅ | ✅ | ✅ | -| [Network Segments](/consul/docs/enterprise/network-segments/network-segments-overview) | ✅ | ✅ | ❌ | -| [OIDC Auth Method](/consul/docs/security/acl/auth-methods/oidc) | ✅ | ✅ | ✅ | -| [Redundancy Zones](/consul/docs/enterprise/redundancy) | ❌ | ❌ | ❌ | -| [Sameness Groups](/consul/docs/connect/config-entries/sameness-group) | ✅ | ✅ | ✅ | -| [Server request rate limits per source IP](/consul/docs/agent/limits/usage/limit-request-rates-from-ips) | ✅ | ✅ | ✅ | -| [Traffic rate limiting for services](/consul/docs/connect/manage-traffic/limit-request-rates) | ✅ | ✅ | ✅ | +| [Admin Partitions](/consul/docs/multi-tenant/admin-partition) | ✅ | ✅ | ✅ | +| [Audit Logging](/consul/docs/monitor/log/audit) | ✅ | ✅ | ✅ | +| [Automated Server Backups](/consul/docs/manage/scale/automated-backup) | ✅ | ✅ | ✅ | +| [Automated Server Upgrades](/consul/docs/upgrade/automated) | ❌ | ❌ | ❌ | +| [Enhanced Read Scalability](/consul/docs/manage/scale/read-replica) | ❌ | ❌ | ❌ | +| [Fault injection](/consul/docs/troubleshoot/fault-injection) | ✅ | ✅ | ✅ | +| [FIPS 140-2 
Compliance](/consul/docs/deploy/server/fips) | ✅ | ✅ | ✅ | +| [JWT verification for API gateways](/consul/docs/north-south/api-gateway/secure-traffic/jwt/k8s) | ✅ | ✅ | ❌ | +| [Locality-aware routing](/consul/docs/manage-traffic/route-local) | ✅ | ✅ | ✅ | +| [Long Term Support (LTS)](/consul/docs/upgrade/lts) | ✅ | ✅ | ❌ | +| [Namespaces](/consul/docs/multi-tenant/namespace) | ✅ | ✅ | ✅ | +| [Network Areas](/consul/docs/east-west/network-area) | ✅ | ✅ | ✅ | +| [Network Segments](/consul/docs/multi-tenant/network-segment) | ✅ | ✅ | ❌ | +| [OIDC Auth Method](/consul/docs/secure/acl/auth-method/oidc) | ✅ | ✅ | ✅ | +| [Redundancy Zones](/consul/docs/manage/scale/redundancy-zone) | ❌ | ❌ | ❌ | +| [Sameness Groups](/consul/docs/reference/config-entry/sameness-group) | ✅ | ✅ | ✅ | +| [Server request rate limits per source IP](/consul/docs/manage/rate-limit/source) | ✅ | ✅ | ✅ | +| [Traffic rate limiting for services](/consul/docs/manage-traffic/rate-limit) | ✅ | ✅ | ✅ | | Enterprise Feature | VM Client | K8s Client | ECS Client | | ------------------------------------------------------------------------------------------------------------- | :-------: | :--------: | :--------: | -| [Admin Partitions](/consul/docs/enterprise/admin-partitions) | ✅ | ✅ | ✅ | -| [Audit Logging](/consul/docs/enterprise/audit-logging) | ✅ | ✅ | ✅ | -| [Automated Server Backups](/consul/docs/enterprise/backups) | ✅ | ✅ | ✅ | -| [Automated Server Upgrades](/consul/docs/enterprise/upgrades) | ✅ | ✅ | ✅ | -| [Enhanced Read Scalability](/consul/docs/enterprise/read-scale) | ❌ | ❌ | ❌ | -| [Fault injection](/consul/docs/connect/manage-traffic/fault-injection) | ✅ | ✅ | ✅ | -| [FIPS 140-2 Compliance](/consul/docs/enterprise/fips) | ❌ | ❌ | ❌ | -| [JWT verification for API gateways](/consul/docs/connect/gateways/api-gateway/secure-traffic/verify-jwts-vms) | ✅ | ✅ | ❌ | -| [Locality-aware routing](/consul/docs/connect/manage-traffic/route-to-local-upstreams) | ✅ | ✅ | ✅ | -| [Long Term Support (LTS)](/consul/docs/enterprise/long-term-support) | N/A | N/A | N/A | -| [Namespaces](/consul/docs/enterprise/namespaces) | ✅ | ✅ | ✅ | -| [Network Areas](/consul/docs/enterprise/federation) | ❌ | ❌ | ❌ | -| [Network Segments](/consul/docs/enterprise/network-segments/network-segments-overview) | ❌ | ❌ | ❌ | -| [OIDC Auth Method](/consul/docs/security/acl/auth-methods/oidc) | ❌ | ❌ | ❌ | -| [Redundancy Zones](/consul/docs/enterprise/redundancy) | N/A | N/A | N/A | -| [Sameness Groups](/consul/docs/connect/config-entries/sameness-group) | ✅ | ✅ | ✅ | -| [Server request rate limits per source IP](/consul/docs/agent/limits/usage/limit-request-rates-from-ips) | ✅ | ✅ | ✅ | -| [Traffic rate limiting for services](/consul/docs/connect/manage-traffic/limit-request-rates) | ✅ | ✅ | ✅ | +| [Admin Partitions](/consul/docs/multi-tenant/admin-partition) | ✅ | ✅ | ✅ | +| [Audit Logging](/consul/docs/monitor/log/audit) | ✅ | ✅ | ✅ | +| [Automated Server Backups](/consul/docs/manage/scale/automated-backup) | ✅ | ✅ | ✅ | +| [Automated Server Upgrades](/consul/docs/upgrade/automated) | ✅ | ✅ | ✅ | +| [Enhanced Read Scalability](/consul/docs/manage/scale/read-replica) | ❌ | ❌ | ❌ | +| [Fault injection](/consul/docs/troubleshoot/fault-injection) | ✅ | ✅ | ✅ | +| [FIPS 140-2 Compliance](/consul/docs/deploy/server/fips) | ❌ | ❌ | ❌ | +| [JWT verification for API gateways](/consul/docs/north-south/api-gateway/secure-traffic/jwt/vm) | ✅ | ✅ | ❌ | +| [Locality-aware routing](/consul/docs/manage-traffic/route-local) | ✅ | ✅ | ✅ | +| [Long Term Support 
(LTS)](/consul/docs/upgrade/lts) | N/A | N/A | N/A | +| [Namespaces](/consul/docs/multi-tenant/namespace) | ✅ | ✅ | ✅ | +| [Network Areas](/consul/docs/east-west/network-area) | ❌ | ❌ | ❌ | +| [Network Segments](/consul/docs/multi-tenant/network-segment) | ❌ | ❌ | ❌ | +| [OIDC Auth Method](/consul/docs/secure/acl/auth-method/oidc) | ❌ | ❌ | ❌ | +| [Redundancy Zones](/consul/docs/manage/scale/redundancy-zone) | N/A | N/A | N/A | +| [Sameness Groups](/consul/docs/reference/config-entry/sameness-group) | ✅ | ✅ | ✅ | +| [Server request rate limits per source IP](/consul/docs/manage/rate-limit/source) | ✅ | ✅ | ✅ | +| [Traffic rate limiting for services](/consul/docs/manage-traffic/rate-limit) | ✅ | ✅ | ✅ | - + \ No newline at end of file diff --git a/website/content/docs/enterprise/license/cts.mdx b/website/content/docs/enterprise/license/cts.mdx new file mode 100644 index 000000000000..c4f306b2cf70 --- /dev/null +++ b/website/content/docs/enterprise/license/cts.mdx @@ -0,0 +1,83 @@ +--- +layout: docs +page_title: Consul-Terraform-Sync Enterprise License +description: >- + Consul-Terraform-Sync Enterprise License +--- + +# Consul-Terraform-Sync Enterprise License + + + Licenses are only required for Consul-Terraform-Sync (CTS) Enterprise + + +CTS Enterprise binaries require a [Consul Enterprise license](/consul/docs/enterprise/license) to run. There is no CTS Enterprise-specific license. As a result, CTS Enterprise's licensing is very similar to Consul Enterprise's. + +All CTS Enterprise features are available with a valid Consul Enterprise license, regardless of your Consul Enterprise packaging or pricing model. + +To get a trial license for CTS, you can sign up for the [trial license for Consul Enterprise](/consul/docs/enterprise/license/faq#q-where-can-users-get-a-trial-license-for-consul-enterprise). + +## Automatic License Retrieval +CTS automatically retrieves a license from Consul on startup and then attempts to retrieve a new license once a day. If the current license is approaching its expiration date, CTS attempts to retrieve a license with increased frequency, as defined by the [License Expiration Handling](/consul/docs/nia/enterprise/license#license-expiration-handling). + +~> Enabling automatic license retrieval is recommended when using HCP Consul Dedicated, as HCP Consul Dedicated licenses expire more frequently than Consul Enterprise licenses. Without auto-retrieval enabled, you have to restart CTS every time you load a new license. + +## Setting the License Manually + +If you need to set the license manually, choose one of the following methods, listed in order of precedence: + +1. Set the `CONSUL_LICENSE` environment variable to the license string. + + ```shell-session + export CONSUL_LICENSE= + ``` + +1. Set the `CONSUL_LICENSE_PATH` environment variable to the path of the file containing the license. + + ```shell-session + export CONSUL_LICENSE_PATH=// + ``` + +1. In the configuration file, configure the `path` option of the [`license`](/consul/docs/nia/configuration#license) block to point to the file containing the license. + + ```hcl + license { + path = "//" + } + ``` + +1. In the configuration file, configure the [`license_path`](/consul/docs/nia/configuration#license_path) option to point to the file containing the license. **Deprecated in CTS 0.6.0 and will be removed in a future release. 
Use [license block](/consul/docs/nia/configuration#license) instead.** + + ```hcl + license_path = "//" + ``` + +~> **Note**: the [options to set the license and the order of precedence](/consul/docs/enterprise/license/overview#binaries-without-built-in-licenses) are the same as Consul Enterprise server agents. +Visit the [Enterprise License Tutorial](/nomad/tutorials/enterprise/hashicorp-enterprise-license?utm_source=docs) for detailed steps on how to install the license key. + +### Updating the License Manually +To update the license when it expires or is near the expiration date and automatic license retrieval is disabled: + +1. Update the license environment variable or configuration with the new license value or path to the new license file +1. Stop and restart CTS Enterprise + +Once CTS Enterprise starts again, it will pick up the new license and run the tasks with any changes that may have occurred between the stop and restart period. + +## License Expiration Handling + +Licenses have an expiration date and a termination date. The termination date is a time at or after the license expires. CTS Enterprise will cease to function once the termination date has passed. + +The time between the expiration and termination dates is a grace period. Grace periods are generally 24-hours, but you should refer to your license agreement for complete terms of your grace period. + +When approaching expiration and termination, by default, CTS Enterprise will attempt to retrieve a new license. If auto-retrieval is disabled, CTS Enterprise will provide notifications in the system logs: + +| Time period | Behavior - auto-retrieval enabled (default) |Behavior - auto-retrieval disabled | +| ------------------------------------------- |-------------------------------------------- |---------------------------------- | +| 30 days before expiration | License retrieval attempt every 24-hours | Warning-level log every 24-hours | +| 7 days before expiration | License retrieval attempt every 1 hour | Warning-level log every 1 hour | +| 1 day before expiration | License retrieval attempt every 5 minutes | Warning-level log every 5 minutes | +| 1 hour before expiration | License retrieval attempt every 1 minute | Warning-level log every 1 minute | +| At or after expiration (before termination) | License retrieval attempt every 1 minute | Error-level log every 1 minute | +| At or after termination | Error-level log and exit | Error-level log and exit | + +~> **Note**: Notification frequency and [grace period](/consul/docs/enterprise/license/faq#q-is-there-a-grace-period-when-licenses-expire) behavior is the same as Consul Enterprise. diff --git a/website/content/docs/enterprise/license/faq.mdx b/website/content/docs/enterprise/license/faq.mdx index 4278f7ec5ebf..81b379220655 100644 --- a/website/content/docs/enterprise/license/faq.mdx +++ b/website/content/docs/enterprise/license/faq.mdx @@ -26,7 +26,7 @@ This will no longer work since each server must be able to find a valid license All customers on Consul Enterprise 1.8/1.9 must first upgrade their client and server agents to the latest patch release. During the upgrade the license file must also be configured on client agents in an environment variable or file path, otherwise the Consul agents will fail to retrieve the license with a valid agent token. The upgrade process varies if ACLs are enabled or disabled in the cluster. -Refer to the instructions on [upgrading to 1.10.x](/consul/docs/upgrading/instructions/upgrade-to-1-10-x) for details. 
+Refer to the instructions on [upgrading to 1.10.x](/consul/docs/upgrade/instructions/upgrade-to-1-10-x) for details. ## Q: Is there a tutorial available for the license configuration steps? @@ -36,11 +36,11 @@ Please visit the [Enterprise License Tutorial](/nomad/tutorials/enterprise/hashi The list below is a great starting point for learning more about the license changes introduced in Consul Enterprise 1.10.0+ent. -- [Consul Enterprise Upgrade Documentation](/consul/docs/enterprise/upgrades) +- [Consul Enterprise Upgrade Documentation](/consul/docs/upgrade/automated) -- [Consul License Documentation](/consul/docs/enterprise/license/overview) +- [Consul License Documentation](/consul/docs/enterprise/license) -- [License configuration values documentation](/consul/docs/enterprise/license/overview#binaries-without-built-in-licenses) +- [License configuration values documentation](/consul/docs/enterprise/license#binaries-without-built-in-licenses) - [Install a HashiCorp Enterprise License Tutorial](/nomad/tutorials/enterprise/hashicorp-enterprise-license?utm_source=docs) @@ -101,7 +101,7 @@ Consul snapshot agents will attempt to retrieve the license from servers if cert ## Q: Where can users get a trial license for Consul Enterprise? -Contact your [HashiCorp Support contact](https://support.hashicorp.com/) for a development license. +[Contact HashiCorp Sales](https://www.hashicorp.com/en/contact-sales) for a development license. ~> **Trial install will cease operation 24 hours after 30-day license expiration**: Trial licenses are not meant to be used in production. @@ -110,19 +110,19 @@ Contact your [HashiCorp Support contact](https://support.hashicorp.com/) for a d ## Q: How can I renew a license? -Contact your organization's [HashiCorp account team](https://support.hashicorp.com/hc/en-us) for information on how to renew your organization's enterprise license. +Contact your organization's [HashiCorp account team](https://www.hashicorp.com/en/contact-sales) for information on how to renew your organization's enterprise license. ## Q: I'm an existing enterprise customer but don't have my license, how can I get it? -Contact your organization's [HashiCorp account team](https://support.hashicorp.com/hc/en-us) for information on how to renew your organization's enterprise license. +Contact your organization's [HashiCorp account team](https://www.hashicorp.com/en/contact-sales) for information on how to renew your organization's enterprise license. ## Q: Are the license files locked to a specific cluster? The license files are not locked to a specific cluster or cluster node. The above changes apply to all nodes in a cluster. -## Q: Will this impact HCP Consul Dedicated? +## Q: Will this impact HCP Consul? -This will not impact HCP Consul Dedicated. +This will not impact HCP Consul. ## Q: Does this need to happen every time a node restarts, or is this a one-time check? @@ -147,7 +147,7 @@ Please see the [upgrade requirements](/consul/docs/enterprise/license/faq#q-what 1. Run [`consul license get -signed`](/consul/commands/license#get) to extract the license from their running cluster. Store the license in a secure location on disk. 1. Set up the necessary configuration so that when Consul Enterprise reboots it will have access to the required license. This could be via the client agent configuration file or an environment variable. 1. 
Visit the [Enterprise License Tutorial](/nomad/tutorials/enterprise/hashicorp-enterprise-license?utm_source=docs) for detailed steps on how to install the license key. -1. Follow the Consul upgrade [documentation](/consul/docs/upgrading). +1. Follow the Consul upgrade [documentation](/consul/docs/upgrade). ### Kubernetes @@ -155,27 +155,27 @@ Please see the [upgrade requirements](/consul/docs/enterprise/license/faq#q-what 1. In order to use Consul Enterprise 1.10.0 or greater on Kubernetes you must use version 0.32.0 or greater of the Helm chart. 1. You should already have a Consul Enterprise license set as a Kubernetes secret. If you do not, refer to [how to obtain a copy of your license](/consul/docs/enterprise/license/faq#q-i-m-an-existing-enterprise-customer-but-don-t-have-my-license-how-can-i-get-it). -Once you have the license then create a Kubernetes secret containing the license as described in [Kubernetes - Consul Enterprise](/consul/docs/k8s/deployment-configurations/consul-enterprise). -1. Follow the [Kubernetes Upgrade Docs](/consul/docs/k8s/upgrade) to upgrade. No changes to your `values.yaml` file are needed to enable enterprise autoloading since this support is built in to consul-helm 0.32.0 and greater. +Once you have the license then create a Kubernetes secret containing the license as described in [Kubernetes - Consul Enterprise](/consul/docs/deploy/server/k8s/enterprise). +1. Follow the [Kubernetes Upgrade Docs](/consul/docs/upgrade/k8s) to upgrade. No changes to your `values.yaml` file are needed to enable enterprise autoloading since this support is built in to consul-helm 0.32.0 and greater. -!> **Warning:** If you are upgrading the Helm chart but **not** upgrading the Consul version, you must set `server.enterpriseLicense.enableLicenseAutoload: false`. See [Kubernetes - Consul Enterprise](/consul/docs/k8s/deployment-configurations/consul-enterprise) for more details. +!> **Warning:** If you are upgrading the Helm chart but **not** upgrading the Consul version, you must set `server.enterpriseLicense.enableLicenseAutoload: false`. See [Kubernetes - Consul Enterprise](/consul/docs/deploy/server/k8s/enterprise) for more details. ## Q: What is the migration path for customers who want to migrate from their existing perpetually-licensed binaries to the license on disk flow? ### VM -1. Acquire a valid Consul Enterprise license. If you are an existing HashiCorp enterprise customer you may contact your organization's [customer success manager](https://support.hashicorp.com/hc/en-us) (CSM) for information on how to get your organization's enterprise license. +1. Acquire a valid Consul Enterprise license. If you are an existing HashiCorp enterprise customer you may contact your organization's [customer success manager](https://www.hashicorp.com/en/contact-sales) (CSM) for information on how to get your organization's enterprise license. 1. Store the license in a secure location on disk. 1. Set up the necessary configuration so that when Consul Enterprise reboots it will have the required license. This could be via the client agent configuration file or an environment variable. Visit the [Enterprise License Tutorial](/nomad/tutorials/enterprise/hashicorp-enterprise-license?utm_source=docs) for detailed steps on how to install the license key. -1. Follow the Consul upgrade [documentation](/consul/docs/upgrading). +1. Follow the Consul upgrade [documentation](/consul/docs/upgrade). ### Kubernetes -1. Acquire a valid Consul Enterprise license. 
If you are an existing HashiCorp enterprise customer you may contact your organization's [customer success manager](https://support.hashicorp.com/hc/en-us) (CSM) for information on how to get your organization's enterprise license. +1. Acquire a valid Consul Enterprise license. If you are an existing HashiCorp enterprise customer you may contact your organization's [customer success manager](https://www.hashicorp.com/en/contact-sales) (CSM) for information on how to get your organization's enterprise license. 1. Set up the necessary configuration so that when Consul Enterprise reboots it will have the required license. This could be via the client agent configuration file or an environment variable. Visit the [Enterprise License Tutorial](/nomad/tutorials/enterprise/hashicorp-enterprise-license?utm_source=docs) for detailed steps on how to install the license key. -1. Proceed with the `helm` [upgrade instructions](/consul/docs/k8s/upgrade) +1. Proceed with the `helm` [upgrade instructions](/consul/docs/upgrade/k8s) ## Q: Will Consul downgrades/rollbacks work? diff --git a/website/content/docs/enterprise/license/index.mdx b/website/content/docs/enterprise/license/index.mdx new file mode 100644 index 000000000000..c7cc8443cc33 --- /dev/null +++ b/website/content/docs/enterprise/license/index.mdx @@ -0,0 +1,84 @@ +--- +layout: docs +page_title: Enterprise licenses +description: >- + Consul Enterprise server, client, and snapshot agents require a license on startup in order to use Enterprise features. Learn how to apply licenses using environment variables or configuration files. +--- + +# Consul Enterprise license + +This topic provides an overview of the Consul Enterprise license. + +## Licensing Overview + +All Consul Enterprise agents must be licensed when they are started. Where that license comes from will depend +on which binary is in use, whether the agent is a server, client or snapshot agent, and whether ACLs have been +enabled for the cluster. + +-> ** Consul Enterprise 1.10.0 removed temporary licensing.** Prior to 1.10.0, Consul Enterprise +agents could start without a license and then have a license applied to them later on via the CLI +or API. That functionality has been removed and replaced with the ability to load licenses from the +agent's configuration or environment. Also, prior to 1.10.0, server agents would automatically propagate +the license between themselves. This no longer occurs and the license must be present on each server agent +when it is started. + +Consul Enterprise 1.14.0, when running on Kubernetes, removed client agents and replaced these with virtual agents. +Virtual agents are nodes that Consul service mesh services run on. HashiCorp uses virtual agents to determine license entitlements for customers on per-node licensing and pricing agreements. + +-> Visit the [Enterprise License Tutorial](/nomad/tutorials/enterprise/hashicorp-enterprise-license?utm_source=docs) for detailed steps on how to install the license key. + +### Applying a License + +For Consul Enterprise 1.10.0 or greater, a license must be available at the time the agent starts. +For server agents this means that they must either have the [`license_path`](/consul/docs/reference/agent/configuration-file/general#license_path) +configuration set or have a license configured in the servers environment with the `CONSUL_LICENSE` or +`CONSUL_LICENSE_PATH` environment variables. 
Both the configuration item and the `CONSUL_LICENSE_PATH` +environment variable point to a file containing the license whereas the `CONSUL_LICENSE` environment +variable should contain the license as the value. If multiple variables are set, +the following order of precedence applies: + +1. `CONSUL_LICENSE` environment variable +2. `CONSUL_LICENSE_PATH` environment variable +3. `license_path` configuration item. + +Client agents and [snapshot agents](/consul/docs/manage/scale/automated-backup) +may also be licensed in the very same manner. +However, to avoid the need to configure the license on many client agents and snapshot agents, +those agents have the capability to retrieve the license automatically under the conditions described below. + +Virtual agents do not need the license to run. + +Updating the license for an agent depends on the method you used to apply the license. +- **If you used the `CONSUL_LICENSE` +environment variable**: After updating the environment variable, restart the affected agents. +- **If you used the +`CONSUL_LICENSE_PATH` environment variable**: Update the license file first. Then, restart the affected agents. +- **If you used the `license_path` configuration item**: Update the license file first. Then, run [`consul reload`](/consul/commands/reload) for the affected agents. + +#### Client Agent License Retrieval + +When a client agent starts without a license in its configuration or environment, it will try to retrieve the +license from the servers via RPCs. That RPC always requires a valid non-anonymous ACL token to authorize the +request but the token doesn't need any particular permissions. As the license is required before the client +actually joins the cluster, where to make those RPC requests to is inferred from the +[`retry_join`](/consul/docs/reference/agent/configuration-file/join#retry_join) configuration. If `retry_join` is unset or no +[`agent` token](/consul/docs/reference/agent/configuration-file/acl#acl_tokens_agent) is set then the client agent will immediately shut itself down. + +If all preliminary checks pass the client agent will attempt to reach out to any server on its RPC port to +request the license. These requests will be retried for up to 5 minutes and if it is unable to retrieve a +license within that time frame it will shut itself down. + +If ACLs are disabled then the license must be provided to the client agent through one of the three methods listed below. +Failure in providing the client agent with a licence will prevent the client agent from joining the cluster. + +1. `CONSUL_LICENSE` environment variable +2. `CONSUL_LICENSE_PATH` environment variable +3. `license_path` configuration item. + +#### Snapshot Agent License Retrieval + +The snapshot agent has similar functionality to the client agent for automatically retrieving the license. However, +instead of requiring a server agent to talk to, the snapshot agent can request the license from the server or +client agent it would use for all other operations. It still requires an ACL token to authorize the request. Also +like client agents, the snapshot agent will shut itself down after being unable to retrieve the license for 5 +minutes. 
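+
+If automatic retrieval is not possible (for example, when ACLs are disabled), any agent can still be licensed directly using the options described under [Applying a License](#applying-a-license). The following is a minimal sketch; the license file location `/etc/consul.d/consul.hclic` and the configuration directory are placeholder paths, not required values.
+
+```shell-session
+$ export CONSUL_LICENSE_PATH=/etc/consul.d/consul.hclic
+$ consul agent -server -config-dir=/etc/consul.d/
+```
+
+Alternatively, reference the license file from the agent configuration so that a later [`consul reload`](/consul/commands/reload) picks up license updates:
+
+```hcl
+# Placeholder path to the license file on disk.
+license_path = "/etc/consul.d/consul.hclic"
+```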
diff --git a/website/content/docs/enterprise/license/overview.mdx b/website/content/docs/enterprise/license/overview.mdx deleted file mode 100644 index d2bcb88a4aec..000000000000 --- a/website/content/docs/enterprise/license/overview.mdx +++ /dev/null @@ -1,82 +0,0 @@ ---- -layout: docs -page_title: Enterprise Licenses -description: >- - Consul Enterprise server, client, and snapshot agents require a license on startup in order to use Enterprise features. Learn how to apply licenses using environment variables or configuration files. ---- - -# Consul Enterprise License - -## Licensing Overview - -All Consul Enterprise agents must be licensed when they are started. Where that license comes from will depend -on which binary is in use, whether the agent is a server, client or snapshot agent, and whether ACLs have been -enabled for the cluster. - --> ** Consul Enterprise 1.10.0 removed temporary licensing.** Prior to 1.10.0, Consul Enterprise -agents could start without a license and then have a license applied to them later on via the CLI -or API. That functionality has been removed and replaced with the ability to load licenses from the -agent's configuration or environment. Also, prior to 1.10.0, server agents would automatically propagate -the license between themselves. This no longer occurs and the license must be present on each server agent -when it is started. - -Consul Enterprise 1.14.0, when running on Kubernetes, removed client agents and replaced these with virtual agents. -Virtual agents are nodes that Consul service mesh services run on. HashiCorp uses virtual agents to determine license entitlements for customers on per-node licensing and pricing agreements. - --> Visit the [Enterprise License Tutorial](/nomad/tutorials/enterprise/hashicorp-enterprise-license?utm_source=docs) for detailed steps on how to install the license key. - -### Applying a License - -For Consul Enterprise 1.10.0 or greater, a license must be available at the time the agent starts. -For server agents this means that they must either have the [`license_path`](/consul/docs/agent/config/config-files#license_path) -configuration set or have a license configured in the servers environment with the `CONSUL_LICENSE` or -`CONSUL_LICENSE_PATH` environment variables. Both the configuration item and the `CONSUL_LICENSE_PATH` -environment variable point to a file containing the license whereas the `CONSUL_LICENSE` environment -variable should contain the license as the value. If multiple variables are set, -the following order of precedence applies: - -1. `CONSUL_LICENSE` environment variable -2. `CONSUL_LICENSE_PATH` environment variable -3. `license_path` configuration item. - -Client agents and [snapshot agents](/consul/docs/enterprise/backups) -may also be licensed in the very same manner. -However, to avoid the need to configure the license on many client agents and snapshot agents, -those agents have the capability to retrieve the license automatically under the conditions described below. - -Virtual agents do not need the license to run. - -Updating the license for an agent depends on the method you used to apply the license. -- **If you used the `CONSUL_LICENSE` -environment variable**: After updating the environment variable, restart the affected agents. -- **If you used the -`CONSUL_LICENSE_PATH` environment variable**: Update the license file first. Then, restart the affected agents. -- **If you used the `license_path` configuration item**: Update the license file first. 
Then, run [`consul reload`](/consul/commands/reload) for the affected agents. - -#### Client Agent License Retrieval - -When a client agent starts without a license in its configuration or environment, it will try to retrieve the -license from the servers via RPCs. That RPC always requires a valid non-anonymous ACL token to authorize the -request but the token doesn't need any particular permissions. As the license is required before the client -actually joins the cluster, where to make those RPC requests to is inferred from the -[`retry_join`](/consul/docs/agent/config/config-files#retry_join) configuration. If `retry_join` is unset or no -[`agent` token](/consul/docs/agent/config/config-files#acl_tokens_agent) is set then the client agent will immediately shut itself down. - -If all preliminary checks pass the client agent will attempt to reach out to any server on its RPC port to -request the license. These requests will be retried for up to 5 minutes and if it is unable to retrieve a -license within that time frame it will shut itself down. - -If ACLs are disabled then the license must be provided to the client agent through one of the three methods listed below. -Failure in providing the client agent with a licence will prevent the client agent from joining the cluster. - -1. `CONSUL_LICENSE` environment variable -2. `CONSUL_LICENSE_PATH` environment variable -3. `license_path` configuration item. - -#### Snapshot Agent License Retrieval - -The snapshot agent has similar functionality to the client agent for automatically retrieving the license. However, -instead of requiring a server agent to talk to, the snapshot agent can request the license from the server or -client agent it would use for all other operations. It still requires an ACL token to authorize the request. Also -like client agents, the snapshot agent will shut itself down after being unable to retrieve the license for 5 -minutes. diff --git a/website/content/docs/enterprise/license/utilization-reporting.mdx b/website/content/docs/enterprise/license/reporting.mdx similarity index 100% rename from website/content/docs/enterprise/license/utilization-reporting.mdx rename to website/content/docs/enterprise/license/reporting.mdx diff --git a/website/content/docs/enterprise/long-term-support.mdx b/website/content/docs/enterprise/long-term-support.mdx deleted file mode 100644 index ff73a4cc9254..000000000000 --- a/website/content/docs/enterprise/long-term-support.mdx +++ /dev/null @@ -1,122 +0,0 @@ ---- -layout: docs -page_title: Long Term Support (LTS) -description: >- - Consul Enterprise offers a Long Term Support (LTS) release program. Learn about benefits and terms, including which versions receive extended support, for how long, and fix eligibility. ---- - -# Long Term Support (LTS) - - - This program requires Consul Enterprise. - - -Consul Enterprise offers annual Long Term Support (LTS) releases starting with version v1.15. LTS releases maintain longer fix eligibility and support larger upgrade jumps. - -This page describes the LTS program, its benefits for Enterprise users, and the support the program includes. - -## Overview - -Many Enterprise organizations want to minimize their operational risk by operating a maintained Consul version that is eligible for critical fixes and does not require frequent upgrades. - -Annual Consul Enterprise LTS releases enable organizations to receive critical fixes without upgrading their major version more than once per year. 
- -The following table lists the lifecycle for each existing LTS release, including expected end of maintenance (EOM). - -| LTS release | Release date | Estimated EOM date | -| ----------- | ------------- | --------------------------------- | -| v1.18 Ent | Feb 27, 2024 | Feb 28, 2026 (v1.24 Ent release) | -| v1.15 Ent | Feb 23, 2023 | Feb 28, 2025 (v1.21 Ent release) | - -## Release lifecycle - -Consul Enterprise LTS releases are maintained for longer than other Consul releases, -as described in the following sections. - -### Standard Term Support lifecycle - -All major releases of Consul Enterprise receive _Standard Term Support_ (STS) -for approximately one year, per HashiCorp's -[Support Period Policy](https://support.hashicorp.com/hc/en-us/articles/360021185113-Support-Period-and-End-of-Life-EOL-Policy). - -With STS, each major release branch is maintained until it is -three (3) releases from the latest major release. -For example, Consul v1.14.x is maintained until the release of Consul v1.17.0. - -### Long Term Support lifecycle - -Starting with Consul Enterprise v1.15, the first major release of the calendar year -receives _Long Term Support_ (LTS) for approximately 2 years. -The first major release of the calendar year typically occurs in late February. - -![Consul Enterprise Long Term Support lifecycle diagram](/img/long-term-support/consul-enterprise-long-term-support-lifecycle.png#light-theme-only) -![Consul Enterprise Long Term Support lifecycle diagram](/img/long-term-support/consul-enterprise-long-term-support-lifecycle-dark.png#dark-theme-only) - -An LTS release is maintained until it is six (6) releases from the latest major release. -For example, Consul Enterprise v1.15.x is maintained until the release of Consul Enterprise v1.21.0. - -During the LTS window, [eligible LTS fixes](#fix-eligibility) are provided through -a new minor release on the affected LTS release branch. - -## Annual upgrade to next LTS release - -We recommend upgrading your Consul LTS version once per year to the next LTS version -in order to receive the full benefit of LTS. This upgrade pattern ensures the -organization is always operating a maintained release with minimal upgrades. - -Only Consul Enterprise LTS versions support direct upgrades to the next 3 major versions, -enabling direct upgrades from one LTS release to the subsequent LTS release. -For example, Consul Enterprise v1.15.x supports upgrading directly to Consul Enterprise v1.18.x. - -Because Consul has 3 major version releases per year, -LTS enables you to catch up on a year's worth of Consul releases in a single upgrade. - -STS releases of Consul support direct upgrades to the next 2 major versions, -as described in the [standard upgrade instructions](/consul/docs/upgrading/instructions). -Without LTS, catching up on a year's worth of releases requires two upgrades. -For example, upgrading from Consul Enterprise v1.14.x to Consul Enterprise v1.17.x -requires an intermediate upgrade to v1.15.x or v1.16.x. - -## Fix eligibility - -Eligibility for an LTS fix is subject to the following criteria: -- A non-security bug must be a Severity 1 (Urgent) or Severity 2 (High) issue - as defined in [HashiCorp Support's Severity Definitions](https://support.hashicorp.com/hc/en-us/articles/115011286028). -- A security bug (CVE) is considered eligible in accordance with - HashiCorp's standard practices for security bugs on supported products. 
- Refer to the standard - [Support Period Policy](https://support.hashicorp.com/hc/en-us/articles/360021185113-Support-Period-and-End-of-Life-EOL-Policy) - for more information. -- The bug must be present in built artifacts intended for use in production - created from one of the following HashiCorp repositories: - - hashicorp/consul-enterprise - - hashicorp/consul-k8s - - hashicorp/consul-dataplane -- The bug must be applicable for at least one of the following - computer architecture and operating system combinations: - - linux_386 - - linux_amd64 - - linux_arm - - linux_arm64 - - windows_386 - - windows_amd64 - -Eligibility for a fix does not guarantee that a fix will be issued. -For example, some fixes may not be technically possible on a given release branch, -or they may present an undue burden or risk relative to the benefit of the fix. - -HashiCorp may, in its sole discretion, include fixes in a minor release -on an LTS release branch that do not meet the eligibility criteria above. - -## Version compatibility with dependencies - -Consul integrates with Envoy and Kubernetes releases that are maintained for less time -than Consul Enterprise LTS. - -HashiCorp will make a reasonable effort to keep each Consul Enterprise LTS release branch -compatible with a maintained release branch of Envoy and Kubernetes until the LTS release branch -approaches its end of maintenance. - -For more details on LTS version compatibility with dependencies, refer to the following topics: -- [Kubernetes version compatibility with Consul](/consul/docs/k8s/compatibility) -- [Envoy and Consul dataplane version compatibility with Consul](/consul/docs/connect/proxies/envoy) diff --git a/website/content/docs/enterprise/namespaces.mdx b/website/content/docs/enterprise/namespaces.mdx deleted file mode 100644 index d692ee4b9d0f..000000000000 --- a/website/content/docs/enterprise/namespaces.mdx +++ /dev/null @@ -1,118 +0,0 @@ ---- -layout: docs -page_title: Namespaces (Enterprise) -description: >- - Namespaces reduce operational challenges in large deployments. Learn how to define a namespace so that multiple users or teams can access and use the same datacenter without impacting each other. ---- - -# Consul Enterprise Namespaces - - - -This feature requires -HashiCorp Cloud Platform (HCP) or self-managed Consul Enterprise. -Refer to the [enterprise feature matrix](/consul/docs/enterprise#consul-enterprise-feature-availability) for additional information. - - - -With Consul Enterprise 1.7.0+, data for different users or teams -can be isolated from each other with the use of namespaces. Namespaces help reduce operational challenges -by removing restrictions around uniqueness of resource names across distinct teams, and enable operators -to provide self-service through delegation of administrative privileges. - -For more information on how to use namespaces with Consul Enterprise please review the following tutorials: - -- [Register and Discover Services within Namespaces](/consul/tutorials/namespaces/namespaces-share-datacenter-access?utm_source=docs) - Register multiple services within different namespaces in Consul. -- [Setup Secure Namespaces](/consul/tutorials/namespaces/namespaces-secure-shared-access?utm_source=docs) - Secure resources within a namespace and delegate namespace ACL rights via ACL tokens. - -## Namespace Definition - -Namespaces are managed exclusively through the [HTTP API](/consul/api-docs/namespaces) and the [Consul CLI](/consul/commands/namespace). 
-The HTTP API accepts only JSON formatted definitions while the CLI will parse either JSON or HCL. - -An example namespace definition looks like the following: - - - -```json -{ - "Name": "team-1", - "Description": "Namespace for Team 1", - "ACLs": { - "PolicyDefaults": [ - { - "ID": "77117cf6-d976-79b0-d63b-5a36ac69c8f1" - }, - { - "Name": "node-read" - } - ], - "RoleDefaults": [ - { - "ID": "69748856-ae69-d620-3ec4-07844b3c6be7" - }, - { - "Name": "ns-team-2-read" - } - ] - }, - "Meta": { - "foo": "bar" - } -} -``` - -```hcl -Name = "team-1" -Description = "Namespace for Team 1" -ACLs { - PolicyDefaults = [ - { - ID = "77117cf6-d976-79b0-d63b-5a36ac69c8f1" - }, - { - Name = "node-read" - } - ] - RoleDefaults = [ - { - "ID": "69748856-ae69-d620-3ec4-07844b3c6be7" - }, - { - "Name": "ns-team-2-read" - } - ] -} -Meta { - foo = "bar" -} -``` - - - -### Fields - -- `Name` `(string: )` - The namespaces name must be a valid DNS hostname label. - -- `Description` `(string: "")` - This field is intended to be a human readable description of the - namespace's purpose. It is not used internally. - -- `ACLs` `(object: )` - This fields is a nested JSON/HCL object to contain the namespaces - ACL configuration. - - - `PolicyDefaults` `(array)` - A list of default policies to be applied to all tokens - created in this namespace. The ACLLink object can contain an `ID` and/or `Name` field. When the - policies ID is omitted Consul will resolve the name to an ID before writing the namespace - definition internally. Note that all policies linked in a namespace definition must be defined - within the `default` namespace, and the ACL token used to create or edit the - namespace must have [`acl:write` access](/consul/docs/security/acl/acl-rules#acl-resource-rules) to the linked policy. - - - `RoleDefaults` `(array)` - A list of default roles to be applied to all tokens - created in this namespace. The ACLLink object can contain an `ID` and/or `Name` field. When the - roles' ID is omitted Consul will resolve the name to an ID before writing the namespace - definition internally. Note that all roles linked in a namespace definition must be defined - within the `default` namespace, and the ACL token used to create or edit the - namespace must have [`acl:write` access](/consul/docs/security/acl/acl-rules#acl-resource-rules) to the linked role. - -- `Meta` `(map: )` - Specifies arbitrary KV metadata to associate with - this namespace. diff --git a/website/content/docs/enterprise/network-segments/create-network-segment.mdx b/website/content/docs/enterprise/network-segments/create-network-segment.mdx deleted file mode 100644 index bc300f7708a6..000000000000 --- a/website/content/docs/enterprise/network-segments/create-network-segment.mdx +++ /dev/null @@ -1,194 +0,0 @@ ---- -layout: docs -page_title: Create Network Segments -description: >- - Learn how to create Consul network segments to enable services in the LAN gossip pool to communicate across communication boundaries. ---- - -# Create Network Segments - -This topic describes how to create Consul network segments so that services can connect to other services in the LAN gossip pool that have been placed into separate communication boundaries. Refer to [Network Segments Overview](/consul/docs/enterprise/network-segments/network-segments-overview) for additional information. - - -## Requirements - -- Consul Enterprise 0.9.3+ - -## Define segments in the server configuration - -1. Add the `segments` block to your server configuration. 
Refer to the [`segments`](/consul/docs/agent/config/config-files#segments) documentation for details about how to define the configuration. - - In the following example, an `alpha` segment is configured to listen for traffic on port `8303` and a `beta` segment is configured to listen to traffic on port `8304`: - - - - ```hcl - segments = [ - { - name = "alpha" - bind = "10.0.0.1" - advertise = "10.0.0.1" - port = 8303 - }, - { - name = "beta" - bind = "10.0.0.1" - advertise = "10.0.0.1" - port = 8304 - } - ] - ``` - - ```json - { - "segments": [ - { - "name": "alpha", - "bind": "10.0.0.1", - "advertise": "10.0.0.1", - "port": 8303 - }, - { - "name": "beta", - "bind": "10.0.0.1", - "advertise": "10.0.0.1", - "port": 8304 - } - ] - } - ``` - - - -1. Start the server using the `consul agent` command. Copy the address for each segment listener so that you can [direct clients to join the segment](#configure-clients-to-join-segments) when you start them: - - ```shell-session - $ consul agent -config-file server.hcl - [INFO] serf: EventMemberJoin: server1.dc1 10.20.10.11 - [INFO] serf: EventMemberJoin: server1 10.20.10.11 - [INFO] consul: Started listener for LAN segment "alpha" on 10.20.10.11:8303 - [INFO] serf: EventMemberJoin: server1 10.20.10.11 - [INFO] consul: Started listener for LAN segment "beta" on 10.20.10.11:8304 - [INFO] serf: EventMemberJoin: server1 10.20.10.11 - ``` -1. Verify that the server is a member of all segments: - - ```shell-session - $ consul members - Node Address Status Type Build Protocol DC Segment - server1 10.20.10.11:8301 alive server 1.14+ent 2 dc1 - ``` - -## Configure clients to join segments - -Client agents can only be members of one segment at a time. You can direct clients to join a segment by specifying the address and name of the segment with the [`-join`](/consul/docs/agent/config/cli-flags#_join) and [`-segment`](/consul/docs/agent/config/cli-flags#_segment) command line flags when starting the agent. - -```shell-session -$ consul agent -config-file client.hcl -join 10.20.10.11:8303 -segment alpha -``` - -Alternatively, you can add the [`retry_join`](/consul/docs/agent/config/config-files#retry_join) and [`segment`](/consul/docs/agent/config/config-files#segment-1) parameters to your client agent configuration file: - -```hcl -node_name = "consul-client" -server = false -datacenter = "dc1" -data_dir = "consul/client-data" -log_level = "INFO" -retry_join = ["10.20.10.11:8303"] -segment = "alpha" -``` - -## Verify segments - -You can use the CLI, API, or GUI to verify which segments your agents have joined. - - - - - -Run the `consul members` command to verify that the client agents are joined to the correct segments: - - - -```shell-session -$ consul members -Node Address Status Type Build Protocol DC Partition Segment -server 192.168.4.159:8301 alive server 1.14+ent 2 dc1 default -client1 192.168.4.159:8447 alive client 1.14+ent 2 dc1 default alpha -``` - - - -You can also pass the name of a segment in the `-segment` flag to view agents in a specific segment. Note that server agents display their LAN listener port for the specified segment the segment filter applied. 
In the following example, the command returns port `8303` for alpha, rather than for the `` segment port: - - - -```shell-session -$ consul members -segment alpha -Node Address Status Type Build Protocol DC Segment -server1 10.20.10.11:8301 alive server 1.14+ent 2 dc1 alpha -client1 10.20.10.21:8303 alive client 1.14+ent 2 dc1 alpha -``` - - - -Refer to the [`members`](/consul/commands/members) documentation for additional information. - - - - - -Call the `/agent/members` API endpoint to view members that the agent sees in the cluster gossip pool. - - - -```shell-session -$ curl http://127.0.0.1:8500/v1/agent/members?segment=alpha - -{ - "Addr" : "192.168.4.163", - "DelegateCur" : 4, - "DelegateMax" : 5, - "DelegateMin" : 2, - "Name" : "consul-client", - "Port" : 8447, - "ProtocolCur" : 2, - "ProtocolMax" : 5, - "ProtocolMin" : 1, - "Status" : 1, - "Tags" : { - "build" : "1.13.1+ent:5bd604e6", - "dc" : "dc1", - "ft_admpart" : "1", - "ft_ns" : "1", - "id" : "aeaf70d7-57f7-7eaf-e246-6edfe8386e9c", - "role" : "node", - "segment" : "alpha", - "vsn" : "2", - "vsn_max" : "3", - "vsn_min" : "2" - } -} -``` - - - -Refer to the [`/agent/members` API endpoint documentation](/consul/api-docs/agent#list-members) for additional information. - - - - -If the UI is enabled in your agent configuration, the segment name appears in the node’s Metadata tab. - -1. Open the URL for the UI. By default, the UI is `localhost:8500`. -1. Click **Node** in the sidebar and click on the name of the client agent you want to check. -1. Click the **Metadata** tab. The network segment appears as a key-value pair. - - - - - -## Related resources - -You can also create and run a prepared query to query for additional information about the services registered to client nodes. Prepared queries are HTTP API endpoint features that enable you to run complex queries of Consul nodes. Refer [Prepared Query HTTP Endpoint](/consul/api-docs/query) for usage. diff --git a/website/content/docs/enterprise/network-segments/network-segments-overview.mdx b/website/content/docs/enterprise/network-segments/network-segments-overview.mdx deleted file mode 100644 index f23358c4bed5..000000000000 --- a/website/content/docs/enterprise/network-segments/network-segments-overview.mdx +++ /dev/null @@ -1,59 +0,0 @@ ---- -layout: docs -page_title: Network Segments Overview -description: >- - Network segments enable LAN gossip traffic within a datacenter when network rules or firewalls prevent specific sets of clients from communicating directly. Learn about segmented network concepts. ---- - -# Network Segments Overview - -Network segmentation is the practice of dividing a network into multiple segments or subnets that act as independent networks. This topic provides an overview of concepts related to operating Consul in a segmented network. - - - -This feature requires Consul Enterprise version 0.9.3 or later. -Refer to the [enterprise feature matrix](/consul/docs/enterprise#consul-enterprise-feature-availability) for additional information. - - - -## Segmented networks - -Consul requires full connectivity between all agents in a datacenter within a LAN gossip pool. In some environments, however, business policies enforced through network rules or firewalls prevent full connectivity between all agents. These environments are called _segmented networks_. Network segments are isolated LAN gossip pools that only require full connectivity between agent members on the same segment. 
- -To use Consul in a segmented network, you must define the segments in your server agent configuration and direct client agents to join one of the segments. The Consul network segment configuration should match the LAN gossip pool boundaries. The following diagram shows how a network may be segmented: - -![Consul datacenter agent connectivity with network segments](/img/network-segments/consul-network-segments-multiple.png) - -## Default network segment - -By default, all Consul agents are part of a shared Serf LAN gossip pool, referred to as the `` network segment. Because all agents are within the same segment, full mesh connectivity within the datacenter is required. The following diagram shows the `` network segment: - -![Consul datacenter default agent connectivity: one network segment](/img/network-segments/consul-network-segments-single.png) - -## Segment membership - -Server agents are members of all segments. The datacenter includes the `` segment, as well as additional segments defined in the `segments` server agent configuration option. Refer to the [`segments`](/consul/docs/agent/config/config-files#segments) documentation for additional information. - -Each client agent can only be a member of one segment at a time. Client agents are members of the `` segment unless they are configured to join a different segment. -For a client agent to join the Consul datacenter, it must connect to another agent (client or server) within its configured segment. - --> **Info:** Network segments enable you to operate a Consul datacenter without full -mesh (LAN) connectivity between agents. To federate multiple Consul datacenters -without full mesh (WAN) connectivity between all server agents in all datacenters, -use [Network Areas (Enterprise)](/consul/docs/enterprise/federation). - -## Consul networking models - -Network segments are a subset of other Consul networking models. Understanding the broader models will help you segment your network. Refer to [Architecture Overview](/consul/docs/architecture) for additional information about the following concepts. - -### Clusters - -You can segment networks within a Consul _cluster_. A cluster is one or more Consul servers that form a Raft quorum and one or more Consul clients that are members of the same [datacenter](/consul/docs/agent/config/cli-flags#_datacenter). The cluster is sometimes called the _local cluster_. Consul clients discover and make RPC requests to Consul servers in their local cluster through the gossip mechanism. Consul CE uses LAN gossip for intra-cluster communication between agents. - -### LAN gossip pool - -A set of fully-connected Consul agents is a _LAN gossip pool_. LAN gossip pools use the Serf protocol to maintain a shared view of the members of the pool for different purposes, such as finding a Consul server in a local cluster or finding servers in a remote cluster. A segmented LAN gossip pool limits a group of agents to only connect with the agents in its segment. - -## Network segments versus network areas - -Network segments enable you to operate a Consul datacenter without full mesh connectivity between agents using a LAN gossip pool. To federate multiple Consul datacenters without full mesh connectivity between all server agents in all datacenters, use [network areas](/consul/docs/enterprise/federation). Network areas are a Consul Enterprise capability. 
diff --git a/website/content/docs/enterprise/read-scale.mdx b/website/content/docs/enterprise/read-scale.mdx deleted file mode 100644 index f4357d301ef8..000000000000 --- a/website/content/docs/enterprise/read-scale.mdx +++ /dev/null @@ -1,25 +0,0 @@ ---- -layout: docs -page_title: Read Replicas (Enterprise) -description: >- - Learn how you can add non-voting servers to datacenters as read replicas to provide enhanced read scalability without impacting write latency. ---- - -# Enhanced Read Scalability with Read Replicas - - - -This feature requires -HashiCorp Cloud Platform (HCP) or self-managed Consul Enterprise. -Refer to the [enterprise feature matrix](/consul/docs/enterprise#consul-enterprise-feature-availability) for additional information. - - - -Consul Enterprise provides the ability to scale clustered Consul servers -to include voting servers and read replicas. Read replicas still receive data from the cluster replication, -however, they do not take part in quorum election operations. Expanding your Consul cluster in this way can scale -reads without impacting write latency. - -For more details, review the [Consul server configuration](/consul/docs/agent/config) -documentation and the [-read-replica](/consul/docs/agent/config/cli-flags#_read_replica) -configuration flag. diff --git a/website/content/docs/enterprise/redundancy.mdx b/website/content/docs/enterprise/redundancy.mdx deleted file mode 100644 index 8d68b6b43aa6..000000000000 --- a/website/content/docs/enterprise/redundancy.mdx +++ /dev/null @@ -1,33 +0,0 @@ ---- -layout: docs -page_title: Redundancy Zones (Enterprise) -description: >- - Redundancy zones are regions of a cluster containing "hot standby" servers, or non-voting servers that can replace voting servers in the event of a failure. Learn about redundancy zones and how they improve resiliency and increase fault tolerance without affecting latency. ---- - -# Redundancy Zones - - - -This feature requires -self-managed Consul Enterprise. -Refer to the [enterprise feature matrix](/consul/docs/enterprise#consul-enterprise-feature-availability) for additional information. - - - -Consul Enterprise redundancy zones provide -both scaling and resiliency benefits by enabling the deployment of non-voting -servers alongside voting servers on a per availability zone basis. - -When using redundancy zones, if an operator chooses to deploy Consul across 3 availability zones, they -could have 2 (or more) servers (1 voting/1 non-voting) in each zone. In the event that a voting -member in an availability zone fails, the redundancy zone configuration would automatically -promote the non-voting member to a voting member. In the event that an entire availability -zone was lost, a non-voting member in one of the existing availability zones would promote -to a voting member, keeping server quorum. This capability functions as a "hot standby" -for server nodes while also providing (and expanding) the capabilities of -[enhanced read scalability](/consul/docs/enterprise/read-scale) by also including recovery -capabilities. - -For more information, complete the [Redundancy Zones](/consul/tutorials/datacenter-operations/autopilot-datacenter-operations#redundancy-zones) tutorial -and reference the [Consul Autopilot](/consul/commands/operator/autopilot) documentation. 
diff --git a/website/content/docs/enterprise/upgrades.mdx b/website/content/docs/enterprise/upgrades.mdx deleted file mode 100644 index 73b6648fedf5..000000000000 --- a/website/content/docs/enterprise/upgrades.mdx +++ /dev/null @@ -1,28 +0,0 @@ ---- -layout: docs -page_title: Automated Upgrades (Enterprise) -description: >- - Automated upgrades simplify the process for updating Consul. Learn how Consul can gracefully transition from existing server agents to a new set of server agents without Consul downtime. ---- - -~> **Starting with Consul 1.14, and patch releases 1.13.3 and 1.12.6, Consul will disallow upgrades to new versions with a release date after license expiration**: - If you are looking to upgrade to a version of Consul that was released after your license expired, please be sure to renew your license before attempting an upgrade. - - -# Automated Upgrades - - - -This feature requires -HashiCorp Cloud Platform (HCP) or self-managed Consul Enterprise. -Refer to the [enterprise feature matrix](/consul/docs/enterprise#consul-enterprise-feature-availability) for additional information. - - - -Consul Enterprise enables the capability of automatically upgrading a cluster of Consul servers to a new -version as updated server nodes join the cluster. This automated upgrade will spawn a process which monitors the amount of voting members -currently in a cluster. When an equal amount of new server nodes are joined running the desired version, the lower versioned servers -will be demoted to non voting members. Demotion of legacy server nodes will not occur until the voting members on the new version match. -Once this demotion occurs, the previous versioned servers can be removed from the cluster safely. - -Review the [Consul operator autopilot](/consul/commands/operator/autopilot) documentation and complete the [Automated Upgrade](/consul/tutorials/datacenter-operations/autopilot-datacenter-operations#upgrade-migrations) tutorial to learn more about automated upgrades. diff --git a/website/content/docs/envoy-extension/apigee.mdx b/website/content/docs/envoy-extension/apigee.mdx new file mode 100644 index 000000000000..8ae9602f0ce6 --- /dev/null +++ b/website/content/docs/envoy-extension/apigee.mdx @@ -0,0 +1,183 @@ +--- +layout: docs +page_title: Delegate authorization to Apigee +description: Learn how to use the `ext-authz` Envoy extension to delegate data plane authorization requests to Apigee. +--- + +# Delegate authorization to Apigee + +This topic describes how to use the external authorization Envoy extension to delegate data plane authorization requests to Apigee. + +For more detailed guidance, refer to the [`learn-consul-apigee-external-authz` repository](https://github.com/hashicorp-education/learn-consul-apigee-external-authz) on GitHub. + +## Workflow + +Complete the following steps to use the external authorization extension with Apigee: + +1. Deploy the Apigee Adapter for Envoy and register the service in Consul. +1. Configure the `EnvoyExtensions` block in a service defaults or proxy defaults configuration entry. +1. Apply the configuration entry. + +## Deploy the Apigee Adapter for Envoy + +The [Apigee Adapter for Envoy](https://cloud.google.com/apigee/docs/api-platform/envoy-adapter/v2.0.x/concepts) is an Apigee-managed API gateway that uses Envoy to proxy API traffic. 
+ +To download and install Apigee Adapter for Envoy, refer to the [getting started documentation](https://cloud.google.com/apigee/docs/api-platform/envoy-adapter/v2.0.x/getting-started) or follow along with the [`learn-consul-apigee-external-authz` GitHub repository](https://github.com/hashicorp-education/learn-consul-apigee-external-authz). + +After you deploy the service in your desired runtime, create a service defaults configuration entry for the service's gRPC protocol. + + + + +```hcl +Kind = "service-defaults" +Name = "apigee-remote-service-envoy" +Protocol = "grpc" +``` + + + + +```json +{ + "kind": "service-defaults", + "name": "apigee-remote-service-envoy", + "protocol": "grpc" +} +``` + + + + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: ServiceDefaults +metadata: + name: apigee-remote-service-envoy + namespace: apigee +spec: + protocol: grpc +``` + + + + +## Configure the `EnvoyExtensions` + +Add Envoy extension configurations to a proxy defaults or service defaults configuration entry. Place the extension configuration in an `EnvoyExtensions` block in the configuration entry. + +- When you configure Envoy extensions on proxy defaults, they apply to every service. +- When you configure Envoy extensions on service defaults, they apply to all instances of a service with that name. + + + Adding Envoy extensions default proxy configurations may have unintended consequences. We recommend configuring `EnvoyExtensions` in service defaults configuration entries in most cases. + + +Consul applies Envoy extensions configured in proxy defaults before it applies extensions in service defaults. As a result, the Envoy extension configuration in service defaults may override configurations in proxy defaults. + +The following example configures the default behavior for all services named `api` so that the Envoy proxies running as sidecars for those service instances target the apigee-remote-service-envoy service for gRPC authorization requests: + + + + +```hcl +Kind = "service-defaults" +Name = "api" +EnvoyExtensions = [ + { + Name = "builtin/ext-authz" + Arguments = { + ProxyType = "connect-proxy" + Config = { + GrpcService = { + Target = { + Service = { + Name = "apigee-remote-service-envoy" + } + } + } + } + } + } +] +``` + + + +```json +{ + "Kind": "service-defaults", + "Name": "api", + "EnvoyExtensions": [{ + "Name": "builtin/ext-authz", + "Arguments": { + "ProxyType": "connect-proxy", + "Config": { + "GrpcService": { + "Target": { + "Service": { + "Name": "apigee-remote-service-envoy" + } + } + } + } + } + } + ] +} +``` + + + + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: ServiceDefaults +metadata: + name: api + namespace: default +spec: + envoyExtensions: + - name: builtin/ext-authz + arguments: + proxyType: connect-proxy + config: + grpcService: + target: + service: + name: apigee-remote-service-envoy + namespace: apigee +``` + + + +Refer to the [external authorization extension configuration reference](/consul/docs/reference/proxy/extensions/ext-authz) for details on how to configure the extension. + +Refer to the [proxy defaults configuration entry reference](/consul/docs/reference/config-entry/proxy-defaults) and [service defaults configuration entry reference](/consul/docs/reference/config-entry/service-defaults) for details on how to define the configuration entries. + + + Adding Envoy extensions default proxy configurations may have unintended consequences. 
We recommend configuring `EnvoyExtensions` in service defaults configuration entries in most cases. + + +## Apply the configuration entry + +On the CLI, you can use the `consul config write` command and specify the names of the configuration entries to apply them to Consul. For Kubernetes-orchestrated networks, use the `kubectl apply` command to update the relevant CRD. + + + +```shell-session +$ consul config write apigee-remote-service-envoy.hcl +$ consul config write api-auth-service-defaults.hcl +``` + +```shell-session +$ consul config write apigee-remote-service-envoy.json +$ consul config write api-auth-service-defaults.json +``` + +```shell-session +$ kubectl apply -f apigee-remote-service-envoy.yaml +$ kubectl apply -f api-auth-service-defaults.yaml +``` + + diff --git a/website/content/docs/envoy-extension/ext.mdx b/website/content/docs/envoy-extension/ext.mdx new file mode 100644 index 000000000000..f5eb2d1ca45e --- /dev/null +++ b/website/content/docs/envoy-extension/ext.mdx @@ -0,0 +1,141 @@ +--- +layout: docs +page_title: Delegate authorization to an external service +description: Learn how to use the `ext-authz` Envoy extension to delegate data plane authorization requests to external systems. +--- + +# Delegate authorization to an external service + +This topic describes how to use the external authorization Envoy extension to delegate data plane authorization requests to external systems. + +## Workflow + +Complete the following steps to use the external authorization extension: + +1. Configure an `EnvoyExtensions` block in a service defaults or proxy defaults configuration entry. +1. Apply the configuration entry. + +## Add the `EnvoyExtensions` + +Add Envoy extension configurations to a proxy defaults or service defaults configuration entry. Place the extension configuration in an `EnvoyExtensions` block in the configuration entry. + +- When you configure Envoy extensions on proxy defaults, they apply to every service. +- When you configure Envoy extensions on service defaults, they apply to a specific service. + +Consul applies Envoy extensions configured in proxy defaults before it applies extensions in service defaults. As a result, the Envoy extension configuration in service defaults may override configurations in proxy defaults. + +The following example shows a service defaults configuration entry for the `api` service that directs the Envoy proxy to make gRPC authorization requests to the `authz` service: + + + + +```hcl +Kind = "service-defaults" +Name = "api" +EnvoyExtensions = [ + { + Name = "builtin/ext-authz" + Arguments = { + ProxyType = "connect-proxy" + Config = { + GrpcService = { + Target = { + Service = { + Name = "authz" + } + } + } + } + } + } +] +``` + + + +```json +{ + "Kind": "service-defaults", + "Name": "api", + "EnvoyExtensions": [{ + "Name": "builtin/ext-authz", + "Arguments": { + "ProxyType": "connect-proxy", + "Config": { + "GrpcService": { + "Target": { + "Service": { + "Name": "authz" + } + } + } + } + } + } + ] +} +``` + + + + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: ServiceDefaults +metadata: + name: api + namespace: default +spec: + envoyExtensions: + - name: builtin/ext-authz + arguments: + proxyType: connect-proxy + config: + grpcService: + target: + service: + name: authz + namespace: authz +``` + + + +Refer to the [external authorization extension configuration reference](/consul/docs/reference/proxy/extensions/ext-authz) for details on how to configure the extension. 
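+
+If delegating authorization is critical for a service, you can also set the extension's `Required` field so that Consul treats a failure to apply the extension as an error instead of continuing without it. The following sketch assumes the same `authz` target as the previous example:
+
+```hcl
+EnvoyExtensions = [
+  {
+    Name     = "builtin/ext-authz"
+    # Fail the proxy configuration rather than proceed without the external authorization filter.
+    Required = true
+    Arguments = {
+      ProxyType = "connect-proxy"
+      Config = {
+        GrpcService = {
+          Target = {
+            Service = {
+              Name = "authz"
+            }
+          }
+        }
+      }
+    }
+  }
+]
+```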
+ +Refer to the [proxy defaults configuration entry reference](/consul/docs/reference/config-entry/proxy-defaults) and [service defaults configuration entry reference](/consul/docs/reference/config-entry/service-defaults) for details on how to define the configuration entries. + + + Adding Envoy extensions default proxy configurations may have unintended consequences. We recommend configuring `EnvoyExtensions` in service defaults configuration entries in most cases. + + +### Unsupported Envoy configuration fields + +The following Envoy configurations are not supported: + +| Configuration | Workaround | +| --- | --- | +| `deny_at_disable` | Disable filter by removing it from the service's configuration in the configuration entry. | +| `failure_mode_allow` | Set the `EnvoyExtension.Required` field to `true` in the [service defaults configuration entry](/consul/docs/reference/config-entry/service-defaults#envoyextensions) or [proxy defaults configuration entry](/consul/docs/reference/config-entry/proxy-defaults#envoyextensions). | +| `filter_enabled` | Set the `EnvoyExtension.Required` field to `true` in the [service defaults configuration entry](/consul/docs/reference/config-entry/service-defaults#envoyextensions) or [proxy defaults configuration entry](/consul/docs/reference/config-entry/proxy-defaults#envoyextensions). | +| `filter_enabled_metadata` | Set the `EnvoyExtension.Required` field to `true` in the [service defaults configuration entry](/consul/docs/reference/config-entry/service-defaults#envoyextensions) or [proxy defaults configuration entry](/consul/docs/reference/config-entry/proxy-defaults#envoyextensions). | +| `transport_api_version` | Consul only supports v3 of the transport API. As a result, there is no workaround for implementing the behavior of this field. | + +## Apply the configuration entry + +If your network is deployed to virtual machines, use the `consul config write` command and specify the proxy defaults or service defaults configuration entry to apply the configuration. For Kubernetes-orchestrated networks, use the `kubectl apply` command. The following example applies the extension in a proxy defaults configuration entry. + + + +```shell-session +$ consul config write api-auth-service-defaults.hcl +``` + +```shell-session +$ consul config write api-auth-service-defaults.json +``` + +```shell-session +$ kubectl apply -f api-auth-service-defaults.yaml +``` + + diff --git a/website/content/docs/envoy-extension/index.mdx b/website/content/docs/envoy-extension/index.mdx new file mode 100644 index 000000000000..1b47d9451d35 --- /dev/null +++ b/website/content/docs/envoy-extension/index.mdx @@ -0,0 +1,57 @@ +--- +layout: docs +page_title: Envoy extensions +description: >- + Envoy extensions are plugins that you can use to add support for additional Envoy features without modifying the Consul codebase. +--- + +# Envoy extensions overview + +This topic provides an overview of Envoy extensions in Consul service mesh deployments. You can modify Consul-generated Envoy resources to add additional functionality without modifying the Consul codebase. + +## Introduction + +Consul supports two methods for modifying Envoy behavior. You can either modify the Envoy resources Consul generates through [escape hatches](/consul/docs/connect/proxies/envoy#escape-hatch-overrides) or configure your services to use Envoy extensions using the `EnvoyExtension` parameter. 
Implementing escape hatches requires rewriting the Envoy resources so that they are compatible with Consul, a task that also requires understanding how Consul names Envoy resources and enforces intentions. + +Instead of modifying Consul code, you can configure your services to use Envoy extensions through the `EnvoyExtensions` field. This field is definable in [`proxy-defaults`](/consul/docs/reference/config-entry/proxy-defaults#envoyextensions) and [`service-defaults`](/consul/docs/reference/config-entry/service-defaults#envoyextensions) configuration entries. + + +## Supported extensions + +Envoy extensions enable additional service mesh functionality in Consul by changing how the sidecar proxies behave. Extensions dynamically modify the configuration of Envoy proxies based on Consul configuration entries, enabling a wider set of use cases for the service mesh traffic that passes through an Envoy proxy. Consul supports the following extensions: + +- External authorization +- Fault injection +- Lua +- Lambda +- OpenTelemetry Access Logging +- Property override +- WebAssembly (Wasm) + +### External authorization + +The `ext-authz` extension lets you configure external authorization filters for Envoy proxy so that you can route requests to external authorization systems. Refer to the [external authorization documentation](/consul/docs/envoy-extension/ext) for more information. + +### Fault injection + +The `fault-injection` extension lets you alter responses from an upstream service so that users can test the resilience of their system to different unexpected issues. Refer to the [fault injection documentation](/consul/docs/troubleshoot/fault-injection) for more information. + +### Lambda + +The `lambda` Envoy extension enables services to make requests to AWS Lambda functions through the mesh as if they are a normal part of the Consul catalog. Refer to the [Lambda extension documentation](/consul/docs/connect/proxies/envoy-extensions/usage/lambda) for more information. + +### Lua + +The `lua` Envoy extension enables HTTP Lua filters in your Consul Envoy proxies. It allows you to run Lua scripts during Envoy requests and responses from Consul-generated Envoy resources. Refer to the [Lua extension documentation](/consul/docs/envoy-extension/lua) for more information. + +### OpenTelemetry Access Logging + +The `otel-access-logging` Envoy extension lets you configure Envoy proxies to send access logs to OpenTelemetry collector service. Refer to the [OpenTelemetry Access Logging extension documentation](/consul/docs/envoy-extension/otel-access-logging) for more information. + +### Property override + +The `property-override` extension lets you set and unset individual properties on the Envoy resources that Consul generates. Use the extension instead of [escape-hatch overrides](/consul/docs/connect/proxies/envoy#escape-hatch-overrides) to enable advanced Envoy configuration. Refer to the [property override documentation](/consul/docs/envoy-extension/property-override) for more information. + +### WebAssembly + +The `wasm` extension enables you to configure TCP and HTTP filters that invoke custom WebAssembly (Wasm) plugins. Refer to the [WebAssembly extension documentation](/consul/docs/envoy-extension/wasm) for more information. 
\ No newline at end of file diff --git a/website/content/docs/envoy-extension/lambda.mdx b/website/content/docs/envoy-extension/lambda.mdx new file mode 100644 index 000000000000..31ba8d372031 --- /dev/null +++ b/website/content/docs/envoy-extension/lambda.mdx @@ -0,0 +1,192 @@ +--- +layout: docs +page_title: Lambda Envoy Extension +description: >- + Learn how the `lambda` Envoy extension enables Consul to join AWS Lambda functions to its service mesh. +--- + +# Invoke Lambda functions in Envoy proxy + +The Lambda Envoy extension configures outbound traffic on upstream dependencies allowing mesh services to properly invoke AWS Lambda functions. Lambda functions appear in the catalog as any other Consul service. + +You can only enable the Lambda extension through `service-defaults`. This is because the Consul uses the `service-defaults` configuration entry name as the catalog name for the Lambda functions. + +## Configuration specifications + +The Lambda Envoy extension has the following arguments: + +| Arguments | Type | Default | Description | +| ---------------- | ----------| -------- | -------------------------------------------------------------------------------- | +| `ARN` | `string` | Required | Specifies the [AWS ARN](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) for the service's Lambda. | +| `InvocationMode` | `boolean` | `false` | Determines if Consul configures the Lambda to be invoked using the `synchronous` or `asynchronous` [invocation mode](https://docs.aws.amazon.com/lambda/latest/operatorguide/invocation-modes.html). | +| `PayloadPassthrough` | `string` | `synchronous` | Determines if the body Envoy receives is converted to JSON or directly passed to Lambda. | + +Unlike [manual lambda registration](/consul/docs/register/service/lambda/manual#supported-meta-fields), the Envoy extension infers the region from the ARN. + +## Workflow + +Complete the following steps to use the Lambda Envoy extension: + +1. Configure EnvoyExtensions through `service-defaults`. +1. Apply the configuration entry. + +### Configure `EnvoyExtensions` + +To use the Lambda Envoy extension, you must configure and apply a `service-defaults` configuration entry. Consul uses service default's name as the Consul service name for the Lambda function. Downstream services also use the name to invoke the Lambda. + +The following example configures the Lambda Envoy extension to create a service named `lambda-1234` in the mesh that can invoke the associated Lambda function. + + + + +```hcl +Kind = "service-defaults" +Name = "lambda-1234" +Protocol = "http" +EnvoyExtensions { + Name = "builtin/aws/lambda" + Arguments = { + ARN = "arn:aws:lambda:us-west-2:111111111111:function:lambda-1234" + } +} +``` + + + + +```json +{ + "kind": "service-defaults", + "name": "lambda-1234", + "protocol": "http", + "envoy_extensions": [{ + "name": "builtin/aws/lambda", + "arguments": { + "arn": "arn:aws:lambda:us-west-2:111111111111:function:lambda-1234" + } + }] +} +``` + + + + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: ServiceDefaults +metadata: + name: lambda-1234 +spec: + protocol: http + envoyExtensions: + - name = "builtin/aws/lambda" + arguments: + arn: arn:aws:lambda:us-west-2:111111111111:function:lambda-1234 +``` + + + + +For a full list of parameters for `EnvoyExtensions`, refer to the [`service-defaults`](/consul/docs/reference/config-entry/service-defaults#envoyextensions). + + You can only enable the Lambda extension through `service-defaults`. 
+ +Refer to [Configuration specification](#configuration-specification) section to find a full list of arguments for the Lambda Envoy extension. + +### Apply the configuration entry + +Apply the `service-defaults` configuration entry. + + + +```shell-session +$ consul config write lambda-envoy-extension.hcl +``` + +```shell-session +$ consul config write lambda-envoy-extension.json +``` + +```shell-session +$ kubectl apply lambda-envoy-extension.yaml +``` + + + +## Examples + +The following example configuration adds a single Lambda function running in two regions into the mesh. You can use the `lambda-1234` service name to invoke it, as if it was any other service in the mesh. + + + + +```hcl +Kind = "service-defaults" +Name = "lambda-1234" +Protocol = "http" +EnvoyExtensions { + Name = "builtin/aws/lambda" + + Arguments = { + payloadPassthrough: false + arn: arn:aws:lambda:us-west-2:111111111111:function:lambda-1234 + } +} +EnvoyExtensions { + Name = "builtin/aws/lambda" + + Arguments = { + payloadPassthrough: false + arn: arn:aws:lambda:us-east-1:111111111111:function:lambda-1234 + } +} +``` + + + + +```hcl +{ + "kind": "service-defaults", + "name": "lambda-1234", + "protocol": "http", + "envoy_extensions": [{ + "name": "builtin/aws/lambda", + "arguments": { + "payload_passthrough": false, + "arn": "arn:aws:lambda:us-west-2:111111111111:function:lambda-1234" + } + }, + { + "name": "builtin/aws/lambda", + "arguments": { + "payload_passthrough": false, + "arn": "arn:aws:lambda:us-east-1:111111111111:function:lambda-1234" + } + }] +} +``` + + + + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: ServiceDefaults +metadata: + name: lambda-1234 +spec: + protocol: http + envoyExtensions: + - name = "builtin/aws/lambda" + arguments: + payloadPassthrough: false + arn: arn:aws:lambda:us-west-2:111111111111:function:lambda-1234 + - name = "builtin/aws/lambda" + arguments: + payloadPassthrough: false + arn: arn:aws:lambda:us-east-1:111111111111:function:lambda-1234 +``` + + + \ No newline at end of file diff --git a/website/content/docs/envoy-extension/lua.mdx b/website/content/docs/envoy-extension/lua.mdx new file mode 100644 index 000000000000..2fd8c073ffdb --- /dev/null +++ b/website/content/docs/envoy-extension/lua.mdx @@ -0,0 +1,248 @@ +--- +layout: docs +page_title: Lua Envoy Extension +description: >- + Learn how the `lua` Envoy extension enables Consul to run Lua scripts during Envoy requests and responses from Consul-generated Envoy resources. +--- + +# Run Lua scripts in Envoy proxy + +The Lua Envoy extension enables the [HTTP Lua filter](https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_filters/lua_filter) in your Consul Envoy proxies, letting you run Lua scripts when requests and responses pass through Consul-generated Envoy resources. + +Envoy filters support setting and getting dynamic metadata, allowing a filter to share state information with subsequent filters. To set dynamic metadata, configure the HTTP Lua filter. Users can call `streamInfo:dynamicMetadata()` from Lua scripts to get the request's dynamic metadata. + +## Configuration specifications + +To use the Lua Envoy extension, configure the following arguments in the `EnvoyExtensions` block: + +| Arguments | Type | Default | Description | +| -------------- | -------- | -------- | -------------------------------------------------------------------------------- | +| `ListenerType` | `string` | Required | Specifies if the extension is applied to the `inbound` or `outbound` listener. 
| +| `ProxyType` | `string` | Required | Determines the proxy type the extension applies to. Supported values are `connect-proxy` and `api-gateway`. | +| `Script` | `string` | Required | The Lua script that is configured to run by the HTTP Lua filter. | + +## Workflow + +Complete the following steps to use the Lua Envoy extension: + +1. Configure EnvoyExtensions through `service-defaults` or `proxy-defaults`. +1. Apply the configuration entry. + +### Configure `EnvoyExtensions` + +To use Envoy extensions, you must configure and apply a `proxy-defaults` or `service-defaults` configuration entry with the Envoy extension. + +- When you configure Envoy extensions on `proxy-defaults`, they apply to every service. +- When you configure Envoy extensions on `service-defaults`, they apply to a specific service. + +Consul applies Envoy extensions configured in `proxy-defaults` before it applies extensions in `service-defaults`. As a result, the Envoy extension configuration in `service-defaults` may override configurations in `proxy-defaults`. + +The following example configures the Lua Envoy extension on every service by using the `proxy-defaults`. + + + + +```hcl +Kind = "proxy-defaults" +Name = "global" +Config { + protocol = "http" +} +EnvoyExtensions { + Name = "builtin/lua" + Arguments = { + ProxyType = "connect-proxy" + Listener = "inbound" + Script = <<-EOF +function envoy_on_request(request_handle) + meta = request_handle:streamInfo():dynamicMetadata() + m = meta:get("consul") + request_handle:headers():add("x-consul-service", m["service"]) + request_handle:headers():add("x-consul-namespace", m["namespace"]) + request_handle:headers():add("x-consul-datacenter", m["datacenter"]) + request_handle:headers():add("x-consul-trust-domain", m["trust-domain"]) +end + EOF + } +} +``` + + + + +```json +{ + "kind": "proxy-defaults", + "name": "global", + "protocol": "http", + "envoy_extensions": [{ + "name": "builtin/lua", + "arguments": { + "proxy_type": "connect-proxy", + "listener": "inbound", + "script": "function envoy_on_request(request_handle)\nmeta = request_handle:streamInfo():dynamicMetadata()\nm = \nmeta:get("consul")\nrequest_handle:headers():add("x-consul-service", m["service"])\nrequest_handle:headers():add("x-consul-namespace", m["namespace"])\nrequest_handle:headers():add("x-consul-datacenter", m["datacenter"])\nrequest_handle:headers():add("x-consul-trust-domain", m["trust-domain"])\nend" + } + }] +} +``` + + + + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: ProxyDefaults +metadata: + name: global +spec: + protocol: http + envoyExtensions: + - name = "builtin/lua" + arguments: + proxyType: "connect-proxy" + listener: "inbound" + script: |- +function envoy_on_request(request_handle) + meta = request_handle:streamInfo():dynamicMetadata() + m = meta:get("consul") + request_handle:headers():add("x-consul-service", m["service"]) + request_handle:headers():add("x-consul-namespace", m["namespace"]) + request_handle:headers():add("x-consul-datacenter", m["datacenter"]) + request_handle:headers():add("x-consul-trust-domain", m["trust-domain"]) +end +``` + + + + +For a full list of parameters for `EnvoyExtensions`, refer to the [`service-defaults`](/consul/docs/reference/config-entry/service-defaults#envoyextensions) and [`proxy-defaults`](/consul/docs/reference/config-entry/proxy-defaults#envoyextensions) configuration entries reference documentation. + + + Applying `EnvoyExtensions` to `ProxyDefaults` may produce unintended consequences. 
We recommend enabling `EnvoyExtensions` with `ServiceDefaults` in most cases. + + +Refer to [Configuration specification](#configuration-specification) section to find a full list of arguments for the Lua Envoy extension. + +### Apply the configuration entry + +Apply the `proxy-defaults` or `service-defaults` configuration entry. + + + +```shell-session +$ consul config write lua-envoy-extension-proxy-defaults.hcl +``` + +```shell-session +$ consul config write lua-envoy-extension-proxy-defaults.json + +``` + +```shell-session +$ kubectl apply lua-envoy-extension-proxy-defaults.yaml +``` + + + +## Examples + +The following example configuration configures the Lua Envoy extension to insert the HTTP Lua filter for all Consul services named `myservice` and add the Consul service name to the`x-consul-service` header for all inbound requests. The `ListenerType` makes it so that the extension applies only on the inbound listener of the service's connect proxy. + + + +```hcl +Kind = "service-defaults" +Name = "myservice" +EnvoyExtensions = [ + { + Name = "builtin/lua" + + Arguments = { + ProxyType = "connect-proxy" + Listener = "inbound" + Script = < + +Alternatively, you can apply the same extension configuration to [`proxy-defaults`](/consul/docs/reference/config-entry/proxy-defaults#envoyextensions) configuration entries. This will apply to all services instead of the one you specified in the proxy default's name. + +You can also specify multiple Lua filters through the Envoy extensions. They will not override each other. + + + +```hcl +Kind = "service-defaults" +Name = "myservice" +EnvoyExtensions = [ + { + Name = "builtin/lua", + Arguments = { + ProxyType = "connect-proxy" + Listener = "inbound" + Script = <<-EOF +function envoy_on_request(request_handle) + meta = request_handle:streamInfo():dynamicMetadata() + m = meta:get("consul") + request_handle:headers():add("x-consul-datacenter", m["datacenter1"]) +end + EOF + } + }, + { + Name = "builtin/lua", + Arguments = { + ProxyType = "connect-proxy" + Listener = "inbound" + Script = <<-EOF +function envoy_on_request(request_handle) + meta = request_handle:streamInfo():dynamicMetadata() + m = meta:get("consul") + request_handle:headers():add("x-consul-datacenter", m["datacenter2"]) +end + EOF + } + } +] +``` + + + +The following example configuration configures the Lua Envoy extension to insert the HTTP Lua filter for all Consul API Gateways named `my-api-gateway` to modify the response body when the status of the http request from upstream is 404. + + + +```hcl + Kind = "service-defaults" + Name = "my-api-gateway" + EnvoyExtensions = [ + { + Name = "builtin/lua", + Arguments = { + ProxyType = "api-gateway" + Listener = "outbound" + Script = < diff --git a/website/content/docs/envoy-extension/otel.mdx b/website/content/docs/envoy-extension/otel.mdx new file mode 100644 index 000000000000..d1893e2b4331 --- /dev/null +++ b/website/content/docs/envoy-extension/otel.mdx @@ -0,0 +1,137 @@ +--- +layout: docs +page_title: Send access logs to OpenTelemetry collector service +description: Learn how to use the `otel-access-logging` Envoy extension to send access logs to OpenTelemetry collector service. +--- + +# Send access logs to OpenTelemetry collector service + +This topic describes how to use the OpenTelemetry Access Logging Envoy extension to send access logs to OpenTelemetry collector service. + +## Workflow + +Complete the following steps to use the OpenTelemetry access logging extension: + +1. 
Configure an `EnvoyExtensions` block in a service defaults or proxy defaults configuration entry. +1. Apply the configuration entry. + +## Add the `EnvoyExtensions` + +Add Envoy extension configurations to a proxy defaults or service defaults configuration entry. Place the extension configuration in an `EnvoyExtensions` block in the configuration entry. + +- When you configure Envoy extensions on proxy defaults, they apply to every service. +- When you configure Envoy extensions on service defaults, they apply to a specific service. + +Consul applies Envoy extensions configured in proxy defaults before it applies extensions in service defaults. As a result, the Envoy extension configuration in service defaults may override configurations in proxy defaults. + +The following example shows a service defaults configuration entry for the `api` service that directs the Envoy proxy to make gRPC OpenTelemetry access logging requests to the `otel-collector` service: + + + + +```hcl +Kind = "service-defaults" +Name = "api" +EnvoyExtensions = [ + { + Name = "builtin/otel-access-logging" + Arguments = { + ProxyType = "connect-proxy" + Config = { + GrpcService = { + Target = { + Service = { + Name = "otel-collector" + } + } + } + } + } + } +] +``` + + + +```json +{ + "Kind": "service-defaults", + "Name": "api", + "EnvoyExtensions": [{ + "Name": "builtin/otel-access-logging", + "Arguments": { + "ProxyType": "connect-proxy", + "Config": { + "GrpcService": { + "Target": { + "Service": { + "Name": "otel-collector" + } + } + } + } + } + }] +} +``` + + + + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: ServiceDefaults +metadata: + name: api + namespace: default +spec: + envoyExtensions: + - name: builtin/otel-access-logging + arguments: + proxyType: connect-proxy + config: + grpcService: + target: + service: + name: otel-collector + namespace: otel-collector +``` + + + + +Refer to the [OpenTelemetry Access Logging extension configuration reference](/consul/docs/reference/proxy/extensions/otel) for details on how to configure the extension. + +Refer to the [proxy defaults configuration entry reference](/consul/docs/reference/config-entry/proxy-defaults) and [service defaults configuration entry reference](/consul/docs/reference/config-entry/service-defaults) for details on how to define the configuration entries. + + + Adding Envoy extensions default proxy configurations may have unintended consequences. We recommend configuring `EnvoyExtensions` in service defaults configuration entries in most cases. + + +### Unsupported Envoy configuration fields + +The following Envoy configurations are not supported: + +| Configuration | Workaround | +| ----------------------- | ------------------------------------------------------------------------------------------------------------------------------ | +| `transport_api_version` | Consul only supports v3 of the transport API. As a result, there is no workaround for implementing the behavior of this field. | + +## Apply the configuration entry + +If your network is deployed to virtual machines, use the `consul config write` command and specify the proxy defaults or service defaults configuration entry to apply the configuration. For Kubernetes-orchestrated networks, use the `kubectl apply` command. The following example applies the extension in a proxy defaults configuration entry. 
+ + + +```shell-session +$ consul config write api-otel-collector-service-defaults.hcl +``` + +```shell-session +$ consul config write api-otel-collector-service-defaults.json +``` + +```shell-session +$ kubectl apply -f api-otel-collector-service-defaults.yaml +``` + + diff --git a/website/content/docs/envoy-extension/property-override.mdx b/website/content/docs/envoy-extension/property-override.mdx new file mode 100644 index 000000000000..45b2efcee2c8 --- /dev/null +++ b/website/content/docs/envoy-extension/property-override.mdx @@ -0,0 +1,207 @@ +--- +layout: docs +page_title: Configure Envoy proxy properties +description: Learn how to use the property-override extension for Envoy proxies to set and remove individual properties for the Envoy resources Consul generates. +--- + +# Configure Envoy proxy properties + +This topic describes how to use the `property-override` extension to set and remove individual properties for the Envoy resources Consul generates. The extension uses [`protoreflect`](https://pkg.go.dev/google.golang.org/protobuf/reflect/protoreflect), which enables Consul to dynamically manipulate messages. + +The extension currently supports setting scalar and enum fields, removing individual fields addressable by `Path`, and initializing unset intermediate message fields indicated in `Path`. + +It currently does _not_ support the following use cases: +- Adding, updating, or removing repeated field members +- Adding or updating [protobuf `map`](https://protobuf.dev/programming-guides/proto3/#maps) fields +- Adding or updating [protobuf `Any`](https://protobuf.dev/programming-guides/proto3/#any) fields + +## Workflow + +Complete the following steps to use the `property-override` extension: + +1. Configure an `EnvoyExtensions` block in a service defaults or proxy defaults configuration entry. +1. Apply the configuration entry. + + + The property override extension is an advanced feature capable of introducing unintended consequences or reducing cluster security if used incorrectly. Consul does not enforce TLS retention, intentions, or other security-critical components of the Envoy configuration. Additionally, Consul does not verify that the configuration does not contain errors that affect service traffic. + + +## Add the `EnvoyExtensions` + +Add Envoy extension configurations to a proxy defaults or service defaults configuration entry. Place the extension configuration in an `EnvoyExtensions` block in the configuration entry. + +- When you configure Envoy extensions on proxy defaults, they apply to every service. +- When you configure Envoy extensions on service defaults, they apply to a specific service. + +Consul applies Envoy extensions configured in proxy defaults before it applies extensions in service defaults. As a result, the Envoy extension configuration in service defaults may override configurations in proxy defaults. 
+ +The following example shows a service defaults configuration entry named `api` that sets the `/respect_dns_ttl` field for the `other-svc` upstream service: + + + + +```hcl +Kind = "service-defaults" +Name = "api" +Protocol = "http" +EnvoyExtensions = [ + { + Name = "builtin/property-override" + Arguments = { + ProxyType = "connect-proxy" + Patches = [ + { + ResourceFilter = { + ResourceType = "cluster" + TrafficDirection = "outbound" + Services = [{ + Name = "other-svc" + }] + } + Op = "add" + Path = "/respect_dns_ttl" + Value = true + } + ] + } + } +] +``` + + + +```json +{ + "kind": "service-defaults", + "name": "api", + "protocol": "http", + "envoyExtensions": [{ + "name": "builtin/property-override", + "arguments": { + "proxyType": "connect-proxy", + "patches": [{ + "resourceFilter": { + "resourceType": "cluster", + "trafficDirection": "outbound", + "services": [{ "name": "other-svc" }] + }, + "op": "add", + "path": "/respect_dns_ttl", + "value": true + }] + } + }] +} +``` + + + +```yaml +apiversion: consul.hashicorp.com/v1alpha1 +kind: ServiceDefaults +metadata: + name: api +spec: + protocol: http + envoyExtensions: + name = "builtin/property-override" + arguments: + proxyType: "connect-proxy", + patches: + - resourceFilter: + resourceType: "cluster" + trafficDirection: "outbound" + services: + - name: "other-svc" + op: "add" + path: "/respect_dns_ttl", + value: true +``` + + + + + +Refer to the [property override configuration reference](/consul/docs/reference/proxy/extensions/property-override) for details on how to configure the extension. + +Refer to the [proxy defaults configuration entry reference](/consul/docs/reference/config-entry/proxy-defaults) and [service defaults configuration entry reference](/consul/docs/reference/config-entry/service-defaults) for details on how to define the configuration entries. + + + Adding Envoy extensions default proxy configurations may have unintended consequences. We recommend configuring `EnvoyExtensions` in service defaults configuration entries in most cases. + + +### Constructing paths + +To target the properties for an Envoy resource type, you must specify the path where the properties exist in the [`Path` field](/consul/docs/reference/proxy/extensions/property-override#patches-path) of the property override extension configuration. + +To view a list of supported fields for the `Path` field, set the `Path` field to an empty or partially invalid string. Consul will return an error with a list of supported fields for the first unrecognized segment of the path. By default, Consul only returns the first ten fields, but you can set the [`Debug` field](/consul/docs/reference/proxy/extensions/property-override#debug) to `true` to direct Consul to output all possible fields. 
+ +The following examplem configure will trigger Consul to return an error with the top-level fields available for the Envoy cluster resource: + +```hcl +Kind = "service-defaults" +Name = "api" +EnvoyExtensions = [ + { + Name = "builtin/property-override" + Arguments = { + Debug = true + ProxyType = "connect-proxy" + Patches = [ + { + ResourceFilter = { + ResourceType = "cluster" + TrafficDirection = "outbound" + } + Op = "add" + Path = "" + Value = 5 + } + ] + } + } +] +``` + +After applying the configuration entry, Consul prints a message that includes the possible fields for the resource: + +```shell-session +$ consul config write api.hcl +non-empty, non-root Path is required; +available envoy.config.cluster.v3.Cluster fields: +transport_socket_matches +name +alt_stat_name +type +cluster_type +eds_cluster_config +connect_timeout +## ... +``` + +You can use the output to help you construct the appropriate value for the `Path` field. For example: + +```shell-session +$ consul config write api.hcl 2>&1 | grep round_robin +round_robin_lb_config +``` + +## Apply the configuration entry + +If your network is deployed to virtual machines, use the `consul config write` command and specify the proxy defaults or service defaults configuration entry to apply the configuration. For Kubernetes-orchestrated networks, use the `kubectl apply` command. The following example applies the extension in a proxy defaults configuration entry. + + + +```shell-session +$ consul config write property-override-extension-service-defaults.hcl +``` + +```shell-session +$ consul config write property-override-extension-service-defaults.json +``` + +```shell-session +$ kubectl apply property-override-extension-service-defaults.yaml +``` + + diff --git a/website/content/docs/envoy-extension/wasm.mdx b/website/content/docs/envoy-extension/wasm.mdx new file mode 100644 index 000000000000..73c96a54b96d --- /dev/null +++ b/website/content/docs/envoy-extension/wasm.mdx @@ -0,0 +1,179 @@ +--- +layout: docs +page_title: Run WebAssembly plug-ins in Envoy proxy +description: Learn how to use the Consul wasm extension for Envoy, which directs Consul to run your WebAssembly (Wasm) plugins for Envoy proxies in your service mesh. +--- + +# Run WebAssembly plug-ins in Envoy proxy + +This topic describes how to use the `wasm` extension, which directs Consul to run your WebAssembly (Wasm) plug-ins for Envoy proxies. + +You can create Wasm plugins for Envoy and integrate them using the `wasm` extension. Wasm is a binary instruction format for stack-based virtual machines that has the potential to run anywhere after it has been compiled. Wasm plug-ins run as filters in a service mesh application's sidecar proxy. + +## Workflow + +Complete the following steps to use the Wasm Envoy extension: + +- Create your Wasm plugin. You must ensure that your plugin functions as expected. Refer to the [WebAssembly website](https://webassembly.org/) for information and links to documentation. +- Configure an `EnvoyExtensions` block in a service defaults or proxy defaults configuration entry. +- Apply the configuration entry. + +## Add the `EnvoyExtensions` + +Add Envoy extension configuration to a proxy defaults or service defaults configuration entry. Place the extension configuration in an `EnvoyExtensions` block in the configuration entry. + +- When you configure Envoy extensions on proxy defaults, they apply to every service. +- When you configure Envoy extensions on service defaults, they apply to a specific service. 
+ +Consul applies Envoy extensions configured in proxy defaults before it applies extensions in service defaults. As a result, the Envoy extension configuration in service defaults may override configurations in proxy defaults. + +The following example shows a service defaults configuration entry named `api` that uses an upstream service named `file-server` to serve a Wasm-based web application firewall (WAF). + + + + +```hcl +Kind = "service-defaults" +Name = "api" +Protocol = "http" +EnvoyExtensions = [ + { + Name = "builtin/wasm" + Arguments = { + Protocol = "http" + ListenerType = "inbound" + PluginConfig = { + VmConfig = { + Code = { + Remote = { + HttpURI = { + Service = { + Name = "file-server" + } + URI = "https://file-server/waf.wasm" + } + SHA256 = "c9ef17f48dcf0738b912111646de6d30575718ce16c0cbde3e38b21bb1771807" + } + } + } + Configuration = < + + +```json +{ + "kind": "service-defaults", + "name": "api", + "protocol": "http", + "envoyExtensions": [{ + "name": "builtin/wasm", + "arguments": { + "protocol": "http", + "listenerType": "inbound", + "pluginConfig": { + "VmConfig": { + "Code": { + "Remote": { + "HttpURI": { + "Service": { + "Name": "file-server" + }, + "URI": "https://file-server/waf.wasm" + } + } + } + }, + "Configuration": { + "rules": [ + "Include @demo-conf", + "Include @crs-setup-demo-conf", + "SecDebugLogLevel 9", + "SecRuleEngine On", + "Include @owasp_crs/*.conf" + ] + } + + } + } + }] +} +``` + + + + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: ServiceDefaults +metadata: + name: api +spec: + protocol: http + envoyExtensions: + - name: builtin/wasm + required: true + arguments: + protocol: http + listenerType: inbound + pluginConfig: + VmConfig: + Code: + Remote: + HttpURI: + Service: + Name: file-server + URI: https://file-server/waf.wasm + Configuration: + rules: + - Include @demo-conf + - Include @crs-setup-demo-conf + - SecDebugLogLevel 9 + - SecRuleEngine On + - Include @owasp_crs/*.conf +``` + + + + +Refer to the [Wasm extension configuration reference](/consul/docs/reference/proxy/extensions/wasm) for details on how to configure the extension. + +Refer to the [proxy defaults configuration entry reference](/consul/docs/reference/config-entry/proxy-defaults) and [service defaults configuration entry reference](/consul/docs/reference/config-entry/service-defaults) for details on how to define the configuration entries. + + + Adding Envoy extensions default proxy configurations may have unintended consequences. We recommend configuring `EnvoyExtensions` in service defaults configuration entries in most cases. + + +## Apply the configuration entry + +If your network is deployed to virtual machines, use the `consul config write` command and specify the proxy defaults or service defaults configuration entry to apply the configuration. For Kubernetes-orchestrated networks, use the `kubectl apply` command. The following example applies the extension in a proxy defaults configuration entry. 
+ + + +```shell-session +$ consul config write wasm-extension-serve-waf.hcl +``` + +```shell-session +$ consul config write wasm-extension-serve-waf.json +``` + +```shell-session +$ kubectl apply wasm-extension-serve-waf.yaml +``` + + diff --git a/website/content/docs/error-messages/api-gateway.mdx b/website/content/docs/error-messages/api-gateway.mdx new file mode 100644 index 000000000000..357210a8ff35 --- /dev/null +++ b/website/content/docs/error-messages/api-gateway.mdx @@ -0,0 +1,74 @@ +--- +layout: docs +page_title: API gateway on Kubernetes error messages +description: Use error message outputs to debug and troubleshoot Consul API gateways on Kubernetes. +--- + +# API gateway on Kubernetes error messages + +This topic provides information about potential error messages associated with Consul API Gateway. If you receive an error message that does not appear in this section, refer to the following resources: + +- [Common Consul errors](/consul/docs/troubleshoot/common-errors#common-errors-on-kubernetes) +- [Consul troubleshooting guide](/consul/docs/error-messages/consul) +- [Consul Discuss forum](https://discuss.hashicorp.com/) + + + +## Helm installation failed: "no matches for kind" + +```log +Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: [unable to recognize "": no matches for kind "GatewayClass" in version "gateway.networking.k8s.io/v1alpha2", unable to recognize "": no matches for kind "GatewayClassConfig" in version "api-gateway.consul.hashicorp.com/v1alpha1"] +``` +**Conditions:** +Consul API Gateway generates this error when the required CRD files have not been installed in Kubernetes prior to installing Consul API Gateway. + +**Impact:** +The installation process typically fails after this error message is generated. + +**Resolution:** +Install the required CRDs. Refer to the [Consul API Gateway installation instructions](/consul/docs/north-south/api-gateway/k8s/enable) for instructions. + +## Operation cannot be fulfilled, the object has been modified + +``` +{"error": "Operation cannot be fulfilled on gatewayclassconfigs.consul.hashicorp.com \"consul-api-gateway\": the object has been modified; please apply your changes to the latest version and try again"} + +``` +**Conditions:** +This error occurs when the gateway controller attempts to update an object that has been modified previously. It is a normal part of running the controller and will resolve itself by automatically retrying. + +**Impact:** +Excessive error logs are produced, but there is no impact to the functionality of the controller. + +**Resolution:** +No action needs to be taken to resolve this issue. \ No newline at end of file diff --git a/website/content/docs/error-messages/consul.mdx b/website/content/docs/error-messages/consul.mdx new file mode 100644 index 000000000000..c919fc821080 --- /dev/null +++ b/website/content/docs/error-messages/consul.mdx @@ -0,0 +1,257 @@ +--- +layout: docs +page_title: Consul error messages +description: >- + Troubleshoot issues based on the error message. Common errors result from failed actions, timeouts, multiple entries, bad and expired certificates, invalid characters, syntax parsing, malformed responses, and exceeded deadlines. +--- + +# Consul error messages + +This topic provides information about potential error messages associated with Consul. 
If you receive an error message that does not appear in this section, refer to the following resources: + +- [Consul on Kubernetes error messages](/consul/docs/error-messages/k8s) +- [API Gateway error messages](/consul/docs/error-messages/api-gateway) +- [Consul-Terraform-Sync error messages](/consul/docs/error-messages/cts) +- [Consul Discuss forum](https://discuss.hashicorp.com/) + +## Configuration file errors + +The following errors are related to misconfigured files. + +### Multiple network interfaces + + + +```log +Multiple private IPv4 addresses found. Please configure one with 'bind' and/or 'advertise'. +``` + + + +Your server has multiple active network interfaces. Consul needs to know which interface to use for local LAN communications. Add the [`bind`][bind] option to your configuration. + + + +If your server does not have a static IP address, you can use a [go-sockaddr template][go-sockaddr] as the argument to the `bind` option. For example: `"bind_addr": "{{GetInterfaceIP \"eth0\"}}"`. + + + +### Configuration syntax errors + +If Consul returns the following errors, there is a syntax error in your configuration file. + +```shell-session +$ consul agent -server -config-file server.json +==> Error parsing server.json: invalid character '`' looking for beginning of value + +## Other examples +==> Error parsing config.hcl: At 1:12: illegal char +==> Error parsing config.hcl: At 1:32: key 'foo' expected start of object ('{') or assignment ('=') +==> Error parsing server.json: invalid character '`' looking for beginning of value +``` + +If the error message doesn't identify the exact location in the file where the problem is, try using [jq] to find it. For example: + +```shell-session +$ jq . server.json +parse error: Invalid numeric literal at line 3, column 29 +``` + +## Invalid host name + +If Consul returns the following errors, you have specified an invalid host name. + + + +```log +Node name "consul_client.internal" will not be discoverable via DNS due to invalid characters. +``` + + + +Add the [`node name`][node_name] option to your agent configuration and provide a valid DNS name. + +## I/O timeouts + +If Consul returns the following errors, Consul is experience I/O timeouts. + + + +```log +Failed to join 10.0.0.99: dial tcp 10.0.0.99:8301: i/o timeout +``` + + + + + +```log +Failed to sync remote state: No cluster leader +``` + + + +If the Consul client and server are on the same LAN, then most likely, a firewall is blocking connections to the Consul server. + +If they are not on the same LAN, check the [`retry_join`][retry_join] settings in the Consul client configuration. The client should be configured to join a cluster inside its local network. + +## Deadline exceeded + +If Consul returns the following errors, there may be a general performance problem on the Consul server. + + + +```log +Error getting server health from "XXX": context deadline exceeded +``` + + + +Make sure you are monitoring Consul telemetry and system metrics according to our [monitoring guide][monitoring]. Increase the CPU or memory allocation to the server if needed. Check the performance of the network between Consul nodes. + +## Too many open files + +If Consul returns the following errors on a busy cluster, the operating system may not provide enough file descriptors to the Consul process. 
+ + + +```log +Error accepting TCP connection: accept tcp [::]:8301: too many open files in system +``` + + + + + +```log +Get http://localhost:8500/: dial tcp 127.0.0.1:31643: socket: too many open files +``` + + + +You need to increase the limit for the Consul user and maybe the system-wide limit. Refer to [this guide][files] for instructions to do so on Linux. Alternatively, if you are starting Consul from `systemd`, you could add `LimitNOFILE=65536` to the unit file for Consul. Refer to the [sample systemd file][systemd]. + +## Snapshot close error + +Our RPC protocol requires support for a TCP half-close in order to signal the other side that they are done reading the stream, since we don't know the size in advance. This saves us from having to buffer just to calculate the size. + +If a host does not properly implement half-close, you may see an error message `[ERR] consul: Failed to close snapshot: write tcp ->: write: broken pipe` when saving snapshots. This should not affect saving and restoring snapshots. + +This has been a [known issue](https://github.com/docker/libnetwork/issues/1204) in Docker, but may manifest in other environments as well. + +## ACL not found + +If Consul returns the following error, this indicates that you have ACL enabled in your cluster but you aren't passing a valid token. + + + +```log +RPC error making call: rpc error making call: ACL not found +``` + + + +Make sure that when creating your tokens that they have the correct permissions set. In addition, you would want to make sure that an agent token is provided on each call. + +## TLS and certificates + +The follow errors are related to TLS and certificate issues. + +### Incorrect certificate or certificate name + +If Consul returns the following error, if you have provided Consul with invalid certificates. + + + +```log +Remote error: tls: bad certificate +``` + + + + + +```log +X509: certificate signed by unknown authority +``` + + + +Make sure that your Consul clients and servers are using the correct certificates, and that they've been signed by the same CA. Refer to the [Enable TLS encryption with built-in certificate authority][certificates] for step-by-step instructions. + +If you generate your own certificates, make sure the server certificates include the special name `server.dc1.consul` in the Subject Alternative Name (SAN) field. (If you change the values of `datacenter` or `domain` in your configuration, update the SAN accordingly.) + +### HTTP instead of HTTPS + +If Consul returns the following error, you are attempting to connect to a Consul agent with HTTP on a port that has been configured for HTTPS. + + + +```log +Error querying agent: malformed HTTP response +``` + + + + +```log +Net/http: HTTP/1.x transport connection broken: malformed HTTP response "\x15\x03\x01\x00\x02\x02" +``` + + + +If you are using the Consul CLI, make sure you are specifying `https` in the `-http-addr` flag or the `CONSUL_HTTP_ADDR` environment variable. + +If you are interacting with the API, change the URI scheme to `https`. + +## License warnings + +If Consul returns the following error, you have installed Consul Enterprise and your license is about to expire. + + + +```log +License: expiration time: YYYY-MM-DD HH:MM:SS -0500 EST, time left: 29m0s +``` + + + +If you are an Enterprise customer, [provide a license key][license] to Consul before it shuts down. Otherwise, install the Consul Community edition binary instead. + +Enterprise binaries are followed by a `+ent` suffix on the [download site][releases]. 
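+
+Depending on your Consul Enterprise version, you can supply the license through the `license_path` agent configuration option or the `CONSUL_LICENSE`/`CONSUL_LICENSE_PATH` environment variables. The following is a minimal sketch; the file path is a placeholder for your own license file:
+
+```hcl
+# Agent configuration (Consul Enterprise) pointing at a license file on disk.
+license_path = "/etc/consul.d/consul.hclic"
+```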
+ +## Rate limit reached on the server + +You may receive a `RESOURCE_EXHAUSTED` error from the Consul server if the maximum number of read or write requests per second have been reached. Refer to [Set a global limit on traffic rates](/consul/docs/manage/rate-limit/global) for additional information. You can retry another server unless the number of retries is exhausted. If the number of retries is exhausted, you should implement an exponential backoff. + +The `RESOURCE_EXHAUSTED` RPC response is translated into a `429 Too Many Requests` error code on the HTTP interface. + +The server may respond as `UNAVAILABLE` if it is the leader node and the global write request rate limit is reached. The solution is to apply an exponential backoff until the leader has capacity to serve those requests. + +The `UNAVAILABLE` RPC response is translated into a `503 Service Unavailable` error code on the RPC requests sent through HTTP interface. + +## Runtime config + + Consul outputs the warning `Static Runtime config has changed and need a manual config reload to be applied` when any of the following configurations are changed while [-auto-reload-config](/consul/commands/agent#_auto_reload_config) is enabled. + + - [encrypt_verify_incoming](/consul/docs/reference/agent/configuration-file/encryption#encrypt_verify_incoming) + - [verify_incoming](/consul/docs/reference/agent/configuration-file/tls#verify_incoming) + - [verify_incoming_rpc](/consul/docs/reference/agent/configuration-file/tls#verify_incoming_rpc) + - [verify_incoming_https](/consul/docs/reference/agent/configuration-file/tls#verify_incoming_https) + - [verify_outgoing](/consul/docs/reference/agent/configuration-file/tls#verify_outgoing) + - [verify_server_hostname](/consul/docs/reference/agent/configuration-file/tls#verify_server_hostname) + - [ca_file](/consul/docs/reference/agent/configuration-file/tls#ca_file) + - [ca_path](/consul/docs/reference/agent/configuration-file/tls#ca_path) + +To resolve this error, you must manually issue the `consul reload` command or send a `SIGHUP` to the Consul process to reload the new values. + +[node_name]: /consul/docs/reference/agent/configuration-file/node#node_name +[retry_join]: /consul/commands/agent#retry-join +[license]: /consul/commands/license +[releases]: https://releases.hashicorp.com/consul/ +[files]: https://easyengine.io/tutorials/linux/increase-open-files-limit +[certificates]: /consul/docs/secure/encryption/tls/enable/new/builtin +[systemd]: /consul/tutorials/production-deploy/deployment-guide#configure-systemd +[monitoring]: /consul/tutorials/day-2-operations/monitor-datacenter-health +[bind]: /consul/commands/agent#_bind +[jq]: https://stedolan.github.io/jq/ diff --git a/website/content/docs/error-messages/cts.mdx b/website/content/docs/error-messages/cts.mdx new file mode 100644 index 000000000000..b61202c0e188 --- /dev/null +++ b/website/content/docs/error-messages/cts.mdx @@ -0,0 +1,157 @@ +--- +layout: docs +page_title: Consul-Terraform-Sync error messages +description: >- + Look up Consul-Terraform-Sync error message to learn how to resolve potential issues using CTS. +--- + +# Consul-Terraform-Sync error messages + +This topic provides information about potential error messages associated with Consul-Terraform-Sync (CTS). 
If you receive an error message that does not appear in this section, refer to the following resources: + +- [Consul error messages](/consul/docs/error-messages/consul) +- [Consul on Kubernetes error messages](/consul/docs/error-messages/k8s) +- [API Gateway error messages](/consul/docs/error-messages/api-gateway) +- [Consul Discuss forum](https://discuss.hashicorp.com/) + +## Missing local module + +If you configured the CTS cluster to run in [high availability mode](/consul/docs/automate/infrastructure/high-availability) and the local module is missing, then the following message appears in the log: + +```log hideClipboard +[ERROR] ha.compat: error="compatibility check failure: stat ./example-module: no such file or directory" +``` + +The resolution is to add the missing local module on the incompatible CTS instance. Refer to the [`module` documentation](/consul/docs/automate/infrastructure/configure#module) in the CTS configuration reference for additional information. + +## Redirect requests to leader __ at address + +```json hideClipboard +{ + "error": { + "message": "redirect requests to leader 'cts-01' at cts-01.example.com:8558" + } +} +``` + +**Conditions**: + +- CTS can determine the leader. +- `high_availability.instance.address` is configured for the leader. +- The CTS instance you sent the request to is not the leader. + +**Resolution**: + +Redirect the request to the leader instance, for example: + +```shell-session +$ curl --request GET cts-01.example.com:8558/v1/tasks +``` + + +## Redirect requests to leader __ + +```json +{ + "error": { + "message": "redirect requests to leader 'cts-01'" + } +} +``` + +**Conditions**: + +- CTS can determine the leader. +- The CTS instance you sent the request to is not the leader. +- `high_availability.instance.address` is not configured. + +**Resolution**: + +Identify the leader instance address and redirect the request to the leader. You can identify the leader by calling the [`status/cluster` API endpoint](/consul/docs/reference/cts/api/status#cluster-status) or by checking the logs for the following entry: + + + +```log +[INFO] ha: acquired leadership lock: id=. +``` + + + +We recommend deploying a cluster that has three instances. + +## Redirect requests to leader + + + +```json +{ + "error": { + "message": "redirect requests to leader" + } +} +``` + + + +**Conditions**: + +- The CTS instance you sent the request to is not the leader. +- The CTS is unable to determine the leader. +- Note that these conditions are rare. + +**Resolution**: + +Identify and send the request to the leader CTS instance. You can identify the leader by calling the [`status/cluster` API endpoint](/consul/docs/reference/cts/api/status#cluster-status) or by checking the logs for the following entry: + + + +```log +[INFO] ha: acquired leadership lock: id= +``` + + + +## This endpoint is only available with high availability configured + + + +```json +{ + "error": { + "message": "this endpoint is only available with high availability configured" + } +} +``` + + + +**Conditions**: + +- You called the [`status/cluster` API endpoint](/consul/docs/reference/cts/api/status#cluster-status) without configuring CTS for [high availability](/consul/docs/automate/infrastructure/high-availability). + +**Resolution**: + +Configure CTS to run in [high availability mode](/consul/docs/automate/infrastructure/high-availability). 
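+
+Once high availability is configured, you can verify it by querying the cluster status endpoint and confirming that it returns leader information. The instance address and port below are placeholders, and the path assumes the `status/cluster` endpoint referenced above is served under the CTS API's `/v1` prefix.
+
+```shell-session
+$ curl --request GET cts-01.example.com:8558/v1/status/cluster
+```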
+ +## Unsupported status parameter value + + + +```json +{ + "error": { + "message": "example error message: unsupported status parameter value" + } +} +``` + + + +**Conditions**: + +- You sent a request to the `status` API endpoint. +- The request included an unsupported parameter value. + +**Resolution**: + +Send a new request and verify that all of the parameter values are correct. \ No newline at end of file diff --git a/website/content/docs/error-messages/k8s.mdx b/website/content/docs/error-messages/k8s.mdx new file mode 100644 index 000000000000..cbe34ff14d51 --- /dev/null +++ b/website/content/docs/error-messages/k8s.mdx @@ -0,0 +1,114 @@ +--- +layout: docs +page_title: Consul on Kubernetes error messages +description: >- + Troubleshoot issues based on the error message. Common errors result from failed actions, timeouts, multiple entries, bad and expired certificates, invalid characters, syntax parsing, malformed responses, and exceeded deadlines. +--- + +# Consul on Kubernetes error messages + +This topic provides information about potential error messages associated with Consul on Kubernetes. If you receive an error message that does not appear in this section, refer to the following resources: + +- [Consul error messages](/consul/docs/error-messages/consul) +- [API Gateway error messages](/consul/docs/error-messages/api-gateway) +- [Consul-Terraform-Sync error messages](/consul/docs/error-messages/cts) +- [Consul Discuss forum](https://discuss.hashicorp.com/) + +## Unable to connect to the Consul client on the same host + +If the pods are unable to connect to a Consul client running on the same host, first check if the Consul clients are up and running with `kubectl get pods`. + +```shell-session +$ kubectl get pods --selector="component=client" +NAME READY STATUS RESTARTS AGE +consul-kzws6 1/1 Running 0 58s +``` + +If you are still unable to connect and see `i/o timeout` or `connection refused` errors when connecting to the Consul client on the Kubernetes worker, this could be because the container networking interface (CNI) does not [support](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#support-hostport) the use of `hostPort`. + +The IP `10.0.0.10` in the following example error messages refers to the IP of the host where the Consul client pods are running. + + + +```log +Put http://10.0.0.10:8500/v1/catalog/register: dial tcp 10.0.0.10:8500: connect: connection refused +Put http://10.0.0.10:8500/v1/agent/service/register: dial tcp 10.0.0.10:8500: connect: connection refused +Get http://10.0.0.10:8500/v1/status/leader: dial tcp 10.0.0.10:8500: i/o timeout +``` + + + +To work around this issue, enable [`hostNetwork`](/consul/docs/reference/k8s/helm#v-client-hostnetwork) in your Helm values. Using the host network will enable the pod to use the host's network namespace without the need for CNI to support port mappings between containers and the host. + +```yaml +client: + hostNetwork: true + dnsPolicy: ClusterFirstWithHostNet +``` + + + +Using host network has security implications because it gives the Consul client unnecessary access to all network traffic on the host. We recommend raising an issue with the CNI you're using to add support for `hostPort` and switching back to `hostPort` eventually. + + + +## ACL auth method login failed + +If you see the following error in the init container logs of service mesh pods, check that the pod has a service account name that matches its Kubernetes Service. 
+ + + +```log +consul-server-connection-manager: ACL auth method login failed: error="rpc error: code = PermissionDenied desc = Permission denied" +``` + + + +For example, this deployment will fail because the `serviceAccountName` is `does-not-match` instead of `static-server`. + + + +```yaml +apiVersion: v1 +kind: Service +metadata: + # This name will be the service name in Consul. + name: static-server +spec: + selector: + app: static-server + ports: + - protocol: TCP + port: 80 + targetPort: 8080 +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: static-server +spec: + replicas: 1 + selector: + matchLabels: + app: static-server + template: + metadata: + name: static-server + labels: + app: static-server + annotations: + 'consul.hashicorp.com/connect-inject': 'true' + spec: + containers: + - name: static-server + image: hashicorp/http-echo:latest + args: + - -text="hello world" + - -listen=:8080 + ports: + - containerPort: 8080 + name: http + serviceAccountName: does-not-match +``` + + \ No newline at end of file diff --git a/website/content/docs/fundamentals/agent.mdx b/website/content/docs/fundamentals/agent.mdx new file mode 100644 index 000000000000..adafb80b5673 --- /dev/null +++ b/website/content/docs/fundamentals/agent.mdx @@ -0,0 +1,215 @@ +--- +layout: docs +page_title: Configure a Consul agent +description: >- + Agent configuration is the process of defining server and client agent properties with CLI flags and configuration files. Learn what properties can be configured on reload and how Consul sets precedence for configuration settings. +--- + +# Configure a Consul agent + +The Consul agent is a long running daemon that operates on a node. It is a core unit of Consul operations. Agents are highly configurable, which enables you to deploy Consul to any infrastructure. + +Agents can run in either _server mode_ or _client mode_. Server nodes make up the control plane's server cluster, and they are responsible for maintaining the cluster's state. Client nodes are lightweight processes that make up the majority of the cluster. They interface with the server nodes for most operations and maintain little state of their own. Clients run on every node where services are running. + +In addition to the core agent operations, server nodes participate in the [consensus quorum](/consul/docs/concept/consensus). The quorum is based on the Raft protocol, which provides strong consistency and availability in the case of failure. Server nodes should run on dedicated instances because they are more resource intensive than client nodes. + +## Agent lifecycle + +The following process describes the agent lifecycle within the context of an existing cluster: + +1. **Start the agent** either manually or through an automated or programmatic process. + Newly-started agents are unaware of other nodes in the cluster. +1. **An agent joins a cluster**, which enables the agent to discover agent peers. + Agents join clusters on startup when the [`join`](/consul/commands/join) command is issued or according the [`auto-join` configuration](/consul/docs/deploy/server/cloud-auto-join). +1. **Agents gossip to the entire cluster**. As a result, all nodes will eventually become aware of each other. +1. **Existing servers begin replicating to the new node** if the agent is a server. + +### Node failures and crashes + +In the event of a network failure, some nodes may be unable to reach other nodes. Unreachable nodes will be marked as _failed_. 
+
+Distinguishing between a network failure and an agent crash is impossible. As a result, agent crashes are handled in the same manner as network failures.
+
+Once a node is marked as failed, this information is updated in the service catalog.
+
+### Node exits
+
+When a node gracefully exits a cluster, it sends a departure notification, and the cluster marks its status as `left`. In this case, the cluster immediately deregisters all services associated with that node. This behavior differs from node failures, where services remain registered to allow for temporary outages.
+
+When a server agent leaves, replication to the exiting server stops.
+
+To prevent an accumulation of "dead" nodes in either `failed` or `left` states, Consul automatically removes dead nodes from the catalog. This process is called _reaping_.
+
+Reaping occurs on a configurable interval that defaults to 72 hours. We do not recommend changing the reap interval due to its consequences during outage situations. For failed nodes, reaping deregisters all of the node's services.
+
+## Agent requirements
+
+You should run one Consul agent per server or host. Instances of Consul can run in separate VMs or as separate containers. At least one server agent per Consul deployment is required, but we recommend three to five server agents.
+
+### Infrastructure requirements
+
+Refer to the following sections for information about host, port, memory, and other infrastructure requirements:
+
+- [Server Performance](/consul/docs/reference/architecture/server)
+- [Required Ports](/consul/docs/reference/architecture/ports)
+
+The [Datacenter Deploy tutorial](/consul/tutorials/production-deploy/reference-architecture#deployment-system-requirements) contains additional information for production environments, including licensing configuration, environment variables, and other details.
+
+### Maximum latency network requirements
+
+Consul uses the gossip protocol to share information across agents. To function properly, you cannot exceed the protocol's _maximum latency threshold_. The latency threshold is calculated according to the total round trip time (RTT) for communication between all agents. Other network usage outside of gossip is not bound by these latency requirements. These uses include client to server RPCs, HTTP API requests, xDS proxy configuration, and DNS requests.
+
+Your network must meet the following latency requirements for data sent between all Consul agents:
+
+- Average RTT for all traffic cannot exceed 50ms.
+- RTT for 99 percent of traffic cannot exceed 100ms.
+
+## Start the Consul agent
+
+Start a Consul agent with the `consul` command and `agent` subcommand using the following syntax:
+
+```shell-session
+$ consul agent
+```
+
+The minimum information Consul needs to run is the location of a directory for storing agent state data. You can specify the location with the `-data-dir` flag or define the location in an external file and point to the file with the `-config-file` flag.
+
+You can also point to a directory containing several configuration files with the `-config-dir` flag. The configuration directory lets you logically group configuration settings into separate files.
+
+The following example starts an agent in developer mode and stores agent state data in the `tmp/consul` directory:
+
+```shell-session
+$ consul agent -data-dir=tmp/consul -dev
+```
+
+Do not use developer mode in production environments.
+
+### Agent startup output
+
+Consul prints several important messages on startup.
The following example shows output from the [`consul agent`](/consul/commands/agent) command: + +```shell-session +$ consul agent -data-dir=/tmp/consul +==> Starting Consul agent... +==> Consul agent running! + Node name: 'Armons-MacBook-Air' + Datacenter: 'dc1' + Server: false (bootstrap: false) + Client Addr: 127.0.0.1 (HTTP: 8500, DNS: 8600) + Cluster Addr: 192.168.1.43 (LAN: 8301, WAN: 8302) + +==> Log data will now stream in as it occurs: + + [INFO] serf: EventMemberJoin: Armons-MacBook-Air.local 192.168.1.43 +... +``` + +The agent output includes the following information + +- **Node name:** A unique name for the agent. By default, this field is the hostname of the machine, but you can customize it using the [`-node`](/consul/commands/agent#_node) flag. + +- **Datacenter:** The datacenter where the agent is configured to run. For single-DC configurations, the agent defaults to `dc1`, but you can configure which datacenter the agent reports to with the [`-datacenter` flag](/consul/commands/agent#_datacenter). Consul supports networks with multiple datacenters, so configuring each node to report its datacenter improves agent efficiency. + +- **Server:** Indicates whether the agent is running in server mode or client mode. Running an agent in server mode requires additional resource overhead because they participate in the consensus quorum, store cluster state, and handle queries. A server may also be in ["bootstrap" mode](/consul/commands/agent#_bootstrap_expect), which enables the server to elect itself as the Raft leader. Multiple servers cannot be in bootstrap mode because it would put the cluster in an inconsistent state. + +- **Client address:** The address used for clients to interface with the agent, including the ports for the HTTP and DNS interfaces. By default, this address binds only to `localhost`. If you change this address or port, specify a `-http-addr` whenever you run commands such as [`consul members`](/consul/commands/members) to indicate how to reach the agent. Other applications can also use the HTTP address and port [to control Consul with the HTTP API](/consul/api-docs). + +- **Cluster address:** The address and set of ports used for communication between Consul agents in a cluster. Not all Consul agents in a cluster have to use the same port, but this address **MUST** be reachable by all other nodes. + +When running under `systemd` on Linux, Consul sends `READY=1` to the `$NOTIFY_SOCKET` when a LAN join completes. You must either set the `join` or `retry_join` option and the service definition file has to have `Type=notify` set for this notification to send. + +## Stop a Consul agent + +You can stop an agent in two ways: gracefully or forcefully. Server and client agents behave differently depending on the leave that is performed. There are two potential states a process can be in after a system signal is sent: _left_ and _failed_. + +To gracefully halt an agent, send the process an _interrupt signal_ such as `Ctrl-C` from a terminal, or running `kill -INT consul_pid`. + +When a server exits gracefully, the server is marked as _failed_ to minimally impact the consensus quorum. To remove a server from the cluster, the [`force-leave`](/consul/commands/force-leave) command is used. Using `force-leave` puts the server instance in a _left_ state as long as the server agent is not alive. + +When a client exits gracefully, the agent first notifies the cluster it intends to leave the cluster. This way, other cluster members notify the cluster that the node has _left_. 
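+
+For example, the following command asks the local agent to gracefully leave the cluster and shut down, which has the same effect as sending the interrupt signal described above.
+
+```shell-session
+$ consul leave
+```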
+ +Alternatively, you can forcibly stop an agent by sending it a `kill -KILL consul_pid` signal. This command stops any agent immediately. The rest of the cluster will detect that the node has died, usually within a few seconds. Then, the servers update the catalog to indicate that the node has _failed_. + +## Configure Consul agents + +You can configure Consul agents with the Consul CLI's `consul agent` command or define them in agent configuration files. The following example starts a Consul agent that takes configuration settings from a file called `server.hcl` located in the current working directory: + +```shell-session hideClipboard +$ consul agent -config-file=server.hcl +``` + +Configuration precedence is evaluated in the following order: + +1. [Command line arguments](/consul/commands/agent) +1. [Configuration files](/consul/docs/reference/agent/configuration-file) + +The Consul agent loads the configuration from files and directories in lexical order. For example, configuration file `basic_config.json` is processed before `extra_config.json`. You can define the configuration in either [HCL](https://github.com/hashicorp/hcl#syntax) or JSON format. + +Configurations specified later are merged into configuration specified earlier. In most cases, "merge" means that the later version overrides the earlier one. In some cases, such as event handlers, merging appends the handlers to the existing configuration. The exact merging behavior is specified for each option. + +The Consul agent also supports reloading configuration when it receives the `SIGHUP` signal. However, not all changes are respected. You can use the [`consul reload` command](/consul/commands/reload) to trigger a configuration reload. + +### Common configuration settings + +The following settings are commonly used in the agent configuration file to configure Consul agents: + +| Parameter | Description | Default | +| ------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------- | +| `node_name` | String value that specifies a name for the agent node.
    [More information](/consul/commands/agent#_node_id). | Hostname of the machine | +| `server` | Boolean value that determines if the agent runs in server mode.
    [More information](/consul/commands/agent#_server). | `false` | +| `datacenter` | String value that specifies which datacenter the agent runs in.
    [More information](/consul/commands/agent#_datacenter). | `dc1` | +| `data_dir` | String value that specifies a directory for storing agent state data.
    [More information](/consul/commands/agent#_data_dir). | none (required) |
| `log_level` | String value that specifies the verbosity of the agent's log output.
    [More information](/consul/commands/agent#_log_level). | `info` |
| `retry_join` | Array of string values that specify one or more agent addresses to join after startup. The agent will continue trying to join the specified agents until it has successfully joined another member.
    [More information](/consul/commands/agent#_retry_join) | none | +| `addresses` | Block of nested objects that define addresses bound to the agent for internal cluster communication. | `"http": "0.0.0.0"` Refer to the agent configuration [default address values](/consul/docs/reference/agent/configuration-file/address) | +| `ports` | Block of nested objects that define ports bound to agent addresses.
    [More information](/consul/commands/agent#advertise-address-options). | Refer to the agent configuration [default port values](/consul/docs/reference/agent/configuration-file/general#ports). + +## The `consul.d` configuration directory + +Agent configurations can become difficult to maintain when they are contained in a single file. We recommend separating different blocks of the agent configuration into several files located in the `/etc/consul.d` directory. When you add a new configuration or update an existing one, Consul automatically appends the agent's configuration after the agent reboots. If the update is to a supported reloadable configuration, such as ACL tokens, Consul can register the change and propagate it to the rest of the cluster without restarting. + +To start using the configuration directory, create a blank file named `consul.hcl` in the `/etc/consul.d` directory. You can add configurations that would exist on every node, such as `datacenter` and `data_dir`, or you can leave the file blank and follow your preferred configuration structure. + +## Reloadable configurations + +Some agent configuration options are reloadable at runtime. + +You can run the [`consul reload` command](/consul/commands/reload) to manually reload supported options from configuration files in the configuration directory. To configure the agent to automatically reload configuration files updated on disk, set the [`auto_reload_config` configuration option](/consul/docs/reference/agent/configuration-file/general#auto_reload_config) parameter to `true`. + +The following agent configuration options are reloadable at runtime: + +- ACL tokens +- Bootstrapped configuration entries +- Health check definitions +- [Discard Check Output](/consul/docs/reference/agent/configuration-file/general#discard_check_output) +- HTTP client address +- Log level +- [Metric Prefix Filter](/consul/docs/reference/agent/configuration-file/telemetry#telemetry-prefix_filter) +- [Node Metadata](/consul/docs/reference/agent/configuration-file/node#node_meta) +- Some Raft options: + - [`raft_snapshot_threshold`](/consul/docs/reference/agent/configuration-file/raft#raft_snapshot_threshold) + - [`raft_snapshot_interval`](/consul/docs/reference/agent/configuration-file/raft#raft_snapshot_interval) + - [`raft_trailing_logs`](/consul/docs/reference/agent/configuration-file/raft#raft_trailing_logs) +- [RPC rate limits](/consul/docs/reference/agent/configuration-file/general#limits) +- [Reporting](/consul/docs/reference/agent/configuration-file/general#reporting) +- [HTTP maximum connections per client](/consul/docs/reference/agent/configuration-file/general#http_max_conns_per_client) +- Service definitions +- TLS configuration. Be aware that this is currently limited to reload a configuration that is already TLS enabled. You cannot enable or disable TLS only with reloading. 
To avoid a potential security issue, the following TLS configuration parameters do not automatically reload when [-auto-reload-config](/consul/commands/agent#_auto_reload_config) is enabled:
+  - [encrypt_verify_incoming](/consul/docs/reference/agent/configuration-file/tls#encrypt_verify_incoming)
+  - [verify_incoming](/consul/docs/reference/agent/configuration-file/tls#verify_incoming)
+  - [verify_incoming_rpc](/consul/docs/reference/agent/configuration-file/tls#verify_incoming_rpc)
+  - [verify_incoming_https](/consul/docs/reference/agent/configuration-file/tls#verify_incoming_https)
+  - [verify_outgoing](/consul/docs/reference/agent/configuration-file/tls#verify_outgoing)
+  - [verify_server_hostname](/consul/docs/reference/agent/configuration-file/tls#verify_server_hostname)
+  - [ca_file](/consul/docs/reference/agent/configuration-file/tls#ca_file)
+  - [ca_path](/consul/docs/reference/agent/configuration-file/tls#ca_path)
+- [License](/consul/docs/enterprise/license)
+
+## Next steps
+
+Next, learn about the [fundamentals of services in
+Consul](/consul/docs/fundamentals/service).
diff --git a/website/content/docs/fundamentals/config-entry.mdx b/website/content/docs/fundamentals/config-entry.mdx new file mode 100644 index 000000000000..c7464b187388 --- /dev/null +++ b/website/content/docs/fundamentals/config-entry.mdx @@ -0,0 +1,131 @@
+---
+layout: docs
+page_title: Configuration entries
+description: >-
+  Configuration entries define the behavior of Consul service mesh components. Learn how to use the `consul config` command to create, manage, and delete configuration entries.
+---
+
+# Configuration entries
+
+Configuration entries explicitly define many of Consul's security, traffic, and cluster management behaviors. You can specify configuration entries in either HCL or JSON. On Kubernetes, specify custom resource definitions in YAML.
+
+## Configuration entry basics
+
+Every configuration entry has at least two fields: `Kind` and `Name`. Those two fields identify a configuration entry for Consul. Configuration entries use either `snake_case` or `CamelCase` for key names.
+
+```hcl
+Kind = ""
+Name = ""
+```
+
+On Kubernetes, `Kind` is set as the custom resource `kind` and `Name` is set
+as `metadata.name`:
+
+```yaml
+apiVersion: consul.hashicorp.com/v1alpha1
+kind:
+metadata:
+  name:
+```
+
+You should use the [Consul CLI](/consul/commands/config) or [Consul HTTP API](/consul/api-docs/config) to manage configuration entries outside of Kubernetes.
+
+As a convenience for initial cluster bootstrapping, you can also specify configuration entries in the Consul [server agent configuration files](/consul/docs/reference/agent/configuration-file/general#config_entries_bootstrap). When a server gains leadership, it will attempt to initialize the configuration entries. If a configuration entry does not already exist outside of the server's configuration, the server will create it. If a configuration entry that matches both `kind` and `name` already exists, the server does nothing.
+
+## Cluster management configuration entries
+
+The following configuration entries define configuration defaults, service-to-service security, service identity, and traffic management for a Consul datacenter:
+
+- [Control plane request limit](/consul/docs/reference/config-entry/control-plane-request-limit) defines read and write rates for the catalog, ACLs, and key/value store on Consul Enterprise clusters.
+- [Exported services](/consul/docs/reference/config-entry/exported-services) enables service access for local admin partitions and remote peers.
+- [Ingress gateway](/consul/docs/reference/config-entry/ingress-gateway) defines the configuration for an ingress gateway. To secure access into the network, we recommend the Consul API gateway instead.
+- [JWT provider](/consul/docs/reference/config-entry/jwt-provider) configures Consul to use a JSON Web Token (JWT) and JSON Web Key Set (JWKS) in order to add JWT validation to proxies in the service mesh.
+- [Mesh](/consul/docs/reference/config-entry/mesh) controls mesh-wide proxy configuration that applies across namespaces and federated datacenters.
+- [Proxy defaults](/consul/docs/reference/config-entry/proxy-defaults) controls default configurations for sidecar proxies.
+- [Sameness group](/consul/docs/reference/config-entry/sameness-group) defines partitions and cluster peers with identical services for failover and traffic management.
+- [Service defaults](/consul/docs/reference/config-entry/service-defaults) configures defaults for all the instances of a given service.
+- [Service intentions](/consul/docs/reference/config-entry/service-intentions) defines how sidecar proxies respond to incoming traffic from services in the service mesh.
+- [Service resolver](/consul/docs/reference/config-entry/service-resolver) defines service subsets, redirects, failover, and load balancing behavior in the service mesh.
+- [Service router](/consul/docs/reference/config-entry/service-router) defines destinations for application traffic based on the HTTP route.
+- [Service splitter](/consul/docs/reference/config-entry/service-splitter) defines how to divide requests for a single HTTP route to subsets of services.
+- [Terminating gateway](/consul/docs/reference/config-entry/terminating-gateway) defines security settings for services to send traffic from inside the service mesh to external services.
+
+## API gateway configuration entries
+
+To secure access to the services in your network for all incoming traffic, we recommend you deploy the Consul API gateway. The following configuration entries define the API gateway and how it handles incoming traffic:
+
+- [API gateway](/consul/docs/reference/config-entry/api-gateway) defines the API gateway, including the port it listens on.
+- [File system certificate](/consul/docs/reference/config-entry/file-system-certificate) defines the paths to a certificate and private key on the local system for API gateway configuration.
+- [HTTP route](/consul/docs/reference/config-entry/http-route) defines rules, match criteria, and response behavior that the API gateway enforces on incoming HTTP traffic.
+- [Inline certificate](/consul/docs/reference/config-entry/inline-certificate) includes values for a certificate and private key on the local system for API gateway configuration. In production environments, use the file system certificate instead.
+- [TCP route](/consul/docs/reference/config-entry/tcp-route) binds services to listeners on the API gateway.
+
+## Kubernetes custom resource definitions (CRDs)
+
+Instead of configuration entries, Consul on Kubernetes uses custom resources modeled after the configuration entries. You can apply these resources to the cluster using either the `kubectl apply` command or the `consul-k8s` CLI.
+
+Cluster management configuration entries all have a Consul CRD counterpart with a shared name.
Their differences are limited to formatting differences between HCL and YAML, so specifications and examples for these configuration entries and CRDs are documented as a single resource. + +The `Registration` CRD is unique to Kubernetes. You can use it to register a service running on a external node and configure health checks for Consul to run. + +Because the Consul API gateway is deployed and managed differently in a Kubernetes cluster, the API gateway configuration entries are not available as CRDs. Instead the following CRDs exist to define Consul API Gateway behavior on Kubernetes: + +- [`Gateway` CRD](/consul/docs/reference/k8s/api-gateway/gateway) +- [`GatewayClass` CRD](/consul/docs/reference/k8s/api-gateway/gatewayclass) +- [`GatewayClassConfig` CRD](/consul/docs/reference/k8s/api-gateway/gatewayclassconfig) +- [`GatewayPolicy` CRD](/consul/docs/reference/k8s/api-gateway/gatewaypolicy) +- [`MeshService` CRD](/consul/docs/reference/k8s/api-gateway/meshservice) +- [`Routes` CRD](/consul/docs/reference/k8s/api-gateway/routes) +- [`RouteAuthFilter` CRD](/consul/docs/reference/k8s/api-gateway/routeauthfilter) +- [`RouteRetryFilter` CRD](/consul/docs/reference/k8s/api-gateway/routeretryfilter) +- [`RouteTimeoutFilter` CRD](/consul/docs/reference/k8s/api-gateway/routetimeoutfilter) + +For more information about these CRDs and how to use them to configure the API Gateway, refer to the [Consul API gateway overview](/consul/docs/north-south/api-gateway). + +## Create or update a configuration entry + +You can use the [`consul config write`](/consul/commands/config/write) command to create and update configuration entries. This command will load the configuration entry (JSON or HCL) definition and then apply it to Consul. + +The following example shows an HCL configuration file to configure the proxy defaults for the global namespace: + + + +```hcl +Kind = "proxy-defaults" +Name = "global" +Config { + local_connect_timeout_ms = 1000 + handshake_timeout_ms = 10000 +} +``` + + + +Apply this configuration. + +```shell-session +$ consul config write global-proxy-defaults.hcl +``` + +If you need to make changes to a configuration entry, edit the file and then rerun the command. This command will not output anything unless there is an error. The `write` command also supports a `-cas` option to enable performing a compare-and-swap operation to prevent overwriting other unknown modifications. + +If you use the HTTP API, include the entry as a payload and send a `PUT` request to the Consul servers. + +```shell-session +$ curl \ + --request PUT \ + --data @payload \ + http://127.0.0.1:8500/v1/config +``` + +## Next steps + +Now that you learned the basics of Consul configurations, find out how the [Consul Terraform provider](/consul/docs/fundamentals/tf) can automate infrastructure deployment and agent configuration. diff --git a/website/content/docs/fundamentals/editions.mdx b/website/content/docs/fundamentals/editions.mdx new file mode 100644 index 000000000000..a738e32af26b --- /dev/null +++ b/website/content/docs/fundamentals/editions.mdx @@ -0,0 +1,71 @@ +--- +layout: docs +page_title: Consul editions and tool binaries +description: >- + Learn about the various Consul releases and their differences. +--- + +# Consul editions and tool binaries + +This page describes the editions, binaries, and packages of Consul that are currently available. For a complete and up-to-date list of all HashiCorp releases, refer to [releases.hashicorp.com](https://releases.hashicorp.com/). 
+ +## Introduction + +There are two main releases of Consul: + +1. **Consul Community Edition (CE)** +1. **Consul Enterprise** + +To run Consul Enterprise, you must configure Consul agents with an Enterprise license. The following table summarizes the major differences between Consul CE and Consul Enterprise: + +| Supported features | Consul CE | Consul Enterprise | +| :---------------------------------- | :-------: | :---------------: | +| Service discovery support | ✅ | ✅ | +| Service mesh support | ✅ | ✅ | +| KV store support | ✅ | ✅ | +| Multi-tenancy support | ❌ | ✅ | +| Additional resiliency & scalability | ❌ | ✅ | +| Complex topology support | ❌ | ✅ | +| OIDC and JWT authentication support | ❌ | ✅ | +| FIPS 140-2 Compliance | ❌ | ✅ | +| Consul-Terraform-Sync | ❌ | ✅ | + +To learn more about Consul Enterprise and its features, refer to [Consul Enterprise](/consul/docs/enterprise). + +## HCP Consul Dedicated + +HCP Consul Dedicated is the networking software as a service (SaaS) product available through the HashiCorp Cloud Platform (HCP). This service provides simplified workflows for common Consul tasks and the option to have HashiCorp set up and manage your Consul servers for you. + +On November 12, 2025, HashiCorp will end operations and support for HCP Consul Dedicated clusters. After this date, you will no longer be able to deploy new Dedicated clusters, nor will you be able to access, update, or manage existing Dedicated clusters. Refer to [Migrate Consul Dedicated cluster to self-managed Enterprise](/hcp/docs/consul/migrate) for more information. + +### Additional tool binaries and maintained repositories + +We make several additional Consul CLI tools available as separate binary releases. These tools can simplify operations and processes for certain runtimes, cloud providers, or complex network requirements. + +The following is a current list of Consul releases with active support: + +| Binary package name | Consul tool name | Description | +| :-------------------------- | :--------------------------------------- | :---------------------------------------------------------------------------------------------------------------------------------------- | +| `consul-aws` | Consul AWS | Syncs the services in an AWS CloudMap namespace to a Consul datacenter. Enables native service discovery between Consul and AWS CloudMap. | +| `consul-cni` | Consul Container Network Interface (CNI) | Plugin for Consul on Kubernetes to allow configuring traffic redirection rules without escalated container privileges. | +| `consul-dataplane` | Consul Dataplanes | A lightweight process that manages Envoy for Consul service mesh workloads on Kubernetes and ECS. | +| `consul-ecs` | Consul ECS | Used by the `hashicorp/consul-ecs` Docker image to install and operator Consul on ECS. | +| `consul-esm` | Consul External Services Manager (ESM) | A daemon to run alongside Consul to manage services on external nodes and run health checks on them. | +| `consul-k8s` | Consul on Kubernetes CLI | A dedicated tool for managing Consul that does not require direct interaction with Helm, the Consul CLI or `kubectl`. | +| `consul-k8s-control-plane` | Consul on Kubernetes Helm manager | Manages Helm components for the `consul-k8s` CLI. The version should match the Helm chart version you want to use. | +| `consul-lambda-extension` | Lambda extension layer plugin | Enables Lambda functions to send requests to mesh services. 
| `consul-lambda-registrator` | Lambda registrator | Automates and manages Consul service registration and de-registration for Lambda functions. |
+| `consul-replicate` | Consul Replicate | Long-running daemon for KV data replication between datacenters. |
+| `consul-template` | Consul Template | Use templates to update Consul configurations dynamically using Go syntax. |
+| `consul-terraform-sync` | Consul-Terraform-Sync (CTS) | Dynamically manage network infrastructure in near real-time. |
+
+The following is a list of Consul tools that were used by previous versions of Consul. Some tools may be useful when performing Consul updates with especially large release gaps, while others were deprecated entirely.
+
+- `consul-api-gateway`. This feature was fully integrated into the `consul` package in v1.16.
+- `consul-telemetry-collector`. This tool was developed to support [HCP Consul Central](/hcp/docs/consul/concepts/consul-central).
+
+Consul also supports integrations with tools developed by our community. Refer to [Community tools](/consul/docs/integrate/consul-tools) for more information.
+
+## Next steps
+
+To continue learning about Consul and its fundamental operations, [download and install Consul](/consul/docs/fundamentals/install). Then [run a Consul agent in developer mode](/consul/docs/fundamentals/install/dev) so you can learn how to use the three main Consul interfaces: [Consul HTTP API](/consul/docs/fundamentals/interface/api), [Consul CLI](/consul/docs/fundamentals/interface/cli), and [Consul web UI](/consul/docs/fundamentals/interface/ui). \ No newline at end of file diff --git a/website/content/docs/fundamentals/identity.mdx b/website/content/docs/fundamentals/identity.mdx new file mode 100644 index 000000000000..0648264be44d --- /dev/null +++ b/website/content/docs/fundamentals/identity.mdx @@ -0,0 +1,72 @@
+---
+layout: docs
+page_title: Identity
+description: >-
+  This topic introduces identity in Consul. Consul uses identity to associate agents, configurations, and services on different nodes that may have different names but are otherwise identical.
+---
+
+# Identity
+
+In a datacenter, Consul uses _identity_ to associate agents, configurations, and services on different nodes that may have different names but are otherwise identical. It is important to understand how Consul determines identity and how it uses identity for cluster operations and service networking.
+
+## Overview
+
+In Consul, there are three main categories of configuration files:
+
+1. Agent configuration files
+1. Service definition files
+1. Configuration entry files
+
+When you create any configuration in Consul, whether it is an agent configuration file, a service definition, or a configuration entry, you explicitly define the object's identity for Consul.
+
+We recommend names that are simple, descriptive, and unique. Use hyphens (`-`) or `camelCase` for your naming conventions. Do not use spaces, underscores, or periods, as these characters may interfere with Consul DNS.
+
+By default, Consul records identity for each datacenter. Strategies to [extend your service network east/west](/consul/docs/east-west), such as WAN federation and cluster peering, allow you to use service identity across Consul datacenters. Consul Enterprise also includes [multi-tenancy management features](/consul/docs/multi-tenant) such as admin partitions and namespaces to help you manage service and configuration identity within a single datacenter.
+
+## Agent identity
+
+For agents, the `datacenter` parameter specifies the identity of the cluster. By default, Consul assigns the name `dc1` when one is not specified. Agents will only join clusters whose datacenter name matches their own. This ensures agents don't accidentally join the wrong cluster.
+
+You can also specify a custom domain for an agent. By default, the domain for all Consul agents is `consul`. We recommend using the default domain unless you are a platform engineer with advanced networking requirements.
+
+Optionally, you can also set a custom `node_name` for an agent that appears in the Consul catalog.
+
+## Service identity
+
+Consul can connect services across system runtimes and cloud providers because of _service identity_. When you register a service, you can specify the following identity information about the service for Consul:
+
+| Identity field | Default value | What it means |
+| :------------- | :-------------- | :--------------------------------------------------------------------------------------------- |
+| `name` | None (Required) | Specifies the service name in the Consul catalog to register the local instance under. |
+| `id` | Value of `name` | Specifies a service when multiple services run on a single node. |
+| `namespace` | `default` | Specifies the Consul namespace to register the service in. |
+| `partition` | `default` | Specifies the Consul admin partition to register the service in. |
+| `tags` | None | Specifies custom metadata values for service version upgrades and deployment upgrade rollouts. |
+
+In networks that use Consul for service discovery, Consul uses service identity to route traffic to a healthy instance in response to Consul DNS queries.
+
+In networks that also use Consul service mesh, Consul uses service identity for the following functions:
+
+- Explicit upstream definitions for service sidecar proxies.
+- Service intentions for secure service-to-service communication.
+- Traffic management with service resolvers, splitters, and routers.
+
+## Configuration identity
+
+When you operate Consul, you manage network behavior with a series of configuration entries that define the desired state of the network. When you register a configuration, you define the `Kind` of configuration entry and its `Name`.
+
+Consul merges configuration entries that have the same kind and name in the `consul.d` directory. When you reload the Consul agent, it automatically combines these related entries into a single unified configuration. This allows you to organize your configuration across multiple files while maintaining logical grouping.
+
+When you use the Consul CLI to add configuration entries to your datacenter, you reference the configuration entry name when you run the `consul config` command.
+
+### Peer identity
+
+When you use Consul's cluster peering features to connect datacenters, you must provide an identity for the peer on each end of the peering connection. To keep configurations human-readable, we recommend that you use the datacenter and partition names. For example, the default admin partition in the default datacenter is defined as `dc1-default` on each peer.
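+
+To illustrate the service identity fields described above, the following minimal service definition registers one instance of a hypothetical `web` service. The values are placeholders for this example.
+
+```hcl
+service {
+  name = "web"     # Service name recorded in the Consul catalog
+  id   = "web-1"   # Distinguishes this instance from other web instances on the same node
+  port = 8080
+  tags = ["v1"]    # Custom metadata, such as a version tag
+}
+```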
+ +## Next steps + +The next three pages in this sequence of Consul fundamentals go into more depth about each of the topics introduced on this page: + +- [Consul agent basics](/consul/docs/fundamentals/agent) +- [Service and health check basics](/consul/docs/fundamentals/service) +- [Configuration entry basics](/consul/docs/fundamentals/config-entry) \ No newline at end of file diff --git a/website/content/docs/fundamentals/install/dev.mdx b/website/content/docs/fundamentals/install/dev.mdx new file mode 100644 index 000000000000..e8ab116d5114 --- /dev/null +++ b/website/content/docs/fundamentals/install/dev.mdx @@ -0,0 +1,180 @@ +--- +layout: docs +page_title: Run Consul in development mode +description: >- + Learn how to run Consul in dev mode with the `-dev` flag to test basic configurations and features. +--- + +# Run Consul in development mode + + + + Development mode is for demonstration and testing scenarios. Never run Consul in `-dev` mode in production. + + + +This page describes the process to run Consul in development mode. When you run Consul in development mode, Consul starts a single server agent with a configured datacenter so that you can test Consul features and operations. + +## Start the agent + +Use the `-dev` flag to start the Consul agent in development mode. The terminal should output the server agent's logs, which resemble the following example. + +```shell-session +$ consul agent -dev +==> Starting Consul agent... + Version: '1.20.1' + Build Date: '2024-10-29 19:04:05 +0000 UTC' + Node ID: '77a5005f-403d-9396-c8a4-08c7f8755030' + Node name: 'boruszak-LFH0JH12XD' + Datacenter: 'dc1' (Segment: '') + Server: true (Bootstrap: false) + Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, gRPC: 8502, gRPC-TLS: 8503, DNS: 8600) + Cluster Addr: 127.0.0.1 (LAN: 8301, WAN: 8302) + Gossip Encryption: false + Auto-Encrypt-TLS: false + ACL Enabled: false + Reporting Enabled: false + ACL Default Policy: allow + HTTPS TLS: Verify Incoming: false, Verify Outgoing: false, Min Version: TLSv1_2 + gRPC TLS: Verify Incoming: false, Min Version: TLSv1_2 + Internal RPC TLS: Verify Incoming: false, Verify Outgoing: false (Verify Hostname: false), Min Version: TLSv1_2 + +==> Log data will now stream in as it occurs: +``` + +Leave the Consul agent running and open up a new tab in your terminal. When you run commands in this new tab, Consul generates additional output in the original terminal session. + +## Discover datacenter members + +Run the [`consul members`](/consul/commands/members) command in the new terminal window. The output lists the nodes where Consul agents in the datacenter currently run. + +```shell-session +$ consul members +Node Address Status Type Build Protocol DC Partition Segment +boruszak-LFH0JH12XD 127.0.0.1:8301 alive server 1.20.1 2 dc1 default +``` + +The output displays your agent, its IP address, its health state, its role in the datacenter, and some version information. You can discover additional metadata by providing the `-detailed` flag. + +### Use Consul HTTP API + +When Consul runs in development mode, you can also query the datacenter using the [Consul HTTP API](/consul/api-docs/). The HTTP API returns a payload that resembles the following example. 
+ +```shell-session +$ curl localhost:8500/v1/catalog/nodes +[ + { + "ID": "77a5005f-403d-9396-c8a4-08c7f8755030", + "Node": "boruszak-LFH0JH12XD", + "Address": "127.0.0.1", + "Datacenter": "dc1", + "TaggedAddresses": { + "lan": "127.0.0.1", + "lan_ipv4": "127.0.0.1", + "wan": "127.0.0.1", + "wan_ipv4": "127.0.0.1" + }, + "Meta": { + "consul-network-segment": "", + "consul-version": "1.20.1" + }, + "CreateIndex": 13, + "ModifyIndex": 16 + } +] +``` + +### Use Consul DNS + +You can use [Consul DNS](/consul/docs/discover/dns) to discover nodes in the datacenter. Address nodes using the following syntax: + + + +```plaintext +.node.consul +``` + + + +By default, Consul uses port `8600` for DNS requests. Address your command to the DNS port with the `-p` flag. The output for this command resembles the following example. + +```shell-session +$ dig @127.0.0.1 -p 8600 boruszak-LFH0JH12XD.node.consul +; <<>> DiG 9.10.6 <<>> @127.0.0.1 -p 8600 boruszak-LFH0JH12XD.node.consul +; (1 server found) +;; global options: +cmd +;; Got answer: +;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 6578 +;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 3 +;; WARNING: recursion requested but not available + +;; OPT PSEUDOSECTION: +; EDNS: version: 0, flags:; udp: 4096 +;; QUESTION SECTION: +;boruszak-LFH0JH12XD.node.consul. IN A + +;; ANSWER SECTION: +boruszak-LFH0JH12XD.node.consul. 0 IN A 127.0.0.1 + +;; ADDITIONAL SECTION: +boruszak-LFH0JH12XD.node.consul. 0 IN TXT "consul-network-segment=" +boruszak-LFH0JH12XD.node.consul. 0 IN TXT "consul-version=1.20.1" + +;; Query time: 3 msec +;; SERVER: 127.0.0.1#8600(127.0.0.1) +;; WHEN: Thu Dec 19 13:27:56 PST 2024 +;; MSG SIZE rcvd: 146 +``` + +## Stop the agent + +To stop the Consul agent, use the `consul leave` command. This command gracefully stops the agent, which causes it to leave the Consul datacenter and then shut down. + +```shell-session +$ consul leave +Graceful leave complete +``` + +If you switch back to the window with Consul's streaming log output, the logs indicate that the Consul agent left the datacenter. 
+ + + +```log +2024-12-19T13:29:40.067-0800 [INFO] agent.server: server starting leave +2024-12-19T13:29:40.067-0800 [INFO] agent.server.serf.wan: serf: EventMemberLeave: boruszak-LFH0JH12XD.dc1 127.0.0.1 +2024-12-19T13:29:40.067-0800 [INFO] agent.server: Handled event for server in area: event=member-leave server=boruszak-LFH0JH12XD.dc1 area=wan +2024-12-19T13:29:40.067-0800 [INFO] agent.router.manager: shutting down +2024-12-19T13:29:43.068-0800 [INFO] agent.server.serf.lan: serf: EventMemberLeave: boruszak-LFH0JH12XD 127.0.0.1 +2024-12-19T13:29:43.068-0800 [INFO] agent.server: Removing LAN server: server="boruszak-LFH0JH12XD (Addr: tcp/127.0.0.1:8300) (DC: dc1)" +2024-12-19T13:29:43.068-0800 [DEBUG] agent.grpc.balancer: switching server: target=consul://dc1.77a5005f-403d-9396-c8a4-08c7f8755030/server.dc1 from=dc1-127.0.0.1:8300 to= +2024-12-19T13:29:43.068-0800 [WARN] agent.server: deregistering self should be done by follower: name=boruszak-LFH0JH12XD partition=default +2024-12-19T13:29:43.068-0800 [INFO] agent.router.manager: shutting down +2024-12-19T13:29:46.069-0800 [INFO] agent.server: Waiting to drain RPC traffic: drain_time=5s +2024-12-19T13:29:51.069-0800 [INFO] agent: Requesting shutdown +2024-12-19T13:29:51.070-0800 [INFO] agent.server: shutting down server +2024-12-19T13:29:51.071-0800 [DEBUG] agent.server.autopilot: state update routine is now stopped +2024-12-19T13:29:51.073-0800 [DEBUG] agent.server.autopilot: autopilot is now stopped +2024-12-19T13:29:51.071-0800 [INFO] agent: consul server down +2024-12-19T13:29:51.073-0800 [INFO] agent: shutdown complete +2024-12-19T13:29:51.073-0800 [DEBUG] agent.http: Request finished: method=PUT url=/v1/agent/leave from=127.0.0.1:63006 latency=11.00529825s +2024-12-19T13:29:51.073-0800 [DEBUG] agent: warning: content-type header not explicitly set.: request-path=/v1/agent/leave +2024-12-19T13:29:51.073-0800 [DEBUG] agent.server.cert-manager: context canceled +2024-12-19T13:29:51.070-0800 [WARN] agent.server.controller-runtime: error received from watch: controller=artists managed_type=demo.v2.Artist error="rpc error: code = Canceled desc = context canceled" +2024-12-19T13:29:51.073-0800 [DEBUG] agent.server.controller-runtime: controller stopping: controller=artists managed_type=demo.v2.Artist +2024-12-19T13:29:51.073-0800 [INFO] agent.dns: Stopping server: protocol=DNS address=127.0.0.1:8600 network=tcp +2024-12-19T13:29:51.073-0800 [INFO] agent.dns: Stopping server: protocol=DNS address=127.0.0.1:8600 network=udp +2024-12-19T13:29:51.073-0800 [INFO] agent: Stopping server: address=127.0.0.1:8500 network=tcp protocol=http +2024-12-19T13:29:51.073-0800 [INFO] agent: Waiting for endpoints to shut down +2024-12-19T13:29:51.073-0800 [INFO] agent: Endpoints down +2024-12-19T13:29:51.073-0800 [INFO] agent: Exit code: code=0 +``` + + + +## Next steps + +You can run Consul in development mode to help you learn about other Consul fundamentals. 
The following documentation uses development mode to demonstrate Consul's user interfaces: + +- [Consul HTTP API interface](/consul/docs/fundamentals/interface/api) +- [Consul CLI interface](/consul/docs/fundamentals/interface/cli) +- [Consul UI interface](/consul/docs/fundamentals/interface/ui) \ No newline at end of file diff --git a/website/content/docs/fundamentals/install/index.mdx b/website/content/docs/fundamentals/install/index.mdx new file mode 100644 index 000000000000..374b1215ce2f --- /dev/null +++ b/website/content/docs/fundamentals/install/index.mdx @@ -0,0 +1,125 @@ +--- +layout: docs +page_title: How to install Consul +description: >- + Install Consul to get started with service discovery and service mesh. Follow the installation instructions to download the precompiled binary, or use Go to compile from source. +--- + +# How to install Consul + +This page describes the process to install Consul on bare-metal machines, virtual machines (VMs), and Kubernetes clusters. + +To find the most recently available releases for each supported runtime and CPU architecture, refer to [install Consul](/consul/install). This page also includes instructions for installing Consul using package managers such as `apt`, `yum`, and `brew`. + +## Overview + +There are three ways to install Consul: + +1. [Install from a precompiled binary](#precompiled-binaries) +1. [Compile from the source](#compile-from-the-source) +1. [Install on Kubernetes with Helm](#install-on-kubernetes) + +We recommend using a precompiled binary. We also distribute a PGP signature with the SHA256 sums that you can use to verify the binary. For guidance, refer to [verify HashiCorp binary downloads](/well-architected-framework/operational-excellence/verify-hashicorp-binary). + +## Precompiled binaries + +To install the precompiled binary, [download the appropriate package for your system](/consul/install). To find other versions, including Enterprise versions with support for FIPS 1420 compliance, refer to [the complete list of Consul releases on releases.hashicorp.com](https://releases.hashicorp.com/consul/). Consul is currently packaged as a `.zip` file. + +After your download completes, unzip it into any directory. The `consul` binary or `.exe` file inside is the only file required to run Consul. + +Copy the binary to anywhere on your system. If you intend to access it from the command-line, make sure to place it somewhere on your `PATH`. + +## Compile from the source + +To compile Consul from its source, you need [Go](https://golang.org) installed and a copy of [`git`](https://www.git-scm.com/) in your `PATH`. + +1. Clone the Consul repository from GitHub: + + ```shell-session + $ git clone https://github.com/hashicorp/consul.git + ``` + + Then, navigate to the `consul` directory: + + ```shell-session + $ cd consul + ``` + +1. Build Consul for your target system. Once complete, Go will place the binary in `./bin`. + + + + + ```shell-session + $ make dev + ``` + + + + + Specify your target system by setting the following environment variables + before building: + + | Environment variable | Description | + | :------------------- | :---------- | + | `GOOS` | Target operating system. Valid values include: `linux`, `darwin`, `windows`, `solaris`, `freebsd`. | + | `GOARCH` | Target architecture. Valid values include: `386`, `amd64`, `arm`, `arm64`. | + + ```shell-session + $ GOOS=linux GOARCH=amd64 make dev + ``` + + + + +## Verify installation + +To verify your installation, print the version of the `consul` binary. 
+ +```shell-session +$ consul version +Consul v1.20.1 +Revision 920cc7c6 +Build Date 2024-10-29T19:04:05Z +Protocol 2 spoken by default, understands 2 to 3 (agent will automatically use protocol >2 when speaking to compatible agents) +``` + +If you are executing from the command line, make sure Consul is on your `PATH` or you may get an error about Consul not being found. + +## Install on Kubernetes + +On Kubernetes deployments, you can pull the latest release using the HashiCorp Helm repo. + +1. Add the HashiCorp Helm repository: + + ```shell-session + $ helm repo add hashicorp https://helm.releases.hashicorp.com + "hashicorp" has been added to your repositories + ``` + +1. Verify that you have access to the Consul chart: + + ```shell-session + $ helm search repo hashicorp/consul + NAME CHART VERSION APP VERSION DESCRIPTION + hashicorp/consul 1.5.0 1.19.0 Official HashiCorp Consul Chart + ``` + +1. Before you install Consul on Kubernetes with Helm, ensure that the `consul` Kubernetes namespace does not exist. We recommend installing Consul on a dedicated namespace. + + ```shell-session + $ kubectl get namespace + NAME STATUS AGE + default Active 18h + kube-node-lease Active 18h + kube-public Active 18h + kube-system Active 18h + ``` + +1. Run the following command to install the latest version of Consul on Kubernetes with its default configuration. + + ```shell-session + $ helm install consul hashicorp/consul --set global.name=consul --create-namespace --namespace consul + ``` + +For more information, including instructions on how to install on other runtimes such as OpenShift, refer to [Install on Kubernetes with Helm](/consul/docs/deploy/server/k8s/helm). \ No newline at end of file diff --git a/website/content/docs/fundamentals/interface/api.mdx b/website/content/docs/fundamentals/interface/api.mdx new file mode 100644 index 000000000000..e2a7b01d462f --- /dev/null +++ b/website/content/docs/fundamentals/interface/api.mdx @@ -0,0 +1,70 @@ +--- +layout: docs +page_title: Explore the Consul HTTP API +description: >- + Consul includes a built-in HTTP API. +--- + +# Explore the Consul HTTP API + +This page introduces the Consul HTTP API and describes the steps to set up and use the API to interact with Consul deployments. + +For more information, refer to the [Consul HTTP API reference documentation](/consul/api-docs). + +## Introduction + +The Consul HTTP API is a RESTful interface for Consul that exposes endpoints for both Consul operations and service networking functions that return JSON payloads. + +You can interact with any Consul agent to use the HTTP API. When ACLs are enabled, you must provide a Consul token in a `X-Consul-Token` header to authenticate with Consul. + +## How Consul clusters respond to API requests + +In production, a typical Consul datacenter consists of a cluster of 3 to 5 _server agents_ in the control plane, and can potentially include thousands of _client agents_ in the application's data plane. By default, Consul agents send each API request to the [current leader of the server quorum](/consul/docs/concept/consensus) in order to receive the most up-to-date information from the Consul catalog. + +High request volumes to a single server can impact cluster performance. While Consul uses the [gossip protocol](/consul/docs/concept/gossip) to efficiently distribute updates across agents, this distributed approach means that agents may temporarily have different views of the cluster state. 
This eventual consistency model allows Consul to handle higher loads by spreading requests across multiple agents, rather than concentrating all traffic on a single server. + +Consul includes several API features that help you manage this effect on your requests, especially when Consul operates at scale. + +### Consistency modes + +All HTTP API read requests use the default consistency mode unless overridden on a per-request basis. We do not recommend changing the default consistency mode. You can use the `stale` or `consistent` query parameters on some endpoints to override the default setting. To learn more, refer to [Consistency Modes](/consul/api-docs/features/consistency). + +### Blocking queries + +@include 'text/descriptions/blocking-query.mdx' + +### Filtering + +Consul supports a `filter` query parameter that contains an expression for requests to match against entries in the Consul catalog. Servers filter based on the expression before returning a response, which can greatly reduce operational load at scale. To learn more, refer to [Filtering](/consul/api-docs/features/filtering). + +### Agent cache + +Some read endpoints in Consul can return results from the local agent's cache, instead of contacting the servers for results. You can use the `?cached` query parameter and the `Cache-Control` header to enable and define agent cache behavior. On some endpoints, you also have the option to enable background refresh caching, which uses blocking queries to update the local agent cache when updates occur on the servers. For more information, refer to [Agent caching](/consul/api-docs/features/caching). + +## Basic API requests + +In a datacenter where ACLs are not enabled, the following request returns a JSON payload that lists the Consul instances that participate in the cluster's gossip pool, including their state. This HTTP API request corresponds to the `consul members` CLI command. + +```shell-session +$ curl http://127.0.0.1:8500/v1/agent/members +``` + +The following request reloads the configuration files in the agent's configuration directory. Use this command to apply configuration updates to your Consul cluster without restarting the agent or node. This HTTP API request corresponds to the `consul reload` CLI command. + +```shell-session +$ curl \ + --request PUT \ + http://127.0.0.1:8500/v1/agent/reload +``` + +The following request returns a list of nodes, including their IP addresses, where the `web` service was registered. You can use filtering query parameters to refine the list of services that the API request returns. + +```shell-session +$ curl http://127.0.0.1:8500/v1/catalog/service/web +``` + +## Next steps + +To learn more about the Consul HTTP API and its endpoints, refer to the [Consul API documentation](/consul/api-docs). + +To continue learning Consul fundamentals, proceed to either [Explore the Consul CLI](/consul/docs/fundamentals/interface/cli) or [Explore the Consul web UI](/consul/docs/fundamentals/interface/ui). \ No newline at end of file diff --git a/website/content/docs/fundamentals/interface/cli.mdx b/website/content/docs/fundamentals/interface/cli.mdx new file mode 100644 index 000000000000..a386d67169ed --- /dev/null +++ b/website/content/docs/fundamentals/interface/cli.mdx @@ -0,0 +1,166 @@ +--- +layout: docs +page_title: Explore the Consul CLI +description: >- + Consul includes a built-in command line interface.
+--- + +# Explore the Consul CLI + +This page introduces the Consul CLI and describes the steps to set up and use the CLI to interact with Consul deployments. + +For more information, refer to the [Consul CLI reference documentation](/consul/commands). + +## Introduction + +The Consul CLI is a wrapper for the HTTP API that allows you to interact with Consul from a terminal session. + +To view a list of the available commands at any time, run `consul` with no arguments: + +```shell-session +$ consul +Usage: consul [--version] [--help] <command> [<args>] + +Available commands are: + acl Interact with Consul's ACLs + agent Runs a Consul agent + catalog Interact with the catalog + connect Interact with service mesh functionality + debug Records a debugging archive for operators + event Fire a new event + exec Executes a command on Consul nodes + force-leave Forces a member of the cluster to enter the "left" state + info Provides debugging information for operators. + intention Interact with service mesh intentions + join Tell Consul agent to join cluster + keygen Generates a new encryption key + keyring Manages gossip layer encryption keys + kv Interact with the key-value store + leave Gracefully leaves the Consul cluster and shuts down + lock Execute a command holding a lock + login Login to Consul using an auth method + logout Destroy a Consul token created with login + maint Controls node or service maintenance mode + members Lists the members of a Consul cluster + monitor Stream logs from a Consul agent + operator Provides cluster-level tools for Consul operators + peering Create and manage peering connections between Consul clusters + reload Triggers the agent to reload configuration files + rtt Estimates network round trip time between nodes + services Interact with services + snapshot Saves, restores and inspects snapshots of Consul server state + tls Builtin helpers for creating CAs and certificates + troubleshoot Provides tools to troubleshoot Consul's service mesh configuration + validate Validate config files/directories + version Prints the Consul version + watch Watch for changes in Consul +``` + +To get help for any specific command, pass the `-h` flag to the relevant +subcommand. For example, the following command returns information about the `join` subcommand: + +```shell-session +$ consul join --help +Usage: consul join [options] address ... + + Tells a running Consul agent (with "consul agent") to join the cluster + by specifying at least one existing member. + +HTTP API Options + + -http-addr=<address>
    + The `address` and port of the Consul HTTP agent. The value can be + an IP address or DNS address, but it must also include the port. + This can also be specified via the CONSUL_HTTP_ADDR environment + variable. The default value is http://127.0.0.1:8500. The scheme + can also be set to HTTPS by setting the environment variable + CONSUL_HTTP_SSL=true. + + -token=<value> + ACL token to use in the request. This can also be specified via the + CONSUL_HTTP_TOKEN environment variable. If unspecified, the query + will default to the token of the Consul agent at the HTTP address. + +Command Options + + -wan + Joins a server to another server in the WAN pool. +``` + +## Environment variables + +Consul's CLI supports environment variables to set default behaviors. While command-line flags always take precedence, using environment variables can streamline your workflows by reducing the need to repeatedly specify common settings. + +The following environment variables set the HTTP address of your Consul cluster and the token used to authenticate with Consul. The documentation and tutorials assume that you have already configured these variables. + +- `CONSUL_HTTP_ADDR`: Specifies the HTTP address of your Consul cluster. + + ```shell-session + $ export CONSUL_HTTP_ADDR=http://127.0.0.1:8500 + ``` + +- `CONSUL_HTTP_TOKEN` or `CONSUL_HTTP_TOKEN_FILE`: Specifies the ACL token used to authenticate with Consul. + + When the [ACL system is enabled](/consul/docs/reference/agent/configuration-file/acl), the Consul CLI requires an [ACL token](/consul/docs/security/acl#tokens) with permissions to perform the operations. You can provide the ACL token directly on the command line using the `-token` command line flag, from a file using the `-token-file` command line flag, or from either the `CONSUL_HTTP_TOKEN` or `CONSUL_HTTP_TOKEN_FILE` environment variable. + + ```shell-session + $ export CONSUL_HTTP_TOKEN=aba7cbe5-879b-999a-07cc-2efd9ac0ffe + ``` + +Additional connection requirements depend on how your Consul cluster is configured. Refer to the [Consul CLI documentation](/consul/commands) for more information, including [a list of supported environment variables](/consul/commands#environment-variables). + +## Basic commands + +The following examples demonstrate commonly used Consul CLI commands. + +The `consul members` command returns information about the Consul instances that participate in the cluster's gossip pool, including their state. You can use this command to check on the current state of a cluster. + +```shell-session +$ consul members +Node Address Status Type Build Protocol DC Partition Segment +boruszak-LFH0JH12XD 127.0.0.1:8301 alive server 1.20.2 2 dc1 default +``` + +Use the `consul version` command to confirm your installation's version number and the build date. Running agents with incompatible versions in the same datacenter can impact operations. + +```shell-session +$ consul version +Consul v1.20.2 +Revision 33e5727a +Build Date 2025-01-03T14:38:40Z +Protocol 2 spoken by default, understands 2 to 3 (agent will automatically use protocol >2 when speaking to compatible agents) +``` + +The `consul reload` command reloads the configuration files in the agent's configuration directory. Use this command to apply configuration updates to your Consul cluster without restarting the agent or node. + +```shell-session +$ consul reload +Configuration reload triggered +``` + +The `consul catalog services` command returns a list of services registered to the Consul catalog.
+ +```shell-session +$ consul catalog services +consul +``` + +You can interact with the Consul key/value store with the `consul kv` command. Specify an operation, followed by the key, and then the value. + +```shell-session +$ consul kv put redis/config/connections 5 +Success! Data written to: redis/config/connections +``` + +Use `consul leave` to make the Consul agent leave the datacenter gracefully. After it leaves the cluster, the agent shuts down. + +```shell-session +$ consul leave +Graceful leave complete +``` + +## Next steps + +To learn more about the Consul CLI and its commands, refer to the [Consul CLI documentation](/consul/commands). + +To continue learning Consul fundamentals, proceed to [Explore the Consul web UI](/consul/docs/fundamentals/interface/ui). diff --git a/website/content/docs/fundamentals/interface/ui.mdx b/website/content/docs/fundamentals/interface/ui.mdx new file mode 100644 index 000000000000..978deb5942a3 --- /dev/null +++ b/website/content/docs/fundamentals/interface/ui.mdx @@ -0,0 +1,148 @@ +--- +layout: docs +page_title: Explore the Consul web UI +description: >- + Consul includes a built-in web user interface that you can use to observe services and manage datacenter operations. +--- + +# Explore the Consul web UI + +This page provides an overview of the Consul web UI, which is one of the ways to interface with a Consul cluster alongside the [Consul CLI](/consul/commands) and [HTTP API](/consul/api-docs). + +## Introduction + +The web UI allows you to interact with Consul using a browser-based graphical user interface. You can view information about your Consul datacenter, including: + +- [Registered server and client nodes](#nodes) +- [Registered services and sidecar proxies](#services) +- Registered API, terminating, ingress, and mesh gateways + +Additionally, you can view and update the following components through the Consul UI: + +- [Key-value store](#key-value-store) +- [Access Control List (ACL) tokens](#acl-tokens) +- [Service mesh intentions](#intentions) + +### UI task table + +@include 'tables/permissions/ui.mdx' + +## Navigate to the UI + +When you run `consul agent -dev` to start Consul in development mode, the web UI is enabled automatically. While the agent is running, open [`http://localhost:8500/ui`](http://localhost:8500/ui) in your web browser to view the Consul UI. + +### Enable the UI for non-development agents + +The web UI is not enabled by default. Agents that are run without the `-dev` flag must explicitly enable the web UI. + +To enable the web UI, you can: + +- Modify the agent start-up command to include the [`-ui` flag](/consul/commands/agent#ui-options). +- Modify the agent configuration file to include the [`ui_config.enabled = true`](/consul/docs/reference/agent/configuration-file/ui#ui_config_enabled) attribute. + + + + + +```shell-session +$ consul agent -ui [...] +``` + + + + + + + +```hcl +datacenter = "dc1" +data_dir = "/opt/consul" + +server = true + +ui_config { + enabled = true +} + +# ... +``` + + + + + + + +Using the server address of your Consul server node, open the address in your web browser on port `8500` at the `/ui` path. For example, if the server address is `212.44.23.212`, then open `http://212.44.23.212:8500/ui` in your web browser. + +## Services + +The initial Consul UI landing page is the **Services** page, which gives you a list of all registered services and gateways in your datacenter, as well as their health, tags, type, and source.
You can also access the **Services** page from the left navigation bar. + +You can filter services by health status and service type, or search for services with the search bar. + +Click the name of a service to see more information about it, including instance count, the health of individual instances, and the Consul agent registered with the service. + +## Nodes + +Click the **Nodes** option in the left navigation bar. + +This page shows information about each node in the datacenter, including the health status, the network address, and the number of registered services on the node. A badge appears beside the node that hosts the current cluster leader. + +Click the name of a node for more information about service health checks, instances, round trip time, lock sessions, and metadata that is registered with the node. + +You can filter nodes by health status or search for nodes with the search bar. + +## Key/value store + +Click the **Key/Value** option in the left navigation bar. + +This page has a structure that resembles a file directory. It nests objects according to their key prefix. + +You can use prefixes to organize objects by application, business function, or a combination of the two. + +## Intentions + +Service intentions are identity-based security mechanisms in Consul that control communication between services. + +Click the **Intentions** option in the left navigation bar. + +You can create new service intentions by specifying the source and destination services, writing a description, and configuring the intention to allow or deny traffic between the two services. You can also set the intention to be **Application Aware**, which allows you to more granularly allow or deny communication between services based on Layer 7 criteria such as path, header, or method. + +## Access control lists (ACLs) + +Consul uses [access control lists](/consul/docs/secure/acl) (ACLs) to secure access to the web UI, API, CLI, service communications, and agent communications. + +ACLs are not enabled for local development agents, so a red dot appears beside the **Access Controls** heading in the left navigation bar when Consul runs as a local development agent. + +You enable ACLs in the agent configuration file with the `enabled` attribute in the `acl` block. Refer to the [Bootstrap ACL system page](/consul/docs/secure/acl/bootstrap) for more information on configuring ACLs. + + + +```hcl +datacenter = "dc1" +data_dir = "/opt/consul" + +server = true + +ui_config { + enabled = true +} + +acl { + enabled = true +} + +# ... + +``` + + + +With ACLs enabled and proper permissions set, you can click on the ACL-related pages under the **Access Controls** heading in the left navigation bar and view the **Tokens**, **Policies**, **Roles**, and **Auth Methods** pages. Refer to the [ACL Tokens](/consul/docs/secure/acl/token), [ACL Policies](/consul/docs/secure/acl/policy), [ACL Roles](/consul/docs/secure/acl/role), and [Auth Methods](/consul/docs/secure/acl/auth-method) documentation for more information. + +## Adjust UI settings + +Click the **Settings** option in the top navigation bar. + +You can select whether or not to enable **Blocking Queries**. This setting allows the UI to update in real time without having to manually refresh the page. It is enabled by default, but you may want to disable it on large datacenters for better performance.
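+ +The **Blocking Queries** setting relies on the blocking query feature of the Consul HTTP API. The following sketch illustrates the underlying mechanism with `curl`. It assumes a local development agent on the default port, and the `index=13` value is a placeholder for whatever `X-Consul-Index` value the first request returns in your environment. + +```shell-session +$ curl --include http://127.0.0.1:8500/v1/catalog/services +# Replace 13 with the X-Consul-Index value returned by the previous request +$ curl "http://127.0.0.1:8500/v1/catalog/services?index=13&wait=30s" +``` + +The second request blocks until the list of services changes or the `wait` interval elapses, which is how the UI can refresh in real time without repeatedly polling the servers.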
 diff --git a/website/content/docs/fundamentals/service.mdx b/website/content/docs/fundamentals/service.mdx new file mode 100644 index 000000000000..b65242ecbcf9 --- /dev/null +++ b/website/content/docs/fundamentals/service.mdx @@ -0,0 +1,202 @@ +--- +layout: docs +page_title: Define and configure services +description: >- + This topic introduces the configuration items that enable you to register services with Consul so that they can connect to other services and nodes registered with Consul. +--- + +# Define and configure services + +Consul agents require service definitions in order to register services into the Consul catalog. You can also configure service defaults to define common settings for multiple services. + +## Service definitions + +@include 'text/descriptions/service-definition.mdx' + +To register a service, provide the service definition to the Consul agent. Refer to [Register Services and Health Checks](/consul/docs/register/service/vm) for information about registering services. + +### ACLs + +When both Consul namespaces and the ACL system are enabled, you can register services to specific namespaces. However, it is important to note that: + +1. Services registered with a service definition will not inherit the namespace associated with the ACL token specified in the token field. + +1. You must explicitly include both the namespace parameter in the service definition and an ACL token that has permissions for that namespace. + +This ensures precise control over service placement within namespaces and maintains proper access control. + +## Example service definition + +The following example includes all possible parameters, but only the top-level `service` parameter and its `name` parameter are required by default. + + + + + +```hcl +service { + name = "redis" + id = "redis" + tags = ["primary"] + + meta = { + custom_meta_key = "custom_meta_value" + } + + tagged_addresses = { + lan = { + address = "192.168.0.55" + port = 8000 + } + + wan = { + address = "198.18.0.23" + port = 80 + } + } + + port = 8000 + socket_path = "/tmp/redis.sock" + enable_tag_override = false + + checks = [ + { + args = ["/usr/local/bin/check_redis.py"] + interval = "10s" + } + ] + + kind = "connect-proxy" + proxy_destination = "redis" + + proxy = { + destination_service_name = "redis" + destination_service_id = "redis1" + local_service_address = "127.0.0.1" + local_service_port = 9090 + local_service_socket_path = "/tmp/redis.sock" + mode = "transparent" + + transparent_proxy { + outbound_listener_port = 22500 + } + + mesh_gateway = { + mode = "local" + } + + expose = { + checks = true + + paths = [ + { + path = "/healthz" + local_path_port = 8080 + listener_port = 21500 + protocol = "http2" + } + ] + } + } + + connect = { + native = false + } + + weights = { + passing = 5 + warning = 1 + } + + token = "233b604b-b92e-48c8-a253-5f11514e4b50" + namespace = "foo" +} +``` + + + + + +```json +{ + "service": { + "id": "redis", + "name": "redis", + "tags": ["primary"], + "address": "", + "meta": { + "meta": "for my service" + }, + "tagged_addresses": { + "lan": { + "address": "192.168.0.55", + "port": 8000 + }, + "wan": { + "address": "198.18.0.23", + "port": 80 + } + }, + "port": 8000, + "socket_path": "/tmp/redis.sock", + "enable_tag_override": false, + "checks": [ + { + "args": ["/usr/local/bin/check_redis.py"], + "interval": "10s" + } + ], + "kind": "connect-proxy", + "proxy_destination": "redis", // Deprecated + "proxy": { + "destination_service_name": "redis", + "destination_service_id": "redis1", + "local_service_address": "127.0.0.1", + "local_service_port": 9090, + "local_service_socket_path": "/tmp/redis.sock", + "mode": "transparent", + "transparent_proxy": { + "outbound_listener_port": 22500 + }, + "config": {}, + "upstreams": [], + "mesh_gateway": { + "mode": "local" + }, + "expose": { + "checks": true, + "paths": [ + { + "path": "/healthz", + "local_path_port": 8080, + "listener_port": 21500, + "protocol": "http2" + } + ] + } + }, + "connect": { + "native": false, + "sidecar_service": {}, + "proxy": { // Deprecated + "command": [], + "config": {} + } + }, + "weights": { + "passing": 5, + "warning": 1 + }, + "token": "233b604b-b92e-48c8-a253-5f11514e4b50", + "namespace": "foo" + } +} +``` + + + + +## Next steps + +Learn about [Consul configuration entries](/consul/docs/fundamentals/config-entry). You can use configuration entries to manage service traffic and configure global defaults for services and their sidecar proxies. \ No newline at end of file diff --git a/website/content/docs/fundamentals/tf.mdx b/website/content/docs/fundamentals/tf.mdx new file mode 100644 index 000000000000..f1dfc35f7ce3 --- /dev/null +++ b/website/content/docs/fundamentals/tf.mdx @@ -0,0 +1,56 @@ +--- +layout: docs +page_title: Consul Terraform provider +description: >- + This topic introduces the Consul Terraform provider. +--- + +# Consul Terraform provider + +Terraform users can manage Consul with [the official Consul provider](https://registry.terraform.io/providers/hashicorp/consul/latest/docs). You can use the Terraform Consul provider to configure your Consul cluster's ACLs, configuration entries, intentions, and more. + +## Overview + +The majority of the workflows described in this documentation use the HTTP API, CLI, and web UI. Consul supports several methods for automated operations, from configuring virtual machines with shell scripts that run when an instance is deployed, to updating application deployments dynamically with the KV store and API blocking queries. + +[The Terraform Consul provider](https://registry.terraform.io/providers/hashicorp/consul/latest/docs) lets you configure and manage your Consul cluster alongside the infrastructure-as-code it runs on. + +Consul Enterprise users can [install Consul-Terraform-Sync (CTS)](/consul/docs/automate/infrastructure/install) for additional network infrastructure automations between Consul and Terraform. For example, you can use CTS so that failed health checks in the Consul catalog trigger infrastructure redeployments in Terraform. For more information, refer to [Network infrastructure automation](/consul/docs/automate/infrastructure) and the [CTS version compatibility reference](/consul/docs/reference/cts/compatibility). + +## Example usage + +In the following example, Terraform uses the Consul provider to connect to datacenter `dc1` at the HTTP API address `172.18.0.3:8500`. The configuration retrieves the `ami` value from Consul's KV store. Terraform uses this value as the machine image when it deploys the VM.
+ +```hcl +# Configure the Consul provider +provider "consul" { + address = "172.18.0.3:8500" + datacenter = "dc1" +} + +# Access a key in Consul +data "consul_keys" "app" { + key { + name = "ami" + path = "service/app/launch_ami" + default = "ami-1234" + } +} + +# Use our variable from Consul +resource "aws_instance" "app" { + ami = data.consul_keys.app.var.ami +} +``` + +This Terraform configuration gives you the ability to define immutable infrastructure in your code and then manage dynamic updates with the Consul KV store. + +## Next steps + +You are at the end of the Consul fundamentals documentation sequence. + +To keep learning about Consul's operations and service networking features, continue with the [Get started on VMs tutorial collection](/consul/tutorials/get-started-vms). These tutorials include a code repository with working examples of the Consul Terraform provider on AWS and Azure. You also have the option to complete the tutorials with [an interactive hosted lab environment](/consul/tutorials/get-started-vms/virtual-machine-gs-deploy?variants=consul-workflow%3Alab) instead. + +If you use Kubernetes as your preferred runtime, you may prefer to proceed directly to the [Get started on Kubernetes tutorial collection](/consul/tutorials/get-started-kubernetes). These tutorials use a local Kubernetes cluster. + +Advanced users may wish to continue with the [Migrate a monolith tutorial in the Nomad documentation](/nomad/tutorials/migrate-monolith/monolith-migration-overview). This tutorial demonstrates several Consul features and use-cases for application developers and platform engineers. \ No newline at end of file diff --git a/website/content/docs/glossary.mdx b/website/content/docs/glossary.mdx new file mode 100644 index 000000000000..c35070f0cce1 --- /dev/null +++ b/website/content/docs/glossary.mdx @@ -0,0 +1,138 @@ +--- +layout: docs +page_title: Glossary +description: >- + The glossary is a list of technical terms with a specific meaning in Consul. Use the glossary to understand Consul concepts and study for the certification exam. +--- + +# Glossary + +This page defines terms used in Consul and provides links to its supporting documentation. 
+ +## Access logs + +@include 'text/descriptions/access-log.mdx' + +## Agent + +@include 'text/descriptions/agent.mdx' + +## Admin partitions + +@include 'text/descriptions/admin-partition.mdx' + +## API gateways + +@include 'text/descriptions/api-gateway.mdx' + +## Application load balancing + +@include 'text/descriptions/load-balancer.mdx' + +## Blocking queries + +@include 'text/descriptions/blocking-query.mdx' + +## Certificate authority + +@include 'text/descriptions/certificate-authority.mdx' + +## Clusters + +@include 'text/descriptions/cluster.mdx' + +## Cluster peering + +@include 'text/descriptions/cluster-peering.mdx' + +## Consul DNS + +@include 'text/descriptions/consul-dns.mdx' + +## Consul template + +@include 'text/descriptions/consul-template.mdx' + +## Datacenters + +@include 'text/descriptions/datacenter.mdx' + +## Distributed tracing + +@include 'text/descriptions/distributed-tracing.mdx' + +## Key/value store + +@include 'text/descriptions/kv/store.mdx' + +## Ingress gateways + +@include 'text/descriptions/ingress-gateway.mdx' + +## Mesh gateways + +@include 'text/descriptions/mesh-gateway.mdx' + +## Namespaces + +@include 'text/descriptions/namespace.mdx' + +## Network areas + +@include 'text/descriptions/network-area.mdx' + +## Network infrastructure automation + +@include 'text/descriptions/network-infrastructure-automation.mdx' + +## Network segments + +@include 'text/descriptions/network-segment.mdx' + +## Prepared queries + +@include 'text/descriptions/prepared-query.mdx' + +## Sameness groups + +@include 'text/descriptions/sameness-group.mdx' + +## Service definitions + +@include 'text/descriptions/service-definition.mdx' + +## Service discovery + +@include 'text/descriptions/service-discovery.mdx' + +## Service intentions + +@include 'text/descriptions/service-intention.mdx' + +## Service mesh + +@include 'text/descriptions/service-mesh.mdx' + +## Service mesh telemetry metrics + +@include 'text/descriptions/telemetry.mdx' + +## Sessions + +@include 'text/descriptions/kv/session.mdx' + +## Static lookups + +@include 'text/descriptions/static-query.mdx' + +## Terminating gateways + +@include 'text/descriptions/terminating-gateway.mdx' + +## WAN federation + +@include 'text/descriptions/wan-federation.mdx' + +## Watches + +@include 'text/descriptions/kv/watch.mdx' diff --git a/website/content/docs/guides/index.mdx b/website/content/docs/guides/index.mdx deleted file mode 100644 index 24f96d794a01..000000000000 --- a/website/content/docs/guides/index.mdx +++ /dev/null @@ -1,38 +0,0 @@ ---- -layout: docs -page_title: Guides -description: |- - This section provides various guides for common actions. Due to the nature of - Consul, some of these procedures can be complex, so our goal is to provide - guidance to do them safely. ---- - -# Consul Guides - -~> The Consul guides are now Consul [tutorials](/consul/tutorials). - -[Guides](/consul/tutorials) are step by step command-line -walkthroughs that demonstrate how to perform common operations using Consul, and -complement the feature-focused Consul documentation. - -Guide content begins with getting-started tracks to help new users learn the -basics of Consul, and continues through production-playbook tracks that cover -topics like Day 1 and Day 2 operations, production considerations, and -recommendations for securing your Consul cluster. - -You can work through the guides sequentially using the tracks, or just refer to -the material that is most relevant to you. 
- -Tracks include: - -- Getting Started -- Getting Started with Kubernetes -- Setup a Secure Development Environment -- Day 1: Deploying Your First Datacenter -- Day 1: Security and Network Operations -- Day 1: Kubernetes Production Deployment -- Maintenance and Monitoring Operations -- Service Discovery and Consul DNS -- Service Segmentation and Consul Service Mesh -- Service Configuration and Consul KV -- Cloud and Load Balancer Integrations diff --git a/website/content/docs/index.mdx b/website/content/docs/index.mdx index 8477ac18ab53..6f021e0c17be 100644 --- a/website/content/docs/index.mdx +++ b/website/content/docs/index.mdx @@ -5,11 +5,6 @@ description: >- Consul documentation provides reference material for all features and options available in Consul. --- -# Consul Documentation +# Consul documentation -The Consul documentation provides reference material for all features and options available in Consul. -Click the following links to access documentation and tutorials for common tasks: - -- [Install Consul](/consul/docs/install) -- [Configuration options](/consul/docs/agent/config) -- [Step-by-step tutorials](/consul/tutorials) +This is the top-level Consul Docs page. It does not appear on developer.hashicorp.com, and was left in place intentionally. \ No newline at end of file diff --git a/website/content/docs/install/bootstrapping.mdx b/website/content/docs/install/bootstrapping.mdx deleted file mode 100644 index a9918402045d..000000000000 --- a/website/content/docs/install/bootstrapping.mdx +++ /dev/null @@ -1,102 +0,0 @@ ---- -layout: docs -page_title: Bootstrap a Datacenter -description: >- - Bootstrapping a datacenter is the initial deployment process in Consul that starts server agents and joins them together. Learn how to automatically or manually join servers in a cluster. ---- - -# Bootstrap a Datacenter - -An agent can run in either client or server mode. Server nodes are responsible for running the -[consensus protocol](/consul/docs/architecture/consensus) and storing the cluster state. -The client nodes are mostly stateless and rely heavily on the server nodes. - -Before a Consul cluster can begin to service requests, -a server node must be elected leader. Bootstrapping is the process -of joining these initial server nodes into a cluster. Read the -[architecture documentation](/consul/docs/architecture) to learn more about -the internals of Consul. - -It is recommended to have three or five total servers per datacenter. A single server deployment is _highly_ discouraged -as data loss is inevitable in a failure scenario. Please refer to the -[deployment table](/consul/docs/architecture/consensus#deployment-table) for more detail. - -~> **Note**: In versions of Consul prior to 0.4, bootstrapping was a manual process. For details on using the `-bootstrap` flag directly, see the -[manual bootstrapping documentation](/consul/docs/install/bootstrapping#manually-join-the-servers). -Manual bootstrapping with `-bootstrap` is not recommended in -newer versions of Consul (0.5 and newer) as it is more error-prone. -Instead you should use automatic bootstrapping -with [`-bootstrap-expect`](/consul/docs/agent/config/cli-flags#_bootstrap_expect). - -## Bootstrapping the Servers - -The recommended way to bootstrap the servers is to use the [`-bootstrap-expect`](/consul/docs/agent/config/cli-flags#_bootstrap_expect) -configuration option. This option informs Consul of the expected number of -server nodes and automatically bootstraps when that many servers are available. 
To prevent -inconsistencies and split-brain (clusters where multiple servers consider -themselves leader) situations, you should either specify the same value for -[`-bootstrap-expect`](/consul/docs/agent/config/cli-flags#_bootstrap_expect) -or specify no value at all on all the servers. Only servers that specify a value will attempt to bootstrap the cluster. - -Suppose we are starting a three server cluster. We can start `Node A`, `Node B`, and `Node C` with each -providing the `-bootstrap-expect 3` flag. Once the nodes are started, you should see a warning message in the service output. - -```log -[WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election. -``` - -The warning indicates that the nodes are expecting 2 peers but none are known yet. Below you will learn how to connect the servers so that one can be -elected leader. - -## Creating the Cluster - -You can trigger leader election by joining the servers together, to create a cluster. You can either configure the nodes to join automatically or manually. - -### Automatically Join the Servers - -There are two options for joining the servers. Choose the method which best suits your environment and specific use case. - -- Specify a list of servers with [-retry-join](/consul/docs/agent/config/cli-flags#_retry_join) option. -- Use automatic joining by tag for supported cloud environments with the [-retry-join](/consul/docs/agent/config/cli-flags#_retry_join) option. - -All three methods can be set in the agent configuration file or -the command line flag. - -### Manually Join the Servers - -To manually create a cluster, you should connect to one of the servers -and run the `consul join` command. - -```shell-session -$ consul join -Successfully joined cluster by contacting 3 nodes. -``` - -Since a join operation is symmetric, it does not matter which node initiates it. Once the join is successful, one of the nodes will output something like: - -```log -[INFO] consul: adding server foo (Addr: 127.0.0.2:8300) (DC: dc1) -[INFO] consul: adding server bar (Addr: 127.0.0.1:8300) (DC: dc1) -[INFO] consul: Attempting bootstrap with nodes: [127.0.0.3:8300 127.0.0.2:8300 127.0.0.1:8300] - ... -[INFO] consul: cluster leadership acquired -``` - -### Verifying the Cluster and Connect the Clients - -As a sanity check, the [`consul info`](/consul/commands/info) command -is a useful tool. It can be used to verify the `raft.num_peers` -and to view the latest log index under `raft.last_log_index`. When -running [`consul info`](/consul/commands/info) on the followers, you -should see `raft.last_log_index` converge to the same value once the -leader begins replication. That value represents the last log entry that -has been stored on disk. - -Now that the servers are all started and replicating to each other, you can -join the clients with the same join method you used for the servers. -Clients are much easier as they can join against any existing node. All nodes participate in a gossip -protocol to perform basic discovery, so once joined to any member of the -cluster, new clients will automatically find the servers and register -themselves. - --> **Note:** It is not strictly necessary to start the server nodes before the clients; however, most operations will fail until the servers are available. 
diff --git a/website/content/docs/install/glossary.mdx b/website/content/docs/install/glossary.mdx deleted file mode 100644 index c8bee74d6d77..000000000000 --- a/website/content/docs/install/glossary.mdx +++ /dev/null @@ -1,377 +0,0 @@ ---- -layout: docs -page_title: Glossary -description: >- - The glossary is a list of technical terms with a specific meaning in Consul. Use the glossary to understand Consul concepts and study for the certification exam. ---- - -# Consul Vocabulary - -This section collects brief definitions of some of the technical terms used in the documentation for Consul and Consul Enterprise, as well as some terms that come up frequently in conversations throughout the Consul community. - -## Agent - -An agent is the long running daemon on every member of the Consul cluster. -It is started by running `consul agent`. The agent is able to run in either _client_ -or _server_ mode. Since all nodes must be running an agent, it is simpler to refer to -the node as being either a client or server, but there are other instances of the agent. All -agents can run the DNS or HTTP interfaces, and are responsible for running checks and -keeping services in sync. - -## Client - -A client is an agent that forwards all RPCs to a server. The client is relatively -stateless. The only background activity a client performs is taking part in the LAN gossip -pool. This has a minimal resource overhead and consumes only a small amount of network -bandwidth. - -## Server - -A server is an agent with an expanded set of responsibilities including -participating in the Raft quorum, maintaining cluster state, responding to RPC queries, -exchanging WAN gossip with other datacenters, and forwarding queries to leaders or -remote datacenters. - -## Datacenter - -We define a datacenter to be a networking environment that is -private, low latency, and high bandwidth. This excludes communication that would traverse -the public internet, but for our purposes multiple availability zones within a single EC2 -region would be considered part of a single datacenter. - -## Consensus - -When used in our documentation we use consensus to mean agreement upon -the elected leader as well as agreement on the ordering of transactions. Since these -transactions are applied to a -[finite-state machine](https://en.wikipedia.org/wiki/Finite-state_machine), our definition -of consensus implies the consistency of a replicated state machine. Consensus is described -in more detail on [Wikipedia](), -and our implementation is described [here](/consul/docs/architecture/consensus). - -## Gossip - -Consul is built on top of [Serf](https://github.com/hashicorp/serf/) which provides a full -[gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) that is used for multiple purposes. -Serf provides membership, failure detection, and event broadcast. Our use of these -is described more in the [gossip documentation](/consul/docs/architecture/gossip). It is enough to know -that gossip involves random node-to-node communication, primarily over UDP. - -## LAN Gossip - -Refers to the LAN gossip pool which contains nodes that are all -located on the same local area network or datacenter. - -## WAN Gossip - -Refers to the WAN gossip pool which contains only servers. These -servers are primarily located in different datacenters and typically communicate -over the internet or wide area network. - -## RPC - -Remote Procedure Call. This is a request / response mechanism allowing a -client to make a request of a server. 
- -# Consul Glossary -This section collects brief definitions of some of the terms used in the discussions around networking in a cloud-native world. - - -## Access Control List (ACL) -An Access Control List (ACL) is a list of user permissions for a file, folder, or -other object. It defines what users and groups can access the object and what -operations they can perform. - -Consul uses Access Control Lists (ACLs) to secure the UI, API, CLI, service -communications, and agent communications. -Visit [Consul ACL Documentation and Guides](/consul/docs/security/acl) - -## API Gateway -An Application Programming Interface (API) is a common software interface that -allows two applications to communicate. Most modern applications are built using -APIs. An API Gateway is a single point of entry into these modern applications -built using APIs. - -## Application Security -Application Security is the process of making applications secure by detecting -and fixing any threats or information leaks. This can be done during or after -the app development lifecycle; although, it is easier for app teams and security -teams to incorporate security into an app even before the development process -begins. - -## Application Services -Application Services are a group of services, such as application performance -monitoring, load balancing, service discovery, service proxy, security, -autoscaling, etc. needed to deploy, run, and improve applications. - -## Authentication and Authorization (AuthN and AuthZ) -Authentication (AuthN) deals with establishing user identity while Authorization -(AuthZ) allows or denies access to the user based on user identity. - -## Auto Scaling Groups -An Auto Scaling Group is an AWS specific term that represents a collection of -Amazon EC2 instances that are treated as a logical grouping for the purposes of -automatic scaling and management. -Learn more about Auto Scaling Groups -[here](https://docs.aws.amazon.com/autoscaling/ec2/userguide/AutoScalingGroup.html). - -## Autoscaling -Autoscaling is the process of automatically scaling computational resources based -on network traffic requirements. Autoscaling can be done either horizontally or -vertically. Horizontal scaling is done by adding more machines into the pool of -resources whereas vertical scaling means increasing the capacity of an existing -machine. - -## Blue-Green Deployments -Blue-Green Deployment is a deployment method designed to reduce downtime by -running two identical production environments labeled Blue and Green. Blue is -the active while Green is the idle environment. - -## Canary Deployments -Canary deployment is the pattern used for rolling out releases to a subset of -users or servers. The goal is deploy the updates to a subset of users, test it, -and then roll out the changes to everyone. - -## Client-side Load Balancing -Client-side load balancing is a load balancing approach that relies on clients' -decision to call the right servers. As the name indicates, this approach is part -of the client application. Servers can still have their own load balancer -alongside the client-side load balancer. - -## Cloud Native Computing Foundation -The [Cloud Native Computing Foundation (CNCF)](https://github.com/cncf/foundation) -is a Linux Foundation project that was founded in 2015 to help advance -container technology and align the tech industry around its evolution. 
- -HashiCorp joined Cloud Native Computing Foundation to further HashiCorp -product integrations with CNCF projects and to work more closely with the -broader cloud-native community of cloud engineers. Read more -[here](https://www.hashicorp.com/blog/hashicorp-joins-the-cncf/). - -## Custom Resource Definition (CRD) -Custom resources are the extensions of the Kubernetes API. A Custom Resource -Definition (CRD) file allows users to define their own custom resources and -allows the API server to handle the lifecycle. - -## Egress Traffic -Egress traffic is network traffic that begins inside a network and proceeds -through its routers to a destination outside the network. - -## Elastic Provisioning -Elastic Provisioning is the ability to provision computing resources -dynamically to meet user demand. - -## Envoy Proxy -[Envoy Proxy](https://www.envoyproxy.io/) is a modern, high performance, -small footprint edge and service proxy. Originally written and deployed at -[Lyft](https://eng.lyft.com/announcing-envoy-c-l7-proxy-and-communication-bus-92520b6c8191), - Envoy Proxy is now an official project at [Cloud Native Computing Foundation - (CNCF)](https://www.cncf.io/cncf-envoy-project-journey/) - -## Forward Proxy -A forward proxy is used to forward outgoing requests from inside the network -to the Internet, usually through a firewall. The objective is to provide a level -of security and to reduce network traffic. - -## Hybrid Cloud Architecture -A hybrid cloud architecture is an IT architectural approach that mixes -on-premises, private cloud, and public cloud services. A hybrid cloud -environment incorporates workload portability, orchestration, and management -across the environments. - -A private cloud, traditionally on-premises, is referred to an infrastructure -environment managed by the user themselves. - -A public cloud, traditionally off-premises, is referred to an infrastructure -service provided by a third party. - -## Identity-based authorization -Identity-based authorization is a security approach to restrict or allow access -based on the authenticated identity of an individual. - -## Infrastructure as a Service -Infrastructure as a Service, often referred to as IaaS, is a cloud computing -approach where the computing resources are delivered online via APIs. These -APIs communicate with underlying infrastructure like physical computing resources, - location, data partitioning, scaling, security, backup, etc. - -IaaS is one of the four types of cloud services along with SaaS -(Software as a Service), PaaS (Platform as a Service), and Serverless. - -## Infrastructure as Code -Infrastructure as Code (IaC) is the process of developers and operations teams' -ability of provisioning and managing computing resources automatically through -software, instead of using configuration tools. - -## Ingress Controller -In Kubernetes, "ingress" is an object that allows access Kubernetes services -from outside the Kubernetes cluster. An ingress controller is responsible for -ingress, generally with a load balancer or an edge router that can help with -traffic management. - -## Ingress Gateway -An Ingress Gateway is an edge of the mesh load balancer that provides secure and -reliable access from external networks to Kubernetes clusters. - -## Ingress Traffic -Ingress Traffic is the network traffic that originates outside the network and -has a destination inside the network. 
- -## Key-Value Store -A Key-Value Store (or a KV Store) also referred to as a Key-Value Database is -a data model where each key is associated with one and only one value in -a collection. - -## L4 - L7 Services -L4-L7 Services are a set of functions such as load balancing, web application -firewalls, service discovery, and monitoring for network layers within the -Open Systems Interconnection (OSI) model. - -## Layer 7 Observability -Layer 7 Observability is a feature of Consul Service Mesh that enables a -unified workflow for metric collection, distributed tracking, and logging. -It also allows centralized configuration and management for a distributed -data plane. - -## Load Balancer -A load balancer is a network appliance that acts as a [reverse proxy](#reverse-proxy) -and distributes network and application traffic across the servers. - -## Load Balancing -Load Balancing is the process of distributing network and application traffic -across multiple servers. - -## Load Balancing Algorithms -Load balancers follow an algorithm to determine how to route the traffic across -the server farm. Some of the commonly used algorithms are: -1. Round Robin -2. Least Connections -3. Weighted Connections -4. Source IP Hash -5. Least Response Time Method -6. Least Bandwidth Method - -## Multi-cloud -A multi-cloud environment generally uses two or more cloud computing services -from different vendors in a single architecture. This refers to the distribution -of compute resources, storage, and networking aspects across cloud environments. -A multi-cloud environment could be either all private cloud or all public cloud -or a combination of both. - -## Multi-cloud Networking -Multi-cloud Networking provides network configuration and management across -multiple cloud providers via APIs. - -## Mutual Transport Layer Security (mTLS) -Mutual Transport Layer Security, also known as mTLS, is an authentication -mechanism that ensures network traffic security in both directions between -a client and server. - -## Network Middleware Automation -The process of publishing service changes to network middleware such as -load balancers and firewalls and automating network tasks is called Network -Middleware Automation. - -## Network security -Network security is the process of protecting data and network. It consists -of a set of policies and practices that are designed to prevent and monitor -unauthorized access, misuse, modification, or denial of a computer network -and network-accessible resources. - -## Network traffic management -Network Traffic Management is the process of ensuring optimal network operation -by using a set of network monitoring tools. Network traffic management also -focuses on traffic management techniques such as bandwidth monitoring, deep -packet inspection, and application based routing. - -## Network Visualization -Network Visualization is the process of visually displaying networks and -connected entities in a "boxes and lines" kind of a diagram. - -In the context of microservices architecture, visualization can provide a clear -picture of how services are connected to each other, the service-to-service -communication, and resource utilization of each service. - -## Observability -Observability is the process of logging, monitoring, and alerting on the -events of a deployment or an instance. - -## Elastic Scaling -Elastic Scaling is the ability to automatically add or remove compute or -networking resources based on the changes in application traffic patterns. 
- -## Platform as a Service -Platform-as-a-Service (PaaS) is a category of cloud computing that allows -users to develop, run, and manage applications without the complexity of -building and maintaining the infrastructure typically associated with -developing and launching the application. - -## Reverse Proxy -A reverse proxy handles requests coming from outside, to the internal -network. Reverse Proxy provides a level of security that prevents the -external clients from having direct access to data on the corporate servers. -The reverse proxy is usually placed between the web server and the external -traffic. - -## Role-based Access Controls -The act of restricting or provisioning access -to a user based on their specific role in the organization. - -## Server side load balancing -A Server-side Load Balancer sits between the client and the server farm, -accepts incoming traffic, and distributes the traffic across multiple backend -servers using various load balancing methods. - -## Service configuration -A service configuration includes the name, description, and the specific -function of a service. In a microservices application architecture setting, -a service configuration file includes a service definition. - -## Service Catalog -A service catalog is an organized and curated collection of services that -are available for developers to bind to their applications. - -## Service Discovery -Service Discovery is the process of detecting services and devices on a -network. In a microservices context, service discovery is how applications -and microservices locate each other on a network. - -## Service Mesh -Service Mesh is the infrastructure layer that facilitates service-to-service -communication between microservices, often using a sidecar proxy. This -network of microservices make up microservice applications and the -interactions between them. - -## Service Networking -Service networking brings several entities together to deliver a particular -service. Service Networking acts as the brain of an organization's -networking and monitoring operations. - -## Service Proxy -A service proxy is the client-side proxy for a microservice application. -It allows applications to send and receive messages over a proxy server. - -## Service Registration -Service registration is the process of letting clients (of the service) -and routers know about the available instances of the service. -Service instances are registered with a service registry on startup and deregistered at shutdown. - -## Service Registry -Service Registry is a database of service instances and information on -how to send requests to these service instances. - -## Microservice Segmentation -Microservice segmentation, sometimes visual, of microservices is the -segmentation in a microservices application architecture that enables -administrators to view their functions and interactions. - -## Service-to-service communication -Service-to-service communication, sometimes referred to as -inter-service communication, is the ability of a microservice -application instance to communicate with another to collaborate and -handle client requests. - -## Software as a Service -Software as a Service is a licensing and delivery approach to software -delivery where the software is hosted by a provider and licensed -to users on a subscription basis. 
diff --git a/website/content/docs/install/index.mdx b/website/content/docs/install/index.mdx deleted file mode 100644 index a1ac4ea8b4d7..000000000000 --- a/website/content/docs/install/index.mdx +++ /dev/null @@ -1,105 +0,0 @@ ---- -layout: docs -page_title: Install Consul -description: >- - Install Consul to get started with service discovery and service mesh. Follow the installation instructions to download the precompiled binary, or use Go to compile from source. ---- - -# Install Consul - -Installing Consul is simple. There are three approaches to installing Consul: - -1. Using a [precompiled binary](#precompiled-binaries) - -1. Installing [from source](#compiling-from-source) - -1. Installing [on Kubernetes](/consul/docs/k8s/installation/install) - -Downloading a precompiled binary is easiest, and we provide downloads over TLS -along with SHA256 sums to verify the binary. We also distribute a PGP signature -with the SHA256 sums that can be verified. - -The [Get Started on VMs tutorials](/consul/tutorials/get-started-vms?utm_source=docs) provide a quick walkthrough of installing and using Consul on a VM. - -## Precompiled Binaries - -To install the precompiled binary, [download](/consul/downloads) the appropriate -package for your system. Consul is currently packaged as a zip file. We do not -have any near term plans to provide system packages. - -Once the zip is downloaded, unzip it into any directory. The `consul` binary -inside is all that is necessary to run Consul (or `consul.exe` for Windows). -No additional files are required to run Consul. - -Copy the binary to anywhere on your system. If you intend to access it from the -command-line, make sure to place it somewhere on your `PATH`. - -## Compiling from Source - -To compile from source, you will need [Go](https://golang.org) installed and -a copy of [`git`](https://www.git-scm.com/) in your `PATH`. - -1. Clone the Consul repository from GitHub: - - ```shell - $ git clone https://github.com/hashicorp/consul.git - $ cd consul - ``` - -1. Build Consul for your target system. The binary will be placed in `./bin` - (relative to the git checkout). - - - - - - - ```shell-session - $ make dev - ``` - - - - - Specify your target system by setting the following environment variables - before building: - - - `GOOS`: Target operating system. Valid values include: - `linux`, `darwin`, `windows`, `solaris`, `freebsd`. - - `GOARCH`: Target architecture. Valid values include: - `386`, `amd64`, `arm`, `arm64` - - ```shell-session - $ export GOOS=linux GOARCH=amd64 - $ make dev - ``` - - - - -## Verifying the Installation - -To verify Consul is properly installed, run `consul version` on your system. You -should see help output. If you are executing it from the command line, make sure -it is on your PATH or you may get an error about Consul not being found. - -```shell-session -$ consul version -``` - -## Browser Compatibility Considerations - -Consul currently supports all 'evergreen' browsers, as they are generally on -up-to-date versions. 
For more information on supported browsers, please see our -[FAQ](/consul/docs/troubleshoot/faq) diff --git a/website/content/docs/install/manual-bootstrap.mdx b/website/content/docs/install/manual-bootstrap.mdx deleted file mode 100644 index 99cfbf719587..000000000000 --- a/website/content/docs/install/manual-bootstrap.mdx +++ /dev/null @@ -1,100 +0,0 @@ ---- -layout: docs -page_title: Manually Bootstrap a Datacenter -description: >- - Manually bootstrap a datacenter to deploy your Consul servers and join them together for the first time. For Consul v0.4+, we recommend automatic bootstrapping instead. ---- - -# Manually Bootstrapping a Datacenter - -When deploying Consul to a datacenter for the first time, there is an initial -bootstrapping that must be done. As of Consul 0.4, an -[automatic bootstrapping](/consul/docs/install/bootstrapping) is available and is -the recommended approach. However, older versions only support a manual -bootstrap that is documented here. - -Generally, the first nodes that are started are the server nodes. Remember that -an agent can run in both client and server mode. Server nodes are responsible -for running the [consensus protocol](/consul/docs/architecture/consensus), and -storing the cluster state. The client nodes are mostly stateless and rely on the -server nodes, so they can be started easily. - -Manual bootstrapping requires that the first server that is deployed in a new -datacenter provide the [`-bootstrap` configuration option](/consul/docs/agent/config/cli-flags#_bootstrap). -This option allows the server -to assert leadership of the cluster without agreement from any other server. -This is necessary because at this point, there are no other servers running in -the datacenter! Lets call this first server `Node A`. When starting `Node A` -something like the following will be logged: - -```log -2014/02/22 19:23:32 [INFO] consul: cluster leadership acquired -``` - -Once `Node A` is running, we can start the next set of servers. There is a -[deployment table](/consul/docs/architecture/consensus#deployment_table) that covers various -options, but it is recommended to have 3 or 5 total servers per datacenter. A -single server deployment is _**highly**_ discouraged as data loss is inevitable -in a failure scenario. We start the next servers **without** specifying -`-bootstrap`. This is critical, since only one server should ever be running in -bootstrap mode. Once `Node B` and `Node C` are started, you should see a -message to the effect of: - -```log -[WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election. -``` - -This indicates that the node is not in bootstrap mode, and it will not elect -itself as leader. We can now join these machines together. Since a join -operation is symmetric it does not matter which node initiates it. From -`Node B` and `Node C` you can do the following: - -```shell-session -$ consul join -Successfully joined cluster by contacting 1 nodes. -``` - -Alternatively, from `Node A` you can do the following: - -```shell-session -$ consul join -Successfully joined cluster by contacting 2 nodes. -``` - -Once the join is successful, `Node A` should output something like: - -```log -[INFO] raft: Added peer 127.0.0.2:8300, starting replication -.... -[INFO] raft: Added peer 127.0.0.3:8300, starting replication -``` - -As a sanity check, the `consul info` command is a useful tool. It can be used to -verify `raft.num_peers` is now 2, and you can view the latest log index under -`raft.last_log_index`. 
When running `consul info` on the followers, you should -see `raft.last_log_index` converge to the same value as the leader begins -replication. That value represents the last log entry that has been stored on -disk. - -This indicates that `Node B` and `Node C` have been added as peers. At this -point, all three nodes see each other as peers, `Node A` is the leader, and -replication should be working. - -The final step is to remove the `-bootstrap` flag. This is important since we -don't want the node to be able to make unilateral decisions in the case of a -failure of the other two nodes. To do this, we send a `SIGINT` to `Node A` to -allow it to perform a graceful leave. Then we remove the `-bootstrap` flag and -restart the node. The node will need to rejoin the cluster, since the graceful -exit leaves the cluster. Any transactions that took place while `Node A` was -offline will be replicated and the node will catch up. - -Now that the servers are all started and replicating to each other, all the -remaining clients can be joined. Clients are much easier, as they can be started -and perform a `join` against any existing node. All nodes participate in a -gossip protocol to perform basic discovery, so clients will automatically find -the servers and register themselves. - --> If you accidentally start another server with the flag set, do not fret. -Shutdown the node, and remove the `raft/` folder from the data directory. This -will remove the bad state caused by being in `-bootstrap` mode. Then restart the -node and join the cluster normally. diff --git a/website/content/docs/install/performance.mdx b/website/content/docs/install/performance.mdx deleted file mode 100644 index c035cbf84041..000000000000 --- a/website/content/docs/install/performance.mdx +++ /dev/null @@ -1,215 +0,0 @@ ---- -layout: docs -page_title: Server Performance Requirements -description: >- - Consul servers require sufficient compute resources to communicate and process data quickly. Learn about Consul's minimum server requirements and recommendations for different workloads. ---- - -# Server Performance - -Since Consul servers run a [consensus protocol](/consul/docs/architecture/consensus) to -process all write operations and are contacted on nearly all read operations, server -performance is critical for overall throughput and health of a Consul cluster. Servers -are generally I/O bound for writes because the underlying Raft log store performs a sync -to disk every time an entry is appended. Servers are generally CPU bound for reads since -reads work from a fully in-memory data store that is optimized for concurrent access. - -## Minimum Server Requirements ((#minimum)) - -In Consul 0.7, the default server [performance parameters](/consul/docs/agent/config/config-files#performance) -were tuned to allow Consul to run reliably (but relatively slowly) on a server cluster of three -[AWS t2.micro](https://aws.amazon.com/ec2/instance-types/) instances. These thresholds -were determined empirically using a leader instance that was under sufficient read, write, -and network load to cause it to permanently be at zero CPU credits, forcing it to the baseline -performance mode for that instance type. Real-world workloads typically have more bursts of -activity, so this is a conservative and pessimistic tuning strategy. 
- -This default was chosen based on feedback from users, many of whom wanted a low cost way -to run small production or development clusters with low cost compute resources, at the -expense of some performance in leader failure detection and leader election times. - -The default performance configuration is equivalent to this: - -```json -{ - "performance": { - "raft_multiplier": 5 - } -} -``` - -## Production Server Requirements ((#production)) - -When running Consul 0.7 and later in production, it is recommended to configure the server -[performance parameters](/consul/docs/agent/config/config-files#performance) back to Consul's original -high-performance settings. This will let Consul servers detect a failed leader and complete -leader elections much more quickly than the default configuration which extends key Raft -timeouts by a factor of 5, so it can be quite slow during these events. - -The high performance configuration is simple and looks like this: - -```json -{ - "performance": { - "raft_multiplier": 1 - } -} -``` - -This value must take into account the network latency between the servers and the read/write load on the servers. - -The value of `raft_multiplier` is a scaling factor and directly affects the following parameters: - -| Param | Value | | -| ------------------ | -----: | ------: | -| HeartbeatTimeout | 1000ms | default | -| ElectionTimeout | 1000ms | default | -| LeaderLeaseTimeout | 500ms | default | - -By default, Consul uses a scaling factor of `5` (i.e. `raft_multiplier: 5`), which results in the following values: - -| Param | Value | Calculation | -| ------------------ | -----: | ----------: | -| HeartbeatTimeout | 5000ms | 5 x 1000ms | -| ElectionTimeout | 5000ms | 5 x 1000ms | -| LeaderLeaseTimeout | 2500ms | 5 x 500ms | - -~> **NOTE** Wide networks with more latency will perform better with larger values of `raft_multiplier`. - -The trade off is between leader stability and time to recover from an actual -leader failure. A short multiplier minimizes failure detection and election time -but may be triggered frequently in high latency situations. This can cause -constant leadership churn and associated unavailability. A high multiplier -reduces the chances that spurious failures will cause leadership churn but it -does this at the expense of taking longer to detect real failures and thus takes -longer to restore cluster availability. - -Leadership instability can also be caused by under-provisioned CPU resources and -is more likely in environments where CPU cycles are shared with other workloads. -In order for a server to remain the leader, it must send frequent heartbeat -messages to all other servers every few hundred milliseconds. If some number of -these are missing or late due to the leader not having sufficient CPU to send -them on time, the other servers will detect it as failed and hold a new -election. - -It's best to benchmark with a realistic workload when choosing a production server for Consul. -Here are some general recommendations: - -- Consul will make use of multiple cores, and at least 2 cores are recommended. - -- Spurious leader elections can be caused by networking - issues between the servers or insufficient CPU resources. 
Users in cloud environments - often bump their servers up to the next instance class with improved networking - and CPU until leader elections stabilize, and in Consul 0.7 or later the [performance - parameters](/consul/docs/agent/config/config-files#performance) configuration now gives you tools - to trade off performance instead of upsizing servers. You can use the [`consul.raft.leader.lastContact` - telemetry](/consul/docs/agent/telemetry#leadership-changes) to observe how the Raft timing is - performing and guide the decision to de-tune Raft performance or add more powerful - servers. - -- For DNS-heavy workloads, configuring all Consul agents in a cluster with the - [`allow_stale`](/consul/docs/agent/config/config-files#allow_stale) configuration option will allow reads to - scale across all Consul servers, not just the leader. Consul 0.7 and later enables stale reads - for DNS by default. See [Stale Reads](/consul/docs/services/discovery/dns-cache#stale-reads) in the - [DNS Caching](/consul/docs/services/discovery/dns-cache) guide for more details. It's also good to set - reasonable, non-zero [DNS TTL values](/consul/docs/services/discovery/dns-cache#ttl-values) if your clients will - respect them. - -- In other applications that perform high volumes of reads against Consul, consider using the - [stale consistency mode](/consul/api-docs/features/consistency#stale) available to allow reads to scale - across all the servers and not just be forwarded to the leader. - -- In Consul 0.9.3 and later, a new [`limits`](/consul/docs/agent/config/config-files#limits) configuration is - available on Consul clients to limit the RPC request rate they are allowed to make against the - Consul servers. After hitting the limit, requests will start to return rate limit errors until - time has passed and more requests are allowed. Configuring this across the cluster can help with - enforcing a max desired application load level on the servers, and can help mitigate abusive - applications. - -## Memory Requirements - -Consul server agents operate on a working set of data comprised of key/value -entries, the service catalog, prepared queries, access control lists, and -sessions in memory. These data are persisted through Raft to disk in the form -of a snapshot and log of changes since the previous snapshot for durability. - -When planning for memory requirements, you should typically allocate -enough RAM for your server agents to contain between 2 to 4 times the working -set size. You can determine the working set size by noting the value of -`consul.runtime.alloc_bytes` in the [Telemetry data](/consul/docs/agent/telemetry). - -> NOTE: Consul is not designed to serve as a general purpose database, and you -> should keep this in mind when choosing what data are populated to the -> key/value store. - -## Read/Write Tuning - -Consul is write limited by disk I/O and read limited by CPU. Memory requirements will be dependent on the total size of KV pairs stored and should be sized according to that data (as should the hard drive storage). The limit on a key's value size is `512KB`. - --> Consul is write limited by disk I/O and read limited by CPU. 
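As a starting point for memory sizing, you can read the current working set (the `consul.runtime.alloc_bytes` value noted above) directly from the agent's metrics endpoint. This is only a sketch against a local agent; the `?pretty` parameter and the `grep` filter are just for readability:

```shell-session
$ curl -s 'http://127.0.0.1:8500/v1/agent/metrics?pretty' | grep -A 2 'runtime.alloc_bytes'
```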
- -For **write-heavy** workloads, the total RAM available for overhead must approximately be equal to - -``` -RAM NEEDED = number of keys * average key size * 2-3x -``` - -Since writes must be synced to disk (persistent storage) on a quorum of servers before they are committed, deploying a disk with high write throughput (or an SSD) will enhance performance on the write side. ([Documentation](/consul/docs/agent/config/cli-flags#_data_dir)) - -For a **read-heavy** workload, configure all Consul server agents with the `allow_stale` DNS option, or query the API with the `stale` [consistency mode](/consul/api-docs/features/consistency). By default, all queries made to the server are RPC forwarded to and serviced by the leader. By enabling stale reads, any server will respond to any query, thereby reducing overhead on the leader. Typically, the stale response is `100ms` or less from consistent mode but it drastically improves performance and reduces latency under high load. - -If the leader server is out of memory or the disk is full, the server eventually stops responding, loses its election and cannot move past its last commit time. However, by configuring `max_stale` and setting it to a large value, Consul will continue to respond to queries during such outage scenarios. ([max_stale documentation](/consul/docs/agent/config/config-files#max_stale)). - -It should be noted that `stale` is not appropriate for coordination where strong consistency is important (i.e. locking or application leader election). For critical cases, the optional `consistent` API query mode is required for true linearizability; the trade off is that this turns a read into a full quorum write so requires more resources and takes longer. - -**Read-heavy** clusters may take advantage of the [enhanced reading](/consul/docs/enterprise/read-scale) feature (Enterprise) for better scalability. This feature allows additional servers to be introduced as non-voters. Being a non-voter, the server will still participate in data replication, but it will not block the leader from committing log entries. - -Consul's agents use network sockets for communicating with the other nodes (gossip) and with the server agent. In addition, file descriptors are also opened for watch handlers, health checks, and log files. For a **write heavy** cluster, the `ulimit` size must be increased from the default value (`1024`) to prevent the leader from running out of file descriptors. - -To prevent any CPU spikes from a misconfigured client, RPC requests to the server should be [rate limited](/consul/docs/agent/config/config-files#limits) - -~> **NOTE** Rate limiting is configured on the client agent only. - -In addition, two [performance indicators](/consul/docs/agent/telemetry) — `consul.runtime.alloc_bytes` and `consul.runtime.heap_objects` — can help diagnose if the current sizing is not adequately meeting the load. - -## Service Mesh Certificate Signing CPU Limits - -If you enable [service mesh](/consul/docs/connect), the leader server will need -to perform public key signing operations for every service instance in the -cluster. Typically these operations are fast on modern hardware, however when -the CA is changed or its key rotated, the leader will face an influx of -requests for new certificates for every service instance running. 
- -While the client agents distribute these randomly over 30 seconds to avoid an -immediate thundering herd, they don't have enough information to tune that -period based on the number of certificates in use in the cluster so picking -longer smearing results in artificially slow rotations for small clusters. - -Smearing requests over 30s is sufficient to bring RPC load to a reasonable level -in all but the very largest clusters, but the extra CPU load from cryptographic -operations could impact the server's normal work. To limit that, Consul since -1.4.1 exposes two ways to limit the impact Certificate signing has on the leader -[`csr_max_per_second`](/consul/docs/agent/config/config-files#ca_csr_max_per_second) and -[`csr_max_concurrent`](/consul/docs/agent/config/config-files#ca_csr_max_concurrent). - -By default we set a limit of 50 per second which is reasonable on modest -hardware but may be too low and impact rotation times if more than 1500 service -instances are using service mesh in the cluster. `csr_max_per_second` is likely best -if you have fewer than four cores available since a whole core being used by -signing is likely to impact the server stability if it's all or a large portion -of the cores available. The downside is that you need to capacity plan: how many -service instances will need service mesh certificates? What CSR rate can your server -tolerate without impacting stability? How fast do you want CA rotations to -process? - -For larger production deployments, we generally recommend multiple CPU cores for -servers to handle the normal workload. With four or more cores available, it's -simpler to limit signing CPU impact with `csr_max_concurrent` rather than tune -the rate limit. This effectively sets how many CPU cores can be monopolized by -certificate signing work (although it doesn't pin that work to specific cores). -In this case `csr_max_per_second` should be disabled (set to `0`). - -For example if you have an 8 core server, setting `csr_max_concurrent` to `1` -would allow you to process CSRs as fast as a single core can (which is likely -sufficient for the very large clusters), without consuming all available -CPU cores and impacting normal server work or stability. diff --git a/website/content/docs/integrate/download-tools.mdx b/website/content/docs/integrate/consul-tools.mdx similarity index 100% rename from website/content/docs/integrate/download-tools.mdx rename to website/content/docs/integrate/consul-tools.mdx diff --git a/website/content/docs/integrate/hcdiag.mdx b/website/content/docs/integrate/hcdiag.mdx new file mode 100644 index 000000000000..ad8661442c33 --- /dev/null +++ b/website/content/docs/integrate/hcdiag.mdx @@ -0,0 +1,395 @@ +--- +layout: docs +page_title: Troubleshoot Consul datacenters with the hcdiag tool +description: >- + Troubleshoot your Consul datacenter with the HashiCorp Diagnostics (hcdiag) tool. Generate operational data for your Consul server and use it for debug and performance monitoring. +--- + +# Troubleshoot Consul datacenters with the hcdiag tool + +This pages describes the process to use the HashiCorp Diagnostics tool, `hcdiag`, to troubleshoot Consul operations. + +## Overview + +HashiCorp Diagnostics — [hcdiag](https://github.com/hashicorp/hcdiag/) — is a troubleshooting data-gathering tool that you can use to collect and archive important data from Consul, Nomad, Vault, and TFE server environments. The information gathered by `hcdiag` is well-suited for sharing with teams during incident response and troubleshooting. 
+ +In this tutorial, you will: + +- Run a Consul server in "dev" mode, inside an Ubuntu Docker container +- Install hcdiag from the official Hashicorp Ubuntu package repository +- Execute basic `hcdiag` commands against this Consul service +- Explore the contents of files created by the hcdiag tool +- Learn about additional hcdiag features and how to use a custom configuration file with `hcdiag` + +## Prerequisites + +You will need a local install of Docker running on your machine for this tutorial. You can find the instructions for installing Docker [here](https://docs.docker.com/install/). + +## Set up the environment + +Run an `ubuntu` Docker container in detached mode with the `-d` flag. The `--rm` flag instructs Docker to delete the container once it has been stopped and the `-t` flag allocates a pseudo-tty which keeps the container running until it is stopped manually. + +```shell-session +$ docker run -d --rm -t --name consul ubuntu:22.04 +``` + +Open an interactive shell session in the container with the `-it` flags. + +```shell-session +$ docker exec -it consul /bin/bash +``` + + + + Your terminal prompt will now appear differently to show that you are in a shell in the Ubuntu container - for example, it may look something like `root@a931b3c8ca00:/#`. The rest of the commands in the tutorial are to be run in this Ubuntu container shell. + + + +Update `apt-get` and install the necessary dependencies. + +```shell-session +$ apt-get update && apt-get install -y wget gpg +``` + +Create a working directory and change into it. + +```shell-session +$ mkdir /tmp/consul-hcdiag && cd /tmp/consul-hcdiag +``` + +### Install and start Consul + +Add the HashiCorp repository. + +```shell-session +$ wget -O- https://apt.releases.hashicorp.com/gpg | gpg --dearmor > /usr/share/keyrings/hashicorp-archive-keyring.gpg && echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com jammy main" | tee /etc/apt/sources.list.d/hashicorp.list +``` + +Install the `consul` package. + +```shell-session +$ apt-get update && apt-get install -y consul +``` + +Start Consul as a background process, with all of its output redirected to a `consul.log` file in your current directory. +```shell-session +$ consul agent -dev >> consul.log 2>&1 & +``` + +### Access the Consul client + +Set the required environment variable that points `hcdiag` to your Consul service — in this case, since the dev-mode Consul agent is running, it is `http://127.0.0.1:8500`. + +```shell-session +$ export CONSUL_HTTP_ADDR=http://127.0.0.1:8500 +``` + +Run the `consul members` command to confirm Consul is running. + +```shell-session +$ consul members +Node Address Status Type Build Protocol DC Partition Segment +c6c722a039fa 127.0.0.1:8301 alive server 1.13.1 2 dc1 default +``` + +## Install and run the `hcdiag` tool + +Install the latest `hcdiag` release from the HashiCorp repository. + +```shell-session +$ apt-get install -y hcdiag +``` + +This is a minimal environment, so ensure you set the `SHELL` environment variable. + +```shell-session +$ export SHELL=/bin/sh +``` + +Run `hcdiag` with the `consul` flag. This will gather your available environment and Consul product information. + +```shell-session +$ hcdiag -consul +2022-08-26T18:48:30.272Z [INFO] hcdiag: Ensuring destination directory exists: directory=. 
+2022-08-26T18:48:30.273Z [INFO] hcdiag: Checking product availability +2022-08-26T18:48:30.424Z [DEBUG] agent.http: Request finished: method=GET url=/v1/agent/self from=127.0.0.1:57292 latency=3.334206ms +2022-08-26T18:48:30.432Z [DEBUG] agent.http: Request finished: method=GET url=/v1/agent/self from=127.0.0.1:57294 latency=756.452µs +2022-08-26T18:48:30.434Z [INFO] hcdiag: Gathering diagnostics +2022-08-26T18:48:30.434Z [INFO] hcdiag.product: Running operations for: product=host +2022-08-26T18:48:30.434Z [INFO] hcdiag.product: running operation: product=host runner="uname -v" +2022-08-26T18:48:30.434Z [INFO] hcdiag.product: Running operations for: product=consul +2022-08-26T18:48:30.434Z [INFO] hcdiag.product: running operation: product=consul runner="consul version" +2022-08-26T18:48:30.436Z [INFO] hcdiag.product: running operation: product=host runner=disks +2022-08-26T18:48:30.437Z [INFO] hcdiag.product: running operation: product=host runner=info +2022-08-26T18:48:30.439Z [INFO] hcdiag.product: running operation: product=host runner=memory +2022-08-26T18:48:30.439Z [INFO] hcdiag.product: running operation: product=host runner=process +2022-08-26T18:48:30.439Z [INFO] hcdiag.product: running operation: product=host runner=network +2022-08-26T18:48:30.440Z [INFO] hcdiag.product: running operation: product=host runner=/etc/hosts +2022-08-26T18:48:30.442Z [INFO] hcdiag.product: running operation: product=host runner=iptables +2022-08-26T18:48:30.443Z [WARN] hcdiag.product: result: runner=iptables status=fail result="map[iptables -L -n -v:]" error="exec error, command=iptables -L -n -v, format=string, error=exec: "iptables": executable file not found in $PATH" +2022-08-26T18:48:30.443Z [INFO] hcdiag.product: running operation: product=host runner="/proc/ files" +2022-08-26T18:48:30.454Z [INFO] hcdiag.product: running operation: product=host runner=/etc/fstab +2022-08-26T18:48:30.457Z [INFO] hcdiag: Product done: product=host statuses="map[fail:1 success:9]" +2022-08-26T18:48:30.513Z [INFO] hcdiag.product: running operation: product=consul runner="consul debug -output=/tmp/consul-hcdiag/hcdiag-2022-08-26T184830Z3905111513/ConsulDebug -duration=10s -interval=5s" +2022-08-26T18:48:30.589Z [DEBUG] agent.http: Request finished: method=GET url=/v1/agent/self from=127.0.0.1:57296 latency=756.369µs +2022-08-26T18:48:30.593Z [DEBUG] agent.http: Request finished: method=GET url=/v1/agent/host from=127.0.0.1:57296 latency=2.57301ms +2022-08-26T18:48:30.597Z [DEBUG] agent.http: Request finished: method=GET url=/v1/agent/self from=127.0.0.1:57296 latency=705.57µs +2022-08-26T18:48:30.598Z [DEBUG] agent.http: Request finished: method=GET url=/v1/agent/members?wan=1 from=127.0.0.1:57296 latency=43.877µs +2022-08-26T18:48:42.600Z [DEBUG] agent.http: Request finished: method=GET url=/v1/agent/metrics/stream from=127.0.0.1:57296 latency=12.0006132s +2022-08-26T18:48:42.619Z [INFO] hcdiag.product: running operation: product=consul runner="GET /v1/agent/self" +2022-08-26T18:48:42.620Z [DEBUG] agent.http: Request finished: method=GET url=/v1/agent/monitor?loglevel=DEBUG from=127.0.0.1:57306 latency=12.018503143s +2022-08-26T18:48:42.621Z [DEBUG] agent.http: Request finished: method=GET url=/v1/agent/self from=127.0.0.1:57294 latency=956.549µs +2022-08-26T18:48:42.623Z [INFO] hcdiag.product: running operation: product=consul runner="GET /v1/agent/metrics" +2022-08-26T18:48:42.623Z [DEBUG] agent.http: Request finished: method=GET url=/v1/agent/metrics from=127.0.0.1:57294 latency=415.444µs 
+2022-08-26T18:48:42.625Z [INFO] hcdiag.product: running operation: product=consul runner="GET /v1/catalog/datacenters" +2022-08-26T18:48:42.625Z [DEBUG] agent.http: Request finished: method=GET url=/v1/catalog/datacenters from=127.0.0.1:57294 latency=118.765µs +2022-08-26T18:48:42.625Z [INFO] hcdiag.product: running operation: product=consul runner="GET /v1/catalog/services" +2022-08-26T18:48:42.626Z [DEBUG] agent.http: Request finished: method=GET url=/v1/catalog/services from=127.0.0.1:57294 latency=88.196µs +2022-08-26T18:48:42.626Z [INFO] hcdiag.product: running operation: product=consul runner="GET /v1/namespace" +2022-08-26T18:48:42.627Z [INFO] hcdiag.product: running operation: product=consul runner="GET /v1/status/leader" +2022-08-26T18:48:42.627Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:57294 latency=77.818µs +2022-08-26T18:48:42.628Z [INFO] hcdiag.product: running operation: product=consul runner="GET /v1/status/peers" +2022-08-26T18:48:42.628Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/peers from=127.0.0.1:57294 latency=36.511µs +2022-08-26T18:48:42.628Z [INFO] hcdiag.product: running operation: product=consul runner="log/docker consul" +2022-08-26T18:48:42.630Z [INFO] hcdiag.product: result: runner="log/docker consul" status=skip + result= + | /bin/sh: 1: docker: not found + error="docker not found, container=consul, error=exec error, command=docker version, error=exit status 127" +2022-08-26T18:48:42.630Z [INFO] hcdiag.product: running operation: product=consul runner=journald +2022-08-26T18:48:42.631Z [INFO] hcdiag.product: result: runner=journald status=skip + result= + | /bin/sh: 1: journalctl: not found + error="journald not found on this system, service=consul, error=exec error, command=journalctl --version, error=exit status 127" +2022-08-26T18:48:42.631Z [INFO] hcdiag: Product done: product=consul statuses="map[skip:2 success:9]" +2022-08-26T18:48:42.631Z [INFO] hcdiag: Recording manifest +2022-08-26T18:48:42.634Z [INFO] hcdiag: Created Results.json file: dest=/tmp/consul-hcdiag/hcdiag-2022-08-26T184830Z3905111513/Results.json +2022-08-26T18:48:42.636Z [INFO] hcdiag: Created Manifest.json file: dest=/tmp/consul-hcdiag/hcdiag-2022-08-26T184830Z3905111513/Manifest.json +2022-08-26T18:48:42.642Z [INFO] hcdiag: Compressed and archived output file: dest=hcdiag-2022-08-26T184830Z.tar.gz +2022-08-26T18:48:42.642Z [INFO] hcdiag: Writing summary of products and ops to standard output +product success fail unknown total +consul 9 0 2 11 +host 9 1 0 10 +``` + + + + This is an extremely minimal environment which doesn't provide some of the system services that `hcdiag` uses to gather information — seeing a few errors, like in the output above, is normal. + + + +You can also invoke `hcdiag` without options to gather all available environment and product information. To learn about all executable options, run `hcdiag -h`. + +### Examine the results + +List the directory for `.tar.gzip` archive files to discover the file that `hcdiag` created. + +```shell-session +$ ls -l *.gz +-rw-r--r-- 1 root root 28025 Aug 18 12:24 hcdiag-2022-08-18T122356Z.tar.gz +``` + + + + The extracted directory uses a timestamp as part of the filename. This means any references to it used in this tutorial will be different than what you will see on your local machine. + + + +Unpack the archive to further examine its contents. 
+ +```shell-session +$ tar zxvf hcdiag-2022-08-18T122356Z.tar.gz +hcdiag-2022-08-18T122356Z/ConsulDebug/agent.json +hcdiag-2022-08-18T122356Z/ConsulDebug/consul.log +hcdiag-2022-08-18T122356Z/ConsulDebug/host.json +hcdiag-2022-08-18T122356Z/ConsulDebug/index.json +hcdiag-2022-08-18T122356Z/ConsulDebug/members.json +hcdiag-2022-08-18T122356Z/ConsulDebug/metrics.json +hcdiag-2022-08-18T122356Z/ConsulDebug.tar.gz +hcdiag-2022-08-18T122356Z/Manifest.json +hcdiag-2022-08-18T122356Z/Results.json +hcdiag-2022-08-18T122356Z/docker-consul.log +hcdiag-2022-08-18T122356Z/journald-consul.log +``` + +The archive extracts several files and directories: + +- `Manifest.json` contains information describing the hcdiag run, including configuration options used, run duration, and a count of any errors encountered. +- `Results.json` contains information about the environment and the output from an invocation of `consul debug`. +- `ConsulDebug/` is a directory that contains the output from invoking `consul debug`. + +Inspect the bundle to ensure it contains only information that is appropriate to share based on your use-case or situation. If you need to obscure secrets or sensitive information that might be contained in an `hcdiag` bundle, please refer to the `hcdiag` redactions documentation. + + + + If you are a Consul Enterprise user, please share the output from `hcdiag` with HashiCorp Customer Support to reduce the amount of information gathering required for a support request. + + + +The tool only works locally and does not export or share the diagnostic bundle with anyone. You must use other tools to transfer it to a secure location so you can share it with specific support staff who need to view it. + +## Configuration file + +You can configure `hcdiag`'s behavior with a HashiCorp Configuration Language (HCL) formatted file. Using this file, you can configure behavior by adding your own custom runners, redacting sensitive content using regular expressions, excluding commands, and more. + + + + + This minimal environment doesn't ship with most common command-line text editors, so you will want to install one with `apt-get install nano` or `apt-get install vim`. + + + +Create a file named `diag.hcl` with the following contents. This file does two things: + +1. It adds an agent-level (global) redaction which instructs `hcdiag` to redact all sensitive content in the format `PASSWORD=sensitive`. This is a contrived example; please refer to the official [`hcdiag` Documentation](https://github.com/hashicorp/hcdiag) for more detailed information about how redactions work and how to use them. + +1. It instructs `hcdiag` to exclude the `consul debug` command. + + + +```hcl +agent { + redact "regex" { + match = "PASSWORD=\\S*" + replace = "" + } +} +product "consul" { + excludes = ["GET /v1/agent/metrics"] +} +``` + + + +Run the `hcdiag` command with the `diag.hcl` configuration file. It will return a similar output to the following. + +```shell-session +$ hcdiag -consul -config diag.hcl +2022-08-26T18:53:24.997Z [INFO] hcdiag: Ensuring destination directory exists: directory=. 
+2022-08-26T18:53:24.997Z [INFO] hcdiag: Checking product availability +2022-08-26T18:53:25.149Z [DEBUG] agent.http: Request finished: method=GET url=/v1/agent/self from=127.0.0.1:57310 latency=866.114µs +2022-08-26T18:53:25.155Z [DEBUG] agent.http: Request finished: method=GET url=/v1/agent/self from=127.0.0.1:57312 latency=616.956µs +2022-08-26T18:53:25.157Z [INFO] hcdiag: Gathering diagnostics +2022-08-26T18:53:25.157Z [INFO] hcdiag.product: Running operations for: product=host +2022-08-26T18:53:25.157Z [INFO] hcdiag.product: running operation: product=host runner="uname -v" +2022-08-26T18:53:25.157Z [INFO] hcdiag.product: Running operations for: product=consul +2022-08-26T18:53:25.157Z [INFO] hcdiag.product: running operation: product=consul runner="consul version" +2022-08-26T18:53:25.159Z [INFO] hcdiag.product: running operation: product=host runner=disks +2022-08-26T18:53:25.159Z [INFO] hcdiag.product: running operation: product=host runner=info +2022-08-26T18:53:25.160Z [INFO] hcdiag.product: running operation: product=host runner=memory +2022-08-26T18:53:25.160Z [INFO] hcdiag.product: running operation: product=host runner=process +2022-08-26T18:53:25.161Z [INFO] hcdiag.product: running operation: product=host runner=network +2022-08-26T18:53:25.161Z [INFO] hcdiag.product: running operation: product=host runner=/etc/hosts +2022-08-26T18:53:25.163Z [INFO] hcdiag.product: running operation: product=host runner=iptables +2022-08-26T18:53:25.164Z [WARN] hcdiag.product: result: runner=iptables status=fail result="map[iptables -L -n -v:]" error="exec error, command=iptables -L -n -v, format=string, error=exec: "iptables": executable file not found in $PATH" +2022-08-26T18:53:25.164Z [INFO] hcdiag.product: running operation: product=host runner="/proc/ files" +2022-08-26T18:53:25.175Z [INFO] hcdiag.product: running operation: product=host runner=/etc/fstab +2022-08-26T18:53:25.178Z [INFO] hcdiag: Product done: product=host statuses="map[fail:1 success:9]" +2022-08-26T18:53:25.238Z [INFO] hcdiag.product: running operation: product=consul runner="consul debug -output=/tmp/consul-hcdiag/hcdiag-2022-08-26T185324Z4292389516/ConsulDebug -duration=10s -interval=5s" +2022-08-26T18:53:25.321Z [DEBUG] agent.http: Request finished: method=GET url=/v1/agent/self from=127.0.0.1:57314 latency=761.333µs +2022-08-26T18:53:25.323Z [DEBUG] agent.http: Request finished: method=GET url=/v1/agent/host from=127.0.0.1:57314 latency=1.034301ms +2022-08-26T18:53:25.326Z [DEBUG] agent.http: Request finished: method=GET url=/v1/agent/self from=127.0.0.1:57314 latency=791.722µs +2022-08-26T18:53:25.329Z [DEBUG] agent.http: Request finished: method=GET url=/v1/agent/members?wan=1 from=127.0.0.1:57314 latency=37.086µs +2022-08-26T18:53:36.737Z [DEBUG] agent: Skipping remote check since it is managed automatically: check=serfHealth +2022-08-26T18:53:36.737Z [DEBUG] agent: Node info in sync +2022-08-26T18:53:37.331Z [DEBUG] agent.http: Request finished: method=GET url=/v1/agent/metrics/stream from=127.0.0.1:57314 latency=12.000899626s +2022-08-26T18:53:37.356Z [DEBUG] agent.http: Request finished: method=GET url=/v1/agent/monitor?loglevel=DEBUG from=127.0.0.1:57320 latency=12.023058522s +2022-08-26T18:53:37.356Z [INFO] hcdiag.product: running operation: product=consul runner="GET /v1/agent/self" +2022-08-26T18:53:37.357Z [DEBUG] agent.http: Request finished: method=GET url=/v1/agent/self from=127.0.0.1:57312 latency=965.341µs +2022-08-26T18:53:37.360Z [INFO] hcdiag.product: running operation: product=consul runner="GET 
/v1/catalog/datacenters" +2022-08-26T18:53:37.360Z [DEBUG] agent.http: Request finished: method=GET url=/v1/catalog/datacenters from=127.0.0.1:57312 latency=258.37µs +2022-08-26T18:53:37.361Z [INFO] hcdiag.product: running operation: product=consul runner="GET /v1/catalog/services" +2022-08-26T18:53:37.361Z [DEBUG] agent.http: Request finished: method=GET url=/v1/catalog/services from=127.0.0.1:57312 latency=109.885µs +2022-08-26T18:53:37.362Z [INFO] hcdiag.product: running operation: product=consul runner="GET /v1/namespace" +2022-08-26T18:53:37.362Z [INFO] hcdiag.product: running operation: product=consul runner="GET /v1/status/leader" +2022-08-26T18:53:37.363Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/leader from=127.0.0.1:57312 latency=177.814µs +2022-08-26T18:53:37.363Z [INFO] hcdiag.product: running operation: product=consul runner="GET /v1/status/peers" +2022-08-26T18:53:37.364Z [DEBUG] agent.http: Request finished: method=GET url=/v1/status/peers from=127.0.0.1:57312 latency=109.065µs +2022-08-26T18:53:37.364Z [INFO] hcdiag.product: running operation: product=consul runner="log/docker consul" +2022-08-26T18:53:37.366Z [INFO] hcdiag.product: result: runner="log/docker consul" status=skip + result= + | /bin/sh: 1: docker: not found + error="docker not found, container=consul, error=exec error, command=docker version, error=exit status 127" +2022-08-26T18:53:37.366Z [INFO] hcdiag.product: running operation: product=consul runner=journald +2022-08-26T18:53:37.368Z [INFO] hcdiag.product: result: runner=journald status=skip + result= + | /bin/sh: 1: journalctl: not found + error="journald not found on this system, service=consul, error=exec error, command=journalctl --version, error=exit status 127" +2022-08-26T18:53:37.368Z [INFO] hcdiag: Product done: product=consul statuses="map[skip:2 success:8]" +2022-08-26T18:53:37.368Z [INFO] hcdiag: Recording manifest +2022-08-26T18:53:37.373Z [INFO] hcdiag: Created Results.json file: dest=/tmp/consul-hcdiag/hcdiag-2022-08-26T185324Z4292389516/Results.json +2022-08-26T18:53:37.374Z [INFO] hcdiag: Created Manifest.json file: dest=/tmp/consul-hcdiag/hcdiag-2022-08-26T185324Z4292389516/Manifest.json +2022-08-26T18:53:37.381Z [INFO] hcdiag: Compressed and archived output file: dest=hcdiag-2022-08-26T185324Z.tar.gz +2022-08-26T18:53:37.381Z [INFO] hcdiag: Writing summary of products and ops to standard output +product success fail unknown total +consul 8 0 2 10 +host 9 1 0 10 +``` + +Unlike the previous `hcdiag` run, this output does not contain Consul agent metrics information. + +``` +2022-08-18T12:15:30.644Z [INFO] hcdiag.product: running operation: product=consul runner="GET /v1/agent/metrics" +``` + +Additionally, any runner output that might capture and expose passwords in the redacted format would show `` in place of this sensitive content. + +## Cleanup + +Exit the Ubuntu container to return to your terminal prompt. + +```shell-session +$ exit +``` + +Stop the Docker container. Docker will automatically delete the container due to the `-rm` flag passed to the `docker run` command used in the beginning of the tutorial. + +```shell-session +$ docker stop consul +``` + +## Production usage tips + +By default, the hcdiag tool includes files for up to 72 hours back from the current time. You can specify the desired time range using the `-include-since` flag. 
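+
+For example, the following sketch previews a shorter collection window before doing a full gather; the `12h` value is illustrative, and both `-dryrun` and `-include-since` are covered in the workflow below.
+
+```shell-session
+$ hcdiag -consul -dryrun -include-since 12h
+```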
+ +If you are concerned about impacting performance of your Consul servers, you can specify that the runners to not run concurrently, and instead be invoked serially with the `-serial` flag. + +Deploying hcdiag in production involves a workflow similar to the following: + +1. Place the `hcdiag` binary on the Consul system in scope - this could be a Consul server or a Consul client. + +1. When running with a configuration file and the `-config` flag, ensure that the specified configuration file is readable by the user that executes `hcdiag`. + +1. Ensure that the current directory (or the destination directory you have chosen with the `dest` flag) is writable by the user that executes `hcdiag`. + +1. Ensure connectivity to the HashiCorp products that `hcdiag` needs to connect to during the run. Export any required environment variables for establishing connection or passing authentication tokens as necessary. + +1. Decide on a duration for information gathering, noting that the default is to gather for up to 72 hours back in server log output. Adjust your needs as necessary with the `-include-since` flag. For example, to include only 24 hours of log output, invoke as: + + ```shell-session +$ hcdiag -consul -include-since 24h + ``` + +1. Limit what is gathered with the `-includes` flag. For example, `-includes /var/log/consul-*,/var/log/nomad-*` instructs `hcdiag` to only gather logs matching the specified Consul and Nomad filename patterns. + +1. Use redactions to prevent sensitive information like keys or passwords from reaching hcdiag's output or the generated bundle files. + +1. Use the `-dryrun` flag to observe what hcdiag will do without anything actually being done for testing configuration and options. + + +## Summary + +In this tutorial, you retrieved a Git repository, created a local Consul datacenter with Docker Compose, and used the local environment to explore the `hcdiag` tool in the context of gathering information from a running Consul environment. + +You also learned about the available configuration flags, the configuration file, and production specific tips for using `hcdiag`. + +## Next Steps + +For additional information about the tool, check out the [`hcdiag` GitHub repository](https://github.com/hashicorp/hcdiag). + +There are also `hcdiag` guides for other HashiCorp tools including [Vault](/vault/tutorials/monitoring/hcdiag-with-vault), [Terraform](/terraform/tutorials/enterprise/hcdiag-with-tfe), and [Nomad](/nomad/tutorials/manage-clusters/hcdiag-with-nomad). + +## Help and Reference + +Feel free to explore the following resources for additional help with troubleshooting your Consul environment. + +- [Troubleshooting Consul](/consul/tutorials/datacenter-operations/troubleshooting) diff --git a/website/content/docs/integrate/index.mdx b/website/content/docs/integrate/index.mdx new file mode 100644 index 000000000000..d3e1b2378a58 --- /dev/null +++ b/website/content/docs/integrate/index.mdx @@ -0,0 +1,213 @@ +--- +layout: docs +page_title: Consul Integration Program +description: >- + The Consul Integration Program allows approved partners to develop Consul integrations that HashiCorp reviews to consider publishing as officially verified. Learn about how to participate in the program. +--- + +# Consul Integration Program + +The HashiCorp Consul Integration Program enables prospective partners to build integrations with HashiCorp Consul that are reviewed and verified by HashiCorp. You can integrate with any of the following Consul versions: + +- **Self-Managed**. 
Community Edition, always free +- **HashiCorp Cloud Platform (HCP)**. A hosted version of Consul managed in the cloud +- **Consul Enterprise**. Self-managed, with additional features for custom deployments + +The program is intended to be largely self-service with links to resources, code samples, documentation, and clear integration steps. + +## Categories of Consul Integrations + +By leveraging Consul's RESTful HTTP API system, prospective partners are able to build extensible integrations at the data plane, platform, and the infrastructure layer to extend Consul's functionalities. These integrations can be performed both with the community edition of Consul, Consul Enterprise, and HCP Consul. + +**The Consul ecosystem of integrations:** + + + +![Consul Architecture](/img/consul_ecosystem_diagram2.png) + + + +**Data Plane**: These integrations extend Consul's certificate management, secure ACL configuration, observability metrics and logging, and service discovery that allows for dynamic service mapping APM and logging tools, extend sidecar proxies to support Consul service mesh, and extend API gateways to allow Consul to route incoming traffic to the proxies for mesh-enabled services. + +**Control Plane**: Consul has a client-server architecture and is the control plane for the service mesh. + +**Platform**: These integrations leverage automation of Consul agent deployment, configuration, and management. Designed to be platform agnostic, Consul can be deployed in a variety of form factors, including major Public Cloud providers (AWS, GCP, Azure) as well as in bare-metal, virtual machine, and container (Docker, Kubernetes) environments. They include the Consul agent running in both client and server mode. + +**Infrastructure**: There are two integration options in this category: natively through a direct integration with Consul or via Consul-Terraform-Sync (CTS). By leveraging Consul's powerful **Network Infrastructure Automation (NIA)*** capabilities through CTS, changes in an infrastructure are seamlessly automated when Consul detects a change in its service catalog. For example, these integrations could be used to automate IP updates of load balancers or firewall security policies by leveraging Consul service discovery. + +-> **Network Infrastructure Automation (NIA)***: These integrations leverage Consul's service catalog to seamlessly integrate with Consul-Terraform-Sync (CTS) to automate changes in network infrastructure via a publisher-subscriber method. Refer to the [NIA documentation](/consul/docs/integrate/nia) for details. + +**HCP Consul**: HCP Consul is secure by default and offers an out-of-the-box service mesh solution to streamline operations without the hassle of managing Consul servers. [Sign up for a free HCP Consul account](https://cloud.hashicorp.com/products/consul). + +@include 'alerts/hcp-dedicated-eol.mdx' + +**Consul integration verification badges**: Partners will be issued the Consul Enterprise badge for integrations that work with [Consul Enterprise features](/consul/docs/enterprise) such as namespaces. Partners will be issued the HCP Consul badge for integrations validated to work with [HCP Consul](/hcp/docs/consul#features). Each badge would be displayed on HashiCorp's partner page as well as be available for posting on the partner's own website to provide better visibility and differentiation of the integration for joint customers. 
+ + + + +![Consul Enterprise Badge](/img/consul_enterprise_partner_badge.png) + + + + +![HCP Consul](/img/HCPc_badge.png) + + + + +Developing a valid integration with either Consul Enterprise or HCP Consul also qualifies the partner for the Premier tier of the HashiCorp Technology Partners program. The process for verification of these integrations is detailed below. + +## Development Process + +The Consul integration development process is described in the steps below. By following these steps, Consul integrations can be developed alongside HashiCorp to ensure new integrations are reviewed, approved and released as quickly as possible. + + + +![Integration Program Steps](/img/consul_integration_program_steps.png) + + + +1. Engage: Initial contact between vendor and HashiCorp +2. Enable: Documentation, code samples and best practices for developing the integration +3. Develop and Test: Integration development and testing by vendor +4. Review/Certification: HashiCorp code review and certification of integration +5. Release: Consul integration released +6. Support: Ongoing maintenance and support of the integration by the vendor. + +### 1. Engage + +Please begin by completing [Consul Integration Program webform](https://docs.google.com/forms/d/e/1FAIpQLSf-RyVR9F0lmosao8Nnur0TTDjnl99gttnK3QP1OkfRefVKSw/viewform) to tell us about your company and the Consul integration you are developing. + +### 2. Enable + +Here are links to resources, documentation, examples and best practices to guide you through the Consul integration development and testing process: + +#### Data Plane: + +**Application Performance Monitoring (APM)** + +- [Consul Telemetry Documentation](/consul/docs/reference/agent/telemetry) +- [Monitoring Consul with Datadog APM](https://www.datadoghq.com/blog/consul-datadog/) +- [Monitor HCP Consul with New Relic Instant Observability](https://github.com/newrelic-experimental/hashicorp-quickstart-annex/blob/main/hcp-consul/README.md) +- [HCP Consul and CloudFabrix AIOps Integration](https://bot-docs.cloudfabrix.io/Bots/consul/?h=consul) +- [Consul and SnappyFlow Full Stack Observability](https://docs.snappyflow.io/docs/Integrations/hcp_consul) + +**Network Performance Monitoring (NPM)** + +- [Datadog NPM now supports Consul networking](https://www.datadoghq.com/blog/monitor-consul-with-datadog-npm/) + +**OpenTelemetry Integrations** + +- [Splunk SignalFX OpenTelemetry integration with Consul](https://docs.splunk.com/Observability/gdi/consul/consul.html) +- [Ship HashiCorp Consul metrics with OpenTelemetry to Logz.io](https://docs.logz.io/shipping/prometheus-sources/consul.html) +- [Ingest Consul metrics through OpenTelemetry into Lightstep Observability](https://docs.lightstep.com/integrations/ingest-metrics-consul) + +**Logging and Alerts** + +- [Consul Integration with iLert](https://docs.ilert.com/integrations/consul) +- [Consul Integration with PagerDuty](https://www.pagerduty.com/docs/guides/consul-integration-guide/) +- [Monitor Consul with Zabbix](https://www.zabbix.com/integrations/hashicorp_consul#consul) + +**API Gateway and Ingress Controller** + +- [F5 Terminating Gateway Integration Documentation](https://www.hashicorp.com/integrations/f5-networks/consul) +- [Traefik Integration with Consul Service Mesh](https://traefik.io/blog/integrating-consul-connect-service-mesh-with-traefik-2-5/) +- [Kong's Ingress Controller Integration with Consul](https://www.hashicorp.com/integrations/kong/consul) +- [Configuring Ingress Controllers with 
Consul-on-Kubernetes](/consul/docs/north-south/ingress-controller) +- [Introduction to Consul Transparent Proxy](/consul/docs/connect/transparent-proxy) +- [Getting Started with Transparent Proxy](https://www.hashicorp.com/blog/transparent-proxy-on-consul-service-mesh) + +#### Platform: + +- [Deploy Consul on Red Hat OpenShift](/consul/deploy/server/k8s/platform/openshift) +- [Consul Integration with Layer5 Meshery](https://www.hashicorp.com/integrations/layer5-io/consul) +- [Consul Integration with VMware Tanzu Application Service](/consul/tutorials/cloud-integrations/sync-pivotal-cloud-services?utm_source=docs) + +#### Infrastructure: + +-> **Note**: The types of integration areas below could be developed to natively work with Consul or through leveraging Consul-Terraform-Sync and Consul's network automation capabilities. + +**Firewalls** + + **Network Infrastructure Automation:** + + - [Automated Firewalling with Check Point](https://www.hashicorp.com/integrations/checkpoint-software/consul) + - [Automated Firewalling with Palo Alto Networks](https://www.hashicorp.com/integrations/pan/consul) + - [Automated Firewalling with Cisco FMC](https://registry.terraform.io/modules/CiscoDevNet/dynamicobjects/fmc/latest) + - [Automated Firewalling with Fortinet FortiManager](https://registry.terraform.io/modules/fortinetdev/cts-agpu/fortimanager/latest) + +**Software-Defined Networking \(SDN\)** + +- [Automating Cisco ACI with Consul](https://www.hashicorp.com/integrations/cisco/consul) + +**Load Balancer** + +- [Load Balancing with NGINX and Consul Template](/consul/tutorials/load-balancing/load-balancing-nginx?utm_source=docs) +- [Load Balancing with HAProxy Service Discovery](/consul/tutorials/load-balancing/load-balancing-haproxy?utm_source=docs) + + **Network Infrastructure Automation:** + + - [Zero-Touch Configuration of Secure Apps across BIG-IP Tenants using CTS](https://community.f5.com/t5/technical-articles/zero-touch-configuration-of-secure-apps-across-big-ip-tenants/ta-p/300190) + - [Automate VMware Advanced Load Balancers (Avi) with Consul NIA](https://www.hashicorp.com/integrations/_vmware/consul) + +**Application Delivery Controllers \(ADC\)** + +- [Automate A10 ADC with Consul NIA](/consul/tutorials/network-infrastructure-automation/consul-terraform-sync-a10-adc?utm_source=docs) +- [Automate Citrix ADC with Consul NIA](https://www.hashicorp.com/integrations/citrix-adc/consul) + +**Domain Name Service (DNS) Automation** + +- [Automate DNSimple public facing DNS records with Consul NIA](https://registry.terraform.io/modules/dnsimple/cts/dnsimple/latest) +- [Automate NS1 managed DNS with Consul NIA](https://github.com/ns1-terraform/terraform-ns1-record-sync-nia) + +**No-Code/Low-Code** + +- [Automate Consul Deployments with Sophos Factory Pipelines](https://community.sophos.com/sophos-factory/f/recommended-reads/136639/deploy-hashicorp-consul-from-sophos-factory) + +### 3. Develop and Test + +The only knowledge necessary to write a plugin is basic command-line skills and knowledge of the [Go programming language](http://www.golang.org). Use the plugin interface to develop your integration. All integrations should contain unit and acceptance testing. + +**HCP Consul**: As a managed service, minimal configuration is required to deploy HCP Consul server clusters. You only need to install Consul client agents. 
Furthermore, HashiCorp provides all new users an initial credit, which provides approximately two months worth of [development cluster](https://cloud.hashicorp.com/products/consul/pricing) access. When deployed with AWS or Azure free tier services, there should be no cost beyond the time spent by the designated tester. Refer to the [Deploy HCP Consul tutorial](/consul/tutorials/get-started-hcp/hcp-gs-deploy) for details on getting started. + +HCP Consul is currently only deployed on AWS and Microsoft Azure, so your application can be deployed to or run in AWS or Azure. + +#### HCP Consul Resource Links: + +@include 'alerts/hcp-dedicated-eol.mdx' + +- [Getting Started with HCP Consul](/consul/tutorials/get-started-hcp/hcp-gs-deploy) +- [HCP Consul End-to-End Deployment](/consul/tutorials/cloud-deploy-automation/consul-end-to-end-overview) +- [Deploy HCP Consul with EKS using Terraform](/consul/tutorials/cloud-deploy-automation/consul-end-to-end-eks) +- [HCP Consul Deployment Automation](/consul/tutorials/cloud-deploy-automation) +- [HCP Consul documentation](/hcp/docs/consul/usage) + +**Consul Enterprise**: An integration qualifies for Consul Enterprise when it is tested and compatible with Consul Enterprise Namespaces. + +### 4. Review and Approval + +HashiCorp will review and approve your Consul integration. Please send an email to [technologypartners@hashicorp.com](mailto:technologypartners@hashicorp.com) with any relevant documentation, demos or other resources and let us know your integration is ready for review. + +### 5. Release + +At this stage, the Consul integration is fully developed, documented, reviewed and approved. Once released, HashiCorp will officially list the Consul integration. + +### 6. Support + +Many vendors view the release step to be the end of the journey, while at HashiCorp we view it to be the start. Getting the Consul integration built is just the first step in enabling users. Once this is done, ongoing effort is required to maintain the integration and address any issues in a timely manner. + +The expectation for vendors is to respond to all critical issues within 48 hours and all other issues within 5 business days. HashiCorp Consul has an extremely wide community of users and we encourage everyone to report issues, however small, as well as help resolve them when possible. + +## Checklist + +Below is a checklist of steps that should be followed during the Consul integration development process. This reiterates the steps described above. + +- Complete the [Consul Integration Program webform](https://docs.google.com/forms/d/e/1FAIpQLSf-RyVR9F0lmosao8Nnur0TTDjnl99gttnK3QP1OkfRefVKSw/viewform) +- Develop and test your Consul integration following examples, documentation and best practices +- When the integration is completed and ready for HashiCorp review, send us the documentation, demos and any other resources for review at: [technologypartners@hashicorp.com](mailto:technologypartners@hashicorp.com) +- Plan to continue to support the integration with additional functionality and responding to customer issues. 
+ +## Contact Us + +For any questions or feedback, please contact us at: [technologypartners@hashicorp.com](mailto:technologypartners@hashicorp.com) diff --git a/website/content/docs/integrate/nia-integration.mdx b/website/content/docs/integrate/nia-integration.mdx deleted file mode 100644 index 1da7ca7417e1..000000000000 --- a/website/content/docs/integrate/nia-integration.mdx +++ /dev/null @@ -1,102 +0,0 @@ ---- -layout: docs -page_title: Network Infrastructure Automation (NIA) Integration Program -description: >- - The Network Infrastructure Automation (NIA) Integration Program allows partners to develop Terraform modules for Consul-Terraform-Sync (CTS) that HashiCorp reviews to consider publishing as officially verified. Learn about how to participate in the program. ---- - -# Network Infrastructure Automation Integration Program - -HashiCorp's Network Infrastructure Automation (NIA) Integration Program allows partners to build integrations that allow customers to automate dynamic application workflows leveraging network and security infrastructure at runtime. Detailed below is the approach to integrate your networking technology along with the clearly defined steps, links to information sources, and checkpoints. - -## Network Infrastructure Automation - -Network Infrastructure Automation (NIA) relies on a declarative, workflow and service driven network automation architecture. NIA is carried out by [`Consul-Terraform-Sync`](/consul/docs/nia) which leverages Terraform as the underlying automation tool and utilizes the Terraform provider ecosystem to drive relevant change to the network infrastructure. This usage of the Terraform provider in conjunction with Consul-Terraform-Sync allows for HashiCorp Consul to act as the core data source where it is updated with the state of the infrastructure. - -Consul-Terraform-Sync executes one or more automation tasks with an appropriate value of service variables based on updates from the Consul service catalog. Each task consists of a runbook automation written as a compatible Terraform module using resources and data sources for the underlying network infrastructure. The Consul-Terraform-Sync daemon runs on the same node as a Consul agent. - -[![NIA Architecture](/img/nia-highlevel-diagram.svg)](/img/nia-highlevel-diagram.svg) - --> Please note that the above indicated solution is a "push" based method and is not the only way to integrate network devices with Consul and drive Network Infrastructure Automation Integration. If your preferred method is to directly integrate with Consul without using Terraform, then please use [Consul Integration Program](/consul/docs/integrate/partnerships). - -## NIA Program Steps - -The NIA Integration Program has six steps. By following these steps, Consul-Terraform-Sync compatible Terraform modules can be developed. They are then published as "verified" Consul-Terraform-Sync modules on the [NIA page consul.io](https://www.consul.io/use-cases/network-infrastructure-automation). - --> **Note:** A prerequisite to be eligible for NIA Integration program includes having a "verified" provider on Terraform registry for the appropriate technology. Please follow the guidelines to enroll in the Terraform Provider Development Program if you do not presently have a "verified" provider. - -[![NIA Integration Program Steps](/img/nia-integration-program.png)](/img/nia-integration-program.png) - -1. **Engage**: Initial contact between Technology Partner and HashiCorp -2. 
**Enable**: Documentation, code samples and best practices for developing the integration -3. **Develop & Test**: Integration development and testing by Partner -4. **Review**: HashiCorp code review and verification of integration (iterative process) -5. **Release**: Module listed as Consul-Terraform-Sync compatible on Hashicorp Consul website and hosted on Terraform module registry -6. **Support**: Ongoing maintenance and support of the module by the partner. - -### 1. Engage - -Please begin by providing the following details: [NIA Integration Program](https://docs.google.com/forms/d/1HtJXYQ36n83lFXEX-F_uIB7abUEMj95prSi5TSDMCXc/viewform?gxids=7757&edit_requested=true). - -### 2. Enable - -Consul-Terraform-Sync compatible Terraform module development process is fairly straightforward and simple when technology partners have experience with building Terraform configuration files. Adopting Terraform best practices helps expedite the review and release cycles. - -- Consul [documentation](/consul/docs) -- Consul-Terraform-Sync [documentation](/consul/docs/nia) -- Writing Consul-Terraform-Sync compatible Terraform modules using our [guide](/consul/docs/nia/terraform-modules) and [tutorial](/consul/tutorials/network-infrastructure-automation/consul-terraform-sync-module?utm_source=docs) -- Example Terraform Modules for reference: [PAN-OS](https://registry.terraform.io/modules/PaloAltoNetworks/dag-nia/panos/latest), [Simple Print Module](https://registry.terraform.io/modules/findkim/print/cts/latest) and a [Template to structure your Terraform Modules](https://github.com/hashicorp/consul-terraform-sync-template-module) -- Publishing to the Terraform Registry [guidelines](/terraform/registry/modules/publish) - -### 3. Develop & Test - -Terraform modules are written in HashiCorp Configuration Language (HCL). Writing [Terraform modules](/terraform/language/modules/develop) or a [tutorial to build a module](/terraform/tutorials/modules/module-create) are good resources to begin writing a new module. -Consul-Terraform-Sync compatible modules follow the [standard module structure](/terraform/language/modules/develop). Modules can use syntax supported by Terraform version 0.13, or higher. Consul-Terraform-Sync is designed to integrate with any module that satisfies these [specifications](/consul/docs/nia/terraform-modules#module-specifications). The guide will give you an introduction of the code structure and the basics of authoring a plugin that Terraform can interact with. - -It is recommended that partners develop modules that cater to specific workflows on an individual platform in their product portfolio rather than having overarching modules that try to cover multiple workflows across different platforms. This is to keep the automation modular and adoptable by a broad set of users with varying network infrastructure topologies. Partners are encouraged to test the functionality of the modules against their supported platforms. - -All Consul-Terraform-Sync compatible modules follow a naming convention: `terraform---nia`. Module repositories must use this four-part name format, where `` is the Terraform Provider being used, `` reflects the type of infrastructure the module manages, and ends with the suffix `-nia` to represent that this module is designed for NIA using Consul-Terraform-Sync. 
- --> **Important**: All Consul-Terraform-Sync compatible modules listed as Verified must contain one of the following open source licenses: - -- CDDL 1.0, 2.0 -- CPL 1.0 -- Eclipse Public License (EPL) 1.0 -- MPL 1.0, 1.1, 2.0 -- PSL 2.0 -- Ruby's Licensing -- AFL 2.1, 3.0 -- Apache License 2.0 -- Artistic License 1.0, 2.0 -- Apache Software License (ASL) 1.1 -- Boost Software License -- BSD, BSD 3-clause, "BSD-new" -- CC-BY -- Microsoft Public License (MS-PL) -- MIT - -### 4. Review - -During the review process, HashiCorp will provide feedback on the newly developed Consul-Terraform-Sync compatible Terraform module. Please engage in the review process once one or two sample modules have been developed. Begin the process by emailing nia-integration-dev@hashicorp.com with a URL to the public GitHub repo containing the code. - -HashiCorp will then review the module, and the documentation. The documentation should clearly indicate the compatibility with Consul-Terraform-Sync software version(s) and partner platform's software version(s). - -Once the module development has been completed another email should be sent to nia-integration-dev@hashicorp.com along with a URL to the public GitHub repo containing the code requesting the final code review. HashiCorp will review the module and provide feedback about any changes that may be required. This is often an iterative process and can take some time to get done. - -### 5. Release - -At this stage, it is expected that the module is fully developed, all tests and documentation are in place, and that HashiCorp has reviewed the module to be compatible with Consul-Terraform-Sync. - -Once this is done, HashiCorp will get the new module listed as Consul-Terraform-Sync compatible on [consul.io](/consul/docs/nia/usage/requirements#partner-terraform-modules), and then the partner will be asked to publish the Terraform module to the [Terraform Registry](https://registry.terraform.io/browse/modules). - -### 6. Support - -Many partners view the release step to be the end of the journey, while at HashiCorp we view it to be the start. Getting the Consul-Terraform-Sync compatible module built is just the first step in enabling users to use it against the infrastructure. Once this is done, on-going effort is required to maintain the module and address any issues in a timely manner. - -The expectation is to resolve all critical issues within 48 hours and all other issues within 5 business days. HashiCorp Consul and Terraform have an extremely wide community of users and contributors and we encourage everyone to report issues however small, as well as help resolve them when possible. - -Partners who choose to not follow the process of NIA Integration Program for their Consul-Terraform-Sync compatible Terraform modules will not have their modules listed on [consul.io](/consul/docs/nia/usage/requirements#partner-terraform-modules). - -### Contact Us - -For any questions or feedback please contact us at nia-integration-dev@hashicorp.com. 
diff --git a/website/content/docs/integrate/nia.mdx b/website/content/docs/integrate/nia.mdx new file mode 100644 index 000000000000..d71f83eac6fd --- /dev/null +++ b/website/content/docs/integrate/nia.mdx @@ -0,0 +1,102 @@ +--- +layout: docs +page_title: Network Infrastructure Automation (NIA) Integration Program +description: >- + The Network Infrastructure Automation (NIA) Integration Program allows partners to develop Terraform modules for Consul-Terraform-Sync (CTS) that HashiCorp reviews to consider publishing as officially verified. Learn about how to participate in the program. +--- + +# Network Infrastructure Automation Integration Program + +HashiCorp's Network Infrastructure Automation (NIA) Integration Program allows partners to build integrations that allow customers to automate dynamic application workflows leveraging network and security infrastructure at runtime. Detailed below is the approach to integrate your networking technology along with the clearly defined steps, links to information sources, and checkpoints. + +## Network Infrastructure Automation + +Network Infrastructure Automation (NIA) relies on a declarative, workflow and service driven network automation architecture. NIA is carried out by [`Consul-Terraform-Sync`](/consul/docs/automate/infrastructure) which leverages Terraform as the underlying automation tool and utilizes the Terraform provider ecosystem to drive relevant change to the network infrastructure. This usage of the Terraform provider in conjunction with Consul-Terraform-Sync allows for HashiCorp Consul to act as the core data source where it is updated with the state of the infrastructure. + +Consul-Terraform-Sync executes one or more automation tasks with an appropriate value of service variables based on updates from the Consul service catalog. Each task consists of a runbook automation written as a compatible Terraform module using resources and data sources for the underlying network infrastructure. The Consul-Terraform-Sync daemon runs on the same node as a Consul agent. + +[![NIA Architecture](/img/nia-highlevel-diagram.svg)](/img/nia-highlevel-diagram.svg) + +-> Please note that the above indicated solution is a "push" based method and is not the only way to integrate network devices with Consul and drive Network Infrastructure Automation Integration. If your preferred method is to directly integrate with Consul without using Terraform, then please use [Consul Integration Program](/consul/docs/integrate). + +## NIA Program Steps + +The NIA Integration Program has six steps. By following these steps, Consul-Terraform-Sync compatible Terraform modules can be developed. They are then published as "verified" Consul-Terraform-Sync modules on the [NIA page consul.io](https://developer.hashicorp.com/use-cases/network-infrastructure-automation). + +-> **Note:** A prerequisite to be eligible for NIA Integration program includes having a "verified" provider on Terraform registry for the appropriate technology. Please follow the guidelines to enroll in the Terraform Provider Development Program if you do not presently have a "verified" provider. + +[![NIA Integration Program Steps](/img/nia-integration-program.png)](/img/nia-integration-program.png) + +1. **Engage**: Initial contact between Technology Partner and HashiCorp +2. **Enable**: Documentation, code samples and best practices for developing the integration +3. **Develop & Test**: Integration development and testing by Partner +4. 
**Review**: HashiCorp code review and verification of integration (iterative process)
+5. **Release**: Module listed as Consul-Terraform-Sync compatible on the HashiCorp Consul website and hosted on the Terraform module registry
+6. **Support**: Ongoing maintenance and support of the module by the partner.
+
+### 1. Engage
+
+Please begin by providing your details through the [NIA Integration Program webform](https://docs.google.com/forms/d/1HtJXYQ36n83lFXEX-F_uIB7abUEMj95prSi5TSDMCXc/viewform?gxids=7757&edit_requested=true).
+
+### 2. Enable
+
+Developing a Consul-Terraform-Sync compatible Terraform module is straightforward for technology partners who have experience building Terraform configuration files. Adopting Terraform best practices helps expedite the review and release cycles.
+
+- Consul [documentation](/consul/docs)
+- Consul-Terraform-Sync [documentation](/consul/docs/automate/infrastructure)
+- Writing Consul-Terraform-Sync compatible Terraform modules using our [guide](/consul/docs/automate/infrastructure/module) and [tutorial](/consul/tutorials/network-infrastructure-automation/consul-terraform-sync-module?utm_source=docs)
+- Example Terraform Modules for reference: [PAN-OS](https://registry.terraform.io/modules/PaloAltoNetworks/dag-nia/panos/latest), [Simple Print Module](https://registry.terraform.io/modules/findkim/print/cts/latest), and a [Template to structure your Terraform Modules](https://github.com/hashicorp/consul-terraform-sync-template-module)
+- Publishing to the Terraform Registry [guidelines](/terraform/registry/modules/publish)
+
+### 3. Develop & Test
+
+Terraform modules are written in HashiCorp Configuration Language (HCL). The [Terraform modules documentation](/terraform/language/modules/develop) and the [tutorial to build a module](/terraform/tutorials/modules/module-create) are good resources for writing a new module.
+Consul-Terraform-Sync compatible modules follow the [standard module structure](/terraform/language/modules/develop). Modules can use syntax supported by Terraform version 0.13 or higher. Consul-Terraform-Sync is designed to integrate with any module that satisfies these [specifications](/consul/docs/automate/infrastructure/module#module-specifications). The guide gives you an introduction to the code structure and the basics of authoring a compatible module.
+
+It is recommended that partners develop modules that cater to specific workflows on an individual platform in their product portfolio rather than overarching modules that try to cover multiple workflows across different platforms. This keeps the automation modular and adoptable by a broad set of users with varying network infrastructure topologies. Partners are encouraged to test the functionality of the modules against their supported platforms.
+
+All Consul-Terraform-Sync compatible modules follow a naming convention: `terraform-<provider>-<module-name>-nia`. Module repositories must use this four-part name format, where `<provider>` is the Terraform provider being used, `<module-name>` reflects the type of infrastructure the module manages, and the suffix `-nia` indicates that the module is designed for NIA using Consul-Terraform-Sync.
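+
+For illustration only, a hypothetical module named `terraform-local-print-nia` could declare the `services` input variable that Consul-Terraform-Sync populates from the Consul catalog at runtime, and then act on it, in this case by rendering the current service instances to a local file. This is a minimal sketch, not a production module, and the attribute list shown is abbreviated; the full object type and other requirements are defined in the [module specifications](/consul/docs/automate/infrastructure/module#module-specifications).
+
+```hcl
+# variables.tf -- Consul-Terraform-Sync supplies this variable automatically
+# when it runs the task; module users never set it by hand.
+# Attribute list abbreviated for illustration.
+variable "services" {
+  description = "Consul services monitored by Consul-Terraform-Sync"
+  type = map(object({
+    name    = string
+    address = string
+    port    = number
+    status  = string
+  }))
+}
+
+# main.tf -- example automation: write the current service instances to a file.
+resource "local_file" "services" {
+  filename = "${path.module}/services.txt"
+  content = join("\n", [
+    for id, s in var.services : "${s.name} ${s.address}:${s.port} (${s.status})"
+  ])
+}
+```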
+ +-> **Important**: All Consul-Terraform-Sync compatible modules listed as Verified must contain one of the following open source licenses: + +- CDDL 1.0, 2.0 +- CPL 1.0 +- Eclipse Public License (EPL) 1.0 +- MPL 1.0, 1.1, 2.0 +- PSL 2.0 +- Ruby's Licensing +- AFL 2.1, 3.0 +- Apache License 2.0 +- Artistic License 1.0, 2.0 +- Apache Software License (ASL) 1.1 +- Boost Software License +- BSD, BSD 3-clause, "BSD-new" +- CC-BY +- Microsoft Public License (MS-PL) +- MIT + +### 4. Review + +During the review process, HashiCorp will provide feedback on the newly developed Consul-Terraform-Sync compatible Terraform module. Please engage in the review process once one or two sample modules have been developed. Begin the process by emailing nia-integration-dev@hashicorp.com with a URL to the public GitHub repo containing the code. + +HashiCorp will then review the module, and the documentation. The documentation should clearly indicate the compatibility with Consul-Terraform-Sync software version(s) and partner platform's software version(s). + +Once the module development has been completed another email should be sent to nia-integration-dev@hashicorp.com along with a URL to the public GitHub repo containing the code requesting the final code review. HashiCorp will review the module and provide feedback about any changes that may be required. This is often an iterative process and can take some time to get done. + +### 5. Release + +At this stage, it is expected that the module is fully developed, all tests and documentation are in place, and that HashiCorp has reviewed the module to be compatible with Consul-Terraform-Sync. + +Once this is done, HashiCorp will get the new module listed as Consul-Terraform-Sync compatible on [consul.io](/consul/docs/nia/usage/requirements#partner-terraform-modules), and then the partner will be asked to publish the Terraform module to the [Terraform Registry](https://registry.terraform.io/browse/modules). + +### 6. Support + +Many partners view the release step to be the end of the journey, while at HashiCorp we view it to be the start. Getting the Consul-Terraform-Sync compatible module built is just the first step in enabling users to use it against the infrastructure. Once this is done, on-going effort is required to maintain the module and address any issues in a timely manner. + +The expectation is to resolve all critical issues within 48 hours and all other issues within 5 business days. HashiCorp Consul and Terraform have an extremely wide community of users and contributors and we encourage everyone to report issues however small, as well as help resolve them when possible. + +Partners who choose to not follow the process of NIA Integration Program for their Consul-Terraform-Sync compatible Terraform modules will not have their modules listed on [consul.io](/consul/docs/nia/usage/requirements#partner-terraform-modules). + +### Contact Us + +For any questions or feedback please contact us at nia-integration-dev@hashicorp.com. diff --git a/website/content/docs/integrate/partnerships.mdx b/website/content/docs/integrate/partnerships.mdx deleted file mode 100644 index 024a5b5413d7..000000000000 --- a/website/content/docs/integrate/partnerships.mdx +++ /dev/null @@ -1,209 +0,0 @@ ---- -layout: docs -page_title: Consul Integration Program -description: >- - The Consul Integration Program allows approved partners to develop Consul integrations that HashiCorp reviews to consider publishing as officially verified. Learn about how to participate in the program. 
---- - -# Consul Integration Program - -The HashiCorp Consul Integration Program enables prospective partners to build integrations with HashiCorp Consul that are reviewed and verified by HashiCorp. You can integrate with any of the following Consul versions: - -- **Self-Managed**. Community Edition, always free -- **HashiCorp Cloud Platform (HCP)**. A hosted version of Consul managed in the cloud -- **Consul Enterprise**. Self-managed, with additional features for custom deployments - -The program is intended to be largely self-service with links to resources, code samples, documentation, and clear integration steps. - -## Categories of Consul Integrations - -By leveraging Consul's RESTful HTTP API system, prospective partners are able to build extensible integrations at the data plane, platform, and the infrastructure layer to extend Consul's functionalities. These integrations can be performed both with the community edition of Consul, Consul Enterprise, and HCP Consul Dedicated. - -**The Consul ecosystem of integrations:** - - - -![Consul Architecture](/img/consul_ecosystem_diagram2.png) - - - -**Data Plane**: These integrations extend Consul's certificate management, secure ACL configuration, observability metrics and logging, and service discovery that allows for dynamic service mapping APM and logging tools, extend sidecar proxies to support Consul service mesh, and extend API gateways to allow Consul to route incoming traffic to the proxies for mesh-enabled services. - -**Control Plane**: Consul has a client-server architecture and is the control plane for the service mesh. - -**Platform**: These integrations leverage automation of Consul agent deployment, configuration, and management. Designed to be platform agnostic, Consul can be deployed in a variety of form factors, including major Public Cloud providers (AWS, GCP, Azure) as well as in bare-metal, virtual machine, and container (Docker, Kubernetes) environments. They include the Consul agent running in both client and server mode. - -**Infrastructure**: There are two integration options in this category: natively through a direct integration with Consul or via Consul-Terraform-Sync (CTS). By leveraging Consul's powerful **Network Infrastructure Automation (NIA)*** capabilities through CTS, changes in an infrastructure are seamlessly automated when Consul detects a change in its service catalog. For example, these integrations could be used to automate IP updates of load balancers or firewall security policies by leveraging Consul service discovery. - --> **Network Infrastructure Automation (NIA)***: These integrations leverage Consul's service catalog to seamlessly integrate with Consul-Terraform-Sync (CTS) to automate changes in network infrastructure via a publisher-subscriber method. Refer to the [NIA documentation](/consul/docs/integrate/nia-integration) for details. - -**HCP Consul Dedicated**: HCP Consul Dedicated is secure by default and offers an out-of-the-box service mesh solution to streamline operations without the hassle of managing Consul servers. [Sign up for a free HCP Consul Dedicated account](https://cloud.hashicorp.com/products/consul). - -**Consul integration verification badges**: Partners will be issued the Consul Enterprise badge for integrations that work with [Consul Enterprise features](/consul/docs/enterprise) such as namespaces. Partners will be issued the HCP Consul Dedicated badge for integrations validated to work with [HCP Consul Dedicated](/hcp/docs/consul#features). 
Each badge would be displayed on HashiCorp's partner page as well as be available for posting on the partner's own website to provide better visibility and differentiation of the integration for joint customers. - - - - -![Consul Enterprise Badge](/img/consul_enterprise_partner_badge.png) - - - - -![HCP Consul Dedicated](/img/HCPc_badge.png) - - - - -Developing a valid integration with either Consul Enterprise or HCP Consul Dedicated also qualifies the partner for the Premier tier of the HashiCorp Technology Partners program. The process for verification of these integrations is detailed below. - -## Development Process - -The Consul integration development process is described in the steps below. By following these steps, Consul integrations can be developed alongside HashiCorp to ensure new integrations are reviewed, approved and released as quickly as possible. - - - -![Integration Program Steps](/img/consul_integration_program_steps.png) - - - -1. Engage: Initial contact between vendor and HashiCorp -2. Enable: Documentation, code samples and best practices for developing the integration -3. Develop and Test: Integration development and testing by vendor -4. Review/Certification: HashiCorp code review and certification of integration -5. Release: Consul integration released -6. Support: Ongoing maintenance and support of the integration by the vendor. - -### 1. Engage - -Please begin by completing [Consul Integration Program webform](https://docs.google.com/forms/d/e/1FAIpQLSf-RyVR9F0lmosao8Nnur0TTDjnl99gttnK3QP1OkfRefVKSw/viewform) to tell us about your company and the Consul integration you are developing. - -### 2. Enable - -Here are links to resources, documentation, examples and best practices to guide you through the Consul integration development and testing process: - -#### Data Plane: - -**Application Performance Monitoring (APM)** - -- [Consul Telemetry Documentation](/consul/docs/agent/telemetry) -- [Monitoring Consul with Datadog APM](https://www.datadoghq.com/blog/consul-datadog/) -- [Monitor HCP Consul Dedicated with New Relic Instant Observability](https://github.com/newrelic-experimental/hashicorp-quickstart-annex/blob/main/hcp-consul/README.md) -- [HCP Consul and CloudFabrix AIOps Integration](https://bot-docs.cloudfabrix.io/Bots/consul/?h=consul) -- [Consul and SnappyFlow Full Stack Observability](https://docs.snappyflow.io/docs/Integrations/hcp_consul) - -**Network Performance Monitoring (NPM)** - -- [Datadog NPM now supports Consul networking](https://www.datadoghq.com/blog/monitor-consul-with-datadog-npm/) - -**OpenTelemetry Integrations** - -- [Splunk SignalFX OpenTelemetry integration with Consul](https://docs.splunk.com/Observability/gdi/consul/consul.html) -- [Ship HashiCorp Consul metrics with OpenTelemetry to Logz.io](https://docs.logz.io/shipping/prometheus-sources/consul.html) -- [Ingest Consul metrics through OpenTelemetry into Lightstep Observability](https://docs.lightstep.com/integrations/ingest-metrics-consul) - -**Logging and Alerts** - -- [Consul Integration with iLert](https://docs.ilert.com/integrations/consul) -- [Consul Integration with PagerDuty](https://www.pagerduty.com/docs/guides/consul-integration-guide/) -- [Monitor Consul with Zabbix](https://www.zabbix.com/integrations/hashicorp_consul#consul) - -**API Gateway and Ingress Controller** - -- [F5 Terminating Gateway Integration Documentation](https://www.hashicorp.com/integrations/f5-networks/consul) -- [Traefik Integration with Consul Service 
Mesh](https://traefik.io/blog/integrating-consul-connect-service-mesh-with-traefik-2-5/) -- [Kong's Ingress Controller Integration with Consul](https://www.hashicorp.com/integrations/kong/consul) -- [Configuring Ingress Controllers with Consul-on-Kubernetes](/consul/docs/k8s/connect/ingress-controllers) -- [Introduction to Consul Transparent Proxy](/consul/docs/connect/transparent-proxy) -- [Getting Started with Transparent Proxy](https://www.hashicorp.com/blog/transparent-proxy-on-consul-service-mesh) - -#### Platform: - -- [Deploy Consul on Red Hat OpenShift](/consul/tutorials/kubernetes/kubernetes-openshift-red-hat) -- [Consul Integration with Layer5 Meshery](https://www.hashicorp.com/integrations/layer5-io/consul) -- [Consul Integration with VMware Tanzu Application Service](/consul/tutorials/cloud-integrations/sync-pivotal-cloud-services?utm_source=docs) - -#### Infrastructure: - --> **Note**: The types of integration areas below could be developed to natively work with Consul or through leveraging Consul-Terraform-Sync and Consul's network automation capabilities. - -**Firewalls** - - **Network Infrastructure Automation:** - - - [Automated Firewalling with Check Point](https://www.hashicorp.com/integrations/checkpoint-software/consul) - - [Automated Firewalling with Palo Alto Networks](https://www.hashicorp.com/integrations/pan/consul) - - [Automated Firewalling with Cisco FMC](https://registry.terraform.io/modules/CiscoDevNet/dynamicobjects/fmc/latest) - - [Automated Firewalling with Fortinet FortiManager](https://registry.terraform.io/modules/fortinetdev/cts-agpu/fortimanager/latest) - -**Software-Defined Networking \(SDN\)** - -- [Automating Cisco ACI with Consul](https://www.hashicorp.com/integrations/cisco/consul) - -**Load Balancer** - -- [Load Balancing with NGINX and Consul Template](/consul/tutorials/load-balancing/load-balancing-nginx?utm_source=docs) -- [Load Balancing with HAProxy Service Discovery](/consul/tutorials/load-balancing/load-balancing-haproxy?utm_source=docs) - - **Network Infrastructure Automation:** - - - [Zero-Touch Configuration of Secure Apps across BIG-IP Tenants using CTS](https://community.f5.com/t5/technical-articles/zero-touch-configuration-of-secure-apps-across-big-ip-tenants/ta-p/300190) - - [Automate VMware Advanced Load Balancers (Avi) with Consul NIA](https://www.hashicorp.com/integrations/_vmware/consul) - -**Application Delivery Controllers \(ADC\)** - -- [Automate A10 ADC with Consul NIA](/consul/tutorials/network-infrastructure-automation/consul-terraform-sync-a10-adc?utm_source=docs) -- [Automate Citrix ADC with Consul NIA](https://www.hashicorp.com/integrations/citrix-adc/consul) - -**Domain Name Service (DNS) Automation** - -- [Automate DNSimple public facing DNS records with Consul NIA](https://registry.terraform.io/modules/dnsimple/cts/dnsimple/latest) -- [Automate NS1 managed DNS with Consul NIA](https://github.com/ns1-terraform/terraform-ns1-record-sync-nia) - -**No-Code/Low-Code** - -- [Automate Consul Deployments with Sophos Factory Pipelines](https://community.sophos.com/sophos-factory/f/recommended-reads/136639/deploy-hashicorp-consul-from-sophos-factory) - -### 3. Develop and Test - -The only knowledge necessary to write a plugin is basic command-line skills and knowledge of the [Go programming language](http://www.golang.org). Use the plugin interface to develop your integration. All integrations should contain unit and acceptance testing. 
- -**HCP Consul Dedicated**: As a managed service, minimal configuration is required to deploy HCP Consul Dedicated server clusters. You only need to install Consul client agents. Furthermore, HashiCorp provides all new users an initial credit, which provides approximately two months worth of [development cluster](https://cloud.hashicorp.com/products/consul/pricing) access. When deployed with AWS or Azure free tier services, there should be no cost beyond the time spent by the designated tester. Refer to the [Deploy HCP Consul Dedicated tutorial](/consul/tutorials/get-started-hcp/hcp-gs-deploy) for details on getting started. - -HCP Consul Dedicated is currently only deployed on AWS and Microsoft Azure, so your application can be deployed to or run in AWS or Azure. - -#### HCP Consul Dedicated Resource Links: - -- [Getting Started with HCP Consul Dedicated](/consul/tutorials/get-started-hcp/hcp-gs-deploy) -- [HCP Consul Dedicated End-to-End Deployment](/consul/tutorials/cloud-deploy-automation/consul-end-to-end-overview) -- [Deploy HCP Consul Dedicated with EKS using Terraform](/consul/tutorials/cloud-deploy-automation/consul-end-to-end-eks) -- [HCP Consul Dedicated Deployment Automation](/consul/tutorials/cloud-deploy-automation) -- [HCP Consul Dedicated documentation](/hcp/docs/consul/usage) - -**Consul Enterprise**: An integration qualifies for Consul Enterprise when it is tested and compatible with Consul Enterprise Namespaces. - -### 4. Review and Approval - -HashiCorp will review and approve your Consul integration. Please send an email to [technologypartners@hashicorp.com](mailto:technologypartners@hashicorp.com) with any relevant documentation, demos or other resources and let us know your integration is ready for review. - -### 5. Release - -At this stage, the Consul integration is fully developed, documented, reviewed and approved. Once released, HashiCorp will officially list the Consul integration. - -### 6. Support - -Many vendors view the release step to be the end of the journey, while at HashiCorp we view it to be the start. Getting the Consul integration built is just the first step in enabling users. Once this is done, ongoing effort is required to maintain the integration and address any issues in a timely manner. - -The expectation for vendors is to respond to all critical issues within 48 hours and all other issues within 5 business days. HashiCorp Consul has an extremely wide community of users and we encourage everyone to report issues, however small, as well as help resolve them when possible. - -## Checklist - -Below is a checklist of steps that should be followed during the Consul integration development process. This reiterates the steps described above. - -- Complete the [Consul Integration Program webform](https://docs.google.com/forms/d/e/1FAIpQLSf-RyVR9F0lmosao8Nnur0TTDjnl99gttnK3QP1OkfRefVKSw/viewform) -- Develop and test your Consul integration following examples, documentation and best practices -- When the integration is completed and ready for HashiCorp review, send us the documentation, demos and any other resources for review at: [technologypartners@hashicorp.com](mailto:technologypartners@hashicorp.com) -- Plan to continue to support the integration with additional functionality and responding to customer issues. 
-
-## Contact Us
-
-For any questions or feedback, please contact us at: [technologypartners@hashicorp.com](mailto:technologypartners@hashicorp.com)
diff --git a/website/content/docs/integrate/vault/k8s.mdx b/website/content/docs/integrate/vault/k8s.mdx
new file mode 100644
index 000000000000..2cd4c1d344a1
--- /dev/null
+++ b/website/content/docs/integrate/vault/k8s.mdx
@@ -0,0 +1,1171 @@
+---
+layout: docs
+page_title: Use Vault for secrets management with Consul on Kubernetes
+description: >-
+  Secure Consul on Kubernetes using gossip encryption, TLS certificates, and service mesh certificates using Vault as secrets management.
+---
+
+# Use Vault for secrets management with Consul on Kubernetes
+
+A secure Consul datacenter requires you to distribute a number of secrets to your
+Consul agents before you can perform any operations. This includes a gossip encryption
+key, TLS certificates for the servers, and ACL tokens for all configuration. You will
+also need a valid license if you use Consul Enterprise.
+
+If you are deploying Consul on Kubernetes, you have different options to provide
+these secrets to your Consul agents, including:
+- store secrets in plain text in your configuration files. This is the most straightforward
+method; however, you may expose your secrets if someone compromises your Consul agents.
+- leverage Kubernetes secrets. This reduces some risk, but may lead to secrets or
+credentials sprawl as you adopt new platforms and scale your workloads.
+- use a secrets management system, like HashiCorp's Vault, to centrally manage and protect
+sensitive data (for example: tokens, passwords, certificates, encryption keys, and more).
+
+Vault is HashiCorp's secrets and encryption management system that helps
+you securely manage secrets and protect sensitive data (for example, tokens,
+passwords, certificates, encryption keys, and more).
+
+You can use HashiCorp Vault to authenticate your applications with a Kubernetes
+Service Account token. The `kubernetes` authentication method automatically
+injects a Vault token into a Kubernetes pod. This lets you use Vault to store
+all the other secrets, including the ones required by Consul.
+
+In this tutorial, you will use Vault with Kubernetes to store and manage secrets
+required for a Consul datacenter. Then, you will use these secrets to deploy
+and configure the Consul datacenter on Kubernetes.
+
+Specifically, you will:
+
+- Configure Vault secrets engines to store and generate Consul secrets.
+- Configure the Kubernetes authentication engine for Vault. This lets you authenticate using a Kubernetes service account.
+- Configure the Consul Helm chart to retrieve the secrets from Vault during deployment.
+- Deploy Consul on Kubernetes and verify the deployment completes correctly.
+
+The following architecture diagram depicts the desired outcome.
+
+![Architectural diagram for Vault as Consul secrets manager](/img/kubernetes-diagram-vault-as-secrets-manager.png)
+
+## Prerequisites
+
+You can configure the scenario to deploy either Consul OSS or Consul Enterprise.
+Select your learning path by clicking one of the following tabs.
+
+
+
+
+
+
+
+
+
+
+- A Consul Enterprise license. Save the license in a file named `consul.hclic`
+in your working folder. If you do not have a license, request a trial license on the
+[Consul Enterprise trial registration page](https://www.hashicorp.com/products/consul/trial).
+
+
+
+
+
+- [`kubectl`](https://kubernetes.io/docs/tasks/tools/install-kubectl/) to interact with your Kubernetes cluster.
+- [Vault (CLI)](/vault/tutorials/getting-started/getting-started-install) to interact with your +Vault cluster +- [jq](https://stedolan.github.io/jq/) to manipulate json output + +- An [AWS account](https://portal.aws.amazon.com/billing/signup) with AWS +Credentials [configured for use with Terraform](https://registry.terraform.io/providers/hashicorp/aws/latest/docs#authentication). + +- An [HCP account](https://portal.cloud.hashicorp.com/sign-in?utm_source=learn) + configured for [use with Terraform](https://registry.terraform.io/providers/hashicorp/hcp/latest/docs/guides/auth) + +- The [Terraform 1.0.6+ CLI](/terraform/tutorials/aws-get-started/install-cli) installed locally. + +- [git](https://git-scm.com/downloads) to clone the code repository locally + +- [helm v3.2.1+](https://helm.sh/docs/using_helm/) to deploy Consul and Vault agent injector. + + + + Some of the infrastructure in this tutorial may not qualify for the +AWS [free tier](https://aws.amazon.com/free/). Destroy the infrastructure at +the end of the guide to avoid unnecessary charges. We are not responsible for +any charges that you incur. + + + +## Deploy Kubernetes and Vault + +The scenario requires a Kubernetes cluster deployed, either locally or on a +Cloud provider, and a Vault cluster deployed, either on the Kubernetes cluster +or on the HashiCorp Cloud Platform (HCP). + +The tutorial provides example code and steps for a scenario using HCP Vault Dedicated and +an Amazon Elastic Kubernetes Service (EKS) Cluster. + +### Deploy HCP Vault Dedicated cluster and EKS cluster + +To begin, clone the repository. This repository contains all the Terraform +configuration required to complete this tutorial. + + + +```shell-session +$ git clone https://github.com/hashicorp/learn-consul-kubernetes.git +``` + +```shell-session +$ git clone git@github.com:hashicorp/learn-consul-kubernetes.git +``` + + + +Navigate into the repository folder. + + ```shell-session + $ cd learn-consul-kubernetes + ``` + +Fetch the tags from the git remote server, and checkout the tag for this tutorial. + +```shell-session +$ git fetch --all --tags && git checkout tags/v0.0.23 +``` + +Navigate into the project folder for this tutorial. + + ```shell-session + $ cd hcp-vault-eks + ``` + +The Terraform configuration deploys an Vault Dedicated Cluster, an Amazon Elastic Kubernetes +Service (EKS) Cluster, and the underlying networking components for HCP and AWS +to communicate with each other. + +Initialize the Terraform project. + + ```shell-session + $ terraform init + + Terraform has been successfully initialized! + + You may now begin working with Terraform. Try running "terraform plan" to see + any changes that are required for your infrastructure. All Terraform commands + should now work. + + If you ever set or change modules or backend configuration for Terraform, + rerun this command to reinitialize your working directory. If you forget, other + commands will detect it and remind you to do so if necessary. + ``` + + Apply the configuration to deploy the resources. Respond `yes` to the prompt to confirm. + +```shell-session +$ terraform apply +##... + +Plan: 76 to add, 0 to change, 0 to destroy. + +Changes to Outputs: + + eks_data = (sensitive value) + + oidc_provider_arn = (known after apply) + + service_account_role_arn = (known after apply) + + vault_auth_data = (sensitive value) + +Do you want to perform these actions? + Terraform will perform the actions described above. + Only 'yes' will be accepted to approve. 
+ + Enter a value: +``` + +The deployment can take up to 20 minutes to complete. When Terraform completes +successfully, it will display something similar to the following outputs: + + + +```plaintext +##... + +Apply complete! Resources: 76 added, 0 changed, 0 destroyed. + +Outputs: + +eks_data = +oidc_provider_arn = "arn:aws:iam::************:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/************************" +service_account_role_arn = "arn:aws:iam::************:role/tutorialclustertest" +vault_auth_data = +``` + + + +The Terraform configuration configures your local `kubectl` so you can interact +with the deployed Amazon EKS cluster. + +```shell-session +$ kubectl get pods --all-namespaces +NAMESPACE NAME READY STATUS RESTARTS AGE +kube-system aws-node-4zhlh 1/1 Running 0 19m +kube-system aws-node-8zt67 1/1 Running 0 18m +kube-system aws-node-h77xb 1/1 Running 0 19m +kube-system coredns-66cb55d4f4-hrzdg 1/1 Running 0 26m +kube-system coredns-66cb55d4f4-n7nqw 1/1 Running 0 26m +kube-system kube-proxy-2jlxt 1/1 Running 0 20m +kube-system kube-proxy-57lg2 1/1 Running 0 20m +kube-system kube-proxy-7w449 1/1 Running 0 20m +``` + +### Setup your environment to interact with Vault + +Once the Vault Dedicated instance is deployed, use the `vault` CLI to interact +with it. + +Use the output from the `terraform apply` command to retrieve the info for your +Vault Dedicated cluster. + +First, configure Vault Dedicated token for your environment. + + ```shell-session + $ export VAULT_TOKEN=`terraform output -json vault_auth_data | jq --raw-output .vault_token` + ``` + +Next, configure the Vault endpoint for your environment, using the public +address of the Vault Dedicated instance. + + ```shell-session + $ export VAULT_ADDR=`terraform output -json vault_auth_data | \ + jq --raw-output .cluster_host | \ + sed 's/private/public/'` + ``` + + + + Using an HCP Vault Dedicated cluster with a public endpoint is not +recommended for use in production. Read more in the +[HCP Vault Dedicated Security Overview](/hcp/docs/vault/security-overview). + + + +Your Vault Dedicated instance is also exposed on a private address accessible from your AWS +resources, in this case an EKS cluster, over the HVN peering connection. You +will use this address to configure the Kubernetes integration with Vault. + + ```shell-session + $ export VAULT_PRIVATE_ADDR=`terraform output -json vault_auth_data | \ + jq --raw-output .cluster_host` + ``` + +Finally, since Vault Dedicated uses namespaces, set the `VAULT_NAMESPACE` +environment variable to `admin`. + + ```shell-session + $ export VAULT_NAMESPACE=admin + ``` + +### Install Vault agent injector on Amazon EKS + +Create a `vault-values.yaml` file that sets the external servers to Vault Dedicated. +This will deploy a Vault agent injector into the EKS cluster. + +```shell-session +$ cat > vault-values.yaml << EOF +injector: + enabled: true + externalVaultAddr: "${VAULT_PRIVATE_ADDR}" +EOF +``` + +To get more info on the available helm values configuration options, check +[Helm Chart Configuration](/vault/docs/platform/k8s/helm/configuration) +page. + +Validate that the values file is populated correctly. You should find the Vault Dedicated private address in the file. 
+ +```shell-session +$ cat vault-values.yaml +injector: + enabled: true + externalVaultAddr: "https://vault-cluster.private.vault.00000000-0000-0000-0000-000000000000.aws.hashicorp.cloud:8200" +``` + +You will use the [official `vault-helm` chart](https://github.com/hashicorp/vault-helm) +to install the Vault agents to your EKS cluster. + +Add the HashiCorp's Helm chart repository. + +```shell-session +$ helm repo add hashicorp https://helm.releases.hashicorp.com && helm repo update +"hashicorp" has been added to your repositories +Hang tight while we grab the latest from your chart repositories... +...Successfully got an update from the "hashicorp" chart repository +Update Complete. ⎈ Happy Helming!⎈ +``` + +Install the HashiCorp Vault Helm chart. + +```shell-session +$ helm install vault -f ./vault-values.yaml hashicorp/vault --version "0.20.0" +NAME: vault +LAST DEPLOYED: Wed Mar 30 14:27:06 2022 +NAMESPACE: default +STATUS: deployed +REVISION: 1 +TEST SUITE: None +NOTES: +Thank you for installing HashiCorp Vault! + +Now that you have deployed Vault, you should look over the docs on using +Vault with Kubernetes available here: + +https://www.vaultproject.io/docs/ + +Your release is named vault. To learn more about the release, try: + + $ helm status vault + $ helm get manifest vault +``` + +Once the installation is complete, verify the Vault agent injector pod + deploys by issuing `kubectl get pods`. + +```shell-session +$ kubectl get pods +NAME READY STATUS RESTARTS AGE +vault-agent-injector-6c6bbb9785-p7x4n 1/1 Running 0 8s +``` + +At this point, Vault is ready to use and for you to configure it as the secret manager for Consul. + +## Configure Vault as Consul secret manager + +You will need to generate tokens for the different nodes and servers to securely +configure Consul. In order to generate these tokens, you will need a key for +gossip encryption, a Consul Enterprise license, TLS certificates for the Consul +servers and ACL policies + +Since you are using Vault as secrets management for your Consul datacenter, all +the secrets will be stored inside Vault. + +Vault provides a `kv` secrets engine that can be used to store arbitrary secrets. +You will use this engine to store the encryption key and the enterprise license. + +First, enable key/value v2 secrets engine (`kv-v2`). + +```shell-session +$ vault secrets enable -path=consul kv-v2 +Success! Enabled the kv-v2 secrets engine at: consul/ +``` + + + + + + + + + +### Store enterprise license in Vault + +If you want to deploy Consul Enterprise, store the license for Consul enterprise. + +Place your Consul Enterprise license into the folder, from wherever you +currently store it, and name it `consul.hclic`. + +```shell-session +$ cp /path/to/your/current/consul-ent/license consul.hclic +``` + +Store the license in Vault. + +```shell-session +$ vault kv put consul/secret/enterpriselicense key="$(cat ./consul.hclic)" +Key Value +--- ----- +created_time 2022-03-22T16:25:14.073090874Z +custom_metadata +deletion_time n/a +destroyed false +version 1 +``` + + + + + +### Store Consul gossip key in Vault + +Once the secret engine is enabled, store the encryption key in Vault. + +```shell-session +$ vault kv put consul/secret/gossip gossip="$(consul keygen)" +Key Value +--- ----- +created_time 2022-03-16T18:18:52.389731147Z +deletion_time n/a +destroyed false +version 1 +``` + +### Setup PKI secrets engine for TLS and service mesh CA + +Vault provides a `pki` secrets engine that you can use to generate TLS certificates. 
+You will use this engine to configure CAs for TLS encryption for servers and +service mesh leaf certificates for services. + +Enable Vault's PKI secrets engine at the `pki` path. + +```shell-session +$ vault secrets enable pki +Success! Enabled the pki secrets engine at: pki/ +``` + +You can tune the PKI secrets engine to issue certificates with a maximum time-to-live (TTL). +In this tutorial, you will set a TTL of 10 years (87600 hours). + +```shell-session +$ vault secrets tune -max-lease-ttl=87600h pki +Success! Tuned the secrets engine at: pki/ +``` + +Generate the root certificate for Consul CA. This command saves the certificate +in a file named `consul_ca.crt`. You will use it to configure environment +variables for Consul CLI when you need to interact with your Consul datacenter. + +```shell-session +$ vault write -field=certificate pki/root/generate/internal \ + common_name="dc1.consul" \ + ttl=87600h | tee consul_ca.crt +``` + + + + This command sets `common_name` to `dc1.consul`, which +matches the Consul datacenter and domain configuration. If you are +deploying Consul with different datacenter and domain values, use the +`common_name=""` schema to generate the certificate. + + + +The command should return an output similar to the following. + + + +```plaintext +-----BEGIN CERTIFICATE----- +MIIDMjCCAhqgAwIBAgIUGnHzuETSKLBqYz7CnW9iDdFbGVAwDQYJKoZIhvcNAQEL +BQAwFTETMBEGA1UEAxMKZGMxLmNvbnN1bDAeFw0yMjAzMTcxMDQwNTlaFw0zMjAz +MTQxMDQxMjlaMBUxEzARBgNVBAMTCmRjMS5jb25zdWwwggEiMA0GCSqGSIb3DQEB +AQUAA4IBDwAwggEKAoIBAQDPUSYAR+iHHSQlc0qUmWvix3GZIrc+yMg9RZPbaSCH +ttBd0p71weYXbMjNg8Ob0CY6umEdycXtCGOZBCkBBGPRisMrVoF9RIrWBII9XGbR +36bggYaOTtw9FYfVqVCcO1ZilcnRUpBFrtVCDVd3TZXvEPWv7j0cQ0FwqbSur3Db +VCNYPuCKt/lwill+6wlTo8yFOMRaxkBDKDGFnDKIV2gHw34xZ5vrqt2Vdeif5HSI3X3r ++zr6YAdWuwiJP+S4aTXohRinFLqHw1NMjrzbzqb8mRkuchyDfnjZBur5gxj1Z9Xs +o7fpfmWzFIleOjYHREmHtcjMcu8tti2LuGjJUAVnVg5hAgMBAAGjejB4MA4GA1Ud +DwEB/wQEAwIBBjAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBR8hhn7L3Lze5LN +aYAWszT/oo4C6TAfBgNVHSMEGDAWgBR8hhn7L3Lze5LNaYAWszT/oo4C6TAVBgNV +HREEDjAMggpkYzEuY29uc3VsMA0GCSqGSIb3DQEBCwUAA4IBAQAddNVes5f4vmO0 +zh03ShJPxH929IXFLs09uwEU3lnCQuiEhEY86x01kvSGqVnSxyBH+Xtn5va2bPCd +PQsr+9dj6J2eCV1gee6YNtKIEly4NHmYU+3ReexoGLl79guKUvOh1PG1MfHLQQun ++Y74z3s5YW89rdniWK/KdORPr63p+XQvbiuhZLfveY8BLk55mVlojKMs9HV5YOPh +znOLQNTJku04vdltNGQ4yRMDswPM2lTtUVdIgzI6S7j3DDK+gawDHLFa90zq87qY +Qux7KBBlN1VEaRQas4FrvqeRR3FtqFTzn3p+QLpOHXw3te1/6fl5oe4Cch8ZROVB +5U3wt2Em +-----END CERTIFICATE----- +``` + + + +Next, create a role that defines the configuration for the certificates. + +```shell-session +$ vault write pki/roles/consul-server \ + allowed_domains="dc1.consul,consul-server,consul-server.consul,consul-server.consul.svc" \ + allow_subdomains=true \ + allow_bare_domains=true \ + allow_localhost=true \ + generate_lease=true \ + max_ttl="720h" +``` + +The command should return an output similar to the following. + + + +```plaintext +Success! Data written to: pki/roles/consul-server +``` + + + + + + This command sets `common_name` to `dc1.consul`, which +matches the Consul datacenter and domain configuration. If you are +deploying Consul with different datacenter and domain values, use values that reflect the +allowed domains for the `consul-server` pki role. + + + +Finally, enable Vault's PKI secrets engine at the `connect-root` path to be used +as root CA for Consul service mesh. + +```shell-session +$ vault secrets enable -path connect-root pki +Success! 
Enabled the pki secrets engine at: connect-root/ +``` + +## Configure Kubernetes authentication + +Vault provides a [Kubernetes authentication](/vault/docs/auth/kubernetes) +method that enables clients to authenticate with a Kubernetes Service Account Token. + +Using the Kubernetes authentication, Vault identifies your Consul agents using +their service account and gives them access to the secrets they need to join your Consul datacenter. + +Enable the Kubernetes authentication method. + +```shell-session +$ vault auth enable kubernetes +Success! Enabled kubernetes auth method at: kubernetes/ +``` + +Vault accepts service tokens from any client from within the Kubernetes cluster. +During authentication, Vault verifies that the service account token is valid by +querying a configured Kubernetes endpoint. In order to do that, configure the +Kubernetes auth method with the JSON web token (JWT) for the service account, the +Kubernetes CA certificate, and the Kubernetes host URL. + +The chart configures a Kubernetes service account named `vault` that you will use +to enable Vault communication with Kubernetes. Retrieve the JSON Web Token (JWT) for +the `vault` service account and set it to the `token_reviewer_jwt` environment +variable. + +```shell-session +$ export token_reviewer_jwt=$(kubectl get secret \ + $(kubectl get serviceaccount vault -o jsonpath='{.secrets[0].name}') \ + -o jsonpath='{ .data.token }' | base64 --decode) +``` + +Retrieve the Kubernetes certificate authority for the service account and set it to +the `kubernetes_ca_cert` environment variable. + +```shell-session +$ export kubernetes_ca_cert=$(kubectl get secret \ + $(kubectl get serviceaccount vault -o jsonpath='{.secrets[0].name}') \ + -o jsonpath='{ .data.ca\.crt }' | base64 --decode) +``` + +Retrieve the Kubernetes cluster endpoint and set it to the `kubernetes_host_url` +environment variable. + +```shell-session +$ export kubernetes_host_url=$(kubectl config view --raw --minify --flatten \ + -o jsonpath='{.clusters[].cluster.server}') +``` + +Configure the Vault Kubernetes auth method to use the service account token. + +```shell-session +$ vault write auth/kubernetes/config \ + token_reviewer_jwt="${token_reviewer_jwt}" \ + kubernetes_host="${kubernetes_host_url}" \ + kubernetes_ca_cert="${kubernetes_ca_cert}" +``` + +The command should return an output similar to the following. + + + +```plaintext +Success! Data written to: auth/kubernetes/config +``` + + + +Verify the configuration for the Kubernetes auth method changed in Vault. + +```shell-session +$ vault read auth/kubernetes/config +Key Value +--- ----- +disable_iss_validation true +disable_local_ca_jwt false +issuer n/a +kubernetes_ca_cert -----BEGIN CERTIFICATE----- +##... +-----END CERTIFICATE----- +kubernetes_host https://66606BBB5881313742471313182BBB90999.gr7.us-east-1.eks.amazonaws.com +pem_keys [] +``` + +## Generate Vault policies + +Next, you must define the different Vault policies that will let the Consul agents +generate or retrieve the different secrets. + +You will create Vault policies to grant access to: +1. The gossip encryption key. +1. Consul server policy. +1. Consul CA access policy. +1. Consul service mesh CA policy. +1. Optionally, if you are using Consul Enterprise, Consul enterprise license. + +#### Gossip encryption key + +Earlier in the tutorial, you stored the gossip encryption in the Vault `kv` +secret engine. Define a policy that grants access to the path where you stored +the gossip encryption. 
+
+```shell-session
+$ vault policy write gossip-policy - <<EOF
+path "consul/data/secret/gossip" {
+  capabilities = ["read"]
+}
+EOF
+```
+
+The command should return an output similar to the following.
+
+```plaintext
+Success! Uploaded policy: gossip-policy
+```
+
+
+
+
+
+
+
+
+
+
+
+#### Consul enterprise license
+
+When using Consul Enterprise, you must distribute the license to your Consul nodes.
+Earlier in this tutorial, you stored the Consul license in the Vault `kv` secrets
+engine. Define a policy that grants access to the path where you stored the license.
+
+```shell-session
+$ vault policy write enterpriselicense-policy - <<EOF
+path "consul/data/secret/enterpriselicense" {
+  capabilities = ["read"]
+}
+EOF
+```
+
+The command should return an output similar to the following.
+
+```plaintext
+Success! Uploaded policy: enterpriselicense-policy
+```
+
+
+
+
+
+
+
+#### Consul server policy
+
+Consul servers need to generate TLS certificates (`pki/issue/consul-server`) and
+retrieve the CA certificate (`pki/cert/ca`).
+
+```shell-session
+$ vault policy write consul-server - <<EOF
+path "pki/issue/consul-server" {
+  capabilities = ["read", "update"]
+}
+
+path "pki/cert/ca" {
+  capabilities = ["read"]
+}
+EOF
+```
+
+The command should return an output similar to the following.
+
+```plaintext
+Success! Uploaded policy: consul-server
+```
+
+
+
+#### Consul CA access policy
+
+The policy `ca-policy` grants access to the Consul root CA so that Consul agents
+and services can verify the certificates used in the service mesh are authentic.
+
+```shell-session
+$ vault policy write ca-policy - <<EOF
+path "pki/cert/ca" {
+  capabilities = ["read"]
+}
+EOF
+```
+
+The command should return an output similar to the following.
+
+```plaintext
+Success! Uploaded policy: ca-policy
+```
+
+
+
+#### Consul service mesh CA policy
+
+The following Vault policy allows Consul to create and manage the root
+and intermediate PKI secrets engines for generating service mesh certificates.
+If you would prefer to control PKI secrets engine creation and configuration
+from Vault rather than delegating full control to Consul,
+refer to the [Vault CA provider documentation](/consul/docs/connect/ca/vault#vault-managed-pki-paths).
+
+The following Vault policy applies to Consul 1.12 and later.
+For use with earlier Consul versions, refer to the
+[Vault CA provider documentation](/consul/docs/connect/ca/vault#vault-acl-policies)
+and select your version from the version dropdown.
+
+In this example, the `RootPKIPath` is `connect-root` and the `IntermediatePKIPath`
+is `connect-intermediate-dc1`. Update these values to reflect your environment.
+
+```shell-session
+$ vault policy write connect - <<EOF
+path "/sys/mounts/connect-root" {
+  capabilities = [ "create", "read", "update", "delete", "list" ]
+}
+
+path "/sys/mounts/connect-intermediate-dc1" {
+  capabilities = [ "create", "read", "update", "delete", "list" ]
+}
+
+path "/sys/mounts/connect-intermediate-dc1/tune" {
+  capabilities = [ "update" ]
+}
+
+path "/connect-root/*" {
+  capabilities = [ "create", "read", "update", "delete", "list" ]
+}
+
+path "/connect-intermediate-dc1/*" {
+  capabilities = [ "create", "read", "update", "delete", "list" ]
+}
+
+path "auth/token/renew-self" {
+  capabilities = [ "update" ]
+}
+
+path "auth/token/lookup-self" {
+  capabilities = [ "read" ]
+}
+EOF
+```
+
+The command should return an output similar to the following.
+
+```plaintext
+Success! Uploaded policy: connect
+```
+
+
+
+## Configure Kubernetes authentication roles in Vault
+
+You have configured the Kubernetes authentication method and defined the
+policies that grant access to the resources. You can now define the
+associations between Kubernetes service accounts and Vault policies.
+
+You will create Vault roles to associate the necessary policies with:
+1. Consul server agents.
+1. Consul client agents.
+1. Consul CA certificate access.
+
+### Consul server role
+
+Create a Kubernetes authentication role in Vault named `consul-server` that
+connects the Kubernetes service account (`consul-server`) and namespace (`consul`)
+with the Vault policies `gossip-policy`, `consul-server`, and `connect`.
+The tokens returned after authentication are valid for 24 hours.
+
+
+
+
+
+```shell-session
+$ vault write auth/kubernetes/role/consul-server \
+    bound_service_account_names=consul-server \
+    bound_service_account_namespaces=consul \
+    policies="gossip-policy,consul-server,connect" \
+    ttl=24h
+```
+
+The command should return an output similar to the following.
+
+
+
+```plaintext
+Success! Data written to: auth/kubernetes/role/consul-server
+```
+
+
+
+
+
+
+
+When using Consul Enterprise, you must also associate the role with the `enterpriselicense-policy`.
+ +```shell-session +$ vault write auth/kubernetes/role/consul-server \ + bound_service_account_names=consul-server \ + bound_service_account_namespaces=consul \ + policies="gossip-policy,consul-server,connect,enterpriselicense-policy" \ + ttl=24h +``` + +The command should return an output similar to the following. + + + +```plaintext +Success! Data written to: auth/kubernetes/role/consul-server +``` + + + + + + + +### Consul client role + +Create a Kubernetes authentication role in Vault named `consul-client` that +connects the Kubernetes service account (`consul-client`) and namespace (`consul`) +with the Vault policies: `gossip-policy` and `ca-policy`. +The tokens returned after authentication are valid for 24 hours. + + + + + +```shell-session +$ vault write auth/kubernetes/role/consul-client \ + bound_service_account_names=consul-client \ + bound_service_account_namespaces=consul \ + policies="gossip-policy,ca-policy" \ + ttl=24h +``` + +The command should return an output similar to the following. + + + +```plaintext +Success! Data written to: auth/kubernetes/role/consul-client +``` + + + + + + + +When using Consul Enterprise, you must also associate the role with the `enterpriselicense-policy`. + +```shell-session +$ vault write auth/kubernetes/role/consul-client \ + bound_service_account_names=consul-client \ + bound_service_account_namespaces=consul \ + policies="gossip-policy,ca-policy,enterpriselicense-policy" \ + ttl=24h +``` + +The command should return an output similar to the following. + + + +```plaintext +Success! Data written to: auth/kubernetes/role/consul-client +``` + + + + + + + +#### Define access to Consul CA root certificate + +Create a Kubernetes authentication role in Vault named `consul-ca` that connects +all Kubernetes service account in namespace (`consul`) with the Vault policy `ca-policy`. +The tokens returned after authentication are valid for 1 hour. + +```shell-session +$ vault write auth/kubernetes/role/consul-ca \ + bound_service_account_names="*" \ + bound_service_account_namespaces=consul \ + policies=ca-policy \ + ttl=1h +``` + +The command should return an output similar to the following. + + + +```plaintext +Success! Data written to: auth/kubernetes/role/consul-ca +``` + + + +With the creation of the roles you completed the Vault configuration necessary. +The following diagram provides a summary of the configuration you created. + +![Permissions for Vault as Consul secrets manager](/img/kubernetes-diagram-permissions-vault-as-secrets-manager.png) + +## Deploy Consul datacenter + +Now that you have completed configuring Vault, you are ready to deploy Consul +datacenter on the Kubernetes cluster. + + + + + +The repository contains a configuration file for your Helm chart, named `consul-values.yaml`. + +Open the file and modify the configuration to use your `$VAULT_PRIVATE_ADDR`. + +```shell-session +$ vim consul-values.yaml +``` + + + + Content should resemble the example below. This example is not +guaranteed to be up to date. **Always** refer to the values file provided in +the repository. 

```yaml
global:
  datacenter: "dc1"
  name: consul
  domain: consul
  secretsBackend:
    vault:
      enabled: true
      consulServerRole: consul-server
      consulClientRole: consul-client
      consulCARole: consul-ca
      connectCA:
        address: $VAULT_PRIVATE_ADDR
        rootPKIPath: connect-root/
        intermediatePKIPath: connect-intermediate-dc1/
        additionalConfig: "{\"connect\": [{ \"ca_config\": [{ \"namespace\": \"admin\"}]}]}"
      agentAnnotations: |
        "vault.hashicorp.com/namespace": "admin"
##...
```

For more information about the available Helm values configuration options, refer to the
[Helm Chart Configuration](/consul/docs/reference/k8s/helm) page.

Once the configuration is complete, install Consul on your EKS cluster.

```shell-session
$ helm install --namespace consul --create-namespace \
    --wait \
    --values ./consul-values.yaml \
    consul hashicorp/consul --version "0.44.0"
```

The repository contains a configuration file for your Helm chart, named `consul-ent-values.yaml`.

Open the file and modify the configuration to use your `$VAULT_PRIVATE_ADDR`.

```shell-session
$ vim consul-ent-values.yaml
```

The content should resemble the example below. This example is not
guaranteed to be up to date. **Always** refer to the values file provided in
the repository.

```yaml
global:
  datacenter: "dc1"
  name: consul
  domain: consul
  image: hashicorp/consul-enterprise:1.12-ent
  secretsBackend:
    vault:
      enabled: true
      consulServerRole: consul-server
      consulClientRole: consul-client
      consulCARole: consul-ca
      connectCA:
        address: $VAULT_PRIVATE_ADDR
        rootPKIPath: connect-root/
        intermediatePKIPath: connect-intermediate-dc1/
        additionalConfig: "{\"connect\": [{ \"ca_config\": [{ \"namespace\": \"admin\"}]}]}"
      agentAnnotations: |
        "vault.hashicorp.com/namespace": "admin"
      enterpriseLicense:
        secretName: 'consul/data/secret/enterpriselicense'
        secretKey: 'key'
##...
```

For more information about the available Helm values configuration options, refer to the
[Helm Chart Configuration](/consul/docs/reference/k8s/helm) page.

Once the configuration is complete, install Consul on your EKS cluster.

```shell-session
$ helm install --namespace consul --create-namespace \
    --wait \
    --values ./consul-ent-values.yaml \
    consul hashicorp/consul --version "0.44.0"
```

The deployment can take up to 10 minutes to complete. When it finishes, you will see output similar to the following:

```plaintext
## ...
NOTES:
Thank you for installing HashiCorp Consul!

Your release is named consul.

To learn more about the release, run:

  $ helm status consul
  $ helm get all consul

Consul on Kubernetes Documentation:
https://developer.hashicorp.com/docs/platform/k8s

Consul on Kubernetes CLI Reference:
https://developer.hashicorp.com/docs/k8s/k8s-cli
```

## Verify configuration

Once the installation is complete, verify all the Consul pods are running using `kubectl`.

```shell-session
$ kubectl get pods --namespace consul
NAME                                           READY   STATUS    RESTARTS   AGE
consul-client-5zpkg                            2/2     Running   0          3m46s
consul-client-q7ch8                            2/2     Running   0          3m46s
consul-client-qtts7                            2/2     Running   0          3m46s
consul-connect-injector-746cd866b-zc5l2        2/2     Running   0          3m46s
consul-controller-5b5d5b8f8-qmgnc              2/2     Running   0          3m46s
consul-ingress-gateway-74b88fb69f-dr9x2        3/3     Running   0          3m46s
consul-ingress-gateway-74b88fb69f-kv59b        3/3     Running   0          3m46s
consul-server-0                                2/2     Running   0          3m45s
consul-sync-catalog-bd69d7565-8tfcz            2/2     Running   0          3m46s
consul-terminating-gateway-55cc569ddf-5wvnd    3/3     Running   0          3m45s
consul-terminating-gateway-55cc569ddf-lp9kl    3/3     Running   0          3m45s
consul-webhook-cert-manager-5bb49457bf-7n82w   1/1     Running   0          3m46s
prometheus-server-5cbddcc44b-67z8n             2/2     Running   0          2m12s
```

The configuration enables the Consul UI as a service in your EKS cluster.

Retrieve the address from the Kubernetes services.

```shell-session
$ kubectl get services --namespace consul --field-selector metadata.name=consul-ui
NAME        TYPE           CLUSTER-IP     EXTERNAL-IP                                                                PORT(S)         AGE
consul-ui   LoadBalancer   10.100.72.81   abe45f34b066541f68a625ac1d3e0cfe-1763109037.us-east-1.elb.amazonaws.com   443:32597/TCP   23h
```

Access the Consul UI using the `consul-ui` external address on port `443`.

![Consul UI on services tab](/img/kubernetes-consul_ui-services.png)

## Clean up environment

Now that you are finished with the tutorial, clean up your environment with `terraform destroy`.
Respond `yes` to the prompt to confirm the operation.

```shell-session
$ terraform destroy
Plan: 0 to add, 0 to change, 76 to destroy.

Changes to Outputs:
  - eks_data                 = (sensitive value)
  - oidc_provider_arn        = "arn:aws:iam::561656980159:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/D9338D6BDF23CFD3E878411B2BA34870" -> null
  - service_account_role_arn = "arn:aws:iam::561656980159:role/tutorialclustertest" -> null
  - vault_auth_data          = (sensitive value)

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes
```

## Next steps

Using Vault as a centralized secrets manager, you can simplify your Consul
deployments and avoid secret sprawl across multiple Kubernetes instances. Vault
helps you scale your deployments without having to trade off between security
and manageability.

In this tutorial you learned how to use Vault as the secrets manager for your Consul
datacenter running on Kubernetes.

Specifically, you:

- Configured Vault secrets engines to store or generate Consul secrets.
- Configured the Kubernetes auth method in Vault to allow authentication using a
  Kubernetes service account.
- Configured the Consul Helm chart to retrieve the secrets from Vault during deployment.
- Deployed Consul on Kubernetes and verified that secrets are being generated or
  retrieved from Vault.

To tune your Consul installation, you can read more about the Consul Helm chart
configuration values at [Helm Chart Configuration](/consul/docs/reference/k8s/helm).

To learn more about Vault Kubernetes authentication, check out [Vault Agent with Kubernetes](/vault/tutorials/kubernetes/agent-kubernetes) and [HCP Vault Dedicated with Amazon Elastic Kubernetes Service](/vault/tutorials/cloud-ops/vault-eks).
diff --git a/website/content/docs/internals/acl.mdx b/website/content/docs/internals/acl.mdx deleted file mode 100644 index 87e35844179a..000000000000 --- a/website/content/docs/internals/acl.mdx +++ /dev/null @@ -1,13 +0,0 @@ ---- -layout: docs -page_title: ACL System -description: >- - Consul provides an optional Access Control List (ACL) system which can be used - to control access to data and APIs. The ACL system is a Capability-based - system that relies on tokens which can have fine grained rules applied to - them. It is very similar to AWS IAM in many ways. ---- - -# ACL System ((#version_8_acls)) - -This content has been moved into the [ACL Guide](/consul/tutorials/security/access-control-setup-production). \ No newline at end of file diff --git a/website/content/docs/internals/index.mdx b/website/content/docs/internals/index.mdx deleted file mode 100644 index c4a1d08f6b97..000000000000 --- a/website/content/docs/internals/index.mdx +++ /dev/null @@ -1,25 +0,0 @@ ---- -layout: docs -page_title: Internals Overview -description: >- - To enhance your understanding of how to use, troubleshoot, and secure Consul, learn more about Consul's internal properties and how it works under the hood. ---- - -# Consul Internals Overview - -This section covers some of the internals of Consul. Understanding the internals of Consul is necessary to successfully -use it in production. - -Please review the following documentation to understand how Consul works. - -- [Architecture](/consul/docs/architecture) -- [Consensus Protocol](/consul/docs/architecture/consensus) -- [Gossip Protocol](/consul/docs/architecture/gossip) -- [Network Coordinates](/consul/docs/architecture/coordinates) -- [Sessions](/consul/docs/security/acl/auth-methods/oidc) -- [Anti-Entropy](/consul/docs/architecture/anti-entropy) -- [Security Model](/consul/docs/security) -- [Discovery Chain](/consul/docs/connect/manage-traffic/discovery-chain) - -You should also be familiar with [Jepsen testing](/consul/docs/architecture/jepsen), before deploying -a production datacenter. diff --git a/website/content/docs/intro.mdx b/website/content/docs/intro.mdx new file mode 100644 index 000000000000..04cb41631656 --- /dev/null +++ b/website/content/docs/intro.mdx @@ -0,0 +1,93 @@ +--- +layout: docs +page_title: What is Consul? +description: >- + Consul is a service networking solution that delivers service discovery, service mesh, and network security capabilities. It supports multi-cloud infrastructure by automating connectivity between cloud providers. Learn how Consul can help you scale operations and provide high availability across your network. +--- + +# What is Consul? + +HashiCorp Consul is a service networking solution that enables teams to manage secure network connectivity between services and across on-prem and multi-cloud environments and runtimes. Consul offers service discovery, service mesh, traffic management, and automated updates to network infrastructure devices. You can use these features individually or together in a single Consul deployment. + +> **Hands-on**: Complete the Getting Started tutorials to learn how to deploy Consul: +- [Get Started on Kubernetes](/consul/tutorials/get-started-kubernetes) +- [Get Started on VMs](/consul/tutorials/get-started-vms) +- [HashiCorp Cloud Platform (HCP) Consul Dedicated](/consul/tutorials/get-started-hcp) + +## How does Consul work? + +Consul provides a _control plane_ that enables you to register, query, and secure services deployed across your network. 
The control plane is the part of the network infrastructure that maintains a central registry to track services and their respective IP addresses. It is a distributed system that runs on clusters of nodes, such as physical servers, cloud instances, virtual machines, or containers.

Consul interacts with the _data plane_ through proxies. The data plane is the part of the network infrastructure that processes data requests.

![Basic Consul workflow](/img/what-is-consul-overview-diagram.png)

The core Consul workflow consists of the following stages:

- **Register**: Teams add services to the Consul catalog, which is a central registry that lets services automatically discover each other without requiring a human operator to modify application code, deploy additional load balancers, or hardcode IP addresses. It is the runtime source of truth for all services and their addresses. Teams can manually [define](/consul/docs/register/service/vm/define) and [register](/consul/docs/register/service/vm) services using the CLI or the API, or you can automate the process in Kubernetes with [service sync](/consul/docs/register/service/k8s/service-sync). Services can also include health checks so that Consul can monitor for unhealthy services.
- **Query**: Consul’s identity-based DNS lets you find healthy services in the Consul catalog. Services registered with Consul provide health information, access points, and other data that help you control the flow of data through your network. Your services only access other services through their local proxy according to the identity-based policies you define.
- **Secure**: After services locate upstreams, Consul ensures that service-to-service communication is authenticated, authorized, and encrypted. Consul service mesh secures microservice architectures with mTLS and can allow or restrict access based on service identities, regardless of differences in compute environments and runtimes.

## Why Consul?

Consul increases application resilience, bolsters uptime, accelerates application deployment, and improves security across service-to-service communications. HashiCorp co-founder and CTO Armon Dadgar explains how Consul solves networking challenges.

To learn more about Consul and how it compares to similar products, refer to [Consul use cases](/consul/docs/use-case).

### Automate service discovery

Adopting a microservices architecture on cloud infrastructure is a critical step toward delivering value at scale, but knowing where healthy services are running on your networks in real time becomes a challenge. Consul automates service discovery by replacing service connections usually handled with load balancers with an identity-based service catalog. The [service catalog](/consul/docs/concept/catalog) is a centralized source of truth that you can query through Consul’s DNS server or API. The catalog always knows which services are available, which have been removed, and which services are healthy.

To learn more about Consul's service discovery features and how they compare to similar products, refer to [Consul compared to other DNS tools](/consul/docs/use-case/dns).

### Connect services across runtimes and cloud providers

Modern organizations may deploy services to a combination of on-prem infrastructure environments and public cloud providers across multiple regions.
Services may run on bare metal, virtual machines, or as containers across Kubernetes clusters.

Consul routes network traffic to any runtime or infrastructure environment your services need to reach. You can also use Consul API Gateway to route traffic into and out of the network. Consul service mesh provides additional capabilities, such as securing communication between services, traffic management, and observability, with no application code changes.

Consul also has many integrations with Kubernetes that enable you to leverage Consul features in containerized environments. For example, Consul can automatically inject sidecar proxies into Kubernetes Pods and sync Kubernetes Services and non-Kubernetes services into the Consul service registry without manual changes to the application or changing the Pod definition.

You can also schedule Consul workloads with [HashiCorp Nomad](https://www.nomadproject.io/) to provide secure service-to-service communication between Nomad jobs and task groups.

### Enable zero-trust network security

Microservice architectures are complex and difficult to secure against accidental disclosure to malicious actors. Consul provides several mechanisms that enhance network security without any changes to your application code, including mutual transport layer security (mTLS) encryption on all traffic between services and Consul intentions, which are service-to-service permissions that you can manage through the Consul UI, API, and CLI.

When you deploy Consul to Kubernetes clusters, you can also integrate with [HashiCorp Vault](https://www.vaultproject.io/) to manage sensitive data. By default, Consul on Kubernetes leverages Kubernetes secrets as the backend system. Kubernetes secrets are base64 encoded, unencrypted, and lack lease or time-to-live properties. By leveraging Vault as a secrets backend for Consul on Kubernetes, you can manage and store Consul-related secrets within a centralized Vault cluster to use across one or many Consul on Kubernetes datacenters. Refer to [Vault as the Secrets Backend](/consul/docs/deploy/server/k8s/vault) for additional information.

You can also secure your Consul deployment itself by defining security policies in access control lists (ACL) to control access to data and Consul APIs.

### Protect your services against network failure

Outages are unavoidable, but with distributed systems it is critical that a power failure in one datacenter doesn’t disrupt downstream service operations. You can enable automated backups, redundancy zones, read-replicas, and other features that prevent data loss and downtime after a catastrophic event. L7 observability features also deliver service traffic metrics in the Consul UI, which help you understand the state of a service and its connections within the mesh.

### Dynamically update network infrastructure devices

Changes to your network, including day-to-day operational tasks such as updating network device endpoints and firewall or load balancer rules, can lead to problems that disrupt operations at critical moments. You can deploy the Consul-Terraform-Sync (CTS) add-on to dynamically update network infrastructure devices when a service changes. CTS monitors the service information stored in Consul and automatically launches an instance of HashiCorp Terraform to drive relevant changes to the network infrastructure when Consul registers a change, reducing the manual effort of configuring network infrastructure.
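
To make the workflow concrete, the following is a minimal sketch of what a CTS task definition might look like. The task name, module path, and service name are placeholder assumptions for this example; refer to the CTS documentation for the exact options supported by your CTS version.

```hcl
# Hypothetical CTS task: re-run a Terraform module whenever instances of the
# "web" service change in the Consul catalog. All names and paths below are
# illustrative placeholders.
task {
  name        = "update-firewall-rules"
  description = "Reconfigure firewall rules when web service instances change"
  module      = "./modules/firewall-rules"

  condition "services" {
    names = ["web"]
  }
}
```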
+ +### Optimize traffic routes for deployment and testing scenarios + +Rolling out changes can be risky, especially in complex network environments. Updated services may not behave as expected when connected to other services, resulting in upstream or downstream issues. Consul service mesh supports layer 7 (L7) traffic management, which lets you divide L7 traffic into different subsets of service instances. This enables you to divide your pool of services for canary testing, A/B tests, blue/green deployments, and soft multi-tenancy (prod/qa/staging sharing compute resources) deployments. + +## Consul Enterprise + +HashiCorp offers core Consul functionality for free in the community edition, which is ideal for smaller businesses and teams that want to pilot Consul within their organizations. As your business grows, you can upgrade to Consul Enterprise, which offers additional capabilities designed to address organizational complexities of collaboration, operations, scale, and governance. + +### HCP Consul Dedicated + +@include 'alerts/hcp-dedicated-eol.mdx' + +HashiCorp Cloud Platform (HCP) Consul is our SaaS that delivers Consul Enterprise capabilities and shifts the burden of managing the control plane to us. Create an HCP organization and leverage our expertise to simplify control plane maintenance and configuration. Learn more at [HashiCorp Cloud Platform](https://cloud.hashicorp.com/products/consul). + +## Community + +We welcome questions, suggestions, and contributions from the community. + +- Ask questions in [HashiCorp Discuss](https://discuss.hashicorp.com/c/consul/29). +- Read our [contributing guide](https://github.com/hashicorp/consul/blob/main/.github/CONTRIBUTING.md). +- [Submit a Github issue](https://github.com/hashicorp/consul/issues/new/choose) for feature requests and bug reports. diff --git a/website/content/docs/intro/index.mdx b/website/content/docs/intro/index.mdx deleted file mode 100644 index 1708f3d84b5d..000000000000 --- a/website/content/docs/intro/index.mdx +++ /dev/null @@ -1,90 +0,0 @@ ---- -layout: docs -page_title: What is Consul? -description: >- - Consul is a service networking solution that delivers service discovery, service mesh, and network security capabilities. It supports multi-cloud infrastructure by automating connectivity between cloud providers. Learn how Consul can help you scale operations and provide high availability across your network. ---- - -# What is Consul? - -HashiCorp Consul is a service networking solution that enables teams to manage secure network connectivity between services and across on-prem and multi-cloud environments and runtimes. Consul offers service discovery, service mesh, traffic management, and automated updates to network infrastructure devices. You can use these features individually or together in a single Consul deployment. - -> **Hands-on**: Complete the Getting Started tutorials to learn how to deploy Consul: -- [Get Started on Kubernetes](/consul/tutorials/get-started-kubernetes) -- [Get Started on VMs](/consul/tutorials/get-started-vms) -- [HashiCorp Cloud Platform (HCP) Consul](/consul/tutorials/get-started-hcp) - -## How does Consul work? - -Consul provides a _control plane_ that enables you to register, query, and secure services deployed across your network. The control plane is the part of the network infrastructure that maintains a central registry to track services and their respective IP addresses. 
It is a distributed system that runs on clusters of nodes, such as physical servers, cloud instances, virtual machines, or containers. - -Consul interacts with the _data plane_ through proxies. The data plane is the part of the network infrastructure that processes data requests. Refer to [Consul Architecture](/consul/docs/architecture) for details. - -![Basic Consul workflow](/img/what-is-consul-overview-diagram.png) - -The core Consul workflow consists of the following stages: - -- **Register**: Teams add services to the Consul catalog, which is a central registry that lets services automatically discover each other without requiring a human operator to modify application code, deploy additional load balancers, or hardcode IP addresses. It is the runtime source of truth for all services and their addresses. Teams can manually [define](/consul/docs/services/usage/define-services) and [register](/consul/docs/services/usage/register-services-checks) using the CLI or the API, or you can automate the process in Kubernetes with [service sync](/consul/docs/k8s/service-sync). Services can also include health checks so that Consul can monitor for unhealthy services. -- **Query**: Consul’s identity-based DNS lets you find healthy services in the Consul catalog. Services registered with Consul provide health information, access points, and other data that help you control the flow of data through your network. Your services only access other services through their local proxy according to the identity-based policies you define. -- **Secure**: After services locate upstreams, Consul ensures that service-to-service communication is authenticated, authorized, and encrypted. Consul service mesh secures microservice architectures with mTLS and can allow or restrict access based on service identities, regardless of differences in compute environments and runtimes. - -## Why Consul? -Consul increases application resilience, bolsters uptime, accelerates application deployment, and improves security across service-to-service communications. HashiCorp co-founder and CTO Armon Dadgar explains how Consul solves networking challenges. - - - -### Automate service discovery - -Adopting a microservices architecture on cloud infrastructure is a critical step toward delivering value at scale, but knowing where healthy services are running on your networks in real time becomes a challenge. Consul automates service discovery by replacing service connections usually handled with load balancers with an identity-based service catalog. The service catalog is a centralized source of truth that you can query through Consul’s DNS server or API. The catalog always knows which services are available, which have been removed, and which services are healthy. - -### Connect services across runtimes and cloud providers - -Modern organizations may deploy services to a combination of on-prem infrastructure environments and public cloud providers across multiple regions. Services may run on bare metal, virtual machines, or as containers across Kubernetes clusters. - -Consul routes network traffic to any runtime or infrastructure environment your services need to reach. You can also use Consul API Gateway to route traffic into and out of the network. Consul service mesh provides additional capabilities, such as securing communication between services, traffic management, and observability, with no application code changes. 
- -Consul also has many integrations with Kubernetes that enable you to leverage Consul features in containerized environments. For example, Consul can automatically inject sidecar proxies into Kubernetes Pods and sync Kubernetes Services and non-Kubernetes services into the Consul service registry without manual changes to the application or changing the Pod definition. - -You can also schedule Consul workloads with [HashiCorp Nomad](https://www.nomadproject.io/) to provide secure service-to-service communication between Nomad jobs and task groups. - -### Enable zero-trust network security - -Microservice architectures are complex and difficult to secure against accidental disclosure to malicious actors. Consul provides several mechanisms that enhance network security without any changes to your application code, including mutual transport layer security (mTLS) encryption on all traffic between services and Consul intentions, which are service-to-service permissions that you can manage through the Consul UI, API, and CLI. - -When you deploy Consul to Kubernetes clusters, you can also integrate with [HashiCorp Vault](https://www.vaultproject.io/) to manage sensitive data. By default, Consul on Kubernetes leverages Kubernetes secrets as the backend system. Kubernetes secrets are base64 encoded, unencrypted, and lack lease or time-to-live properties. By leveraging Vault as a secrets backend for Consul on Kubernetes, you can manage and store Consul related secrets within a centralized Vault cluster to use across one or many Consul on Kubernetes datacenters. Refer to [Vault as the Secrets Backend](/consul/docs/k8s/deployment-configurations/vault) for additional information. - -You can also secure your Consul deployment, itself, by defining security policies in access control lists (ACL) to control access to data and Consul APIs. - -### Protect your services against network failure - -Outages are unavoidable, but with distributed systems it is critical that a power failure in one datacenter doesn’t disrupt downstream service operations. You can enable automated backups, redundancy zones, read-replicas, and other features that prevent data loss and downtime after a catastrophic event. L7 observability features also deliver service traffic metrics in the Consul UI, which help you understand the state of a service and its connections within the mesh. - -### Dynamically update network infrastructure devices - -Change to your network, including day-to-day operational tasks such as updating network device endpoints and firewall or load balancer rules, can lead to problems that disrupt operations at critical moments. You can deploy the Consul-Terraform-Sync (CTS) add-on to dynamically update network infrastructure devices when a service changes. CTS monitors the service information stored in Consul and automatically launches an instance of HashiCorp Terraform to drive relevant changes to the network infrastructure when Consul registers a change, reducing the manual effort of configuring network infrastructure. - -### Optimize traffic routes for deployment and testing scenarios - -Rolling out changes can be risky, especially in complex network environments. Updated services may not behave as expected when connected to other services, resulting in upstream or downstream issues. Consul service mesh supports layer 7 (L7) traffic management, which lets you divide L7 traffic into different subsets of service instances. 
This enables you to divide your pool of services for canary testing, A/B tests, blue/green deployments, and soft multi-tenancy (prod/qa/staging sharing compute resources) deployments. - -## Consul Enterprise - -HashiCorp offers core Consul functionality for free in the community edition, which is ideal for smaller businesses and teams that want to pilot Consul within their organizations. As your business grows, you can upgrade to Consul Enterprise, which offers additional capabilities designed to address organizational complexities of collaboration, operations, scale, and governance. - -### HCP Consul Dedicated - -HashiCorp Cloud Platform (HCP) Consul is our SaaS that delivers Consul Enterprise capabilities and shifts the burden of managing the control plane to us. Create an HCP organization and leverage our expertise to simplify control plane maintenance and configuration. Learn more at [HashiCorp Cloud Platform](https://cloud.hashicorp.com/products/consul). - -## Community - -We welcome questions, suggestions, and contributions from the community. - -- Ask questions in [HashiCorp Discuss](https://discuss.hashicorp.com/c/consul/29). -- Read our [contributing guide](https://github.com/hashicorp/consul/blob/main/.github/CONTRIBUTING.md). -- [Submit a Github issue](https://github.com/hashicorp/consul/issues/new/choose) for feature requests and bug reports. diff --git a/website/content/docs/k8s.mdx b/website/content/docs/k8s.mdx new file mode 100644 index 000000000000..6695999be8c6 --- /dev/null +++ b/website/content/docs/k8s.mdx @@ -0,0 +1,81 @@ +--- +layout: docs +page_title: Consul on Kubernetes +description: >- + Consul supports Kubernetes natively, allowing you to deploy Consul sidecars to a Kubernetes service mesh and sync the k8s service registry with non-k8s services. Learn how to install Consul on Kubernetes with Helm or the Consul K8s CLI and get started with tutorials. +--- + +# Consul on Kubernetes + +Consul has many integrations with Kubernetes. You can deploy Consul +to Kubernetes using the [Helm chart](/consul/docs/k8s/installation/install#helm-chart-installation) or [Consul K8s CLI](/consul/docs/k8s/installation/install-cli#consul-k8s-cli-installation), sync services between Consul and +Kubernetes, run Consul Service Mesh, and more. +This section documents the official integrations between Consul and Kubernetes. + +## Use Cases + +**Consul Service Mesh**: +Consul can automatically inject the [Consul Service Mesh](/consul/docs/connect) +sidecar into pods so that they can accept and establish encrypted +and authorized network connections with mutual TLS. And because Consul Service Mesh +can run anywhere, pods and external services can communicate with each other over a fully encrypted connection. + +**Service sync to enable Kubernetes and non-Kubernetes services to communicate**: +Consul can sync Kubernetes services with its own service registry. This service sync allows +Kubernetes services to use Kubernetes' native service discovery capabilities to discover +and connect to external services registered in Consul, and for external services +to use Consul service discovery to discover and connect to Kubernetes services. + +**Additional integrations**: Consul can run directly on Kubernetes, so in addition to the +native integrations provided by Consul itself, any other tool built for +Kubernetes can leverage Consul. 
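
As a minimal illustration of the service mesh use case above, a workload typically opts into sidecar injection with the `consul.hashicorp.com/connect-inject` annotation on its Pod template. The Deployment name, labels, and image below are placeholder assumptions for this sketch, not part of the official installation steps.

```yaml
# Example Deployment that opts its Pods into Consul sidecar injection.
# The workload name and image are illustrative placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: static-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: static-server
  template:
    metadata:
      labels:
        app: static-server
      annotations:
        consul.hashicorp.com/connect-inject: "true"   # enable sidecar injection
    spec:
      containers:
        - name: static-server
          image: hashicorp/http-echo
          args: ["-text=hello"]
```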
+ +## Version compatibility + +The following table lists versions of Kubernetes across the three major cloud providers and the currently supported Consul versions they have been tested with. + +@include 'tables/compatibility/k8s/providers.mdx' + +The following table describes the general compatibility between Consul and Kubernetes versions. + +@include 'tables/compatibility/k8s/version.mdx' + +## Getting Started With Consul and Kubernetes + +There are several ways to try Consul with Kubernetes in different environments. + +### Tutorials + +- The [Getting Started with Consul Service Mesh track](/consul/tutorials/get-started-kubernetes) + provides guidance for installing Consul as service mesh for Kubernetes using the Helm + chart, deploying services in the service mesh, and using intentions to secure service + communications. + +- The [Migrate to Microservices with Consul Service Mesh on Kubernetes](/consul/tutorials/microservices?utm_source=docs) + collection uses an example application written by a fictional company to illustrate why and how organizations can + migrate from monolith to microservices using Consul service mesh on Kubernetes. The case study in this collection + should provide information valuable for understanding how to develop services that leverage Consul during any stage + of your microservices journey. + +- The [Consul and Minikube guide](/consul/tutorials/kubernetes/kubernetes-minikube?utm_source=docs) is a quick step-by-step guide for deploying Consul with the official Helm chart on a local instance of Minikube. + +- Review production best practices and cloud-specific configurations for deploying Consul on managed Kubernetes runtimes. + + - The [Consul on Azure Kubernetes Service (AKS) tutorial](/consul/tutorials/kubernetes/kubernetes-aks-azure?utm_source=docs) is a complete step-by-step guide on how to deploy Consul on AKS. The guide also allows you to practice deploying two microservices. + - The [Consul on Amazon Elastic Kubernetes Service (EKS) tutorial](/consul/tutorials/kubernetes/kubernetes-eks-aws?utm_source=docs) is a complete step-by-step guide on how to deploy Consul on EKS. Additionally, it provides guidance on interacting with your datacenter with the Consul UI, CLI, and API. + - The [Consul on Google Kubernetes Engine (GKE) tutorial](/consul/tutorials/kubernetes/kubernetes-gke-google?utm_source=docs) is a complete step-by-step guide on how to deploy Consul on GKE. Additionally, it provides guidance on interacting with your datacenter with the Consul UI, CLI, and API. + +- The [Consul and Kubernetes Reference Architecture](/consul/tutorials/kubernetes/kubernetes-reference-architecture?utm_source=docs) guide provides recommended practices for production. + +- The [Consul and Kubernetes Deployment](/consul/tutorials/kubernetes/kubernetes-deployment-guide?utm_source=docs) tutorial covers the necessary steps to install and configure a new Consul cluster on Kubernetes in production. + +- The [Secure Consul and Registered Services on Kubernetes](/consul/tutorials/kubernetes/kubernetes-secure-agents?utm_source=docs) tutorial covers + the necessary steps to secure a Consul cluster running on Kubernetes in production. + +- The [Layer 7 Observability with Consul Service Mesh](/consul/tutorials/kubernetes/kubernetes-layer7-observability) tutorial covers monitoring a + Consul service mesh running on Kubernetes with Prometheus and Grafana. 
+ +### Documentation + +- [Installing Consul](/consul/docs/deploy/server/k8s/helm) covers how to install Consul using the Helm chart. +- [Helm Chart Reference](/consul/docs/reference/k8s/helm) describes the different options for configuring the Helm chart. diff --git a/website/content/docs/k8s/annotations-and-labels.mdx b/website/content/docs/k8s/annotations-and-labels.mdx deleted file mode 100644 index 56e4c4c0fcd5..000000000000 --- a/website/content/docs/k8s/annotations-and-labels.mdx +++ /dev/null @@ -1,340 +0,0 @@ ---- -layout: docs -page_title: Annotations and Labels -description: >- - Annotations and labels configure service mesh sidecar properties and injection behavior when scheduling Kubernetes clusters. Learn about the annotations and labels that enable Consul’s service mesh and secure upstream communication on k8s in this reference guide. ---- - -# Annotations and Labels - -## Overview - -Consul on Kubernetes provides a few options for customizing how connect-inject or service sync behavior should be configured. -This allows the user to configure natively configure Consul on select Kubernetes resources (i.e. pods, services). - -- [Consul Service Mesh](#consul-service-mesh) - - [Annotations](#annotations) - - [Labels](#labels) -- [Service Sync](#service-sync) - - [Annotations](#annotations-1) - -The noun _connect_ is used throughout this documentation to refer to the connect -subsystem that provides Consul's service mesh capabilities. - -## Consul Service Mesh - -### Annotations - -The following Kubernetes resource annotations could be used on a pod to control connect-inject behavior: - -- `consul.hashicorp.com/connect-inject` - If this is "true" then injection - is enabled. If this is "false" then injection is explicitly disabled. - The default injector behavior requires pods to opt-in to injection by - specifying this value as "true". This default can be changed in the - injector's configuration if desired. - -- `consul.hashicorp.com/transparent-proxy` - If this is "true", this Pod - will run with transparent proxy enabled. This means you can use Kubernetes - DNS to access upstream services and all inbound and outbound traffic within - the pod is redirected to go through the proxy. - -- `consul.hashicorp.com/transparent-proxy-overwrite-probes` - If this is "true" - and transparent proxy is enabled, the Connect injector will overwrite Kubernetes - HTTP probes to point to the Envoy proxy. - -- `consul.hashicorp.com/transparent-proxy-exclude-inbound-ports` - A comma-separated - list of inbound ports to exclude from traffic redirection when running in transparent proxy - mode. - -- `consul.hashicorp.com/transparent-proxy-exclude-outbound-cidrs` - A comma-separated - list of outbound CIDRs to exclude from traffic redirection when running in transparent proxy - mode. - -- `consul.hashicorp.com/transparent-proxy-exclude-outbound-ports` - A comma-separated - list of outbound ports to exclude from traffic redirection when running in transparent proxy - mode. - -- `consul.hashicorp.com/transparent-proxy-exclude-uids` - A comma-separated - list of additional user IDs to exclude from traffic redirection when running in transparent proxy - mode. - -- `consul.hashicorp.com/use-proxy-health-check` - It this is set to `true`, it configures - a readiness endpoint on Consul sidecar proxy and queries the proxy instead of the proxy's inbound port which - forwards the request to the application. 
- -- `consul.hashicorp.com/connect-service` - For pods that accept inbound - connections, this specifies the name of the service that is being - served. This defaults to the name of the Kubernetes service associated with the pod. - - If using ACLs, this must be the same name as the Pod's `ServiceAccount`. - -- `consul.hashicorp.com/connect-service-port` - For pods that accept inbound - connections, this specifies the port to route inbound connections to. This - is the port that the service is listening on. The service port defaults to - the first exposed port on any container in the pod. If specified, the value - can be the _name_ of a configured port, such as "http" or it can be a direct - port value such as "8080". This is the port of the _service_, the proxy - public listener will listen on a dynamic port. - -- `consul.hashicorp.com/connect-service-upstreams` - The list of upstream - services that this pod needs to connect to via the service mesh along with a static - local port to listen for those connections. When transparent proxy is enabled, - this annotation is optional. This annotation can be either _labeled_ or _unlabeled_. We recommend the labeled format because it has a more consistent syntax and can be used to reference cluster peers as upstreams. - - You cannot reference auto-generated environment variables when the upstream annotation contains a dot. This is because Consul also renders the environment variables to include a dot. For example, Consul renders the variables generated for `static-server.svc:8080` as `STATIC-SERVER.SVC_CONNECT_SERVICE_HOST` and `STATIC_SERVER.SVC_CONNECT_SERVICE_PORT`, which makes the variables unusable. - - **Labeled**: - - The labeled annotation format allows you to reference any service as an upstream. You can specify a Consul Enterprise namespace. You can also specify an admin partition in the same datacenter, a cluster peer, or a WAN-federated datacenter. - - - Service name: Place the service name at the beginning of the annotation followed by `.svc` to specify the upstream service. - - ```yaml - annotations: - "consul.hashicorp.com/connect-service-upstreams":"[service-name].svc:[port]" - ``` - - - Peer or datacenter: Place the peer or datacenter after `svc.` followed by either `peer` or `dc` and the port number. - - ```yaml - annotations: - "consul.hashicorp.com/connect-service-upstreams":"[service-name].svc.[service-peer].peer:[port]" - ``` - - ```yaml - annotations: - "consul.hashicorp.com/connect-service-upstreams":"[service-name].svc.[service-dc].dc:[port]" - ``` - - - Namespace (requires Consul Enterprise): Place the namespace after `svc.` followed by `ns` and the port number. - - ```yaml - annotations: - "consul.hashicorp.com/connect-service-upstreams":"[service-name].svc.[service-namespace].ns:[port]" - ``` - - When namespaces are enabled, you must include the namespace in the annotation before specifying a cluster peer, WAN-federated datacenter, or admin partition in the same datacenter. 
- - ```yaml - annotations: - "consul.hashicorp.com/connect-service-upstreams":"[service-name].svc.[service-namespace].ns.[service-peer].peer:[port]" - ``` - - ```yaml - annotations: - "consul.hashicorp.com/connect-service-upstreams":"[service-name].svc.[service-namespace].ns.[service-partition].ap:[port]" - ``` - - ```yaml - annotations: - "consul.hashicorp.com/connect-service-upstreams":"[service-name].svc.[service-namespace].ns.[service-dc].dc:[port]" - ``` - - - **Unlabeled**: - The unlabeled annotation format allows you to reference any service not in a cluster peer as an upstream. You can specify a Consul Enterprise namespace. You can also specify an admin partition in the same datacenter or a WAN-federated datacenter. Unlike the labeled annotation, you can also reference a prepared query as an upstream. - - - Service name: Place the service name at the beginning of the annotation to specify the upstream service. You also have the option to append the WAN federated datacenter where the service is deployed. - - ```yaml - annotations: - "consul.hashicorp.com/connect-service-upstreams":"[service-name]:[port]:[optional datacenter]" - ``` - - - Namespace: Upstream services may be running in a different namespace. Place - the upstream namespace after the service name. For additional details about configuring the injector, refer to [Consul Enterprise namespaces](#consul-enterprise-namespaces) . - - ```yaml - annotations: - "consul.hashicorp.com/connect-service-upstreams":"[service-name].[service-namespace]:[port]:[optional datacenter]" - ``` - - If the namespace is not specified, the annotation defaults to the namespace of the source service. - Consul Enterprise v1.7 and older interprets the value placed in the namespace position as part of the service name. - - - Admin partitions: Upstream services may be running in a different - partition. When specifying a partition, you must also specify a namespace. Place the partition name after the namespace. If you specify the name of the datacenter, it must be the local datacenter. Communicating across partitions using this method is only supported within a - datacenter. For cross partition communication across datacenters, [establish a cluster - peering connection](/consul/docs/k8s/connect/cluster-peering/usage/establish-peering) and set the upstream with a labeled annotation format. - - ```yaml - annotations: - "consul.hashicorp.com/connect-service-upstreams":"[service-name].[service-namespace].[service-partition]:[port]:[optional datacenter]" - ``` - - - Prepared queries: To reference a [prepared query](/consul/api-docs/query) in an upstream annotation, prepend the annotation - with `prepared_query` and then invoke the name of the query. - - ```yaml - annotations: - 'consul.hashicorp.com/connect-service-upstreams': 'prepared_query:[query name]:[port]' - ``` - - - **Multiple upstreams**: Delimit multiple services or upstreams with commas. You can specify any of the unlabeled, labeled, or prepared query formats when using the supported versions for the formats. 
- - ```yaml - annotations: - "consul.hashicorp.com/connect-service-upstreams":"[service-name]:[port]:[optional datacenter],[service-name]:[port]:[optional datacenter]" - ``` - - ```yaml - annotations: - "consul.hashicorp.com/connect-service-upstreams":"[service-name]:[port]:[optional datacenter],prepared_query:[query name]:[port],[service-name].svc:[port]" - ``` - -- `consul.hashicorp.com/envoy-extra-args` - A space-separated list of [arguments](https://www.envoyproxy.io/docs/envoy/latest/operations/cli) - to be passed to the injected envoy binary. - - ```yaml - annotations: - consul.hashicorp.com/envoy-extra-args: '--log-level debug --disable-hot-restart' - ``` - -- `consul.hashicorp.com/kubernetes-service` - Specifies the name of the Kubernetes service used for Consul service registration. - This is useful when multiple Kubernetes services reference the same deployment. Any service that does not match the name - specified in this annotation is ignored. When not specified no service is ignored. - - ```yaml - annotations: - consul.hashicorp.com/kubernetes-service: 'service-name-to-use' - ``` - -- `consul.hashicorp.com/service-tags` - A comma separated list of tags that will - be applied to the Consul service and its sidecar. - - ```yaml - annotations: - consul.hashicorp.com/service-tags: foo,bar,baz - ``` - - If you need your tag to have a comma in it you can escape the comma with `\,`. For example, - `consul.hashicorp.com/service-tags: foo\,bar\,baz` will become the single tag `foo,bar,baz`. - -- `consul.hashicorp.com/service-meta-` - Set Consul meta key/value - pairs that will be applied to the Consul service and its sidecar. - The key will be what comes after `consul.hashicorp.com/service-meta-`, e.g. - `consul.hashicorp.com/service-meta-foo: bar` will result in `foo: bar`. - - ```yaml - annotations: - consul.hashicorp.com/service-meta-foo: baz - consul.hashicorp.com/service-meta-bar: baz - ``` - -- `consul.hashicorp.com/sidecar-proxy-` - Override default resource settings for - the sidecar proxy container. - The defaults are set in Helm config via the [`connectInject.sidecarProxy.resources`](/consul/docs/k8s/helm#v-connectinject-sidecarproxy-resources) key. - - - `consul.hashicorp.com/sidecar-proxy-cpu-limit` - Override the default CPU limit. - - `consul.hashicorp.com/sidecar-proxy-cpu-request` - Override the default CPU request. - - `consul.hashicorp.com/sidecar-proxy-memory-limit` - Override the default memory limit. - - `consul.hashicorp.com/sidecar-proxy-memory-request` - Override the default memory request. - -- `consul.hashicorp.com/consul-envoy-proxy-concurrency` - Override the default envoy worker thread count. This should be set low for sidecar - use cases and can be raised for edge proxies like gateways. - -- `consul.hashicorp.com/consul-sidecar-` - Override default resource settings for - the `consul-sidecar` container. - The defaults are set in Helm config via the [`global.consulSidecarContainer.resources`](/consul/docs/k8s/helm#v-global-consulsidecarcontainer) key. - - - `consul.hashicorp.com/consul-sidecar-cpu-limit` - Override the default CPU limit. - - `consul.hashicorp.com/consul-sidecar-cpu-request` - Override the default CPU request. - - `consul.hashicorp.com/consul-sidecar-memory-limit` - Override the default memory limit. - - `consul.hashicorp.com/consul-sidecar-memory-request` - Override the default memory request. 
- -- `consul.hashicorp.com/enable-sidecar-proxy-lifecycle` - Override the default Helm value [`connectInject.sidecarProxy.lifecycle.defaultEnabled`](/consul/docs/k8s/helm#v-connectinject-sidecarproxy-lifecycle-defaultenabled) -- `consul.hashicorp.com/enable-sidecar-proxy-shutdown-drain-listeners` - Override the default Helm value [`connectInject.sidecarProxy.lifecycle.defaultEnableShutdownDrainListeners`](/consul/docs/k8s/helm#v-connectinject-sidecarproxy-lifecycle-defaultenableshutdowndrainlisteners) -- `consul.hashicorp.com/sidecar-proxy-lifecycle-shutdown-grace-period-seconds` - Override the default Helm value [`connectInject.sidecarProxy.lifecycle.defaultShutdownGracePeriodSeconds`](/consul/docs/k8s/helm#v-connectinject-sidecarproxy-lifecycle-defaultshutdowngraceperiodseconds) -- `consul.hashicorp.com/sidecar-proxy-lifecycle-graceful-port` - Override the default Helm value [`connectInject.sidecarProxy.lifecycle.defaultGracefulPort`](/consul/docs/k8s/helm#v-connectinject-sidecarproxy-lifecycle-defaultgracefulport) -- `consul.hashicorp.com/sidecar-proxy-lifecycle-graceful-shutdown-path` - Override the default Helm value [`connectInject.sidecarProxy.lifecycle.defaultGracefulShutdownPath`](/consul/docs/k8s/helm#v-connectinject-sidecarproxy-lifecycle-defaultgracefulshutdownpath) - -- `consul.hashicorp.com/sidecar-proxy-startup-failure-seconds` - Override the default Helm value [`connectInject.sidecarProxy.defaultStartupFailureSeconds`](/consul/docs/k8s/helm#v-connectinject-sidecarproxy-defaultstartupfailureseconds) -- `consul.hashicorp.com/sidecar-proxy-liveness-failure-seconds` - Override the default Helm value [`connectInject.sidecarProxy.defaultLivenessFailureSeconds`](/consul/docs/k8s/helm#v-connectinject-sidecarproxy-defaultlivenessfailureseconds) - -- `consul.hashicorp.com/enable-metrics` - Override the default Helm value [`connectInject.metrics.defaultEnabled`](/consul/docs/k8s/helm#v-connectinject-metrics-defaultenabled). -- `consul.hashicorp.com/enable-metrics-merging` - Override the default Helm value [`connectInject.metrics.defaultEnableMerging`](/consul/docs/k8s/helm#v-connectinject-metrics-defaultenablemerging). -- `consul.hashicorp.com/merged-metrics-port` - Override the default Helm value [`connectInject.metrics.defaultMergedMetricsPort`](/consul/docs/k8s/helm#v-connectinject-metrics-defaultmergedmetricsport). -- `consul.hashicorp.com/prometheus-scrape-port` - Override the default Helm value [`connectInject.metrics.defaultPrometheusScrapePort`](/consul/docs/k8s/helm#v-connectinject-metrics-defaultprometheusscrapeport). -- `consul.hashicorp.com/prometheus-scrape-path` - Override the default Helm value [`connectInject.metrics.defaultPrometheusScrapePath`](/consul/docs/k8s/helm#v-connectinject-metrics-defaultprometheusscrapepath). -- `consul.hashicorp.com/prometheus-ca-file` - Local filesystem path to a CA file for Envoy to use - when serving TLS on the Prometheus metrics endpoint. Only applicable when `envoy_prometheus_bind_addr` - is set in proxy config. -- `consul.hashicorp.com/prometheus-ca-path` - Local filesystem path to a directory of CA certificates - for Envoy to use when serving TLS on the Prometheus metrics endpoint. Only applicable when - `envoy_prometheus_bind_addr` is set in proxy config. -- `consul.hashicorp.com/prometheus-cert-file` - Local filesystem path to a certificate file for Envoy to use - when serving TLS on the Prometheus metrics endpoint. Only applicable when `envoy_prometheus_bind_addr` - is set in proxy config. 
-- `consul.hashicorp.com/prometheus-key-file` - Local filesystem path to a private key file for Envoy to use - when serving TLS on the Prometheus metrics endpoint. Only applicable when `envoy_prometheus_bind_addr` - is set in proxy config. -- `consul.hashicorp.com/service-metrics-port` - Set the port where the mesh service exposes metrics. -- `consul.hashicorp.com/service-metrics-path` - Set the path where the mesh service exposes metrics. -- `consul.hashicorp.com/connect-inject-mount-volume` - Comma separated list of container names to mount the connect-inject volume into. The volume will be mounted at `/consul/connect-inject`. The connect-inject volume contains Consul internals data needed by the other sidecar containers, for example the `consul` binary, and the Pod's Consul ACL token. This data can be valuable for advanced use-cases, such as making requests to the Consul API from within application containers. -- `consul.hashicorp.com/consul-sidecar-user-volume` - JSON objects as specified by the [Volume pod spec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volume-v1-core), that define volumes to add to the Envoy sidecar. - ```yaml - annotations: - "consul.hashicorp.com/consul-sidecar-user-volume": "[{\"name\": \"secrets-data\", \"hostPath\": "[{\"path\": \"/mnt/secrets-path\"}]"}]" - ``` -- `consul.hashicorp.com/consul-sidecar-user-volume-mount` - JSON objects as specified by the [Volume mount pod spec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#volumemount-v1-core), that define volumeMounts to add to the Envoy sidecar. - ```yaml - annotations: - "consul.hashicorp.com/consul-sidecar-user-volume-mount": "[{\"name\": \"secrets-store-mount\", \"mountPath\": \"/mnt/secrets-store\"}]" - ``` -- `consul.hashicorp.com/proxy-config-map` - JSON object specifying [Proxy Config Options](/consul/docs/connect/proxies/envoy#proxy-config-options). The proxy config map provides a low-level interface for setting configuration fields on a per-proxy basis during service registration. This configuration field is intended to be used in situations where a field does not exist in [service defaults configuration entries](/consul/docs/connect/config-entries/service-defaults) and another annotation does not provide explicit access to one of the Envoy configuration options. - ```yaml - annotations: - "consul.hashicorp.com/proxy-config-map": "{ \"xds_fetch_timeout_ms\": 30000 }" - ``` - -### Labels - -Resource labels could be used on a Kubernetes service to control connect-inject behavior. - -- `consul.hashicorp.com/service-ignore` - This label can be set on a Kubernetes Service. - If set to "true", the service will not be used to register a Consul endpoint. This can be - useful in cases where 2 or more services point to the same deployment. Consul cannot register - multiple endpoints to the same deployment. This label can be used to tell the endpoint - registration to ignore all services except for the one which should be used for routing requests - using Consul. - -## Service Sync - -### Annotations - -The following Kubernetes resource annotations could be used on a pod to [Service Sync](https://developer.hashicorp.com/consul/docs/k8s/service-sync) behavior: - -- `consul.hashicorp.com/service-sync`: If this is set to `true`, then the Kubernetes service is explicitly configured to be synced to Consul. 
- - ```yaml - annotations: - 'consul.hashicorp.com/service-sync': 'true' - ``` - -- `consul.hashicorp.com/service-port`: Configures the port to register to the Consul Catalog for the Kubernetes service. The annotation value may be a name of a port (recommended) or an exact port value. Refer to [service ports](https://developer.hashicorp.com/consul/docs/k8s/service-sync#service-ports) for more information. - - ```yaml - annotations: - 'consul.hashicorp.com/service-port': 'http' - ``` - -- `consul.hashicorp.com/service-tags`: A comma separated list of strings (without whitespace) to use for registering tags to the service registered to Consul. These custom tags automatically include the `k8s` tag which can't be disabled. - - ```yaml - annotations: - 'consul.hashicorp.com/service-tags': 'primary,foo' - ``` - -- `consul.hashicorp.com/service-meta-KEY`: A map for specifying service metadata for Consul services. The "KEY" below can be set to any key. This allows you to set multiple meta values. - - ```yaml - annotations: - 'consul.hashicorp.com/service-meta-KEY': 'value' - ``` - -- `consul.hashicorp.com/service-weight:` - Configures ability to support weighted loadbalancing by service annotation for Catalog Sync. The integer provided will be applied as a weight for the `passing` state for the health of the service. Refer to [weights](/consul/docs/services/configuration/services-configuration-reference#weights) in service configuration for more information on how this is leveraged for services in the Consul catalog. - - ```yaml - annotations: - consul.hashicorp.com/service-weight: 10 - ``` - - diff --git a/website/content/docs/k8s/architecture.mdx b/website/content/docs/k8s/architecture.mdx deleted file mode 100644 index 58526dc3ba2e..000000000000 --- a/website/content/docs/k8s/architecture.mdx +++ /dev/null @@ -1,48 +0,0 @@ ---- -layout: docs -page_title: Consul on Kubernetes Control Plane Architecture -description: >- - When running on Kubernetes, Consul’s control plane architecture does not change significantly. Server agents are deployed as a StatefulSet with a persistent volume, while client agents can run as a k8s DaemonSet with an exposed API port or be omitted with Consul Dataplanes. ---- - -# Architecture - -This topic describes the architecture, components, and resources associated with Consul deployments to Kubernetes. Consul employs the same architectural design on Kubernetes as it does with other platforms, but Kubernetes provides additional benefits that make operating a Consul cluster easier. Refer to [Consul Architecture](/consul/docs/architecture) for more general information on Consul's architecture. - -> **For more specific guidance:** -> - For guidance on datacenter design, refer to [Consul and Kubernetes Reference Architecture](/consul/tutorials/kubernetes-production/kubernetes-reference-architecture). -> - For step-by-step deployment guidance, refer to [Consul and Kubernetes Deployment Guide](/consul/tutorials/kubernetes-production/kubernetes-deployment-guide). -> - For non-Kubernetes guidance, refer to the standard [production deployment guide](/consul/tutorials/production-deploy/deployment-guide). - -## Server Agents - -The server agents are deployed as a `StatefulSet` and use persistent volume -claims to store the server state. This also ensures that the -[node ID](/consul/docs/agent/config/config-files#node_id) is persisted so that servers -can be rescheduled onto new IP addresses without causing issues. 
The server agents are configured with [anti-affinity](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity) rules so that they are placed on different nodes. A readiness probe is configured that marks the pod as ready only when it has established a leader.
-
-A Kubernetes `Service` is registered to represent the servers and exposes the ports that are required to communicate with the Consul server pods. The servers use the DNS address of this service to join a Consul cluster, without requiring any other access to the Kubernetes cluster. Additional Consul servers may also use the non-ready endpoints published by the Kubernetes service, so that servers can use the service to join during bootstrap and upgrades.
-
-Additionally, a **PodDisruptionBudget** is configured so the Consul server cluster maintains quorum during voluntary operational events. The maximum unavailable is `(n/2)-1` where `n` is the number of server agents.
-
--> **Note:** Kubernetes and Helm do not delete Persistent Volumes or Persistent Volume Claims when a [StatefulSet is deleted](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#stable-storage), so this must be done manually when removing servers.
-
-## Consul Dataplane
-
-By default, Consul on Kubernetes uses an alternate service mesh configuration that injects sidecars without client agents. _Consul Dataplane_ manages Envoy proxies and leaves responsibility for other functions to the orchestrator, which removes the need to run client agents on every node.
-
-![Diagram of Consul Dataplanes in Kubernetes deployment](/img/k8s-dataplanes-architecture.png)
-
-Refer to [Simplified Service Mesh with Consul Dataplanes](/consul/docs/connect/dataplane) for more information.
-
-Consul Dataplane is the default proxy manager in Consul on Kubernetes 1.14 and later. If you are on Consul 1.13 or older, refer to [upgrading to Consul Dataplane](/consul/docs/k8s/upgrade#upgrading-to-consul-dataplanes) for specific upgrade instructions.
diff --git a/website/content/docs/k8s/compatibility.mdx b/website/content/docs/k8s/compatibility.mdx
deleted file mode 100644
index a786ed513ad1..000000000000
--- a/website/content/docs/k8s/compatibility.mdx
+++ /dev/null
@@ -1,79 +0,0 @@
----
-layout: docs
-page_title: Consul on Kubernetes Version Compatibility
-description: >-
-  New releases require corresponding version updates to Consul on Kubernetes and its Helm chart. Review the compatibility matrix for Consul and consul-k8s and additional notes for integrating Vault and third-party platforms.
----
-
-# Consul on Kubernetes Version Compatibility
-
-For every release of Consul on Kubernetes, a Helm chart, a `consul-k8s-control-plane` binary, and a `consul-k8s` CLI binary are built and distributed through a single version. When deploying via Helm, the recommended path for upgrading Consul on Kubernetes is to upgrade using the same `consul-k8s-control-plane` version as the Helm chart, as the Helm chart and control plane binary are tightly coupled.
-
-## Supported Consul and Kubernetes versions
-
-Consul Kubernetes versions all of its components (`consul-k8s` CLI, `consul-k8s-control-plane`, and Helm chart) with a single semantic version. When installing or upgrading to a specific version, ensure that you are using the correct Consul version with the compatible Helm chart or `consul-k8s` CLI.
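
As a quick sanity check before planning an upgrade, you can compare the `consul-k8s` CLI version and Helm release against the Consul version reported by a server pod. The commands below are a minimal sketch that assumes the chart is installed in the `consul` namespace with a server pod named `consul-server-0`; adjust the names for your deployment.

```shell-session
$ consul-k8s version
$ helm list --namespace consul
$ kubectl exec --namespace consul consul-server-0 -- consul version
```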
- -### Compatibility matrix - -The following sections describe the compatibility matrix for -Consul Kubernetes components and dependencies. -Some releases of Consul Enterprise receive -[Long Term Support (LTS)](/consul/docs/enterprise/long-term-support) -and have an [extended compatibility window](#enterprise-long-term-support-releases) -compared to non-LTS Enterprise and community edition releases. - -#### Standard releases - -Unless otherwise noted, rows in the following compatibility table -apply to both Consul Enterprise and Consul community edition (CE). - -| Consul version | Compatible `consul-k8s` versions | Compatible Kubernetes versions | Compatible OpenShift versions | -| -------------- | -------------------------------- | -------------------------------| ------------------------------| -| 1.20.x | 1.6.x | 1.28.x - 1.30.x | 4.13.x - 4.15.x | -| 1.19.x | 1.5.x | 1.27.x - 1.29.x | 4.13.x - 4.15.x | -| 1.18.x CE | 1.4.x | 1.26.x - 1.29.x | 4.13.x - 4.15.x | -| 1.17.x | 1.3.x | 1.25.x - 1.28.x | 4.12.x - 4.15.x | - -#### Enterprise Long Term Support releases - -Active Consul Enterprise -[Long Term Support (LTS)](/consul/docs/enterprise/long-term-support) -releases expand their Kubernetes version compatibility window -until the LTS release reaches its end of maintenance. - -| Consul version | Compatible `consul-k8s` versions | Compatible Kubernetes versions | Compatible OpenShift versions | -| -------------- | -------------------------------- | -------------------------------| ------------------------------| -| 1.18.x Ent | 1.4.x | 1.26.x - 1.29.x | 4.13.x - 4.15.x | -| 1.15.x Ent | 1.1.x | 1.23.x - 1.28.x | 4.10.x - 4.15.x | - -### Version-specific upgrade requirements - -As of Consul v1.14.0, Kubernetes deployments use [Consul Dataplane](/consul/docs/connect/dataplane) instead of client agents. If you upgrade Consul from a version that uses client agents to a version that uses dataplanes, you must follow specific steps to update your Helm chart and remove client agents from the existing deployment. Refer to [Upgrading to Consul Dataplane](/consul/docs/k8s/upgrade#upgrading-to-consul-dataplane) for more information. - -The v1.0.0 release of the Consul on Kubernetes Helm chart also introduced a change to the [`externalServers[].hosts` parameter](/consul/docs/k8s/helm#v-externalservers-hosts). Previously, you were able to enter a provider lookup as a string in this field. Now, you must include `exec=` at the start of a string containing a provider lookup. Otherwise, the string is treated as a DNS name. Refer to the [`go-netaddrs`](https://github.com/hashicorp/go-netaddrs) library and command line tool for more information. - -## Supported Envoy versions - -Supported versions of Envoy and `consul-dataplane` (for Consul K8s 1.0 and above) for Consul versions are also found in [Envoy - Supported Versions](/consul/docs/connect/proxies/envoy#supported-versions). Starting with `consul-k8s` 1.0, `consul-dataplane` will include a bundled version of Envoy. The recommended best practice is to use the default version of Envoy or `consul-dataplane` that is provided in the Helm `values.yaml` file, as that is the version that has been tested with the default Consul and Consul Kubernetes binaries for a given Helm chart. - -## Vault as a Secrets Backend compatibility - -Starting with Consul K8s 0.39.0 and Consul 1.11.x, Consul Kubernetes supports the ability to utilize Vault as the secrets backend for all the secrets utilized by Consul on Kubernetes. 
- -| `consul-k8s` Versions | Compatible Vault Versions | Compatible `vault-k8s` Versions | -| ------------------------ | --------------------------| ----------------------------- | -| 0.39.0 - latest | 1.9.0 - latest | 0.14.0 - latest | - -## Platform specific compatibility notes - -### Red Hat OpenShift - -You can enable support for Red Hat OpenShift by setting `enabled: true` in the `global.openshift` stanza. Refer to the [Deploy Consul on RedHat OpenShift tutorial](https://developer.hashicorp.com/consul/tutorials/kubernetes/kubernetes-openshift-red-hat) for instructions on deploying to OpenShift. - -### VMware Tanzu Kubernetes Grid and Tanzu Kubernetes Grid Integrated Edition - -Consul Kubernetes is [certified](https://marketplace.cloud.vmware.com/services/details/hashicorp-consul-1?slug=true) for both VMware Tanzu Kubernetes Grid, and VMware Tanzu Kubernetes Integrated Edition. - -- Tanzu Kubernetes Grid is certified for version 1.3.0 and above. Only Calico is supported as the CNI Plugin. - - diff --git a/website/content/docs/k8s/connect/cluster-peering/tech-specs.mdx b/website/content/docs/k8s/connect/cluster-peering/tech-specs.mdx deleted file mode 100644 index f6c0ae031633..000000000000 --- a/website/content/docs/k8s/connect/cluster-peering/tech-specs.mdx +++ /dev/null @@ -1,161 +0,0 @@ ---- -layout: docs -page_title: Cluster Peering on Kubernetes Technical Specifications -description: >- - In Kubernetes deployments, cluster peering connections interact with mesh gateways, exported services, and ACLs. Learn about requirements specific to k8s, including required Helm values and custom resource definitions (CRDs). ---- - -# Cluster peering on Kubernetes technical specifications - -This reference topic describes the technical specifications associated with using cluster peering in your Kubernetes deployments. These specifications include [required Helm values](#helm-requirements) and [required custom resource definitions (CRDs)](#crd-requirements), as well as required Consul components and their configurations. To learn more about Consul's cluster peering feature, refer to [cluster peering overview](/consul/docs/connect/cluster-peering). - -For cluster peering requirements in non-Kubernetes deployments, refer to [cluster peering technical specifications](/consul/docs/connect/cluster-peering/tech-specs). - -## General requirements - -Make sure your Consul environment meets the following prerequisites: - -- Consul v1.14 or higher -- Consul on Kubernetes v1.0.0 or higher -- At least two Kubernetes clusters - -You must also configure the following service mesh components in order to establish cluster peering connections: - -- [Helm](#helm-requirements) -- [Custom resource definitions (CRD)](#crd-requirements) -- [Mesh gateways](#mesh-gateway-requirements) -- [Exported services](#exported-service-requirements) -- [ACLs](#acl-requirements) - -## Helm specifications - -Consul's default configuration supports cluster peering connections directly between clusters. In production environments, we recommend using mesh gateways to securely route service mesh traffic between partitions with cluster peering connections. 
The following values must be set in the Helm chart to enable mesh gateways: - -- [`global.tls.enabled = true`](/consul/docs/k8s/helm#v-global-tls-enabled) -- [`meshGateway.enabled = true`](/consul/docs/k8s/helm#v-meshgateway-enabled) - -Refer to the following example Helm configuration: - - - -```yaml -global: - name: consul - image: "hashicorp/consul:1.16.0" - peering: - enabled: true - tls: - enabled: true -meshGateway: - enabled: true -``` - - - -After mesh gateways are enabled in the Helm chart, you can separately [configure Mesh CRDs](#mesh-gateway-configuration-for-kubernetes). - -## CRD specifications - -You must create the following CRDs in order to establish a peering connection: - -- `PeeringAcceptor`: Generates a peering token and accepts an incoming peering connection. -- `PeeringDialer`: Uses a peering token to make an outbound peering connection with the cluster that generated the token. - -Refer to the following example CRDs: - - - - - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: PeeringAcceptor -metadata: - name: cluster-02 ## The name of the peer you want to connect to -spec: - peer: - secret: - name: "peering-token" - key: "data" - backend: "kubernetes" -``` - - - - - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: PeeringDialer -metadata: - name: cluster-01 ## The name of the peer you want to connect to -spec: - peer: - secret: - name: "peering-token" - key: "data" - backend: "kubernetes" -``` - - - - -## Mesh gateway specifications - -To change Consul's default configuration and enable cluster peering through mesh gateways, use a mesh configuration entry to update your network's service mesh proxies globally: - -1. In `cluster-01` create the `Mesh` custom resource with `peeringThroughMeshGateways` set to `true`. - - - - ```yaml - apiVersion: consul.hashicorp.com/v1alpha1 - kind: Mesh - metadata: - name: mesh - spec: - peering: - peerThroughMeshGateways: true - ``` - - - -1. Apply the mesh CRD to `cluster-01`. - - ```shell-session - $ kubectl --context $CLUSTER1_CONTEXT apply -f mesh.yaml - ``` - -1. Apply the mesh CRD to `cluster-02`. - - ```shell-session - $ kubectl --context $CLUSTER2_CONTEXT apply -f mesh.yaml - ``` - - - - For help setting up the cluster context variables used in this example, refer to [assign cluster IDs to environmental variables](/consul/docs/k8s/connect/cluster-peering/usage/establish-peering#assign-cluster-ids-to-environmental-variables). - - - -When cluster peering through mesh gateways, consider the following deployment requirements: - -- A Consul cluster requires a registered mesh gateway in order to export services to peers in other regions or cloud providers. -- The mesh gateway must also be registered in the same admin partition as the exported services and their `exported-services` configuration entry. An enterprise license is required to use multiple admin partitions with a single cluster of Consul servers. -- To use the `local` mesh gateway mode, you must register a mesh gateway in the importing cluster. -- Define the `Proxy.Config` settings using opaque parameters compatible with your proxy. For additional Envoy proxy configuration information, refer to [Gateway options](/consul/docs/connect/proxies/envoy#gateway-options) and [Escape-hatch overrides](/consul/docs/connect/proxies/envoy#escape-hatch-overrides). - -### Mesh gateway modes - -By default, all cluster peering connections use mesh gateways in [remote mode](/consul/docs/connect/gateways/mesh-gateway/service-to-service-traffic-wan-datacenters#remote). 
Be aware of these additional requirements when changing a mesh gateway's mode.
-
-- For mesh gateways that connect peered clusters, you can set the `mode` as either `remote` or `local`.
-- The `none` mode is invalid for mesh gateways with cluster peering connections.
-
-To learn how to change the mesh gateway mode to `local` on your Kubernetes deployment, refer to [configure the mesh gateway mode for traffic between services](/consul/docs/k8s/connect/cluster-peering/usage/establish-peering#configure-the-mesh-gateway-mode-for-traffic-between-services).
-
-## Exported service specifications
-
-The `exported-services` CRD is required in order for services to communicate across partitions with cluster peering connections. Basic guidance on using the `exported-services` configuration entry is included in [Establish cluster peering connections](/consul/docs/k8s/connect/cluster-peering/usage/establish-peering#export-services-between-clusters).
-
-Refer to [`exported-services` configuration entry](/consul/docs/connect/config-entries/exported-services) for more information.
\ No newline at end of file
diff --git a/website/content/docs/k8s/connect/cluster-peering/usage/create-sameness-groups.mdx b/website/content/docs/k8s/connect/cluster-peering/usage/create-sameness-groups.mdx
deleted file mode 100644
index 2674805e166c..000000000000
--- a/website/content/docs/k8s/connect/cluster-peering/usage/create-sameness-groups.mdx
+++ /dev/null
@@ -1,292 +0,0 @@
----
-page_title: Create sameness groups
-description: |-
-  Learn how to create sameness groups between partitions and cluster peers on Kubernetes so that Consul can identify instances of the same service across partitions and datacenters.
----
-
-# Create sameness groups on Kubernetes
-
-This topic describes how to create a sameness group, which designates a set of admin partitions as functionally identical in a Consul deployment running on Kubernetes. Adding an admin partition to a sameness group enables Consul to recognize services registered to remote partitions with cluster peering connections as instances of the same service when they share a name and Consul namespace.
-
-For information about configuring a failover strategy using sameness groups, refer to [Failover with sameness groups](/consul/docs/connect/manage-traffic/failover/sameness).
-
-## Overview
-
-Sameness groups are user-defined sets of partitions with identical configurations, including custom resource definitions (CRDs) for service and proxy defaults. Partitions on separate clusters should have an established cluster peering connection in order to recognize each other.
-
-To create and use sameness groups in your network, complete the following steps:
-
-- **Create sameness group custom resource definitions (CRDs) for each member of the group**. For each partition that you want to include in the sameness group, you must write and apply a sameness group CRD that defines the group's members from that partition's perspective. Refer to the [sameness group configuration entry reference](/consul/docs/connect/config-entries/sameness-group) for details on configuration hierarchy, default values, and specifications.
-- **Export services to members of the sameness group**. You must write and apply an exported services CRD that makes the partition's services available to other members of the group. Refer to the [exported services configuration entry reference](/consul/docs/connect/config-entries/exported-services) for additional specification information.
-- **Create service intentions for each member of the sameness group**. For each partition that you want to include in the sameness group, you must write and apply service intentions CRDs to authorize traffic to your services from all members of the group. Refer to the [service intentions configuration entry reference](/consul/docs/connect/config-entries/service-intentions) for additional specification information. - -## Requirements - -- All datacenters where you want to create sameness groups must run Consul v1.16 or later. Refer to [upgrade instructions](/consul/docs/k8s/upgrade) for more information about how to upgrade your deployment. -- A [Consul Enterprise license](/consul/docs/enterprise/license/overview) is required. - -### Before you begin - -Before creating a sameness group, take the following actions to prepare your network: - -#### Check Consul namespaces and service naming conventions - -Sameness groups are defined at the partition level. Consul assumes all partitions in the group have identical configurations, including identical service names and identical Consul namespaces. This behavior occurs even when two partitions in the group contain functionally different services that share a common name and namespace. For example, if distinct services both named `api` were registered to different members of a sameness group, it could lead to errors because requests may be sent to the incorrect service. - -To prevent errors, check the names of the services deployed to your network and the namespaces they are deployed in. Pay particular attention to the default namespace to confirm that services have unique names. If different services share a name, you should either change one of the service’s names or deploy one of the services to a different namespace. - -#### Deploy mesh gateways for each partition - -Mesh gateways are required for cluster peering connections and recommended to secure cross-partition traffic in a single datacenter. Therefore, we recommend securing your network, and especially your production environment, by deploying mesh gateways to each datacenter. Refer to [mesh gateways specifications](/consul/docs/k8s/connect/cluster-peering/tech-specs#mesh-gateway-specifications) for more information about configuring mesh gateways. - -#### Establish cluster peering relationships between remote partitions - -You must establish connections with cluster peers before you can create a sameness group that includes them. A cluster peering connection exists between two admin partitions in different datacenters, and each connection between two partitions must be established separately with each peer. Refer to [establish cluster peering connections](/consul/docs/k8s/connect/cluster-peering/usage/establish-peering) for step-by-step instructions. - -To establish cluster peering connections and define a group as part of the same workflow, follow instructions up to [Export services between clusters](/consul/docs/k8s/connect/cluster-peering/usage/establish-peering#export-services-between-clusters). You can use the same exported services and service intention configuration entries to establish the cluster peering connection and create the sameness group. - -## Create a sameness group - -To create a sameness group, you must write and apply a set of three CRDs for each partition that is a member of the group: - -- Sameness group CRDs: Define the sameness group from each partition’s perspective. -- Exported services CRDs: Make services available to other partitions in the group. 
-- Service intentions CRDs: Authorize traffic between services across partitions. - -### Define the sameness group from each partition’s perspective - -To define a sameness group for a partition, create a [sameness group CRD](/consul/docs/connect/config-entries/sameness-group) that describes the partitions and cluster peers that are part of the group. Typically, this order follows this pattern: - -1. The local partition -1. Other partitions in the same datacenter -1. Partitions with established cluster peering relationships - -If you want all services to failover to other instances in the sameness group by default, set `spec.defaultForFailover=true` and list the group members in the order you want to use in a failover scenario. Refer to [failover with sameness groups](/consul/docs/connect/manage-traffic/failover/sameness) for more information. - -Be aware that the sameness group CRDs are different for each partition. The following example demonstrates how to format three different CRDs for three partitions that are part of the sameness group `product-group` when Partition 1 and Partition 2 are in DC1, and the third partition is Partition 1 in DC2: - - - - - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: SamenessGroup -metadata: - name: product-group -spec: - defaultForFailover: true - members: - - partition: partition-1 - - partition: partition-2 - - peer: dc2-partition-1 -``` - - - - - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: SamenessGroup -metadata: - name: product-group -spec: - defaultForFailover: true - members: - - partition: partition-2 - - partition: partition-1 - - peer: dc2-partition-1 -``` - - - - - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: SamenessGroup -metadata: - name: product-group -spec: - defaultForFailover: true - members: - - partition: partition-1 - - peer: dc1-partition-1 - - peer: dc1-partition-2 -``` - - - - - -After you create the CRD, apply it to the Consul server with the following `kubectl` CLI command: - -```shell-session -$ kubectl apply -f product-group.yaml -``` - -Then, repeat the process to create and apply a CRD for every partition that is a member of the sameness group. - -### Export services to other partitions in the sameness group - -To make services available to other members of the sameness group, you must write and apply an [exported services CRD](/consul/docs/connect/config-entries/exported-services) for each partition in the group. This CRD exports the local partition's services to the rest of the group members. In each CRD, set the sameness group as the `consumer` for the exported services. You can export multiple services in a single exported services configuration entry. - -Because you are configuring the consumer to reference the sameness group instead of listing out each partition and cluster peer, you do not need to edit this configuration again when you add a partition or peer to the group. 
-
-The following example demonstrates how to format three different `ExportedServices` CRDs to make a service named `api` deployed to the `store` namespace of each partition available to all other group members:
-
-
-```yaml
-apiVersion: consul.hashicorp.com/v1alpha1
-kind: ExportedServices
-metadata:
-  name: partition-1
-spec:
-  services:
-    - name: api
-      namespace: store
-      consumers:
-        - samenessGroup: product-group
-```
-
-
-```yaml
-apiVersion: consul.hashicorp.com/v1alpha1
-kind: ExportedServices
-metadata:
-  name: partition-2
-spec:
-  services:
-    - name: api
-      namespace: store
-      consumers:
-        - samenessGroup: product-group
-```
-
-
-```yaml
-apiVersion: consul.hashicorp.com/v1alpha1
-kind: ExportedServices
-metadata:
-  name: partition-1
-spec:
-  services:
-    - name: api
-      namespace: store
-      consumers:
-        - samenessGroup: product-group
-```
-
-
-For more information about exporting services, including examples of CRDs that export multiple services at the same time, refer to the [exported services configuration entry reference](/consul/docs/connect/config-entries/exported-services).
-
-After you create each exported services configuration entry, apply it to the Consul server with the following CLI command:
-
-```shell-session
-$ kubectl apply -f product-group-export.yaml
-```
-
-#### Export services for cluster peers and sameness groups as part of the same workflow
-
-Creating a cluster peering connection between two partitions and then adding the partitions to a sameness group requires that you write and apply two separate exported services CRDs. One CRD exports services to the peer, and a second CRD exports services to other members of the group.
-
-If your goal for peering clusters is to create a sameness group, you can write and apply a single exported services configuration entry by configuring the `services[].consumers` block with the `samenessGroup` field instead of the `peer` field. Be aware that this scenario requires you to write the `SamenessGroup` CRD to Kubernetes before you apply the `ExportedServices` CRD that references the sameness group.
-
-### Create service intentions to authorize traffic between group members
-
-Exporting services to other members of the sameness group makes them visible to remote partitions, but you must also create service intentions so that local services are authorized to send and receive traffic from a member of the sameness group.
-
-For each partition that is a member of the group, write and apply a [service intentions CRD](/consul/docs/connect/config-entries/service-intentions) that defines intentions for the services that are part of the group. In the `sources` block of the configuration entry, include the service name, its namespace, and the sameness group, and grant `allow` permissions.
-
-Because you are using the sameness group in the `sources` block rather than listing out each partition and cluster peer, you do not have to make further edits to the service intentions configuration entries when members are added to or removed from the group.
-
-The following example demonstrates how to format three different `ServiceIntentions` CRDs to make a service named `api` available to all instances of `payments` deployed in all members of the sameness group including the local partition. In this example, `api` is deployed to the `store` namespace in all three partitions.
-
-
-```yaml
-apiVersion: consul.hashicorp.com/v1alpha1
-kind: ServiceIntentions
-metadata:
-  name: api-intentions
-spec:
-  sources:
-    - name: api
-      action: allow
-      namespace: store
-      samenessGroup: product-group
-```
-
-
-```yaml
-apiVersion: consul.hashicorp.com/v1alpha1
-kind: ServiceIntentions
-metadata:
-  name: api-intentions
-spec:
-  sources:
-    - name: api
-      action: allow
-      namespace: store
-      samenessGroup: product-group
-```
-
-
-```yaml
-apiVersion: consul.hashicorp.com/v1alpha1
-kind: ServiceIntentions
-metadata:
-  name: api-intentions
-spec:
-  sources:
-    - name: api
-      action: allow
-      namespace: store
-      samenessGroup: product-group
-```
-
-
-Refer to [create and manage intentions](/consul/docs/connect/intentions/create-manage-intentions) for more information about how to create and apply service intentions in Consul.
-
-After you create each service intentions configuration entry, apply it to the Consul server with the following CLI command:
-
-```shell-session
-$ kubectl apply -f api-intentions.yaml
-```
-
-#### Create service intentions for cluster peers and sameness groups as part of the same workflow
-
-Creating a cluster peering connection between two partitions and then adding the partitions to a sameness group requires that you write and apply two separate service intentions CRDs. One CRD authorizes services for the peer, and a second CRD authorizes services for other members of the group.
-
-If your goal for peering clusters is to create a sameness group, you can write and apply a single service intentions CRD by configuring the `sources` block with the `samenessGroup` field instead of the `peer` field. Be aware that this scenario requires you to write the `SamenessGroup` CRD to Kubernetes before you apply the `ServiceIntentions` CRD that references the sameness group.
-
-## Next steps
-
-When `defaultForFailover=true` in a sameness group CRD, additional upstream configuration is not required.
-
-After creating a sameness group, you can also set up failover between services in a sameness group. Refer to [Failover with sameness groups](/consul/docs/connect/manage-traffic/failover/sameness) for more information.
diff --git a/website/content/docs/k8s/connect/cluster-peering/usage/establish-peering.mdx b/website/content/docs/k8s/connect/cluster-peering/usage/establish-peering.mdx
deleted file mode 100644
index bc82be872a1c..000000000000
--- a/website/content/docs/k8s/connect/cluster-peering/usage/establish-peering.mdx
+++ /dev/null
@@ -1,442 +0,0 @@
----
-layout: docs
-page_title: Establish Cluster Peering Connections on Kubernetes
-description: >-
-  To establish a cluster peering connection on Kubernetes, generate a peering token to establish communication. Then export services and authorize requests with service intentions.
----
-
-# Establish cluster peering connections on Kubernetes
-
-This page details the process for establishing a cluster peering connection between services in a Consul on Kubernetes deployment.
-
-The overall process for establishing a cluster peering connection consists of the following steps:
-
-1. Create a peering token in one cluster.
-1. Use the peering token to establish peering with a second cluster.
-1. Export services between clusters.
-1. Create intentions to authorize services for peers.
-
-Cluster peering between services cannot be established until all four steps are complete.
If you want to establish cluster peering connections and create sameness groups at the same time, refer to the guidance in [create sameness groups](/consul/docs/k8s/connect/cluster-peering/usage/create-sameness-groups). - -For general guidance for establishing cluster peering connections, refer to [Establish cluster peering connections](/consul/docs/connect/cluster-peering/usage/establish-cluster-peering). - -## Prerequisites - -You must meet the following requirements to use Consul's cluster peering features with Kubernetes: - -- Consul v1.14.1 or higher -- Consul on Kubernetes v1.0.0 or higher -- At least two Kubernetes clusters - -In Consul on Kubernetes, peers identify each other using the `metadata.name` values you establish when creating the `PeeringAcceptor` and `PeeringDialer` CRDs. For additional information about requirements for cluster peering on Kubernetes deployments, refer to [Cluster peering on Kubernetes technical specifications](/consul/docs/k8s/connect/cluster-peering/tech-specs). - -### Assign cluster IDs to environmental variables - -After you provision a Kubernetes cluster and set up your kubeconfig file to manage access to multiple Kubernetes clusters, you can assign your clusters to environmental variables for future use. - -1. Get the context names for your Kubernetes clusters using one of these methods: - - - Run the `kubectl config current-context` command to get the context for the cluster you are currently in. - - Run the `kubectl config get-contexts` command to get all configured contexts in your kubeconfig file. - -1. Use the `kubectl` command to export the Kubernetes context names and then set them to variables. For more information on how to use kubeconfig and contexts, refer to the [Kubernetes docs on configuring access to multiple clusters](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/). - - ```shell-session - $ export CLUSTER1_CONTEXT= - $ export CLUSTER2_CONTEXT= - ``` - -### Install Consul using Helm and configure peering over mesh gateways - -To use cluster peering with Consul on Kubernetes deployments, update the Helm chart with [the required values](/consul/docs/k8s/connect/cluster-peering/tech-specs#helm-requirements). After updating the Helm chart, you can use the `consul-k8s` CLI to apply `values.yaml` to each cluster. - -1. In `cluster-01`, run the following commands: - - ```shell-session - $ export HELM_RELEASE_NAME1=cluster-01 - ``` - - ```shell-session - $ helm install ${HELM_RELEASE_NAME1} hashicorp/consul --create-namespace --namespace consul --version "1.2.0" --values values.yaml --set global.datacenter=dc1 --kube-context $CLUSTER1_CONTEXT - ``` - -1. In `cluster-02`, run the following commands: - - ```shell-session - $ export HELM_RELEASE_NAME2=cluster-02 - ``` - - ```shell-session - $ helm install ${HELM_RELEASE_NAME2} hashicorp/consul --create-namespace --namespace consul --version "1.2.0" --values values.yaml --set global.datacenter=dc2 --kube-context $CLUSTER2_CONTEXT - ``` - -1. For both clusters apply the `Mesh` configuration entry values provided in [Mesh Gateway Specifications](/consul/docs/k8s/connect/cluster-peering/tech-specs#mesh-gateway-specifications) to allow establishing peering connections over mesh gateways. - -### Configure the mesh gateway mode for traffic between services - -In Kubernetes deployments, you can configure mesh gateways to use `local` mode so that a service dialing a service in a remote peer dials the local mesh gateway instead of the remote mesh gateway. 
To configure the mesh gateway mode so that this traffic always leaves through the local mesh gateway, you can use the `ProxyDefaults` CRD. - -1. In `cluster-01` apply the following `ProxyDefaults` CRD to configure the mesh gateway mode. - - - - ```yaml - apiVersion: consul.hashicorp.com/v1alpha1 - kind: ProxyDefaults - metadata: - name: global - spec: - meshGateway: - mode: local - ``` - - - - ```shell-session - $ kubectl --context $CLUSTER1_CONTEXT apply -f proxy-defaults.yaml - ``` - -1. In `cluster-02` apply the following `ProxyDefaults` CRD to configure the mesh gateway mode. - - - - ```yaml - apiVersion: consul.hashicorp.com/v1alpha1 - kind: ProxyDefaults - metadata: - name: global - spec: - meshGateway: - mode: local - ``` - - - - ```shell-session - $ kubectl --context $CLUSTER2_CONTEXT apply -f proxy-defaults.yaml - ``` - -## Create a peering token - -To begin the cluster peering process, generate a peering token in one of your clusters. The other cluster uses this token to establish the peering connection. - -Every time you generate a peering token, a single-use secret for establishing the secret is embedded in the token. Because regenerating a peering token invalidates the previously generated secret, you must use the most recently created token to establish peering connections. - -1. In `cluster-01`, create the `PeeringAcceptor` custom resource. To ensure cluster peering connections are secure, the `metadata.name` field cannot be duplicated. Refer to the peer by a specific name. - - - - ```yaml - apiVersion: consul.hashicorp.com/v1alpha1 - kind: PeeringAcceptor - metadata: - name: cluster-02 ## The name of the peer you want to connect to - spec: - peer: - secret: - name: "peering-token" - key: "data" - backend: "kubernetes" - ``` - - - -1. Apply the `PeeringAcceptor` resource to the first cluster. - - ```shell-session - $ kubectl --context $CLUSTER1_CONTEXT apply --filename acceptor.yaml - ``` - -1. Save your peering token so that you can export it to the other cluster. - - ```shell-session - $ kubectl --context $CLUSTER1_CONTEXT get secret peering-token --output yaml > peering-token.yaml - ``` - -## Establish a connection between clusters - -Next, use the peering token to establish a secure connection between the clusters. - -1. Apply the peering token to the second cluster. - - ```shell-session - $ kubectl --context $CLUSTER2_CONTEXT apply --filename peering-token.yaml - ``` - -1. In `cluster-02`, create the `PeeringDialer` custom resource. To ensure cluster peering connections are secure, the `metadata.name` field cannot be duplicated. Refer to the peer by a specific name. - - - - ```yaml - apiVersion: consul.hashicorp.com/v1alpha1 - kind: PeeringDialer - metadata: - name: cluster-01 ## The name of the peer you want to connect to - spec: - peer: - secret: - name: "peering-token" - key: "data" - backend: "kubernetes" - ``` - - - -1. Apply the `PeeringDialer` resource to the second cluster. - - ```shell-session - $ kubectl --context $CLUSTER2_CONTEXT apply --filename dialer.yaml - ``` - -## Export services between clusters - -After you establish a connection between the clusters, you need to create an `exported-services` CRD that defines the services that are available to another admin partition. - -While the CRD can target admin partitions either locally or remotely, clusters peering always exports services to remote admin partitions. Refer to [exported service consumers](/consul/docs/connect/config-entries/exported-services#consumers-1) for more information. - - -1. 
For the service in `cluster-02` that you want to export, add the `"consul.hashicorp.com/connect-inject": "true"` annotation to your service's pods prior to deploying. The annotation allows the workload to join the mesh. It is highlighted in the following example: - - - - ```yaml - # Service to expose backend - apiVersion: v1 - kind: Service - metadata: - name: backend - spec: - selector: - app: backend - ports: - - name: http - protocol: TCP - port: 80 - targetPort: 9090 - --- - apiVersion: v1 - kind: ServiceAccount - metadata: - name: backend - --- - # Deployment for backend - apiVersion: apps/v1 - kind: Deployment - metadata: - name: backend - labels: - app: backend - spec: - replicas: 1 - selector: - matchLabels: - app: backend - template: - metadata: - labels: - app: backend - annotations: - "consul.hashicorp.com/connect-inject": "true" - spec: - serviceAccountName: backend - containers: - - name: backend - image: nicholasjackson/fake-service:v0.22.4 - ports: - - containerPort: 9090 - env: - - name: "LISTEN_ADDR" - value: "0.0.0.0:9090" - - name: "NAME" - value: "backend" - - name: "MESSAGE" - value: "Response from backend" - ``` - - - -1. Deploy the `backend` service to the second cluster. - - ```shell-session - $ kubectl --context $CLUSTER2_CONTEXT apply --filename backend.yaml - ``` - -1. In `cluster-02`, create an `ExportedServices` custom resource. The name of the peer that consumes the service should be identical to the name set in the `PeeringDialer` CRD. - - - - ```yaml - apiVersion: consul.hashicorp.com/v1alpha1 - kind: ExportedServices - metadata: - name: default ## The name of the partition containing the service - spec: - services: - - name: backend ## The name of the service you want to export - consumers: - - peer: cluster-01 ## The name of the peer that receives the service - ``` - - - -1. Apply the `ExportedServices` resource to the second cluster. - - ```shell-session - $ kubectl --context $CLUSTER2_CONTEXT apply --filename exported-service.yaml - ``` - -## Authorize services for peers - -Before you can call services from peered clusters, you must set service intentions that authorize those clusters to use specific services. Consul prevents services from being exported to unauthorized clusters. - -1. Create service intentions for the second cluster. The name of the peer should match the name set in the `PeeringDialer` CRD. - - - - ```yaml - apiVersion: consul.hashicorp.com/v1alpha1 - kind: ServiceIntentions - metadata: - name: backend-deny - spec: - destination: - name: backend - sources: - - name: "*" - action: deny - - name: frontend - action: allow - peer: cluster-01 ## The peer of the source service - ``` - - - -1. Apply the intentions to the second cluster. - - - - ```shell-session - $ kubectl --context $CLUSTER2_CONTEXT apply --filename intention.yaml - ``` - - - -1. Add the `"consul.hashicorp.com/connect-inject": "true"` annotation to your service's pods before deploying the workload so that the services in `cluster-01` can dial `backend` in `cluster-02`. To dial the upstream service from an application, configure the application so that that requests are sent to the correct DNS name as specified in [Service Virtual IP Lookups](/consul/docs/services/discovery/dns-static-lookups#service-virtual-ip-lookups). In the following example, the annotation that allows the workload to join the mesh and the configuration provided to the workload that enables the workload to dial the upstream service using the correct DNS name is highlighted. 
[Service Virtual IP Lookups for Consul Enterprise](/consul/docs/services/discovery/dns-static-lookups#service-virtual-ip-lookups-for-consul-enterprise) details how you would similarly format a DNS name including partitions and namespaces. - - - - ```yaml - # Service to expose frontend - apiVersion: v1 - kind: Service - metadata: - name: frontend - spec: - selector: - app: frontend - ports: - - name: http - protocol: TCP - port: 9090 - targetPort: 9090 - --- - apiVersion: v1 - kind: ServiceAccount - metadata: - name: frontend - --- - apiVersion: apps/v1 - kind: Deployment - metadata: - name: frontend - labels: - app: frontend - spec: - replicas: 1 - selector: - matchLabels: - app: frontend - template: - metadata: - labels: - app: frontend - annotations: - "consul.hashicorp.com/connect-inject": "true" - spec: - serviceAccountName: frontend - containers: - - name: frontend - image: nicholasjackson/fake-service:v0.22.4 - securityContext: - capabilities: - add: ["NET_ADMIN"] - ports: - - containerPort: 9090 - env: - - name: "LISTEN_ADDR" - value: "0.0.0.0:9090" - - name: "UPSTREAM_URIS" - value: "http://backend.virtual.cluster-02.consul" - - name: "NAME" - value: "frontend" - - name: "MESSAGE" - value: "Hello World" - - name: "HTTP_CLIENT_KEEP_ALIVES" - value: "false" - ``` - - - -1. Apply the service file to the first cluster. - - ```shell-session - $ kubectl --context $CLUSTER1_CONTEXT apply --filename frontend.yaml - ``` - -1. Run the following command in `frontend` and then check the output to confirm that you peered your clusters successfully. - - ```shell-session - $ kubectl --context $CLUSTER1_CONTEXT exec -it $(kubectl --context $CLUSTER1_CONTEXT get pod -l app=frontend -o name) -- curl localhost:9090 - ``` - - - - ```json - { - "name": "frontend", - "uri": "/", - "type": "HTTP", - "ip_addresses": [ - "10.16.2.11" - ], - "start_time": "2022-08-26T23:40:01.167199", - "end_time": "2022-08-26T23:40:01.226951", - "duration": "59.752279ms", - "body": "Hello World", - "upstream_calls": { - "http://backend.virtual.cluster-02.consul": { - "name": "backend", - "uri": "http://backend.virtual.cluster-02.consul", - "type": "HTTP", - "ip_addresses": [ - "10.32.2.10" - ], - "start_time": "2022-08-26T23:40:01.223503", - "end_time": "2022-08-26T23:40:01.224653", - "duration": "1.149666ms", - "headers": { - "Content-Length": "266", - "Content-Type": "text/plain; charset=utf-8", - "Date": "Fri, 26 Aug 2022 23:40:01 GMT" - }, - "body": "Response from backend", - "code": 200 - } - }, - "code": 200 - } - ``` - - \ No newline at end of file diff --git a/website/content/docs/k8s/connect/cluster-peering/usage/l7-traffic.mdx b/website/content/docs/k8s/connect/cluster-peering/usage/l7-traffic.mdx deleted file mode 100644 index 0c3b28d693ef..000000000000 --- a/website/content/docs/k8s/connect/cluster-peering/usage/l7-traffic.mdx +++ /dev/null @@ -1,75 +0,0 @@ ---- -layout: docs -page_title: Manage L7 Traffic With Cluster Peering on Kubernetes -description: >- - Combine service resolver configurations with splitter and router configurations to manage L7 traffic in Consul on Kubernetes deployments with cluster peering connections. Learn how to define dynamic traffic rules to target peers for redirects in k8s. ---- - -# Manage L7 traffic with cluster peering on Kubernetes - -This usage topic describes how to configure the `service-resolver` custom resource definition (CRD) to set up and manage L7 traffic between services that have an existing cluster peering connection in Consul on Kubernetes deployments. 
- -For general guidance for managing L7 traffic with cluster peering, refer to [Manage L7 traffic with cluster peering](/consul/docs/connect/cluster-peering/usage/peering-traffic-management). - -## Service resolvers for redirects and failover - -When you use cluster peering to connect datacenters through their admin partitions, you can use [dynamic traffic management](/consul/docs/connect/manage-traffic) to configure your service mesh so that services automatically forward traffic to services hosted on peer clusters. - -However, the `service-splitter` and `service-router` CRDs do not natively support directly targeting a service instance hosted on a peer. Before you can split or route traffic to a service on a peer, you must define the service hosted on the peer as an upstream service by configuring a failover in a `service-resolver` CRD. Then, you can set up a redirect in a second service resolver to interact with the peer service by name. - -For more information about formatting, updating, and managing configuration entries in Consul, refer to [How to use configuration entries](/consul/docs/agent/config-entries). - -## Configure dynamic traffic between peers - -To configure L7 traffic management behavior in deployments with cluster peering connections, complete the following steps in order: - -1. Define the peer cluster as a failover target in the service resolver configuration. - - The following example updates the [`service-resolver` CRD](/consul/docs/connect/config-entries/) in `cluster-01` so that Consul redirects traffic intended for the `frontend` service to a backup instance in peer `cluster-02` when it detects multiple connection failures. - - ```yaml - apiVersion: consul.hashicorp.com/v1alpha1 - kind: ServiceResolver - metadata: - name: frontend - spec: - connectTimeout: 15s - failover: - '*': - targets: - - peer: 'cluster-02' - service: 'frontend' - namespace: 'default' - ``` - -1. Define the desired behavior in `service-splitter` or `service-router` CRD. - - The following example splits traffic evenly between `frontend` and `frontend-peer`: - - ```yaml - apiVersion: consul.hashicorp.com/v1alpha1 - kind: ServiceSplitter - metadata: - name: frontend - spec: - splits: - - weight: 50 - ## defaults to service with same name as configuration entry ("frontend") - - weight: 50 - service: frontend-peer - ``` - -1. Create a second `service-resolver` configuration entry on the local cluster that resolves the name of the peer service you used when splitting or routing the traffic. - - The following example uses the name `frontend-peer` to define a redirect targeting the `frontend` service on the peer `cluster-02`: - - ```yaml - apiVersion: consul.hashicorp.com/v1alpha1 - kind: ServiceResolver - metadata: - name: frontend-peer - spec: - redirect: - peer: 'cluster-02' - service: 'frontend' - ``` \ No newline at end of file diff --git a/website/content/docs/k8s/connect/cluster-peering/usage/manage-peering.mdx b/website/content/docs/k8s/connect/cluster-peering/usage/manage-peering.mdx deleted file mode 100644 index 42f8fc59832e..000000000000 --- a/website/content/docs/k8s/connect/cluster-peering/usage/manage-peering.mdx +++ /dev/null @@ -1,121 +0,0 @@ ---- -layout: docs -page_title: Manage Cluster Peering Connections on Kubernetes -description: >- - Learn how to list, read, and delete cluster peering connections using Consul on Kubernetes. You can also reset cluster peering connections on k8s deployments. 
---- - -# Manage cluster peering connections on Kubernetes - -This usage topic describes how to manage cluster peering connections on Kubernetes deployments. - -After you establish a cluster peering connection, you can get a list of all active peering connections, read a specific peering connection's information, and delete peering connections. - -For general guidance for managing cluster peering connections, refer to [Manage L7 traffic with cluster peering](/consul/docs/connect/cluster-peering/usage/peering-traffic-management). - -## Reset a peering connection - -To reset the cluster peering connection, you need to generate a new peering token from the cluster where you created the `PeeringAcceptor` CRD. The only way to create or set a new peering token is to manually adjust the value of the annotation `consul.hashicorp.com/peering-version`. Creating a new token causes the previous token to expire. - -1. In the `PeeringAcceptor` CRD, add the annotation `consul.hashicorp.com/peering-version`. If the annotation already exists, update its value to a higher version. - - - - ```yaml - apiVersion: consul.hashicorp.com/v1alpha1 - kind: PeeringAcceptor - metadata: - name: cluster-02 - annotations: - consul.hashicorp.com/peering-version: "1" ## The peering version you want to set, must be in quotes - spec: - peer: - secret: - name: "peering-token" - key: "data" - backend: "kubernetes" - ``` - - - -1. After updating `PeeringAcceptor`, repeat all of the steps to [establish a new peering connection](/consul/docs/k8s/connect/cluster-peering/usage/establish-peering). - -## List all peering connections - -In Consul on Kubernetes deployments, you can list all active peering connections in a cluster using the Consul CLI. - -1. If necessary, [configure your CLI to interact with the Consul cluster](/consul/tutorials/get-started-kubernetes/kubernetes-gs-deploy#configure-your-cli-to-interact-with-consul-cluster). - -1. Run the [`consul peering list` CLI command](/consul/commands/peering/list). - - ```shell-session - $ consul peering list - Name State Imported Svcs Exported Svcs Meta - cluster-02 ACTIVE 0 2 env=production - cluster-03 PENDING 0 0 - ``` - -## Read a peering connection - -In Consul on Kubernetes deployments, you can get information about individual peering connections between clusters using the Consul CLI. - -1. If necessary, [configure your CLI to interact with the Consul cluster](/consul/tutorials/get-started-kubernetes/kubernetes-gs-deploy#configure-your-cli-to-interact-with-consul-cluster). - -1. Run the [`consul peering read` CLI command](/consul/commands/peering/read). - - ```shell-session - $ consul peering read -name cluster-02 - Name: cluster-02 - ID: 3b001063-8079-b1a6-764c-738af5a39a97 - State: ACTIVE - Meta: - env=production - - Peer ID: e83a315c-027e-bcb1-7c0c-a46650904a05 - Peer Server Name: server.dc1.consul - Peer CA Pems: 0 - Peer Server Addresses: - 10.0.0.1:8300 - - Imported Services: 0 - Exported Services: 2 - - Create Index: 89 - Modify Index: 89 - ``` - -## Delete peering connections - -To end a peering connection in Kubernetes deployments, delete both the `PeeringAcceptor` and `PeeringDialer` resources. - -1. Delete the `PeeringDialer` resource from the second cluster. - - ```shell-session - $ kubectl --context $CLUSTER2_CONTEXT delete --filename dialer.yaml - ``` - -1. Delete the `PeeringAcceptor` resource from the first cluster. 
-
-   ```shell-session
-   $ kubectl --context $CLUSTER1_CONTEXT delete --filename acceptor.yaml
-   ```
-
-To confirm that you deleted your peering connection in `cluster-01`, query the `/health` HTTP endpoint:
-
-1. Exec into the server pod for the first cluster.
-
-   ```shell-session
-   $ kubectl exec -it consul-server-0 --context $CLUSTER1_CONTEXT -- /bin/sh
-   ```
-
-1. If you've enabled ACLs, export an ACL token to access the `/health` HTTP endpoint for services. The bootstrap token may be used if an ACL token is not already provisioned.
-
-   ```shell-session
-   $ export CONSUL_HTTP_TOKEN=
-   ```
-
-1. Query the `/health` HTTP endpoint. Peered services with deleted connections should no longer appear.
-
-   ```shell-session
-   $ curl "localhost:8500/v1/health/connect/backend?peer=cluster-02"
-   ```
\ No newline at end of file
diff --git a/website/content/docs/k8s/connect/connect-ca-provider.mdx b/website/content/docs/k8s/connect/connect-ca-provider.mdx
deleted file mode 100644
index cdab172bcc21..000000000000
--- a/website/content/docs/k8s/connect/connect-ca-provider.mdx
+++ /dev/null
@@ -1,196 +0,0 @@
----
-layout: docs
-page_title: Configure Service Mesh Certificate Authority (CA) for Consul on Kubernetes
-description: >-
-  Consul includes a built-in CA, but when bootstrapping a cluster on k8s, you can configure your service mesh to use a custom certificate provider instead. Learn how to configure Vault as an external CA in primary and secondary datacenters and manually rotate Vault tokens.
----
-
-# Configure Service Mesh Certificate Authority for Consul on Kubernetes
-
-If `connect` is enabled, the built-in Consul CA provider is automatically enabled for the service mesh certificate authority (CA). You can use different CA providers with Consul service mesh. Refer to [Service Mesh Certificate Management](/consul/docs/connect/ca) for supported providers.
-
-## Overview
-
-You should only complete the following instructions during the initial cluster bootstrapping procedure with Consul K8s CLI 0.38.0 or later. To update the Consul service mesh CA provider on an existing cluster or to update any provider properties, such as tokens, refer to [Update CA Configuration Endpoint](/consul/api-docs/connect/ca#update-ca-configuration).
-
-To configure an external CA provider using the Consul Helm chart, complete the following steps:
-
-1. Create a configuration file containing your provider information.
-1. Create a Kubernetes secret containing the configuration file.
-1. Reference the Kubernetes secret in the [`server.extraVolumes`](/consul/docs/k8s/helm#v-server-extravolumes) value in the Helm chart.
-
-To configure the Vault service mesh provider, refer to [Vault as the Service Mesh Certificate Provider on Kubernetes](/consul/docs/k8s/deployment-configurations/vault/data-integration/connect-ca).
-
-## Configuring Vault as a Service Mesh CA (Consul K8s 0.37.0 and earlier)
-
-
-If you use Vault 1.11.0+ as Consul's service mesh CA, versions of Consul released before Dec 13, 2022 will develop an issue with Consul control plane or service mesh communication ([GH-15525](https://github.com/hashicorp/consul/pull/15525)). Use or upgrade to a [Consul version that includes the fix](https://support.hashicorp.com/hc/en-us/articles/11308460105491#01GMC24E6PPGXMRX8DMT4HZYTW) to avoid this problem.
-
-The following instructions are only valid for Consul K8s CLI 0.37.0 and prior. They describe how to configure Vault as the service mesh CA.
You can configure other providers during initial bootstrap of the cluster by providing the appropriate [`ca_config`] and [`ca_provider`] values for your provider. - --> **Auto-renewal:** If using Vault as your service mesh CA, we strongly recommend Consul 1.8.5 or later, which includes support for token auto-renewal. If the Vault token is [renewable](/vault/api-docs/auth/token#renewable), then Consul automatically renews the token periodically. Otherwise, you must [manually rotate](#manually-rotating-vault-tokens) the Vault token before it expires. - -### Primary Datacenter - -To configure Vault as a CA provider for Consul service mesh, -first, create a provider configuration JSON file. -Please refer to [Vault as a service mesh CA](/consul/docs/connect/ca/vault) for the configuration options. -You will need to provide a Vault token to the `token` property. -Please refer to [these docs](/consul/docs/connect/ca/vault#token) for the permissions that the token needs to have. -This token should be [renewable](/vault/api-docs/auth/token#renewable). - -To provide a CA, you first need to create a Kubernetes secret containing the CA. -For example, you may create a secret with the Vault CA like so: - -```shell-session -kubectl create secret generic vault-ca --from-file vault.ca=/path/to/your/vault/ca -``` - -And then reference it like this in the provider configuration: - - - -```json -{ - "connect": [ - { - "ca_config": [ - { - "address": "https://vault:8200", - "intermediate_pki_path": "dc1/connect-intermediate", - "root_pki_path": "connect-root", - "token": "s.VgQvaXl8xGFO1RUxAPbPbsfN", - "ca_file": "/consul/userconfig/vault-ca/vault.ca" - } - ], - "ca_provider": "vault" - } - ] -} -``` - - - -This example configuration file is pointing to a Vault instance running in the same Kubernetes cluster, -which has been deployed with TLS enabled. Note that the `ca_file` is pointing to the file location -based on the Kubernetes secret for the Vault CA that we have created before. -We will provide that secret later in the Helm values for our Consul cluster. - -~> NOTE: If you have used Kubernetes CA to sign Vault's certificate, -such as shown in [Standalone Server with TLS](/vault/docs/platform/k8s/helm/examples/standalone-tls), -you don't need to create a Kubernetes secret with Vault's CA and can reference the CA directly -by setting `ca_file` to `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`. - -Next, create a Kubernetes secret with this configuration file. - -```shell-session -$ kubectl create secret generic vault-config --from-file=config=vault-config.json -``` - -We will provide this secret and the Vault CA secret, to the Consul server via the -`server.extraVolumes` Helm value. 
- - - - ```yaml - global: - name: consul - server: - extraVolumes: - - type: secret - name: vault-config - load: true - items: - - key: config - path: vault-config.json - - type: secret - name: vault-ca - load: false - connectInject: - enabled: true - ``` - - - -Finally, [install](/consul/docs/k8s/installation/install#installing-consul) the Helm chart using the above config file: - -```shell-session -$ helm install consul --values values.yaml hashicorp/consul -``` - -Verify that the CA provider is set correctly: - -```shell-session -$ kubectl exec consul-server-0 -- curl --silent http://localhost:8500/v1/connect/ca/configuration\?pretty -{ - "Provider": "vault", - "Config": { - "Address": "https://vault:8200", - "CAFile": "/consul/userconfig/vault-server-tls/vault.ca", - "IntermediateCertTTL": "8760h", - "IntermediatePKIPath": "connect-intermediate", - "LeafCertTTL": "72h", - "RootPKIPath": "connect-root", - "Token": "s.VgQvaXl8xGFO1RUxAPbPbsfN" - }, - "State": null, - "ForceWithoutCrossSigning": false, - "CreateIndex": 5, - "ModifyIndex": 5 -} -``` - -### Secondary Datacenters - -To configure Vault as the service mesh CA in secondary datacenters, you need to make sure that the Root CA is the same, -but the intermediate is different for each datacenter. In the `connect` configuration for a secondary datacenter, -you can specify a `intermediate_pki_path` that is, for example, prefixed with the datacenter -for which this configuration is intended. -You will similarly need to create a Vault token and a Kubernetes secret with -Vault's CA in each secondary Kubernetes cluster. - - - -```json -{ - "connect": [ - { - "ca_config": [ - { - "address": "https://vault:8200", - "intermediate_pki_path": "dc2/connect-intermediate", - "root_pki_path": "connect-root", - "token": "s.VgQvaXl8xGFO1RUxAPbPbsfN", - "ca_file": "/consul/userconfig/vault-ca/vault.ca" - } - ], - "ca_provider": "vault" - } - ] -} -``` - - - -Note that all secondary datacenters need to have access to the same Vault instance as the primary. - -### Manually Rotating Vault Tokens - -If running Consul < 1.8.5 or using a Vault token that is not [renewable](/vault/api-docs/auth/token#renewable) -then you will need to manually renew or rotate the Vault token before it expires. - -#### Rotating Vault Token - -The [`ca_config`] and [`ca_provider`] options defined in the Consul agent -configuration are only used when initially bootstrapping the cluster. Once the -cluster is running, subsequent changes to the [`ca_provider`] config are **ignored**–even if `consul reload` is run or the servers are restarted. - -To update any settings under these keys, you must use Consul's [Update CA Configuration](/consul/api-docs/connect/ca#update-ca-configuration) API or the [`consul connect ca set-config`](/consul/commands/connect/ca#set-config) command. - -#### Renewing Vault Token - -To renew the Vault token, use the [`vault token renew`](/vault/docs/commands/token/renew) CLI command -or API. 
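For example, a manual renewal from a machine that can reach Vault might look like the following. This is a minimal sketch that reuses the example token ID shown earlier on this page; substitute your own token ID:

```shell-session
$ export VAULT_ADDR="https://vault:8200"
$ vault token renew s.VgQvaXl8xGFO1RUxAPbPbsfN
```

Because renewal extends the TTL of the same token rather than issuing a new one, no change to the Consul CA provider configuration is required afterward.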
- -[`ca_config`]: /consul/docs/agent/config/config-files#connect_ca_config -[`ca_provider`]: /consul/docs/agent/config/config-files#connect_ca_provider diff --git a/website/content/docs/k8s/connect/health.mdx b/website/content/docs/k8s/connect/health.mdx deleted file mode 100644 index a3096d5c9cba..000000000000 --- a/website/content/docs/k8s/connect/health.mdx +++ /dev/null @@ -1,39 +0,0 @@ ---- -layout: docs -page_title: Configure Health Checks for Consul on Kubernetes -description: >- - Kubernetes has built-in health probes you can sync with Consul's health checks to ensure service mesh traffic is routed to healthy pods. ---- - -# Configure Health Checks for Consul on Kubernetes - -~> This topic requires familiarity with [Kubernetes Health Checks](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/). - -This page describes how Consul on Kubernetes will sync the status of Kubernetes health probes of a pod to Consul for service mesh use cases. -Health check synchronization with Consul is done automatically whenever `connectInject.enabled` is `true`. - -For each Kubernetes pod that is connect-injected the following will be configured: - -1. A [Consul health check](/consul/api-docs/catalog#register-entity) is registered within Consul catalog. -The Consul health check's state reflects the pod's readiness status. - -1. If the pod is using [transparent proxy mode](/consul/docs/connect/transparent-proxy), -the mutating webhook redirects all `http` based startup, liveness, and readiness probes in the pod through the Envoy proxy. -This webhook is defined in the -[`ExposePaths` configuration](/consul/docs/connect/proxies/proxy-config-reference#expose-paths-configuration-reference) -for each probe so that kubelet can access the endpoint through the Envoy proxy. - -The mutation behavior can be disabled, by setting either the `consul.hashicorp.com/transparent-proxy-overwrite-probes` -pod annotation to `false` or the `connectInject.defaultOverwriteProbes` Helm value to `false`. - -When readiness probes are set for a pod, the status of the pod will be reflected within Consul and will cause Consul to redirect service -mesh traffic to the pod based on the pod's health. If the pod has failing health checks, Consul will no longer use -the service instance associated with the pod for service mesh traffic. When the pod passes its health checks, Consul will -then use the respective service instance for service mesh traffic. - -In the case where no user defined health checks are assigned to a pod, the default behavior is that the Consul health check will -be marked `passing` until the pod becomes unready. - --> It is highly recommended to [enable TLS](/consul/docs/k8s/helm#v-global-tls-enabled) for all production configurations to mitigate any -security concerns should the pod network ever be compromised. The controller makes calls across the network to Consul agents on all -nodes so an attacker could potentially sniff ACL tokens *if those calls are not encrypted* via TLS. diff --git a/website/content/docs/k8s/connect/index.mdx b/website/content/docs/k8s/connect/index.mdx deleted file mode 100644 index 0f320d343789..000000000000 --- a/website/content/docs/k8s/connect/index.mdx +++ /dev/null @@ -1,705 +0,0 @@ ---- -layout: docs -page_title: How does Consul Service Mesh Work on Kubernetes? 
-description: >- - An injection annotation allows Consul to automatically deploy sidecar proxies on Kubernetes pods, enabling Consul's service mesh for containers running on k8s. Learn how to configure sidecars, enable services with multiple ports, change default injection settings. ---- - -# How does Consul Service Mesh Work on Kubernetes? - -Consul service mesh automates service-to-service authorization and encryption across your Consul services. You can use service mesh in Kubernetes-orchestrated networks to secure communication between pods as well as communication between pods and external Kubernetes services. - -## Workflow - -Consul service mesh is enabled by default when you install Consul on Kubernetes using the Consul Helm chart. Consul also automatically injects sidecars into the pods in your clusters that run Envoy. These sidecar proxies, called Consul dataplanes, are enabled when `connectInject.default` is set to `true` in the Helm chart. Refer to the following documentation for additional information about these concepts: - -- [Installation and Configuration](#installation-and-configuration) in this topic -- [Consul Helm chart reference](/consul/docs/k8s/helm) -- [Simplified Service Mesh with Consul Dataplane](/consul/docs/connect/dataplane) - -If `connectInject.default` is set to `false` or you want to explicitly enable service mesh sidecar proxy injection for a specific deployment, add the `consul.hashicorp.com/connect-inject` annotation to the pod specification template and set it to `true` when connecting services to the mesh. - -### Service names - -When the service is onboarded, the name registered in Consul is set to the name of the Kubernetes Service associated with the Pod. You can use the [`consul.hashicorp.com/connect-service` annotation](/consul/docs/k8s/annotations-and-labels#consul-hashicorp-com-connect-service) to specify a custom name for the service, but if ACLs are enabled then the name of the service registered in Consul must match the Pod's `ServiceAccount` name. - -### Transparent proxy mode - -By default, the Consul service mesh runs in transparent proxy mode. This mode forces inbound and outbound traffic through the sidecar proxy even though the service binds to all interfaces. Transparent proxy infers the location of upstream services using Consul service intentions, and also allows you to use Kubernetes DNS as you normally would for your workloads. - -When transparent proxy mode is enabled, all service-to-service traffic is required to use mTLS. When onboarding new services to service mesh, your network may have mixed mTLS and non-mTLS traffic, which can result in broken service-to-service communication. You can temporarily enable permissive mTLS mode during the onboarding process so that existing mesh services can accept traffic from services that are not yet fully onboarded. Permissive mTLS enables sidecar proxies to accept both mTLS and non-mTLS traffic. Refer to [Onboard mesh services in transparent proxy mode](/consul/docs/k8s/connect/onboarding-tproxy-mode) for additional information. - -### Kubernetes service mesh workload scenarios - --> **Note:** A Kubernetes Service is required in order to register services on the Consul service mesh. Consul monitors the lifecycle of the Kubernetes Service and its service instances using the service object. In addition, the Kubernetes service is used to register and de-register the service from Consul's catalog.
- -The following configurations are examples for registering workloads on Kubernetes into Consul's service mesh in different scenarios. Each scenario provides an example Kubernetes manifest to demonstrate how to use Consul's service mesh with a specific Kubernetes workload type. - -- [Kubernetes Pods running as a deployment](#kubernetes-pods-running-as-a-deployment) -- [Connecting to mesh-enabled Services](#connecting-to-mesh-enabled-services) -- [Kubernetes Jobs](#kubernetes-jobs) -- [Kubernetes Pods with multiple ports](#kubernetes-pods-with-multiple-ports) - -#### Kubernetes Pods running as a deployment - -The following example shows a Kubernetes configuration that specifically enables service mesh connections for the `static-server` service. Consul starts and registers a sidecar proxy that listens on port 20000 by default and proxies valid inbound connections to port 8080. - - - -```yaml -apiVersion: v1 -kind: Service -metadata: - # This name will be the service name in Consul. - name: static-server -spec: - selector: - app: static-server - ports: - - protocol: TCP - port: 80 - targetPort: 8080 ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: static-server ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: static-server -spec: - replicas: 1 - selector: - matchLabels: - app: static-server - template: - metadata: - name: static-server - labels: - app: static-server - annotations: - 'consul.hashicorp.com/connect-inject': 'true' - spec: - containers: - - name: static-server - image: hashicorp/http-echo:latest - args: - - -text="hello world" - - -listen=:8080 - ports: - - containerPort: 8080 - name: http - # If ACLs are enabled, the serviceAccountName must match the Consul service name. - serviceAccountName: static-server -``` - - - -To establish a connection to the upstream Pod using service mesh, a client must dial the upstream workload using a mesh proxy. The client mesh proxy will use Consul service discovery to find all available upstream proxies and their public ports. - -#### Connecting to mesh-enabled Services - -The example Deployment specification below configures a Deployment that is capable -of establishing connections to our previous example "static-server" service. The -connection to this static text service happens over an authorized and encrypted -connection via service mesh. - - - -```yaml -apiVersion: v1 -kind: Service -metadata: - # This name will be the service name in Consul. - name: static-client -spec: - selector: - app: static-client - ports: - - port: 80 ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: static-client ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: static-client -spec: - replicas: 1 - selector: - matchLabels: - app: static-client - template: - metadata: - name: static-client - labels: - app: static-client - annotations: - 'consul.hashicorp.com/connect-inject': 'true' - spec: - containers: - - name: static-client - image: curlimages/curl:latest - # Just spin & wait forever, we'll use `kubectl exec` to demo - command: ['/bin/sh', '-c', '--'] - args: ['while true; do sleep 30; done;'] - # If ACLs are enabled, the serviceAccountName must match the Consul service name. - serviceAccountName: static-client -``` - - - -By default when ACLs are enabled or when ACLs default policy is `allow`, -Consul will automatically configure proxies with all upstreams from the same datacenter. 
-When ACLs are enabled with default `deny` policy, -you must supply an [intention](/consul/docs/connect/intentions) to tell Consul which upstream you need to talk to. - -When upstreams are specified explicitly with the -[`consul.hashicorp.com/connect-service-upstreams` annotation](/consul/docs/k8s/annotations-and-labels#consul-hashicorp-com-connect-service-upstreams), -the injector will also set environment variables `_CONNECT_SERVICE_HOST` -and `_CONNECT_SERVICE_PORT` in every container in the Pod for every defined -upstream. This is analogous to the standard Kubernetes service environment variables, but -point instead to the correct local proxy port to establish connections via -service mesh. - -You cannot reference auto-generated environment variables when the upstream annotation contains a dot. This is because Consul also renders the environment variables to include a dot. For example, Consul renders the variables generated for `static-server.svc:8080` as `STATIC-SERVER.SVC_CONNECT_SERVICE_HOST` and `STATIC_SERVER.SVC_CONNECT_SERVICE_PORT`, which makes the variables unusable. -You can verify access to the static text server using `kubectl exec`. -Because transparent proxy is enabled by default, -use Kubernetes DNS to connect to your desired upstream. - -```shell-session -$ kubectl exec deploy/static-client -- curl --silent http://static-server/ -"hello world" -``` - -You can control access to the server using [intentions](/consul/docs/connect/intentions). -If you use the Consul UI or [CLI](/consul/commands/intention/create) to -deny communication between -"static-client" and "static-server", connections are immediately rejected -without updating either of the running pods. You can then remove this -intention to allow connections again. - -```shell-session -$ kubectl exec deploy/static-client -- curl --silent http://static-server/ -command terminated with exit code 52 -``` - -#### Kubernetes Jobs - -Kubernetes Jobs run pods that only make outbound requests to services on the mesh and successfully terminate when they are complete. In order to register a Kubernetes Job with the mesh, you must provide an integer value for the `consul.hashicorp.com/sidecar-proxy-lifecycle-shutdown-grace-period-seconds` annotation. Then, issue a request to the `http://127.0.0.1:20600/graceful_shutdown` API endpoint so that Kubernetes gracefully shuts down the `consul-dataplane` sidecar after the job is complete. - -Below is an example Kubernetes manifest that deploys a job correctly. - - - -```yaml ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: test-job - namespace: default ---- -apiVersion: v1 -kind: Service -metadata: - name: test-job - namespace: default -spec: - selector: - app: test-job - ports: - - port: 80 ---- -apiVersion: batch/v1 -kind: Job -metadata: - name: test-job - namespace: default - labels: - app: test-job -spec: - template: - metadata: - annotations: - 'consul.hashicorp.com/connect-inject': 'true' - 'consul.hashicorp.com/sidecar-proxy-lifecycle-shutdown-grace-period-seconds': '5' - labels: - app: test-job - spec: - containers: - - name: test-job - image: alpine/curl:3.14 - ports: - - containerPort: 80 - command: - - /bin/sh - - -c - - | - echo "Started test job" - sleep 10 - echo "Killing proxy" - curl --max-time 2 -s -f -X POST http://127.0.0.1:20600/graceful_shutdown - sleep 10 - echo "Ended test job" - serviceAccountName: test-job - restartPolicy: Never -``` - - - -Upon completing the job you should be able to verify that all containers are shut down within the pod. 
- -```shell-session -$ kubectl get pods -NAME READY STATUS RESTARTS AGE -test-job-49st7 0/2 Completed 0 3m55s -``` - -```shell-session -$ kubectl get job -NAME COMPLETIONS DURATION AGE -test-job 1/1 30s 4m31s -``` - -In addition, based on the logs emitted by the pod you can verify that the proxy was shut down before the Job completed. - -```shell-session -$ kubectl logs test-job-49st7 -c test-job -Started test job -Killing proxy -Ended test job -``` - -#### Kubernetes Pods with multiple ports - -To configure a pod with multiple ports to be a part of the service mesh and receive and send service mesh traffic, you -will need to add configuration so that a Consul service can be registered per port. This is because services in Consul -currently support a single port per service instance. - -In the following example, suppose we have a pod which exposes 2 ports, `8080` and `9090`, both of which will need to -receive service mesh traffic. - -First, decide on the names for the two Consul services that will correspond to those ports. In this example, the user -chooses the names `web` for `8080` and `web-admin` for `9090`. - -Create two service accounts for `web` and `web-admin`: - - - -```yaml -apiVersion: v1 -kind: ServiceAccount -metadata: - name: web ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: web-admin -``` - - - - -Create two Service objects for `web` and `web-admin`: - - - -```yaml -apiVersion: v1 -kind: Service -metadata: - name: web -spec: - selector: - app: web - ports: - - protocol: TCP - port: 80 - targetPort: 8080 ---- -apiVersion: v1 -kind: Service -metadata: - name: web-admin -spec: - selector: - app: web - ports: - - protocol: TCP - port: 80 - targetPort: 9090 -``` - - - -`web` will target `containerPort` `8080` and select pods labeled `app: web`. `web-admin` will target `containerPort` -`9090` and will also select the same pods. - -~> Kubernetes 1.24+ only -In Kubernetes 1.24+ you need to [create a Kubernetes secret](https://kubernetes.io/docs/concepts/configuration/secret/#service-account-token-secrets) for each additional Consul service associated with the pod in order to expose the Kubernetes ServiceAccount token to the Consul dataplane container running under the pod serviceAccount. The Kubernetes secret name must match the ServiceAccount name: - - - -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: web - annotations: - kubernetes.io/service-account.name: web -type: kubernetes.io/service-account-token ---- -apiVersion: v1 -kind: Secret -metadata: - name: web-admin - annotations: - kubernetes.io/service-account.name: web-admin -type: kubernetes.io/service-account-token -``` - - - -Create a Deployment with any chosen name, and use the following annotations: -```yaml -annotations: - 'consul.hashicorp.com/connect-inject': 'true' - 'consul.hashicorp.com/transparent-proxy': 'false' - 'consul.hashicorp.com/connect-service': 'web,web-admin' - 'consul.hashicorp.com/connect-service-port': '8080,9090' -``` -Note that the order the ports are listed in the same order as the service names, i.e. the first service name `web` -corresponds to the first port, `8080`, and the second service name `web-admin` corresponds to the second port, `9090`. - -The service account on the pod spec for the deployment should be set to the first service name `web`: -```yaml -serviceAccountName: web -``` - -The following deployment example demonstrates the required annotations for the manifest. 
In addition, the previous YAML manifests can also be combined into a single manifest for easier deployment. - - - -```yaml -apiVersion: apps/v1 -kind: Deployment -metadata: - name: web -spec: - replicas: 1 - selector: - matchLabels: - app: web - template: - metadata: - name: web - labels: - app: web - annotations: - 'consul.hashicorp.com/connect-inject': 'true' - 'consul.hashicorp.com/transparent-proxy': 'false' - 'consul.hashicorp.com/connect-service': 'web,web-admin' - 'consul.hashicorp.com/connect-service-port': '8080,9090' - spec: - containers: - - name: web - image: hashicorp/http-echo:latest - args: - - -text="hello world" - - -listen=:8080 - ports: - - containerPort: 8080 - name: http - - name: web-admin - image: hashicorp/http-echo:latest - args: - - -text="hello world from 9090" - - -listen=:9090 - ports: - - containerPort: 9090 - name: http - serviceAccountName: web -``` - - - -After deploying the `web` application, you can test service mesh connections by deploying the `static-client` -application with the configuration in the [previous section](#connecting-to-mesh-enabled-services) and add the -`consul.hashicorp.com/connect-service-upstreams: 'web:1234,web-admin:2234'` annotation to the pod template on `static-client`: - - - -```yaml -apiVersion: v1 -kind: Service -metadata: - # This name will be the service name in Consul. - name: static-client -spec: - selector: - app: static-client - ports: - - port: 80 ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: static-client ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: static-client -spec: - replicas: 1 - selector: - matchLabels: - app: static-client - template: - metadata: - name: static-client - labels: - app: static-client - annotations: - 'consul.hashicorp.com/connect-inject': 'true' - 'consul.hashicorp.com/connect-service-upstreams': 'web:1234,web-admin:2234' - spec: - containers: - - name: static-client - image: curlimages/curl:latest - # Just spin & wait forever, we'll use `kubectl exec` to demo - command: ['/bin/sh', '-c', '--'] - args: ['while true; do sleep 30; done;'] - # If ACLs are enabled, the serviceAccountName must match the Consul service name. - serviceAccountName: static-client -``` - - - -If you exec on to a static-client pod, using a command like: -```shell-session -$ kubectl exec -it static-client-5bd667fbd6-kk6xs -- /bin/sh -``` -you can then run: -```shell-session -$ curl localhost:1234 -``` -to see the output `hello world` and run: -```shell-session -$ curl localhost:2234 -``` -to see the output `hello world from 9090`. - -The way this works is that a Consul service instance is being registered per port on the Pod, so there are 2 Consul -services in this case. An additional Envoy sidecar proxy and `connect-init` init container are also deployed per port in -the Pod. So the upstream configuration can use the individual service names to reach each port as seen in the example. - - -#### Caveats for Multi-port Pods -* Transparent proxy is not supported for multi-port Pods. -* Metrics and metrics merging is not supported for multi-port Pods. -* Upstreams will only be set on the first service's Envoy sidecar proxy for the pod. - * This means that ServiceIntentions from a multi-port pod to elsewhere, will need to use the first service's name, - `web` in the example above to accept connections from either `web` or `web-admin`. ServiceIntentions from elsewhere - to a multi-port pod can use the individual service names within the multi-port Pod. 
-* Health checking is done on a per-Pod basis, so if any Kubernetes health checks (like readiness, liveness, etc) are - failing for any container on the Pod, the entire Pod is marked unhealthy, and any Consul service referencing that Pod - will also be marked as unhealthy. So, if `web` has a failing health check, `web-admin` would also be marked as - unhealthy for service mesh traffic. - -## Installation and Configuration - -The service mesh sidecar proxy is injected via a -[mutating admission webhook](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#admission-webhooks) -call the connect injector provided by the -[consul-k8s project](https://github.com/hashicorp/consul-k8s). -This enables the automatic pod mutation shown in the usage section above. -Installation of the mutating admission webhook is automated using the -[Helm chart](/consul/docs/k8s/installation/install). - -To install the connect injector, enable the connect injection feature using -[Helm values](/consul/docs/k8s/helm#configuration-values) and -upgrade the installation using `helm upgrade` for existing installs or -`helm install` for a fresh install. - -```yaml -connectInject: - enabled: true -``` - -This will configure the injector to inject when the -[injection annotation](#consul-hashicorp-com-connect-inject) -is set to `true`. Other values in the Helm chart can be used to limit the namespaces -the injector runs in, enable injection by default, and more. - -### Verifying the Installation - -To verify the installation, run the -["Accepting Inbound Connections"](/consul/docs/k8s/connect#accepting-inbound-connections) -example from the "Usage" section above. After running this example, run -`kubectl get pod static-server --output yaml`. In the raw YAML output, you should -see connect injected containers and an annotation -`consul.hashicorp.com/connect-inject-status` set to `injected`. This -confirms that injection is working properly. - -If you do not see this, then use `kubectl logs` against the injector pod -and note any errors. - -### Controlling Injection Via Annotation - -By default, the injector will inject only when the -[injection annotation](#consul-hashicorp-com-connect-inject) -on the pod (not the deployment) is set to `true`: - -```yaml -annotations: - 'consul.hashicorp.com/connect-inject': 'true' -``` - -### Injection Defaults - -If you wish for the injector to always inject, you can set the default to `true` -in the Helm chart: - -```yaml -connectInject: - enabled: true - default: true -``` - -You can then exclude specific pods via annotation: - -```yaml -annotations: - 'consul.hashicorp.com/connect-inject': 'false' -``` - -### Controlling Injection Via Namespace - -You can control which Kubernetes namespaces are allowed to be injected via -the `k8sAllowNamespaces` and `k8sDenyNamespaces` keys: - -```yaml -connectInject: - enabled: true - k8sAllowNamespaces: ['*'] - k8sDenyNamespaces: [] -``` - -In the default configuration (shown above), services from all namespaces are allowed -to be injected. Whether or not they're injected depends on the value of `connectInject.default` -and the `consul.hashicorp.com/connect-inject` annotation. - -If you wish to only enable injection in specific namespaces, you can list only those -namespaces in the `k8sAllowNamespaces` key. In the configuration below -only the `my-ns-1` and `my-ns-2` namespaces will be enabled for injection. 
-All other namespaces will be ignored, even if the connect inject [annotation](#consul-hashicorp-com-connect-inject) -is set. - -```yaml -connectInject: - enabled: true - k8sAllowNamespaces: ['my-ns-1', 'my-ns-2'] - k8sDenyNamespaces: [] -``` - -If you wish to enable injection in every namespace _except_ specific namespaces, you can -use `*` in the allow list to allow all namespaces and then specify the namespaces to exclude in the deny list: - -```yaml -connectInject: - enabled: true - k8sAllowNamespaces: ['*'] - k8sDenyNamespaces: ['no-inject-ns-1', 'no-inject-ns-2'] -``` - --> **NOTE:** The deny list takes precedence over the allow list. If a namespace -is listed in both lists, it will **not** be synced. - -~> **NOTE:** The `kube-system` and `kube-public` namespaces will never be injected. - -### Consul Enterprise Namespaces - -Consul Enterprise 1.7+ supports Consul namespaces. When Kubernetes pods are registered -into Consul, you can control which Consul namespace they are registered into. - -There are three options available: - -1. **Single Destination Namespace** – Register all Kubernetes pods, regardless of namespace, - into the same Consul namespace. - - This can be configured with: - - ```yaml - global: - enableConsulNamespaces: true - - connectInject: - enabled: true - consulNamespaces: - consulDestinationNamespace: 'my-consul-ns' - ``` - - -> **NOTE:** If the destination namespace does not exist we will create it. - -1. **Mirror Namespaces** - Register each Kubernetes pod into a Consul namespace with the same name as its Kubernetes namespace. - For example, pod `foo` in Kubernetes namespace `ns-1` will be synced to the Consul namespace `ns-1`. - If a mirrored namespace does not exist in Consul, it will be created. - - This can be configured with: - - ```yaml - global: - enableConsulNamespaces: true - - connectInject: - enabled: true - consulNamespaces: - mirroringK8S: true - ``` - -1. **Mirror Namespaces With Prefix** - Register each Kubernetes pod into a Consul namespace with the same name as its Kubernetes - namespace **with a prefix**. - For example, given a prefix `k8s-`, pod `foo` in Kubernetes namespace `ns-1` will be synced to the Consul namespace `k8s-ns-1`. - - This can be configured with: - - ```yaml - global: - enableConsulNamespaces: true - - connectInject: - enabled: true - consulNamespaces: - mirroringK8S: true - mirroringK8SPrefix: 'k8s-' - ``` - -### Consul Enterprise Namespace Upstreams - -When [transparent proxy](/consul/docs/connect/transparent-proxy) is enabled and ACLs are disabled, -the upstreams will be configured automatically across Consul namespaces. -When ACLs are enabled, you must configure it by specifying an [intention](/consul/docs/connect/intentions), -allowing services across Consul namespaces to talk to each other. - -If you wish to specify an upstream explicitly via the `consul.hashicorp.com/connect-service-upstreams` annotation, -use the format `[service-name].[namespace]:[port]:[optional datacenter]`: - -```yaml -annotations: - 'consul.hashicorp.com/connect-inject': 'true' - 'consul.hashicorp.com/connect-service-upstreams': '[service-name].[namespace]:[port]:[optional datacenter]' -``` - -See [consul.hashicorp.com/connect-service-upstreams](/consul/docs/k8s/annotations-and-labels#consul-hashicorp-com-connect-service-upstreams) for more details. 
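For illustration, a hypothetical workload that needs to reach the service `backend` in the Consul namespace `team-a` on local port 1234 could use the following annotations (the trailing datacenter segment is optional and omitted here):

```yaml
annotations:
  'consul.hashicorp.com/connect-inject': 'true'
  'consul.hashicorp.com/connect-service-upstreams': 'backend.team-a:1234'
```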
- --> **Note:** When you specify upstreams via an upstreams annotation, you will need to use -`localhost:` with the port from the upstreams annotation instead of KubeDNS to connect to your upstream -application. diff --git a/website/content/docs/k8s/connect/ingress-controllers.mdx b/website/content/docs/k8s/connect/ingress-controllers.mdx deleted file mode 100644 index dcec16e17042..000000000000 --- a/website/content/docs/k8s/connect/ingress-controllers.mdx +++ /dev/null @@ -1,94 +0,0 @@ ---- -layout: docs -page_title: Configure Ingress Controllers for Consul on Kubernetes -description: >- - Ingress controllers are pluggable components that must be configured in k8s in order to use the Ingress resource. Learn how to deploy sidecars with the controller to secure its communication with Consul, review common configuration issues, and find links to example configurations. ---- - -# Configure Ingress Controllers for Consul on Kubernetes - --> This topic requires Consul 1.10+, Consul-k8s 0.26+, Consul-helm 0.32+ configured with [Transparent Proxy](/consul/docs/connect/transparent-proxy) mode enabled. In addition, this topic assumes that the reader is familiar with [Ingress Controllers](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/) on Kubernetes. - -~> If you are looking for a fully supported solution for ingress traffic into Consul Service Mesh, please visit [Consul API Gateway](/consul/docs/api-gateway) for instruction on how to install Consul API Gateway along with Consul on Kubernetes. - -This page describes a general approach for integrating Ingress Controllers with Consul on Kubernetes to secure traffic from the Controller -to the backend services by deploying sidecars along with your Ingress Controller. This allows Consul to transparently secure traffic from the ingress point through the entire traffic flow of the service. - -A few steps are generally required to enable an Ingress controller to join the mesh and pass traffic through to a service: - -* Enable connect-injection via an annotation on the Ingress Controller's deployment: `consul.hashicorp.com/connect-inject` is `true`. - -* Using the following annotations on the Ingress controller's deployment, set up exclusion rules for its ports. - * [`consul.hashicorp.com/transparent-proxy-exclude-inbound-ports`](/consul/docs/k8s/annotations-and-labels#consul-hashicorp-com-transparent-proxy-exclude-inbound-ports) - Provides the ability to exclude a list of ports for -inbound traffic that the service exposes from redirection. Typical configurations would require all inbound service ports -for the controller to be included in this list. - * [`consul.hashicorp.com/transparent-proxy-exclude-outbound-ports`](/consul/docs/k8s/annotations-and-labels#consul-hashicorp-com-transparent-proxy-exclude-outbound-ports) - Provides the ability to exclude a list of ports for -outbound traffic that the service exposes from redirection. These would be outbound ports used by your ingress controller - which expect to skip the mesh and talk to non-mesh services. - * [`consul.hashicorp.com/transparent-proxy-exclude-outbound-cidrs`](/consul/docs/k8s/annotations-and-labels#consul-hashicorp-com-transparent-proxy-exclude-outbound-cidrs) - Provides the ability to exclude a list of CIDRs that -the service communicates with for outbound requests from redirection. It is somewhat common that an Ingress controller -will expect to make API calls to the Kubernetes service for service/endpoint management. 
As such, including the ClusterIP of the Kubernetes service is common. - -~> Note: Depending on which ingress controller you use, these stanzas may differ in name and layout, but it is important to apply these annotations to the *pods* of your *ingress controller*. -  ```yaml -  # An example list of pod annotations for an ingress controller. These need to be applied to the pods for the controller, not the deployment itself. -  podAnnotations: -    consul.hashicorp.com/connect-inject: "true" -    # Add the container ports used by your ingress controller -    consul.hashicorp.com/transparent-proxy-exclude-inbound-ports: "80,8000,9000,8443" -    # And the CIDR of your Kubernetes API: `kubectl get svc kubernetes --output jsonpath='{.spec.clusterIP}'` -    consul.hashicorp.com/transparent-proxy-exclude-outbound-cidrs: "10.108.0.1/32" -  ``` - -* If the Ingress controller acts as a LoadBalancer and routes directly to Pod IPs instead of the ClusterIP of your Kubernetes Services, a `ServiceDefaults` CRD must be applied to *each backend service*, allowing it to use the `dialedDirectly` feature. By default this is disabled. - -  ```yaml -  # Example Service defaults config entry -  apiVersion: consul.hashicorp.com/v1alpha1 -  kind: ServiceDefaults -  metadata: -    name: backend -  spec: -    transparentProxy: -      dialedDirectly: true -  ``` - -* An intention from the Ingress Controller to the backend application must also be applied; this could be an L4 or L7 intention: - -  ```yaml -  # example L4 intention, but an L7 intention can also be used to control access to specific routes. -  apiVersion: consul.hashicorp.com/v1alpha1 -  kind: ServiceIntentions -  metadata: -    name: ingress-backend -  spec: -    destination: -      name: backend -    sources: -      - name: ingress -        action: allow -  ``` - -### Common Configuration Problems: -- The Ingress Controller's ServiceAccount name and Service name differ by default in some platforms. Consul on Kubernetes requires the ServiceAccount and Service to have the same name. To resolve this, explicitly set the ServiceAccount name to match the ingress controller's service name using its respective Helm configuration. - -- If the Ingress Controller does not have the correct inbound ports excluded, it will fail to start and the Ingress' service will not get created, causing the controller to hang in the init container. The required container ports are not always readily available in the helm charts, so to resolve this, examine the ingress controller's underlying pod spec, look for the required container ports, and add these to the `consul.hashicorp.com/transparent-proxy-exclude-inbound-ports` annotation on the ingress controller deployment. - -### Examples: -Here are a couple of example configurations which can be used as reference points in setting up your own ingress controller configuration. These were used in dev environments and are not intended to be fully supported, but they should provide some idea of how to extend the information above to your own use cases.
- -- [Traefik Consul example - kschoche](https://github.com/kschoche/traefik-consul) -- [Kong and Traefik Ingress Controller examples - joatmon08](https://github.com/joatmon08/consul-k8s-ingress-controllers) -- [NGINX Ingress Controller example](https://github.com/hashicorp-education/consul-k8s-nginx-ingress-controller) - diff --git a/website/content/docs/k8s/connect/ingress-gateways.mdx b/website/content/docs/k8s/connect/ingress-gateways.mdx deleted file mode 100644 index 2675214d64a8..000000000000 --- a/website/content/docs/k8s/connect/ingress-gateways.mdx +++ /dev/null @@ -1,296 +0,0 @@ ---- -layout: docs -page_title: Configure Ingress Gateways for Consul on Kubernetes -description: >- - Ingress gateways listen for external requests and route authorized traffic to instances in the service mesh running on Kubernetes. Learn how to configure ingress gateways, set intentions, and connect them to k8s applications. ---- - -# Configure Ingress Gateways for Consul on Kubernetes - - - -Ingress gateway is deprecated and will not be enhanced beyond its current capabilities. Ingress gateway is fully supported -in this version but will be removed in a future release of Consul. - -Consul's API gateway is the recommended alternative to ingress gateway. - - - -~> This topic requires familiarity with [Ingress Gateways](/consul/docs/connect/gateways/ingress-gateway). - -This page describes how to enable external access through Consul ingress gateways to mesh services running inside Kubernetes. -See [Ingress Gateways](/consul/docs/connect/gateways/ingress-gateway) for more information on use-cases and how it works. - -Adding an ingress gateway is a multi-step process that consists of the following steps: - -- Setting the Helm chart configuration -- Deploying the Helm chart -- Configuring the gateway -- Defining an Intention (if ACLs are enabled) -- Deploying your application to Kubernetes -- Connecting to your application - -## Setting the helm chart configuration - -When deploying the Helm chart you must provide Helm with a custom YAML file that contains your environment configuration. - - - -```yaml -global: - name: consul -connectInject: - enabled: true -ingressGateways: - enabled: true - gateways: - - name: ingress-gateway - service: - type: LoadBalancer -``` - - - -~> **Note:** this will create a public unauthenticated LoadBalancer in your cluster, please take appropriate security considerations. - -The YAML snippet is the launching point for a valid configuration that must be supplied when installing using the [official consul-helm chart](https://hub.helm.sh/charts/hashicorp/consul). -Information on additional options can be found in the [Helm reference](/consul/docs/k8s/helm). -Configuration options for ingress gateways reside under the [ingressGateways](/consul/docs/k8s/helm#v-ingressgateways) entry. - -The gateways stanza is where you will define and configure the set of ingress gateways you want deployed to your environment. -The only required field for each entry is `name`, though entries may contain any of the fields found in the `defaults` stanza. -Values in this section override the values from the defaults stanza for the given ingress gateway with one exception: -the annotations from the defaults stanza will be _appended_ to any user-defined annotations defined in the gateways stanza rather than being overridden. -Please refer to the ingress gateway configuration [documentation](/consul/docs/k8s/helm#v-ingressgateways-defaults) for a detailed explanation of each option. 
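As a sketch of how the `defaults` and `gateways` stanzas interact, the following values file defines two gateways: both inherit the replica count from `defaults`, while each sets its own service type. The second gateway name and the `defaults.replicas` field are illustrative; confirm field names against the Helm reference for your chart version.

```yaml
ingressGateways:
  enabled: true
  defaults:
    replicas: 2
  gateways:
    - name: ingress-gateway
      service:
        type: LoadBalancer
    - name: internal-ingress
      service:
        type: ClusterIP
```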
- -## Deploying the Helm chart - -Ensure you have the latest consul-helm chart and install Consul via helm using the following -[guide](/consul/docs/k8s/installation/install#installing-consul) while being sure to provide the yaml configuration -as previously discussed. - -The following example installs Consul 1.0.4 using the `values.yaml` configuration: - -```shell-session -$ helm install consul -f values.yaml hashicorp/consul --version 1.0.4 --wait --debug -``` - -## Configuring the gateway - -Now that Consul has been installed with ingress gateways enabled, -you can configure the gateways via the [`IngressGateway`](/consul/docs/connect/config-entries/ingress-gateway) custom resource. - -Here is an example `IngressGateway` resource: - - - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: IngressGateway -metadata: - name: ingress-gateway -spec: - listeners: - - port: 8080 - protocol: http - services: - - name: static-server -``` - - - -~> **Note:** The 'name' field for the IngressGateway resource must match the name -specified when creating the gateway in the Helm chart. In the above example, the -name "ingress-gateway" is the [default name](/consul/docs/k8s/helm#v-ingressgateways-gateways-name) -used by the Helm chart when enabling ingress gateways. - -Apply the `IngressGateway` resource with `kubectl apply`: - -```shell-session -$ kubectl apply --filename ingress-gateway.yaml -ingressgateway.consul.hashicorp.com/ingress-gateway created -``` - -Since we're using `protocol: http`, we also need to set the protocol of our service -`static-server` to http. To do that, we create a [`ServiceDefaults`](/consul/docs/connect/config-entries/service-defaults) custom resource: - - - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: ServiceDefaults -metadata: - name: static-server -spec: - protocol: http -``` - - - -Apply the `ServiceDefaults` resource with `kubectl apply`: - -```shell-session -$ kubectl apply --filename service-defaults.yaml -servicedefaults.consul.hashicorp.com/static-server created -``` - -Ensure both resources have synced to Consul successfully: - -```shell-session -$ kubectl get servicedefaults -NAME SYNCED AGE -static-server True 45s - -$ kubectl get ingressgateway -NAME SYNCED AGE -ingress-gateway True 13m -``` - -### Viewing the UI - -You can confirm the ingress gateways have been configured as expected by viewing the ingress-gateway service instances -in the Consul UI. - -To view the UI, use the `kubectl port-forward` command. See [Viewing The Consul UI](/consul/docs/k8s/installation/install#viewing-the-consul-ui) -for full instructions. - -Once you've port-forwarded to the UI, navigate to the Ingress Gateway instances: [http://localhost:8500/ui/dc1/services/ingress-gateway/instances](http://localhost:8500/ui/dc1/services/ingress-gateway/instances) - -If TLS is enabled, use [https://localhost:8501/ui/dc1/services/ingress-gateway/instances](https://localhost:8501/ui/dc1/services/ingress-gateway/instances). - -## Defining an Intention - -If ACLs are enabled (via the `global.acls.manageSystemACLs` setting), you must define an [intention](/consul/docs/connect/intentions) -to allow the ingress gateway to route to the upstream services defined in the `IngressGateway` resource (in the example above the upstream service is `static-server`). 
- -To create an intention that allows the ingress gateway to route to the service `static-server`, create a [`ServiceIntentions`](/consul/docs/connect/config-entries/service-intentions) -resource: - - - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: ServiceIntentions -metadata: - name: static-server -spec: - destination: - name: static-server - sources: - - name: ingress-gateway - action: allow -``` - - - -Apply the `ServiceIntentions` resource with `kubectl apply`: - -```shell-session -$ kubectl apply --filename service-intentions.yaml -serviceintentions.consul.hashicorp.com/ingress-gateway created -``` - -For detailed instructions on how to configure zero-trust networking with intentions please refer to this [guide](/consul/tutorials/kubernetes-features/service-mesh-zero-trust-network?utm_source=docs). - -## Deploying your application to Kubernetes - -Now you will deploy a sample application which echoes "hello world" - - - -```yaml -apiVersion: v1 -kind: Service -metadata: - name: static-server -spec: - selector: - app: static-server - ports: - - protocol: TCP - port: 80 - targetPort: 8080 ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: static-server ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: static-server -spec: - replicas: 1 - selector: - matchLabels: - app: static-server - template: - metadata: - name: static-server - labels: - app: static-server - annotations: - 'consul.hashicorp.com/connect-inject': 'true' - spec: - containers: - - name: static-server - image: hashicorp/http-echo:latest - args: - - -text="hello world" - - -listen=:8080 - ports: - - containerPort: 8080 - name: http - serviceAccountName: static-server -``` - - - -```shell-session -$ kubectl apply --filename static-server.yaml -``` - -## Connecting to your application - -You can validate the service is running and registered in the Consul UI by navigating to -[http://localhost:8500/ui/dc1/services/static-server/instances](http://localhost:8500/ui/dc1/services/static-server/instances) - -If TLS is enabled, use: [https://localhost:8501/ui/dc1/services/static-server/instances](https://localhost:8501/ui/dc1/services/static-server/instances) - -You can also validate the connectivity of the application from the ingress gateway using `curl`: - -```shell-session -$ EXTERNAL_IP=$(kubectl get services --selector component=ingress-gateway --output jsonpath="{range .items[*]}{@.status.loadBalancer.ingress[*].ip}{end}") -$ echo "Connecting to \"$EXTERNAL_IP\"" -$ curl --header "Host: static-server.ingress.consul" "http://$EXTERNAL_IP:8080" -"hello world" -``` - -~> **Security Warning:** Please be sure to delete the application and services created here as they represent a security risk through -leaving an open and unauthenticated load balancer alive in your cluster. 
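For example, assuming the filenames used in the earlier `kubectl apply` commands, the demo resources can be removed with:

```shell-session
$ kubectl delete --filename static-server.yaml
$ kubectl delete --filename service-intentions.yaml
$ kubectl delete --filename service-defaults.yaml
$ kubectl delete --filename ingress-gateway.yaml
```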
- -To delete the ingress gateway, set enabled to `false` in your Helm configuration: - - - -```yaml -global: - name: consul -connectInject: - enabled: true -ingressGateways: - enabled: false # Set to false - gateways: - - name: ingress-gateway - service: - type: LoadBalancer -``` - - - -And run Helm upgrade: - -```shell-session -$ helm upgrade consul hashicorp/consul --values values.yaml -``` diff --git a/website/content/docs/k8s/connect/observability/metrics.mdx b/website/content/docs/k8s/connect/observability/metrics.mdx deleted file mode 100644 index 167f2c7613c7..000000000000 --- a/website/content/docs/k8s/connect/observability/metrics.mdx +++ /dev/null @@ -1,147 +0,0 @@ ---- -layout: docs -page_title: Configure metrics for Consul on Kubernetes -description: >- - Use the `connectInject.metrics` Helm values to enable Prometheus and Grafana integrations and capture metrics. Consul can collect metrics from the service mesh, sidecar proxies, agents, and gateways in a k8s cluster and then display service traffic metrics in Consul’s UI for additional observability. ---- - -# Configure Metrics for Consul on Kubernetes - -Consul on Kubernetes integrates with Prometheus and Grafana to provide metrics for Consul service mesh. The metrics -available are: - -- Mesh service metrics -- Mesh sidecar proxy metrics -- Consul agent metrics -- Ingress, terminating, and mesh gateway metrics - -Specific sidecar proxy metrics can also be seen in the Consul UI Topology Visualization view. This section documents how to enable each of these. - -## Mesh Service and Sidecar Metrics with Metrics Merging - -Prometheus annotations are used to instruct Prometheus to scrape metrics from Pods. Prometheus annotations only support -scraping from one endpoint on a Pod, so Consul on Kubernetes supports metrics merging whereby service metrics and -sidecar proxy metrics are merged into one endpoint. If there are no service metrics, it also supports just scraping the -sidecar proxy metrics. - - - -Metrics for services in the mesh can be configured with the Helm values nested under [`connectInject.metrics`](/consul/docs/k8s/helm#v-connectinject-metrics). - -Metrics and metrics merging can be enabled by default for all connect-injected Pods with the following Helm values: - -```yaml -connectInject: - metrics: - defaultEnabled: true # by default, this inherits from the value global.metrics.enabled - defaultEnableMerging: true -``` - -They can also be overridden on a per-Pod basis using the annotations `consul.hashicorp.com/enable-metrics` and -`consul.hashicorp.com/enable-metrics-merging`. - -~> In most cases, the default settings are sufficient. If you encounter issues with colliding ports or service -metrics not being merged, you may need to change the defaults. - -The Prometheus annotations specify which endpoint to scrape the metrics from. The annotations point to a listener on `0.0.0.0:20200` on the Envoy sidecar. You can configure the listener and the corresponding Prometheus annotations using the following Helm values. 
Alternatively, you can specify the `consul.hashicorp.com/prometheus-scrape-port` and `consul.hashicorp.com/prometheus-scrape-path` Consul annotations to override them on a per-Pod basis: - -```yaml -connectInject: - metrics: - defaultPrometheusScrapePort: 20200 - defaultPrometheusScrapePath: "/metrics" -``` - -The Helm values specified in the previous example result in the following Prometheus annotations being automatically added to the Pod for scraping: - -```yaml -metadata: - annotations: - prometheus.io/scrape: "true" - prometheus.io/path: "/metrics" - prometheus.io/port: "20200" -``` - -When metrics and metrics merging are both enabled, metrics are combined from the service and the sidecar proxy, and -exposed through a local server on the Consul Dataplane sidecar for scraping. This endpoint is called the merged metrics endpoint and -defaults to `127.0.0.1:20100/stats/prometheus`. The listener targets the merged metrics endpoint in the above case. -It can be configured with the following Helm values (or overridden on a per-Pod basis with -`consul.hashicorp.com/merged-metrics-port`: - -```yaml -connectInject: - metrics: - defaultMergedMetricsPort: 20100 -``` - -The endpoint to scrape service metrics from can be configured only on a per-Pod basis with the Pod annotations `consul.hashicorp.com/service-metrics-port` and `consul.hashicorp.com/service-metrics-path`. If these are not configured, the service metrics port defaults to the port used to register the service with Consul (`consul.hashicorp.com/connect-service-port`), which in turn defaults to the first port on the first container of the Pod. The service metrics path defaults to `/metrics`. - -## Consul Agent Metrics - -Metrics from the Consul server Pods can be scraped with Prometheus by setting the field `global.metrics.enableAgentMetrics` to `true`. Additionally, one can configure the metrics retention time on the agents by configuring -the field `global.metrics.agentMetricsRetentionTime` which expects a duration and defaults to `"1m"`. This value must be greater than `"0m"` for the Consul servers to emit metrics at all. As the Prometheus deployment currently does not support scraping TLS endpoints, agent metrics are currently unsupported when TLS is enabled. - -```yaml -global: - metrics: - enabled: true - enableAgentMetrics: true - agentMetricsRetentionTime: "1m" -``` - -## Gateway Metrics - -Metrics from the Consul ingress, terminating, and mesh gateways can be scraped -with Prometheus by setting the field `global.metrics.enableGatewayMetrics` to `true`. The gateways emit standard Envoy proxy -metrics. To ensure that the metrics are not exposed to the public internet, as mesh and ingress gateways can have public -IPs, their metrics endpoints are exposed on the Pod IP of the respective gateway instance, rather than on all -interfaces on `0.0.0.0`. - -```yaml -global: - metrics: - enabled: true - enableGatewayMetrics: true -``` - -## Metrics in the UI Topology Visualization - -Consul's built-in UI has a topology visualization for services that are part of the Consul service mesh. The topology visualization has the ability to fetch basic metrics from a metrics provider for each service and display those metrics as part of the [topology visualization](/consul/docs/connect/observability/ui-visualization). - -The diagram below illustrates how the UI displays service metrics for a sample application: - -![UI Topology View](/img/ui-service-topology-view-hover.png) - -The topology view is configured under `ui.metrics`. 
This configuration enables the Consul UI to query the provider specified by `ui.metrics.provider` at the URL of the Prometheus server `ui.metrics.baseURL`, and then display sidecar proxy metrics for the service. The UI displays some specific sidecar proxy Prometheus metrics when `ui.metrics.enabled` is `true` and `ui.enabled` is `true`. The value of `ui.metrics.enabled` defaults to `"-"`, which means it inherits from the value of `global.metrics.enabled`. - -```yaml -ui: -  enabled: true -  metrics: -    enabled: true # by default, this inherits from the value global.metrics.enabled -    provider: "prometheus" -    baseURL: http://prometheus-server -``` - -## Deploying Prometheus (_for demo and non-production use-cases only_) - -The Helm chart contains demo manifests for deploying Prometheus. It can be installed with Helm by setting `prometheus.enabled` to `true`. This manifest is based on the community manifest for Prometheus. -The Prometheus deployment is designed to allow quick bootstrapping for trial and demo use cases, and is not recommended for production use-cases. - -Prometheus is installed in the same namespace as Consul, and gets installed -and uninstalled along with the Consul installation. - -Grafana can optionally be utilized with Prometheus to display metrics. The installation and configuration of Grafana must be managed separately from the Consul Helm chart. The [Layer 7 Observability with Prometheus, Grafana, and Kubernetes](/consul/tutorials/kubernetes/kubernetes-layer7-observability) tutorial provides an installation walkthrough using Helm. - -```yaml -prometheus: -  enabled: true -``` diff --git a/website/content/docs/k8s/connect/onboarding-tproxy-mode.mdx b/website/content/docs/k8s/connect/onboarding-tproxy-mode.mdx deleted file mode 100644 index 2cb7adfc9225..000000000000 --- a/website/content/docs/k8s/connect/onboarding-tproxy-mode.mdx +++ /dev/null @@ -1,300 +0,0 @@ ---- -layout: docs -page_title: Onboard services in transparent proxy mode -description: Learn how to enable permissive mutual transport layer security (permissive mTLS) so that you can safely add services to your service mesh when transparent proxy is enabled in Kubernetes deployments. ---- - -# Onboard services while in transparent proxy mode - -This topic describes how to run Consul in permissive mTLS mode so that you can safely onboard existing applications to Consul service mesh when transparent proxy mode is enabled. - -## Background - -When [transparent proxy mode](/consul/docs/k8s/connect/transparent-proxy) is enabled, all service-to-service traffic is secured by mTLS. Until the services that you want to add to the network are fully onboarded, your network may have a mix of mTLS and non-mTLS traffic, which can result in broken service-to-service communication. This situation occurs because sidecar proxies for existing mesh services reject traffic from services that are not yet onboarded. - -You can enable the `permissive` mTLS mode to ensure existing non-mTLS service-to-service traffic is allowed during the onboarding phase. The `permissive` mTLS mode enables sidecar proxies to accept both mTLS and non-mTLS traffic to an application. Using this mode enables you to onboard without downtime and without being required to reconfigure or redeploy your application. - -We recommend enabling permissive mTLS as a temporary operating mode. After onboarding is complete, you should reconfigure all services to `strict` mTLS mode to ensure all service-to-service communication is automatically secured by Consul service mesh.
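As a sketch of that final hardening step for a single service, the service defaults configuration entry is switched from `permissive` back to `strict`. The service name below is the hypothetical `example-service` used in the examples later in this topic:

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceDefaults
metadata:
  name: example-service
spec:
  mutualTLSMode: "strict"
```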
- -!> **Security warning**: We recommend that you disable permissive mTLS mode after onboarding services to prevent non-mTLS connections to the service. Intentions are not enforced and encryption is not enabled for non-mTLS connections. - -## Workflow - -The workflow to configure mTLS settings depends on the applications you are onboarding and the order you intend to onboard them, but the following steps describe the general workflow: - -1. **Configure global settings**: Configure the mesh to allow services to send non-mTLS messages to services outside the mesh. Additionally, configure the mesh to let services in the mesh use permissive mTLS mode. -1. **Enable permissive mTLS mode**: If you are onboarding an upstream service prior to its related downstream services, then enable permissive mTLS mode in the service defaults configuration entry. This allows the upstream service to continue accepting non-mTLS messages from downstream services when you register the service with Consul. -1. **Configure intentions**: Intentions are controls that authorize traffic between services in the mesh. Transparent proxy uses intentions to infer traffic routes between Envoy proxies. Consul does not enforce intentions for non-mTLS connections made while proxies are in permissive mTLS mode, but intentions are necessary for completing the onboarding process. -1. **Register the service**: Create the service definition and configure and deploy its sidecar proxy. -1. **Re-secure the mesh**: If you enabled permissive mTLS mode, switch back to strict mTLS mode and revert the global settings to disable non-mTLS traffic in the service mesh. - -## Requirements - -Permissive mTLS is only supported for services running in transparent proxy mode. Transparent proxy mode is only available on Kubernetes deployments. - -## Limitations - -L7 Envoy features such as Intentions and some [Envoy extensions](/consul/docs/connect/proxies/envoy-extensions) are not supported for non-mTLS traffic. - -## Configure global settings - -Configure Consul to allow services that are already in the mesh to send non-mTLS messages to services outside the mesh. You can also configure Consul to allow services to run in permissive mTLS mode. Set both configurations in the mesh configuration entry, which is the global configuration that defines service mesh proxy behavior. - -### Allow outgoing non-mTLS traffic - -You can configure a global setting that allows services in the mesh to send non-mTLS messages to services outside the mesh. - -Add the `MeshDestinationsOnly` property to the mesh configuration entry and set the property to `false`. If the services belong to multiple admin partitions, you must apply the setting in each partition: - - - -```hcl -Kind = "mesh" - -TransparentProxy { - MeshDestinationsOnly = false -} -``` - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: Mesh -metadata: - name: mesh -spec: - transparentProxy: - meshDestinationsOnly: false -``` - -```json -{ - "Kind": "mesh", - "TransparentProxy": [ - { - "MeshDestinationsOnly": false - } - ] -} -``` - - - -Alternatively, you can selectively allow outgoing traffic on a per-service basis by configuring [outbound port exclusions](/consul/docs/k8s/connect/transparent-proxy/enable-transparent-proxy#exclude-outbound-ports). This setting excludes outgoing traffic from traffic redirection imposed by transparent proxy. When changing this setting, you must redeploy your application.
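As an illustrative sketch of that per-service approach, a Pod that needs to reach a non-mesh destination on an example port could carry an annotation such as the following. The port number is a placeholder for your own destination; the annotation itself is documented in the transparent proxy guide referenced above.

```yaml
metadata:
  annotations:
    # Placeholder port: outbound traffic to this port bypasses transparent proxy redirection.
    consul.hashicorp.com/transparent-proxy-exclude-outbound-ports: "8200"
```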
- -### Allow permissive mTLS modes for incoming traffic - -Set the `AllowEnablingPermissiveMutualTLS` parameter in the mesh configuration entry to `true` so that services in the mesh _are able_ to use permissive mTLS mode for incoming traffic. The parameter does not direct services to use permissive mTLS. It is a global parameter that allows services to run in permissive mTLS mode. - - - -```hcl -Kind = "mesh" - -AllowEnablingPermissiveMutualTLS = true -TransparentProxy { - MeshDestinationsOnly = false -} -``` - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: Mesh -metadata: - name: mesh -spec: - allowEnablingPermissiveMutualTLS: true - transparentProxy: - meshDestinationsOnly: false -``` - - -```json -{ - "Kind": "mesh", - "AllowEnablingPermissiveMutualTLS": true, - "TransparentProxy": [ - { - "MeshDestinationsOnly": false - } - ] -} -``` - - - -You can change this setting back to `false` at any time, even if there are services currently running in permissive mode. Doing so allows you to decide at which point during the onboarding process to stop allowing services to use permissive mTLS. When `MeshDestinationsOnly` is set to `false`, you must configure all new services added to the mesh with `MutualTLSMode=strict` for Consul to securely route traffic throughout the mesh. - -## Enable permissive mTLS mode - -Depending on the services you are onboarding, you may not need to enable permissive mTLS mode. If the service does not accept incoming traffic or accepts traffic from downstream services that are already part of the service mesh, then permissive mTLS mode is not required to continue. - -To enable permissive mTLS mode for the service, set [`MutualTLSMode=permissive`](/consul/docs/connect/config-entries/service-defaults#mutualtlsmode) in the service defaults configuration entry for the service. The following example shows how to configure this setting for a service named `example-service`. - - - -```hcl -Kind = "service-defaults" -Name = "example-service" - -MutualTLSMode = "permissive" -``` - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: ServiceDefaults -metadata: - name: example-service -spec: - mutualTLSMode: "permissive" -``` - -```json -{ - "Kind": "service-defaults", - "Name": "example-service", - "MutualTLSMode": "permissive" -} -``` - - - -Refer to the [service defaults configuration reference](/consul/docs/connect/config-entries/service-defaults) for information about all settings. - -You can change this setting back to `strict` at any time to ensure mTLS is required for incoming traffic to this service. - -## Configure intentions - -Service intentions are mechanisms in Consul that control traffic between services in the mesh. - -We recommend creating intentions that restrict services to accepting only necessary traffic. You must identify the downstream services that send messages to the service you want to add to the mesh and then create an intention to allow traffic to the service from its downstreams. - -When transparent proxy is enabled and the `MutualTLSMode` parameter is set to `permissive`, incoming traffic from a downstream service to an upstream service is not secured by mTLS unless that upstream relationship is known to Consul. You must either define an intention so that Consul can infer the upstream relationship from the intention, or you must include an explicit upstream as part of the service definition for the downstream.
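For example, a minimal `ServiceIntentions` resource that authorizes a downstream to reach the `example-service` upstream used in the examples above might look like the following sketch. The `frontend` source name is a hypothetical downstream; substitute your own service names.

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: example-service
spec:
  destination:
    name: example-service
  sources:
    - name: frontend # hypothetical downstream service that calls example-service
      action: allow
```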
- -Refer to [Service intentions](/consul/docs/connect/intentions) for additional information about how intentions work and how to create them. - -## Add the service to the mesh - -Register your service into the catalog and update your application to deploy a sidecar proxy. You should also monitor your service to verify its configuration. Refer to the [Consul on Kubernetes service mesh overview](/consul/docs/k8s/connect) for additional information. - -## Re-secure mesh traffic - -If the newly added service was placed in permissive mTLS mode for onboarding, then you should switch to strict mode when it is safe to do so. You should also revert the global settings that allow services to send and receive non-mTLS traffic. - -### Disable permissive mTLS mode - -Configure the service to operate in strict mTLS mode after the service is no longer receiving incoming non-mTLS traffic. After the downstream services that send messages to this service are all onboarded to the mesh, this service should no longer receive non-mTLS traffic. - -Check the following Envoy listener statistics for the sidecar proxy to determine if the sidecar is receiving non-mTLS traffic: - -- The `tcp.permissive_public_listener.*` statistics indicate non-mTLS traffic. If these metrics are static over a sufficient period of time, that indicates the sidecar is not receiving non-mTLS traffic. -- The `tcp.public_listener.*` statistics indicate mTLS traffic. If incoming traffic is expected to this service and these statistics are changing, then the sidecar is receiving mTLS traffic. - -Refer to the [service mesh observability overview](/consul/docs/connect/observability) and [metrics configuration for Consul on Kubernetes documentation](/consul/docs/k8s/connect/observability/metrics) for additional information. - -If your service is still receiving non-mTLS traffic, complete the following steps to determine the source of the non-mTLS traffic: - -1. Verify the list of downstream services. Optionally, you can enable [Envoy access logging](/consul/docs/connect/observability/access-logs) to determine source IP addresses for incoming traffic, and cross-reference those IP addresses with services in your network. -1. Verify that each downstream is onboarded to the service mesh. If a downstream is not onboarded, consider onboarding it next. -1. Verify that each downstream has an intention that allows it to send traffic to the upstream service. - -After you determine it is safe to move the service to strict mode, set `MutualTLSMode=strict` in the service defaults configuration entry. - - - -```hcl -Kind = "service-defaults" -Name = "example-service" - -MutualTLSMode = "strict" -``` - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: ServiceDefaults -metadata: - name: example-service -spec: - mutualTLSMode: "strict" -``` - -```json -{ - "Kind": "service-defaults", - "MutualTLSMode": "strict", - "Name": "example-service" -} -``` - - - -### Disable non-mTLS traffic - -After all services are onboarded, revert the global settings that allow non-mTLS traffic and verify that permissive mTLS mode is not being used in the mesh. - -Set `AllowEnablingPermissiveMutualTLS=false` and `MeshDestinationsOnly=true` in the mesh config entry. 
- - - -```hcl -Kind = "mesh" - -AllowEnablingPermissiveMutualTLS = false -TransparentProxy { - MeshDestinationsOnly = true -} -``` - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: Mesh -metadata: - name: mesh -spec: - allowEnablingPermissiveMutualTLS: false - transparentProxy: - meshDestinationsOnly: true -``` - - -```json -{ - "Kind": "mesh", - "AllowEnablingPermissiveMutualTLS": false, - "TransparentProxy": [ - { - "MeshDestinationsOnly": true - } - ] -} -``` - - - -For each namespace, admin partition, and datacenter in your Consul deployment, run the `consul config list` and `consul config read` commands to verify that no services are using `permissive` mTLS mode. - -The following command returns any service defaults configuration entries that contain `'MutualTLSMode = "permissive"'`: - -```shell-session -$ consul config list -kind service-defaults -filter 'MutualTLSMode == "permissive"' -``` - - In each admin partition and datacenter, verify that `MutualTLSMode = "permissive"` is not set in the proxy defaults configuration entry . If `MutualTLSMode` is either empty or if the configuration entry is not found, then the mode is `strict` by default. - - The following command fetches the proxy defaults configuration entry: - -```shell-session -$ consul config read -kind proxy-defaults -name global -{ - "Kind": "proxy-defaults", - "Name": "global", - "Partition": "default", - "Namespace": "default", - "TransparentProxy": {}, - "MutualTLSMode": "", - "MeshGateway": {}, - "Expose": {}, - "AccessLogs": {}, - "CreateIndex": 26, - "ModifyIndex": 30 -} -``` diff --git a/website/content/docs/k8s/connect/terminating-gateways.mdx b/website/content/docs/k8s/connect/terminating-gateways.mdx deleted file mode 100644 index 4cac88da67b1..000000000000 --- a/website/content/docs/k8s/connect/terminating-gateways.mdx +++ /dev/null @@ -1,457 +0,0 @@ ---- -layout: docs -page_title: Configure Terminating Gateways for Consul on Kubernetes -description: >- - Terminating gateways send secure requests from the service mesh to locations outside of the Kubernetes cluster. Learn how to configure terminating gateways for k8s, register external services in Consul’s service catalog, and define external sources as upstreams in your service mesh. ---- - -# Configure Terminating Gateways for Consul on Kubernetes - -Adding a terminating gateway is a multi-step process: - -- Update the Helm chart with terminating gateway configuration options -- Deploy the Helm chart -- Access the Consul agent -- Register external services with Consul - -## Requirements - -- [Consul](/consul/docs/install#install-consul) -- [Consul on Kubernetes CLI](/consul/docs/k8s/k8s-cli) -- Familiarity with [Terminating Gateways](/consul/docs/connect/gateways/terminating-gateway) - -## Update the Helm chart with terminating gateway configuration options - -Minimum required Helm options: - - - -```yaml -global: - name: consul -terminatingGateways: - enabled: true -``` - - - -## Deploying the Helm chart - -The Helm chart may be deployed using the [Consul on Kubernetes CLI](/consul/docs/k8s/k8s-cli). - -```shell-session -$ consul-k8s install --config-file values.yaml -``` - -## Accessing the Consul agent - -You can access the Consul server directly from your host by running `kubectl port-forward`. This is helpful for interacting with your Consul UI locally as well as for validating the connectivity of the application. 
- - - - - -```shell-session -$ kubectl port-forward service/consul-server 8500 & -``` - -```shell-session -$ export CONSUL_HTTP_ADDR=http://localhost:8500 -``` - - - - -If TLS is enabled use port 8501: - -```shell-session -$ kubectl port-forward service/consul-server 8501 & -``` - -```shell-session -$ export CONSUL_HTTP_ADDR=https://localhost:8501 -$ export CONSUL_HTTP_SSL_VERIFY=false -``` - - - - -If ACLs are enabled also set: - -```shell-session -$ export CONSUL_HTTP_TOKEN=$(kubectl get secret consul-bootstrap-acl-token --template='{{.data.token | base64decode }}') -``` - -## Register external services with Consul - - - -Consul on Kubernetes now supports the `Registration` CRD to register services running on external nodes with a terminating gateway. We recommend registering services with this CRD. For more information, refer to [Register services running on external nodes to Consul on Kubernetes](/consul/docs/k8s/deployment-configurations/external-service). - - - -Registering the external services with Consul is a multi-step process: - -- Register external services with Consul -- Update the terminating gateway ACL token if ACLs are enabled -- Create a [`TerminatingGateway`](/consul/docs/connect/config-entries/terminating-gateway) resource to configure the terminating gateway -- Create a [`ServiceIntentions`](/consul/docs/connect/config-entries/service-intentions) resource to allow access from services in the mesh to external service -- Define upstream annotations for any services that need to talk to the external services - -### Register external services with Consul - -You may register an external service with Consul using `ServiceDefaults` if -[`TransparentProxy`](/consul/docs/connect/transparent-proxy) is enabled. Otherwise, -you may register the service as a node in the Consul catalog using the [`Registration` CRD](/consul/docs/connect/config-entries/registration). - - - - -The [`destination`](/consul/docs/connect/config-entries/service-defaults#terminating-gateway-destination) field of the `ServiceDefaults` Custom Resource Definition (CRD) allows clients to dial an external service directly. For this method to work, [`TransparentProxy`](/consul/docs/connect/transparent-proxy) must be enabled. - -The following table describes traffic behaviors when using the `destination` field to route traffic through a terminating gateway: - -| External Services Layer | Client dials | Client uses TLS | Allowed | Notes | -|--------------------------------------|---------------------------|------------------------------|--------------------------|-----------------------------------------------------------------------------------------------| -| L4 | Hostname | Yes | Allowed | `CAFiles` are not allowed because traffic is already end-to-end encrypted by the client. | -| L4 | IP | Yes | Allowed | `CAFiles` are not allowed because traffic is already end-to-end encrypted by the client. | -| L4 | Hostname | No | Not allowed | The sidecar is not protocol aware and can not identify traffic going to the external service. | -| L4 | IP | No | Allowed | There are no limitations on dialing IPs without TLS. | -| L7 | Hostname | Yes | Not allowed | Because traffic is already encrypted before the sidecar, it cannot route as L7 traffic. | -| L7 | IP | Yes | Not allowed | Because traffic is already encrypted before the sidecar, it cannot route as L7 traffic. | -| L7 | Hostname | No | Allowed | A `Host` or `:authority` header is required. 
| -| L7 | IP | No | Allowed | There are no limitations on dialing IPs without TLS. | - -You can provide a `caFile` to secure traffic that connect to external services through the terminating gateway. -Refer to [Create the configuration entry for the terminating gateway](#create-the-configuration-entry-for-the-terminating-gateway) for details. - --> **Note:** Regardless of the `protocol` specified in the `ServiceDefaults`, [L7 intentions](/consul/docs/connect/config-entries/service-intentions#permissions) are not currently supported with `ServiceDefaults` destinations. - -Create a `ServiceDefaults` custom resource for the external service: - - - -```yaml - apiVersion: consul.hashicorp.com/v1alpha1 - kind: ServiceDefaults - metadata: - name: example-https - spec: - protocol: tcp - destination: - addresses: - - "example.com" - port: 443 -``` - - - -Apply the `ServiceDefaults` resource with `kubectl apply`: - -```shell-session -$ kubectl apply --filename service-defaults.yaml -``` - -All other terminating gateway operations can use the name of the `ServiceDefaults` component, in this case "example-https", as a Consul service name. - - - - - -Normally, Consul services are registered on the node that -they're running on. Since this service is an external service, there is no Consul node -to register it onto. Instead, we must make up a node name and register the -service to that node. - -Create a sample external service and register it with Consul. - - - -```json -{ - "Node": "example_com", - "Address": "example.com", - "NodeMeta": { - "external-node": "true", - "external-probe": "true" - }, - "Service": { - "Address": "example.com", - "ID": "example-https", - "Service": "example-https", - "Port": 443 - } -} -``` - - - -- `"Node": "example_com"` is our made up node name. -- `"Address": "example.com"` is the address of our node. Services registered to that node will use this address if - their own address isn't specified. If you're registering multiple external services, ensure you - use different node names with different addresses or set the `Service.Address` key. -- `"Service": { "Address": "example.com" ... }` is the address of our service. In this example this doesn't need to be set - since the address of the node is the same, but if there were two services registered to that same node - then this should be set. - -Register the external service with Consul: - -```shell-session -$ curl --request PUT --data @external.json --insecure $CONSUL_HTTP_ADDR/v1/catalog/register -true -``` - -If ACLs and TLS are enabled: - -```shell-session -$ curl --request PUT --header "X-Consul-Token: $CONSUL_HTTP_TOKEN" --data @external.json --insecure $CONSUL_HTTP_ADDR/v1/catalog/register -true -``` - - - -### Update terminating gateway ACL role if ACLs are enabled - -If ACLs are enabled, update the terminating gateway ACL role to have `service:write` permissions on all of the services -being represented by the gateway. - -Create a new policy that includes the write permission for the service you created. - - - -```hcl -service "example-https" { - policy = "write" -} -``` - - - -```shell-session -$ consul acl policy create -name "example-https-write-policy" -rules @write-policy.hcl -ID: xxxxxxxxxxxxxxx -Name: example-https-write-policy -Description: -Datacenters: -Rules: -service "example-https" { - policy = "write" -} -``` - -Obtain the ID of the terminating gateway role. 
- -```shell-session -$ consul acl role list -format=json | jq --raw-output '[.[] | select(.Name | endswith("-terminating-gateway-acl-role"))] | if (. | length) == 1 then (. | first | .ID) else "Unable to determine the role ID because there are multiple roles matching this name.\n" | halt_error end' - -``` - -Update the terminating gateway ACL role with the new policy. - -```shell-session -$ consul acl role update -id -policy-name example-https-write-policy -AccessorID: -SecretID: -Description: RELEASE_NAME-terminating-gateway-acl-role -Local: true -Create Time: 2021-01-08 21:18:47.957450486 +0000 UTC -Policies: - 63bf1d9b-a87d-8672-ddcb-d25e2d88adb8 - RELEASE_NAME-terminating-gateway-policy - f63d1ae6-ffe7-44bd-bf7a-704a86939a63 - example-https-write-policy -``` - -### Create the configuration entry for the terminating gateway - -Once the roles have been updated, create the [TerminatingGateway](/consul/docs/connect/config-entries/terminating-gateway) -resource to configure the terminating gateway: - - - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: TerminatingGateway -metadata: - name: terminating-gateway -spec: - services: - - name: example-https -``` - - - -If TLS is enabled for external services registered through the Consul catalog and you are not using [transparent proxy `destination`](#register-an-external-service-as-a-destination), you must include the [`caFile`](/consul/docs/connect/config-entries/terminating-gateway#cafile) parameter that points to the system trust store of the terminating gateway container. -By default, the trust store is located in the `/etc/ssl/certs/ca-certificates.crt` directory. -Configure the [`caFile`](/consul/docs/connect/config-entries/terminating-gateway#cafile) parameter in the `TerminatingGateway` config entry to point to the `/etc/ssl/cert.pem` directory if TLS is enabled and you are using one of the following components: -- Consul Helm chart 0.43 or older -- An Envoy image with an alpine base image - -Apply the `TerminatingGateway` resource with `kubectl apply`: - -```shell-session -$ kubectl apply --filename terminating-gateway.yaml -``` - -If using ACLs and TLS, create a [`ServiceIntentions`](/consul/docs/connect/config-entries/service-intentions) resource to allow access from services in the mesh to the external service: - - - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: ServiceIntentions -metadata: - name: example-https -spec: - destination: - name: example-https - sources: - - name: static-client - action: allow -``` - - - --> **NOTE**: [L7 Intentions](/consul/docs/connect/config-entries/service-intentions#permissions) are not currently supported for `ServiceDefaults` destinations. - -Apply the `ServiceIntentions` resource with `kubectl apply`: - -```shell-session -$ kubectl apply --filename service-intentions.yaml -``` - -### Define the external services as upstreams for services in the mesh - -As a final step, you may define and deploy the external services as upstreams for the internal mesh services that wish to talk to them. -An example deployment is provided which will serve as a static client for the terminating gateway service. 
- - - - - - -```yaml -apiVersion: v1 -kind: Service -metadata: - name: static-client -spec: - selector: - app: static-client - ports: - - port: 80 ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: static-client ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: static-client -spec: - replicas: 1 - selector: - matchLabels: - app: static-client - template: - metadata: - name: static-client - labels: - app: static-client - annotations: - 'consul.hashicorp.com/connect-inject': 'true' - spec: - containers: - - name: static-client - image: curlimages/curl:latest - command: ['/bin/sh', '-c', '--'] - args: ['while true; do sleep 30; done;'] - serviceAccountName: static-client -``` - - - - - - - - -```yaml -apiVersion: v1 -kind: Service -metadata: - name: static-client -spec: - selector: - app: static-client - ports: - - port: 80 ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: static-client ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: static-client -spec: - replicas: 1 - selector: - matchLabels: - app: static-client - template: - metadata: - name: static-client - labels: - app: static-client - annotations: - 'consul.hashicorp.com/connect-inject': 'true' - 'consul.hashicorp.com/connect-service-upstreams': 'example-https:1234' - spec: - containers: - - name: static-client - image: curlimages/curl:latest - command: ['/bin/sh', '-c', '--'] - args: ['while true; do sleep 30; done;'] - serviceAccountName: static-client -``` - - - - - - -Deploy the service with `kubectl apply`. - -```shell-session -$ kubectl apply --filename static-client.yaml -``` - -Wait for the service to be ready. - -```shell-session -$ kubectl rollout status deploy static-client --watch -deployment "static-client" successfully rolled out -``` - -You can verify connectivity of the static-client and terminating gateway via a curl command. - - - - -```shell-session -$ kubectl exec deploy/static-client -- curl -vvvs https://example.com/ -``` - - - - -```shell-session -$ kubectl exec deploy/static-client -- curl -vvvs --header "Host: example-https.com" http://localhost:1234/ -``` - - - - diff --git a/website/content/docs/k8s/connect/transparent-proxy/enable-transparent-proxy.mdx b/website/content/docs/k8s/connect/transparent-proxy/enable-transparent-proxy.mdx deleted file mode 100644 index ce3df409eb95..000000000000 --- a/website/content/docs/k8s/connect/transparent-proxy/enable-transparent-proxy.mdx +++ /dev/null @@ -1,256 +0,0 @@ ---- -layout: docs -page_title: Enable transparent proxy mode -description: >- - Learn how to enable transparent proxy mode, which enables Consul on Kubernetes to direct inbound and outbound traffic through the service mesh and increase application security without configuring individual upstream services. ---- - -# Enable transparent proxy mode - -This topic describes how to use transparent proxy mode in your service mesh. Transparent proxy allows applications to communicate through the service mesh without modifying their configurations. Transparent proxy also hardens application security by preventing direct inbound connections that bypass the mesh. Refer to [Transparent proxy overview](/consul/docs/k8s/connect/transparent-proxy) for additional information. - -## Requirements - -Your network must meet the following environment and software requirements to use transparent proxy. - -* Transparent proxy is available for Kubernetes environments. -* Consul 1.10.0+ -* Consul Helm chart 0.32.0+. 
If you want to use the Consul CNI plugin to redirect traffic, Helm chart 0.48.0+ is required. Refer to [Enable the Consul CNI plugin](#enable-the-consul-cni-plugin) for additional information. -* You must create [service intentions](/consul/docs/connect/intentions) that explicitly allow communication between services. Consul uses service intentions to infer upstream connections and route messages appropriately between sidecar proxies. -* The `ip_tables` kernel module must be running on all worker nodes within a Kubernetes cluster. If you are using the `modprobe` Linux utility, for example, issue the following command: - - `$ modprobe ip_tables` - -~> **Upgrading to a supported version**: Always follow the [proper upgrade path](/consul/docs/upgrading/upgrade-specific/#transparent-proxy-on-kubernetes) when upgrading to a supported version of Consul, Consul on Kubernetes (`consul-k8s`), and the Consul Helm chart. - -## Enable transparent proxy - -Transparent proxy mode is enabled for the entire cluster by default when you install Consul on Kubernetes using the Consul Helm chart. Refer to the [Consul Helm chart reference](/consul/docs/k8s/helm) for information about all default configurations. - -You can explicitly enable transparent proxy for the entire cluster, individual namespaces, and individual services. - -### Entire cluster - -Use the `connectInject.transparentProxy.defaultEnabled` Helm value to enable or disable transparent proxy for the entire cluster: - -```yaml -connectInject: - transparentProxy: - defaultEnabled: true -``` - -### Kubernetes namespace - -Apply the `consul.hashicorp.com/transparent-proxy=true` label to enable transparent proxy for a Kubernetes namespace. The label overrides the `connectInject.transparentProxy.defaultEnabled` Helm value and defines the default behavior of Pods in the namespace. The following example enables transparent proxy for Pods in the `my-app` namespace: - -```shell-session -$ kubectl label namespaces my-app "consul.hashicorp.com/transparent-proxy=true" -``` -### Individual service - -Apply the `consul.hashicorp.com/transparent-proxy=true` annotation to enable transparent proxy on the Pod for each service. The annotation overrides the Helm value and the namespace label. The following example enables transparent proxy for the `static-server` service: - -```yaml -apiVersion: v1 -kind: Service -metadata: - name: static-server -spec: - selector: - app: static-server - ports: - - protocol: TCP - port: 80 - targetPort: 8080 ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: static-server ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: static-server -spec: - replicas: 1 - selector: - matchLabels: - app: static-server - template: - metadata: - name: static-server - labels: - app: static-server - annotations: - 'consul.hashicorp.com/connect-inject': 'true' - 'consul.hashicorp.com/transparent-proxy': 'true' - spec: - containers: - - name: static-server - image: hashicorp/http-echo:latest - args: - - -text="hello world" - - -listen=:8080 - ports: - - containerPort: 8080 - name: http - serviceAccountName: static-server -``` - -## Enable the Consul CNI plugin - -By default, Consul generates a `connect-inject init` container as part of the Kubernetes Pod startup process. The container configures traffic redirection in the service mesh through the sidecar proxy. To configure redirection, the container requires elevated CAP_NET_ADMIN privileges, which may not be compatible with security policies in your organization. 
- -Instead, you can enable the Consul container network interface (CNI) plugin to perform traffic redirection. Because the plugin is executed by the Kubernetes kubelet, it already has the elevated privileges necessary to configure the network. Additionally, you do not need to specify annotations that automatically overwrite Kubernetes HTTP health probes when the plugin is enabled (see [Overwrite Kubernetes HTTP health probes](#overwrite-kubernetes-http-health-probes)). - -The Consul Helm chart installs the CNI plugin, but it is disabled by default. Refer to the [instructions for enabling the CNI plugin](/consul/docs/k8s/installation/install#enable-the-consul-cni-plugin) in the Consul on Kubernetes installation documentation for additional information. - -## Traffic redirection - -There are two mechanisms for redirecting traffic through the sidecar proxies. By default, Consul injects an init container that redirects all inbound and outbound traffic. The default mechanism requires elevated permissions (CAP_NET_ADMIN) in order to redirect traffic to the service mesh. - -Alternatively, you can enable the Consul CNI plugin to handle traffic redirection. Because the Kubernetes kubelet runs CNI plugins, the Consul CNI plugin has the necessary privileges to apply routing tables in the network. - -Both mechanisms redirect all inbound and outbound traffic, but you can configure exceptions for specific Pods or groups of Pods. The following annotations enable you to exclude certain traffic from being redirected to sidecar proxies. - -### Exclude inbound ports - -The [`consul.hashicorp.com/transparent-proxy-exclude-inbound-ports`](/consul/docs/k8s/annotations-and-labels#consul-hashicorp-com-transparent-proxy-exclude-inbound-ports) annotation defines a comma-separated list of inbound ports to exclude from traffic redirection when running in transparent proxy mode. The port numbers are string data values. In the following example, services in the pod at ports `8200` and `8201` are not redirected through the transparent proxy: - - - -```yaml -metadata: - annotations: - consul.hashicorp.com/transparent-proxy-exclude-inbound-ports: "8200, 8201" -``` - - - -### Exclude outbound ports - -The [`consul.hashicorp.com/transparent-proxy-exclude-outbound-ports`](/consul/docs/k8s/annotations-and-labels#consul-hashicorp-com-transparent-proxy-exclude-outbound-ports) annotation defines a comma-separated list of outbound ports to exclude from traffic redirection when running in transparent proxy mode. The port numbers are string data values. In the following example, services in the pod at ports `8200` and `8201` are not redirected through the transparent proxy: - - - -```yaml -metadata: - annotations: - consul.hashicorp.com/transparent-proxy-exclude-outbound-ports: "8200, 8201" -``` - - - -### Exclude outbound CIDR blocks - -The [`consul.hashicorp.com/transparent-proxy-exclude-outbound-cidrs`](/consul/docs/k8s/annotations-and-labels#consul-hashicorp-com-transparent-proxy-exclude-outbound-cidrs) annotation -defines a comma-separated list of outbound CIDR blocks to exclude from traffic redirection when running in transparent proxy mode. The CIDR blocks are string data values. 
-In the following example, services in the `3.3.3.3/24` IP range are not redirected through the transparent proxy: - - - -```yaml -metadata: - annotations: - consul.hashicorp.com/transparent-proxy-exclude-outbound-cidrs: "3.3.3.3,3.3.3.3/24" -``` - - -### Exclude user IDs - -The [`consul.hashicorp.com/transparent-proxy-exclude-uids`](/consul/docs/k8s/annotations-and-labels#consul-hashicorp-com-transparent-proxy-exclude-uids) annotation -defines a comma-separated list of additional user IDs to exclude from traffic redirection when running in transparent proxy mode. The user IDs are string data values. -In the following example, services with the IDs `4444` and `44444` are not redirected through the transparent proxy: - - - -```yaml -metadata: - annotations: - consul.hashicorp.com/transparent-proxy-exclude-uids: "4444,44444" -``` - - - -## Kubernetes HTTP health probes configuration - -By default, `connect-inject` is disabled. As a result, Consul on Kubernetes uses a mechanism for traffic redirection that interferes with [Kubernetes HTTP health -probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/). This is because probes expect the kubelet to reach the application container on the probe's endpoint. Instead, traffic is redirected through the sidecar proxy. As a result, health probes return errors because the kubelet does not encrypt that traffic using a mesh proxy. - -There are two methods for solving this issue. The first method is to set the `connectInject.transparentProxy.defaultOverwriteProbes` Helm value to overwrite the Kubernetes HTTP health probes so that they point to the proxy. The second method is to [enable the Consul container network interface (CNI) plugin](#enable-the-consul-cni-plugin) to perform traffic redirection. Refer to the [Consul on Kubernetes installation instructions](/consul/docs/k8s/installation/install) for additional information. - -### Overwrite Kubernetes HTTP health probes - -You can either add the `connectInject.transparentProxy.defaultOverwriteProbes` Helm value to your command or add the `consul.hashicorp.com/transparent-proxy-overwrite-probes` Kubernetes annotation to your pod configuration to overwrite health probes. - -Refer to [Kubernetes Health Checks in Consul on Kubernetes](/consul/docs/k8s/connect/health) for additional information. - -## Dial services across Kubernetes cluster - -If your [Consul servers are federated between Kubernetes clusters](/consul/docs/k8s/deployment-configurations/multi-cluster/kubernetes), -then you must configure services in one Kubernetes cluster to explicitly dial a service in the datacenter of another Kubernetes cluster using the -[consul.hashicorp.com/connect-service-upstreams](/consul/docs/k8s/annotations-and-labels#consul-hashicorp-com-connect-service-upstreams) annotation. 
-The following example configures the service to dial an upstream service called `my-service` in another Kubernetes cluster on port `1234`: - -```yaml -consul.hashicorp.com/connect-service-upstreams: "my-service:1234" -``` - -You do not need to configure services to explicitly dial upstream services if your Consul clusters are connected with a [peering connection](/consul/docs/connect/cluster-peering). - -## Configure service selectors - -When transparent proxy is enabled, traffic sent to [KubeDNS](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/) -or Pod IP addresses is redirected through the proxy. You must use a selector to bind Kubernetes Services to Pods as you define Kubernetes Services in the mesh. -The Kubernetes Service name must match the Consul service name to use KubeDNS. This is the default behavior unless you have applied the `consul.hashicorp.com/connect-service` -Kubernetes annotation to the service pods. The annotation overrides the Consul service name. - -Consul configures redirection for each Pod bound to the Kubernetes Service using `iptables` rules. The rules redirect all inbound and outbound traffic through an inbound and outbound listener on the sidecar proxy. Consul configures the proxy to route traffic to the appropriate upstream services based on [service -intentions](/consul/docs/connect/config-entries/service-intentions), which address the upstream services using KubeDNS. - -In the following example, the Kubernetes service selects `sample-app` application Pods so that they can be reached within the mesh. - - - -```yaml -apiVersion: v1 -kind: Service -metadata: - name: sample-app - namespace: default -spec: - selector: - app: sample-app - ports: - - protocol: TCP - port: 80 -``` - - - -Additional services can query the [KubeDNS](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/) at `sample-app.default.svc.cluster.local` to reach `sample-app`. If ACLs are enabled and configured with default `deny` policies, the configuration also requires a [`ServiceIntention`](/consul/docs/connect/config-entries/service-intentions) to allow it to talk to `sample-app`. - -You can query the KubeDNS for a service that belongs to a sameness group at `sample-app.virtual.group-name.sg.consul`. This syntax is required when failover is desired. To use KubeDNS with sameness groups, `spec.defaultForFailover` must be set to `true` in the sameness group CRD. Refer to [sameness group configuration entry reference](/consul/docs/connect/config-entries/sameness-group) for more information. - -### Headless services -For services that are not addressed using a virtual cluster IP, you must configure the upstream service using the [DialedDirectly](/consul/docs/connect/config-entries/service-defaults#dialeddirectly) option. Then, use DNS to discover individual instance addresses and dial them through the transparent proxy. When this mode is enabled on the upstream, services present service mesh certificates for mTLS and intentions are enforced at the destination. - -Note that when dialing individual instances, Consul ignores the HTTP routing rules configured with configuration entries. The transparent proxy acts as a TCP proxy to the original destination IP address. - -## Known limitations - -- Deployment configurations with federation across or a single datacenter spanning multiple clusters must explicitly dial a service in another datacenter or cluster using annotations. 
- -- When dialing headless services, the request is proxied using a plain TCP proxy. Consul does not take into consideration the upstream's protocol. diff --git a/website/content/docs/k8s/connect/transparent-proxy/index.mdx b/website/content/docs/k8s/connect/transparent-proxy/index.mdx deleted file mode 100644 index 78db657133f6..000000000000 --- a/website/content/docs/k8s/connect/transparent-proxy/index.mdx +++ /dev/null @@ -1,47 +0,0 @@ ---- -layout: docs -page_title: Transparent proxy overview -description: >- - Transparent proxy enables Consul on Kubernetes to direct inbound and outbound traffic through the service mesh. Learn how transparently proxying increases application security without configuring individual upstream services. ---- - -# Transparent proxy overview - -This topic provides overview information about transparent proxy mode, which allows applications to communicate through the service mesh without modifying their configurations. Transparent proxy also hardens application security by preventing direct inbound connections that bypass the mesh. - -## Introduction - -When service mesh proxies are in transparent mode, Consul service mesh uses IPtables to direct all inbound and outbound traffic to the sidecar. Consul also uses information configured in service intentions to infer routes, which eliminates the need to explicitly configure upstreams. - -### Transparent proxy enabled - -The following diagram shows how Consul routes traffic when proxies are in transparent mode: - -![Diagram demonstrating that with transparent proxy, connections are automatically routed through the mesh](/img/consul-connect/with-transparent-proxy.png) - -### Transparent proxy disabled - -When transparent proxy mode is disabled, you must manually configure explicit upstreams, configure your applications to query for services at `localhost:`, and configure applications to only listen on the loopback interface to prevent services from bypassing the mesh. - -The following diagram shows how Consul routes traffic when transparent proxy mode is disabled: - -![Diagram demonstrating that without transparent proxy, applications must "opt in" to connecting to their dependencies through the mesh](/img/consul-connect/without-transparent-proxy.png) - -Transparent proxy is available for Kubernetes environments. As part of the integration with Kubernetes, Consul registers Kubernetes Services, injects sidecar proxies, and enables traffic redirection. - -## Supported networking architectures - -Transparent proxy mode enables several networking architectures and workflows. You can query Consul DNS to discover upstreams for single services, virtual services, and failover service instances that are in peered clusters. - -Consul supports the following intra-datacenter connection types for discovering upstreams when transparent proxy mode is enabled: - -- KubeDNS lookups across WAN-federated datacenters -- Consul DNS lookups across WAN-federated datacenters -- KubeDNS lookups in peered clusters and admin partitions -- Consul DNS lookups in peered clusters and admin partitions - -## Mutual TLS for transparent proxy mode - -Transparent proxy mode is enabled by default when you install Consul on Kubernetes using the Consul Helm chart. As a result, all services in the mesh must communicate through sidecar proxies, which enforce service intentions and mTLS encryption for the service mesh. 
While onboarding new services to service mesh, your network may have mixed mTLS and non-mTLS traffic, which can result in broken service-to-service communication. - -You can temporarily enable permissive mTLS mode during the onboarding process so that existing mesh services can accept traffic from services that are not yet fully onboarded. Permissive mTLS enables sidecar proxies to access both mTLS and non-mTLS traffic. Refer to [Onboard mesh services in transparent proxy mode](/consul/docs/k8s/connect/onboarding-tproxy-mode) for additional information. diff --git a/website/content/docs/k8s/crds/index.mdx b/website/content/docs/k8s/crds/index.mdx deleted file mode 100644 index 3db8edaf2f45..000000000000 --- a/website/content/docs/k8s/crds/index.mdx +++ /dev/null @@ -1,390 +0,0 @@ ---- -layout: docs -page_title: Custom Resource Definitions for Consul on Kubernetes -description: >- - Consul on Kubernetes supports Consul's configuration entry kind through Custom Resource Definitions (CRDs). Learn how to configure Helm charts to enable CRDs and use kubectl to create, manage, and delete mesh components like gateways and intentions on k8s. ---- - -# Custom Resource Definitions (CRDs) for Consul on Kubernetes - -This topic describes how to manage Consul [configuration entries](/consul/docs/agent/config-entries) -with Kubernetes Custom Resources. Configuration entries provide cluster-wide defaults for the service mesh. - -## Supported Configuration Entries - -You can specify the following values in the `kind` field. Click on a configuration entry to view its documentation: - -- [`Mesh`](/consul/docs/connect/config-entries/mesh) -- [`ExportedServices`](/consul/docs/connect/config-entries/exported-services) -- [`PeeringAcceptor`](/consul/docs/k8s/connect/cluster-peering/tech-specs#peeringacceptor) -- [`PeeringDialer`](/consul/docs/k8s/connect/cluster-peering/tech-specs#peeringdialer) -- [`ProxyDefaults`](/consul/docs/connect/config-entries/proxy-defaults) -- [`Registration`](/consul/docs/connect/config-entries/registration) -- [`SamenessGroup`](/consul/docs/connect/config-entries/sameness-group) -- [`ServiceDefaults`](/consul/docs/connect/config-entries/service-defaults) -- [`ServiceSplitter`](/consul/docs/connect/config-entries/service-splitter) -- [`ServiceRouter`](/consul/docs/connect/config-entries/service-router) -- [`ServiceResolver`](/consul/docs/connect/config-entries/service-resolver) -- [`ServiceIntentions`](/consul/docs/connect/config-entries/service-intentions) -- [`IngressGateway`](/consul/docs/connect/config-entries/ingress-gateway) -- [`TerminatingGateway`](/consul/docs/connect/config-entries/terminating-gateway) - -## Installation - -Verify that the minimum version of the helm chart (`0.28.0`) is installed: - -```shell-session -$ helm search repo hashicorp/consul -NAME CHART VERSION APP VERSION DESCRIPTION -hashicorp/consul 0.28.0 1.9.1 Official HashiCorp Consul Chart -``` - -Update your helm repository cache if necessary: - -```shell-session -$ helm repo update -Hang tight while we grab the latest from your chart repositories... -...Successfully got an update from the "hashicorp" chart repository -Update Complete. ⎈Happy Helming!⎈ -``` - -Refer to [Install with Helm Chart](/consul/docs/k8s/installation/install) for further installation -instructions. - -**Note**: Configuration entries require `connectInject` to be enabled, which is a default behavior in the official Helm Chart. If you disabled this setting, you must re-enable it to use CRDs. 
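If you previously disabled connect injection, a minimal sketch of the Helm values that re-enable it looks like the following. This matches the chart's default, so it only needs to be set explicitly if it was turned off earlier.

```yaml
connectInject:
  # Required for custom resources to sync as Consul configuration entries.
  enabled: true
```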
- -## Upgrading An Existing Cluster to CRDs - -If you have an existing Consul cluster running on Kubernetes you may need to perform -extra steps to migrate to CRDs. Refer to [Upgrade An Existing Cluster to CRDs](/consul/docs/k8s/crds/upgrade-to-crds) for full instructions. - -## Usage - -Once installed, you can use `kubectl` to create and manage Consul's configuration entries. - -### Create - -You can create configuration entries with `kubectl apply`. - -```shell-session -$ cat < protocol: tcp -servicedefaults.consul.hashicorp.com/foo edited -``` - -You can then use `kubectl get` to ensure the change was synced to Consul: - -```shell-session -$ kubectl get servicedefaults foo -NAME SYNCED -foo True -``` - -### Delete - -You can use `kubectl delete [kind] [name]` to delete the configuration entry: - -```shell-session -$ kubectl delete servicedefaults foo -servicedefaults.consul.hashicorp.com "foo" deleted -``` - -You can then use `kubectl get` to ensure the configuration entry was deleted: - -```shell-session -$ kubectl get servicedefaults foo -Error from server (NotFound): servicedefaults.consul.hashicorp.com "foo" not found -``` - -#### Delete Hanging - -If running `kubectl delete` hangs without exiting, there may be -a dependent configuration entry registered with Consul that prevents the target configuration entry from being -deleted. For example, if you set the protocol of your service to `http` in `ServiceDefaults` and then -create a `ServiceSplitter`, you will not be able to delete `ServiceDefaults`. - -This is because by deleting the `ServiceDefaults` config, you are setting the -protocol back to the default which is `tcp`. Because `ServiceSplitter` requires -that the service has an `http` protocol, Consul will not allow the `ServiceDefaults` -to be deleted since that would put Consul into a broken state. - -In order to delete the `ServiceDefaults` config, you would need to first delete -the `ServiceSplitter`. - -## Kubernetes Namespaces - -### Consul CE ((#consul_oss)) - -Consul Community Edition (Consul CE) ignores Kubernetes namespaces and registers all services into the same -global Consul registry based on their names. For example, service `web` in Kubernetes namespace -`web-ns` and service `admin` in Kubernetes namespace `admin-ns` are registered into -Consul as `web` and `admin` with the Kubernetes source namespace ignored. - -When creating custom resources to configure these services, the namespace of the -custom resource is also ignored. For example, you can create a `ServiceDefaults` -custom resource for service `web` in the Kubernetes namespace `admin-ns` even though -the `web` service is actually running in the `web-ns` namespace (although this is not recommended): - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: ServiceDefaults -metadata: - name: web - namespace: admin-ns -spec: - protocol: http ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: web - namespace: web-ns -spec: ... -``` - -~> **Note:** If you create two custom resources with identical `kind` and `name` values in different Kubernetes namespaces, the last one you create is not able to sync. - -#### ServiceIntentions Special Case - -`ServiceIntentions` are different from the other custom resources because the -name of the resource doesn't matter. For other resources, the name of the resource -determines which service it configures. 
For example, this resource configures -the service `web`: - - - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: ServiceDefaults -metadata: - name: web -spec: - protocol: http -``` - - - -For `ServiceIntentions`, because we need to support the ability to create -wildcard intentions (e.g. `foo => * (allow)` meaning that `foo` can talk to **any** service), -and because `*` is not a valid Kubernetes resource name, we instead use the field `spec.destination.name` -to configure the destination service for the intention: - - - -```yaml -# foo => * (allow) -apiVersion: consul.hashicorp.com/v1alpha1 -kind: ServiceIntentions -metadata: - name: name-does-not-matter -spec: - destination: - name: '*' - sources: - - name: foo - action: allow ---- -# foo => web (allow) -apiVersion: consul.hashicorp.com/v1alpha1 -kind: ServiceIntentions -metadata: - name: name-does-not-matter -spec: - destination: - name: web - sources: - - name: foo - action: allow -``` - - - -~> **Note:** If two `ServiceIntentions` resources set the same `spec.destination.name`, the -last one created is not synced. - -### Consul Enterprise - -Consul Enterprise supports multiple configurations for how Kubernetes namespaces are mapped -to Consul namespaces. The Consul namespace that the custom resource is registered -into depends on the configuration being used but in general, you should create your -custom resources in the same Kubernetes namespace as the service they configure. - -The details on each configuration are: - -1. **Mirroring** - The Kubernetes namespace is mirrored into Consul. For example, the - service `web` in Kubernetes namespace `web-ns` is registered as service `web` - in the Consul namespace `web-ns`. In the same vein, a `ServiceDefaults` custom resource with - name `web` in Kubernetes namespace `web-ns` configures that same service. - - This is configured with [`connectInject.consulNamespaces`](/consul/docs/k8s/helm#v-connectinject-consulnamespaces): - - - - ```yaml - global: - name: consul - enableConsulNamespaces: true - image: hashicorp/consul-enterprise:-ent - connectInject: - consulNamespaces: - mirroringK8S: true - ``` - - - -1. **Mirroring with prefix** - The Kubernetes namespace is mirrored into Consul - with a prefix added to the Consul namespace. For example, if the prefix is `k8s-` then service `web` in Kubernetes namespace `web-ns` will be registered as service `web` - in the Consul namespace `k8s-web-ns`. In the same vein, a `ServiceDefaults` custom resource with - name `web` in Kubernetes namespace `web-ns` configures that same service. - - This is configured with [`connectInject.consulNamespaces`](/consul/docs/k8s/helm#v-connectinject-consulnamespaces): - - - - ```yaml - global: - name: consul - enableConsulNamespaces: true - image: hashicorp/consul-enterprise:-ent - connectInject: - consulNamespaces: - mirroringK8S: true - mirroringK8SPrefix: k8s- - ``` - - - -1. **Single destination namespace** - The Kubernetes namespace is ignored and all services - are registered into the same Consul namespace. For example, if the destination Consul - namespace is `my-ns` then service `web` in Kubernetes namespace `web-ns` is registered as service `web` in Consul namespace `my-ns`. - - In this configuration, the Kubernetes namespace of the custom resource is ignored. 
- For example, a `ServiceDefaults` custom resource with the name `web` in Kubernetes - namespace `admin-ns` configures the service with name `web` even though that - service is running in Kubernetes namespace `web-ns` because the `ServiceDefaults` - resource ends up registered into the same Consul namespace `my-ns`. - - This is configured with [`connectInject.consulNamespaces`](/consul/docs/k8s/helm#v-connectinject-consulnamespaces): - - - - ```yaml - global: - name: consul - enableConsulNamespaces: true - image: hashicorp/consul-enterprise:-ent - connectInject: - consulNamespaces: - consulDestinationNamespace: 'my-ns' - ``` - - - - ~> **Note:** In this configuration, if two custom resources are created in two Kubernetes namespaces with identical `name` and `kind` values, the last one created is not synced. - -#### ServiceIntentions Special Case (Enterprise) - -`ServiceIntentions` are different from the other custom resources because the -name of the resource does not matter. For other resources, the name of the resource -determines which service it configures. For example, this resource configures -the service `web`: - - - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: ServiceDefaults -metadata: - name: web -spec: - protocol: http -``` - - - -For `ServiceIntentions`, because we need to support the ability to create -wildcard intentions (e.g. `foo => * (allow)` meaning that `foo` can talk to any service), -and because `*` is not a valid Kubernetes resource name, we instead use the field `spec.destination.name` -to configure the destination service for the intention: - - - -```yaml -# foo => * (allow) -apiVersion: consul.hashicorp.com/v1alpha1 -kind: ServiceIntentions -metadata: - name: name-does-not-matter -spec: - destination: - name: '*' - sources: - - name: foo - action: allow ---- -# foo => web (allow) -apiVersion: consul.hashicorp.com/v1alpha1 -kind: ServiceIntentions -metadata: - name: name-does-not-matter -spec: - destination: - name: web - sources: - - name: foo - action: allow -``` - - - -In addition, we support the field `spec.destination.namespace` to configure -the destination service's Consul namespace. If `spec.destination.namespace` -is empty, then the Consul namespace used is the same as the other -config entries as outlined above. diff --git a/website/content/docs/k8s/crds/upgrade-to-crds.mdx b/website/content/docs/k8s/crds/upgrade-to-crds.mdx deleted file mode 100644 index 0c09102a33e3..000000000000 --- a/website/content/docs/k8s/crds/upgrade-to-crds.mdx +++ /dev/null @@ -1,342 +0,0 @@ ---- -layout: docs -page_title: Upgrade Existing Clusters to Use Custom Resource Definitions -description: >- - Kubernetes clusters configured with a Consul Helm chart version older than 0.30.0 require updates in order to use CRDs. Learn about upgrading to a supported Helm version and how to migrate a Consul config entry to a k8s CRD. 
---- - -# Upgrade Existing Clusters to Use Custom Resource Definitions - -Upgrading to consul-helm versions >= `0.30.0` will require some changes if -you utilize the following: - -- [`connectInject.centralConfig.enabled`](#central-config-enabled) -- [`connectInject.centralConfig.defaultProtocol`](#default-protocol) -- [`connectInject.centralConfig.proxyDefaults`](#proxy-defaults) -- [`meshGateway.globalMode`](#mesh-gateway-mode) -- [connect annotation `consul.hashicorp.com/connect-service-protocol`](#connect-service-protocol-annotation) - -## Central Config Enabled - -If you were previously setting `centralConfig.enabled` to `false`: - -```yaml -connectInject: - centralConfig: - enabled: false -``` - -Then instead you must use `server.extraConfig` and `client.extraConfig`: - -```yaml -client: - extraConfig: | - {"enable_central_service_config": false} -server: - extraConfig: | - {"enable_central_service_config": false} -``` - -If you were previously setting it to `true`, it now defaults to `true` so no -changes are required, but you can remove it from your config if you desire. - -## Default Protocol - -If you were previously setting: - -```yaml -connectInject: - centralConfig: - defaultProtocol: 'http' # or any value -``` - -Now you must use [custom resources](/consul/docs/k8s/crds) to manage the protocol for -new and existing services: - -1. To upgrade, first ensure you're running Consul >= 1.9.0. See [Consul Version Upgrade](/consul/docs/k8s/upgrade#consul-version-upgrade) - for more information on how to upgrade Consul versions. - - This version is required to support custom resources. - -1. Next, modify your Helm values: - 1. Remove the `defaultProtocol` config. This won't affect existing services. -1. Now you can upgrade your Helm chart to the latest version with the new Helm values. -1. From now on, any new service will require a [`ServiceDefaults`](/consul/docs/connect/config-entries/service-defaults) - resource to set its protocol: - - ```yaml - apiVersion: consul.hashicorp.com/v1alpha1 - kind: ServiceDefaults - metadata: - name: my-service-name - spec: - protocol: 'http' - ``` - -1. Existing services will maintain their previously set protocol. If you wish to - change that protocol, you must migrate that service's `service-defaults` config - entry to a `ServiceDefaults` resource. See [Migrating Config Entries](#migrating-config-entries). - --> **Note:** This setting was removed because it didn't support changing the protocol after a service was first run and because it didn't work in secondary datacenters. - -## Proxy Defaults - -If you were previously setting: - -```yaml -connectInject: - centralConfig: - proxyDefaults: | - { - "key": "value" // or any values - } -``` - -You will need to perform the following steps to upgrade: - -1. You must remove the setting from your Helm values. This won't have any - effect on your existing cluster because this config is only read when - the cluster is **first created**. -1. You can then upgrade the Helm chart. -1. If you later wish to _change_ any of the proxy defaults settings, you will need - to follow the [Migrating Config Entries](#migrating-config-entries) - instructions for your `proxy-defaults` config entry. - - This will require Consul >= 1.9.0. - --> **Note:** This setting was removed because it couldn't be changed after initial -installation. - -## Mesh Gateway Mode - -If you were previously setting: - -```yaml -meshGateway: - globalMode: 'local' # or any value -``` - -You will need to perform the following steps to upgrade: - -1. 
You must remove the setting from your Helm values. This won't have any - effect on your existing cluster because this config is only read when - the cluster is **first created**. -1. You can then upgrade the Helm chart. -1. If you later wish to _change_ the mode or any other setting in [`proxy-defaults`](/consul/docs/connect/config-entries/proxy-defaults), you will need - to follow the [Migrating Config Entries](#migrating-config-entries) - instructions to migrate your `proxy-defaults` config entry to a `ProxyDefaults` resource. - - This will require Consul >= 1.9.0. - --> **Note:** This setting was removed because it couldn't be changed after initial -installation. - -## connect-service-protocol Annotation - -If any of your mesh services had the `consul.hashicorp.com/connect-service-protocol` -annotation set, e.g. - -```yaml -apiVersion: apps/v1 -kind: Deployment -... -spec: - template: - metadata: - annotations: - "consul.hashicorp.com/connect-inject": "true" - "consul.hashicorp.com/connect-service-protocol": "http" - ... -``` - -You will need to perform the following steps to upgrade: - -1. Ensure you're running Consul >= 1.9.0. See [Consul Version Upgrade](/consul/docs/k8s/upgrade#consul-version-upgrade) - for more information on how to upgrade Consul versions. - - This version is required to support custom resources. - -1. Next, remove this annotation from existing deployments. This will have no - effect on the deployments because the annotation was only used when the - service was first created. -1. Now you can upgrade your Helm chart to the latest version. -1. From now on, any new service will require a [`ServiceDefaults`](/consul/docs/connect/config-entries/service-defaults) - resource to set its protocol: - - ```yaml - apiVersion: consul.hashicorp.com/v1alpha1 - kind: ServiceDefaults - metadata: - name: my-service-name - spec: - protocol: 'http' - ``` - -1. Existing services will maintain their previously set protocol. If you wish to - change that protocol, you must migrate that service's `service-defaults` config - entry to a `ServiceDefaults` resource. See [Migrating Config Entries](#migrating-config-entries). - --> **Note:** The annotation was removed because it didn't support changing the protocol -and it wasn't supported in secondary datacenters. - -## Migrating Config Entries - -A config entry that already exists in Consul must be migrated into a Kubernetes custom resource in order to -manage it from Kubernetes: - -1. Determine the `kind` and `name` of the config entry. For example, the protocol - would be set by a config entry with `kind: service-defaults` and `name` equal - to the name of the service. - - In another example, a `proxy-defaults` config has `kind: proxy-defaults` and - `name: global`. - -1. Once you've determined the `kind` and `name`, query Consul to get its contents: - - ```shell-session - $ consul config read -kind -name - ``` - - This will require `kubectl exec`'ing into a Consul server or client pod. If - you're using ACLs, you will also need an ACL token passed via the `-token` flag. - - For example: - - ```shell-session - $ kubectl exec consul-server-0 -- consul config read -name foo -kind service-defaults - { - "Kind": "service-defaults", - "Name": "foo", - "Protocol": "http", - "MeshGateway": {}, - "Expose": {}, - "CreateIndex": 60, - "ModifyIndex": 60 - } - ``` - -1. Now we're ready to construct a Kubernetes resource for the config entry. 
- - It will look something like: - - ```yaml - apiVersion: consul.hashicorp.com/v1alpha1 - kind: ServiceDefaults - metadata: - name: foo - annotations: - 'consul.hashicorp.com/migrate-entry': 'true' - spec: - protocol: 'http' - ``` - - 1. The `apiVersion` will always be `consul.hashicorp.com/v1alpha1`. - 1. The `kind` will be the CamelCase version of the Consul kind, e.g. - `proxy-defaults` becomes `ProxyDefaults`. - 1. `metadata.name` will be the `name` of the config entry. - 1. `metadata.annotations` will contain the `"consul.hashicorp.com/migrate-entry": "true"` - annotation. - 1. The namespace should be whatever namespace the service is deployed in. - For `ProxyDefaults`, we recommend the namespace that Consul is deployed in. - 1. The contents of `spec` will be a transformation from JSON keys to YAML - keys. - - The following keys can be ignored: `CreateIndex`, `ModifyIndex` - and any key that has an empty object, e.g. `"Expose": {}`. - - For example: - - ```json - { - "Kind": "service-defaults", - "Name": "foo", - "Protocol": "http", - "MeshGateway": {}, - "Expose": {}, - "CreateIndex": 60, - "ModifyIndex": 60 - } - ``` - - Becomes: - - ```yaml - apiVersion: consul.hashicorp.com/v1alpha1 - kind: ServiceDefaults - metadata: - name: foo - annotations: - 'consul.hashicorp.com/migrate-entry': 'true' - spec: - protocol: 'http' - ``` - - And - - ```json - { - "Kind": "proxy-defaults", - "Name": "global", - "MeshGateway": { - "Mode": "local" - }, - "Config": { - "local_connect_timeout_ms": 1000, - "handshake_timeout_ms": 10000 - }, - "CreateIndex": 60, - "ModifyIndex": 60 - } - ``` - - Becomes: - - ```yaml - apiVersion: consul.hashicorp.com/v1alpha1 - kind: ProxyDefaults - metadata: - name: global - annotations: - 'consul.hashicorp.com/migrate-entry': 'true' - spec: - meshGateway: - mode: local - config: - # Note that anything under config for ProxyDefaults will use the exact - # same keys. - local_connect_timeout_ms: 1000 - handshake_timeout_ms: 10000 - ``` - -1. Run `kubectl apply` to apply the Kubernetes resource. -1. Next, check that it synced successfully: - - ```shell-session - $ kubectl get servicedefaults foo - NAME SYNCED AGE - foo True 1s - ``` - -1. If its `SYNCED` status is `True` then the migration for this config entry - was successful. -1. If its `SYNCED` status is `False`, use `kubectl describe` to view - the reason syncing failed: - - ```shell-session - $ kubectl describe servicedefaults foo - ... - Status: - Conditions: - Last Transition Time: 2021-01-12T21:03:29Z - Message: migration failed: Kubernetes resource does not match existing Consul config entry: consul={...}, kube={...} - Reason: MigrationFailedError - Status: False - Type: Synced - ``` - - The most likely reason is that the contents of the Kubernetes resource - don't match the Consul resource. Make changes to the Kubernetes resource - to match the Consul resource (ignoring the `CreateIndex`, `ModifyIndex` and `Meta` keys). - -1. Once the `SYNCED` status is true, you can make changes to the resource and they - will get synced to Consul. 
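For example, continuing with the `foo` service from the steps above, you could change the protocol in the resource, re-apply it, and confirm that the change reached Consul. The following is only a sketch: the manifest file name, the new protocol value, and the index numbers are illustrative, and it assumes the same `consul-server-0` pod used earlier.

```shell-session
$ kubectl apply --filename servicedefaults-foo.yaml
servicedefaults.consul.hashicorp.com/foo configured

$ kubectl exec consul-server-0 -- consul config read -name foo -kind service-defaults
{
    "Kind": "service-defaults",
    "Name": "foo",
    "Protocol": "grpc",
    "MeshGateway": {},
    "Expose": {},
    "CreateIndex": 60,
    "ModifyIndex": 74
}
```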
diff --git a/website/content/docs/k8s/deployment-configurations/clients-outside-kubernetes.mdx b/website/content/docs/k8s/deployment-configurations/clients-outside-kubernetes.mdx deleted file mode 100644 index cb57f1652698..000000000000 --- a/website/content/docs/k8s/deployment-configurations/clients-outside-kubernetes.mdx +++ /dev/null @@ -1,151 +0,0 @@ ---- -layout: docs -page_title: Join External Services to Consul on Kubernetes -description: >- - Services running on a virtual machine (VM) can join a Consul datacenter running on Kubernetes. Learn how to configure the Kubernetes installation to accept communication from external services. ---- - -# Join External Services to Consul on Kubernetes - -Services running on non-Kubernetes nodes can join a Consul cluster running within Kubernetes. - -## Auto-join - -The recommended way to join a cluster running within Kubernetes is to -use the ["k8s" cloud auto-join provider](/consul/docs/install/cloud-auto-join#kubernetes-k8s). - -The auto-join provider dynamically discovers IP addresses to join using -the Kubernetes API. It authenticates with Kubernetes using a standard -`kubeconfig` file. Auto-join works with all major hosted Kubernetes offerings -as well as self-hosted installations. The token in the `kubeconfig` file -needs to have permissions to list pods in the namespace where Consul servers -are deployed. - -The auto-join string below joins a Consul server agent to a cluster using the [official Helm chart](/consul/docs/k8s/helm): - -```shell-session -$ consul agent -retry-join 'provider=k8s label_selector="app=consul,component=server"' -``` - --> **Note:** This auto-join command only connects on the default gossip port -8301, whether you are joining on the pod network or via host ports. A -Consul server that is already a member of the datacenter should be -listening on this port for the external service to connect through -auto-join. - -### Auto-join on the Pod network - -In the default Consul Helm chart installation, Consul servers are -routable through their pod IPs for server RPCs. As a result, any -external agents joining the Consul cluster running on Kubernetes -need to be able to connect to those pod IPs. - -In many hosted Kubernetes environments, you need to explicitly configure -your hosting provider to ensure that pod IPs are routable from external VMs. -For more information, refer to [Azure AKS CNI](https://docs.microsoft.com/en-us/azure/aks/concepts-network#azure-cni-advanced-networking), -[AWS EKS CNI](https://docs.aws.amazon.com/eks/latest/userguide/pod-networking.html) and -[GKE VPC-native clusters](https://cloud.google.com/kubernetes-engine/docs/concepts/alias-ips). - -To join external agents with Consul on Kubernetes deployments installed with default values through the [official Helm chart](/consul/docs/k8s/helm): - - 1. Make sure the pod IPs of the servers in Kubernetes are - routable from the VM and that the VM can access port 8301 (for gossip) and - port 8300 (for server RPC) on those pod IPs. - - 1. Make sure that the server pods running in Kubernetes can route - to the VM's advertise IP on its gossip port (default 8301). - - 1. Make sure you have the `kubeconfig` file for the Kubernetes cluster in `$HOME/.kube/config` on the external VM. - - 1. 
On the external VM, run: - - ```shell-session - consul agent \ - -advertise="$ADVERTISE_IP" \ - -retry-join='provider=k8s label_selector="app=consul,component=server"' \ - -bind=0.0.0.0 \ - -hcl='leave_on_terminate = true' \ - -hcl='ports { grpc = 8502 }' \ - -config-dir=$CONFIG_DIR \ - -datacenter=$DATACENTER \ - -data-dir=$DATA_DIR \ - ``` - - 1. Run `consul members` to check if the join was successful. - - ```shell-session - / $ consul members - Node Address Status Type Build Protocol DC Segment - consul-consul-server-0 10.138.0.43:9301 alive server 1.9.1 2 dc1 - external-agent 10.138.0.38:8301 alive client 1.9.0 2 dc1 - gke-external-agent-default-pool-32d15192-grs4 10.138.0.43:8301 alive client 1.9.1 2 dc1 - gke-external-agent-default-pool-32d15192-otge 10.138.0.44:8301 alive client 1.9.1 2 dc1 - gke-external-agent-default-pool-32d15192-vo7k 10.138.0.42:8301 alive client 1.9.1 2 dc1 - ``` - -### Auto-join through host ports - -If your external VMs cannot connect to Kubernetes pod IPs but they can connect -to the internal host IPs of the nodes in the Kubernetes cluster, you can join the two by exposing ports on the host IP instead. - - 1. Install the [official Helm chart](/consul/docs/k8s/helm) with the following values: - ```yaml - client: - exposeGossipPorts: true # exposes client gossip ports as hostPorts - server: - exposeGossipAndRPCPorts: true # exposes the server gossip and RPC ports as hostPorts - ports: - # Configures the server gossip port - serflan: - # Note that this needs to be different than 8301, to avoid conflicting with the client gossip hostPort - port: 9301 - ``` - This installation exposes the client gossip ports, the server gossip ports and the server RPC port at `hostIP:hostPort`. Note that `hostIP` is the **internal** IP of the VM that the client/server pods are deployed on. - - 1. Make sure the IPs of the Kubernetes nodes are routable from the VM and - that the VM can access ports 8301 and 9301 (for gossip) and port 8300 (for - server RPC) on those node IPs. - - 1. Make sure the server pods running in Kubernetes can route to - the VM's advertise IP on its gossip port (default 8301). - - 1. Make sure you have the `kubeconfig` file for the Kubernetes cluster in `$HOME/.kube/config` on the external VM. - - 1. On the external VM, run: - - ```shell-session - consul agent \ - -advertise="$ADVERTISE_IP" \ - -retry-join='provider=k8s host_network=true label_selector="app=consul,component=server"' - -bind=0.0.0.0 \ - -hcl='leave_on_terminate = true' \ - -hcl='ports { grpc = 8502 }' \ - -config-dir=$CONFIG_DIR \ - -datacenter=$DATACENTER \ - -data-dir=$DATA_DIR \ - ``` - - Note the addition of `host_network=true` in the retry-join argument. - - 1. Run `consul members` to check if the join was successful. - - ```shell-session - / $ consul members - Node Address Status Type Build Protocol DC Segment - consul-consul-server-0 10.138.0.43:9301 alive server 1.9.1 2 dc1 - external-agent 10.138.0.38:8301 alive client 1.9.0 2 dc1 - gke-external-agent-default-pool-32d15192-grs4 10.138.0.43:8301 alive client 1.9.1 2 dc1 - gke-external-agent-default-pool-32d15192-otge 10.138.0.44:8301 alive client 1.9.1 2 dc1 - gke-external-agent-default-pool-32d15192-vo7k 10.138.0.42:8301 alive client 1.9.1 2 dc1 - ``` - -## Manual join - -If you are unable to use auto-join, try following the instructions in -either of the auto-join sections, but instead of using a `provider` key in the -`-retry-join` flag, pass the address of at least one Consul server. 
Example: `-retry-join=$CONSUL_SERVER_IP:$SERVER_SERFLAN_PORT`. - -A `kubeconfig` file is not required when using manual join. - -Instead of hardcoding an IP address, we recommend you set up a DNS entry -that resolves to the pod IPs or host IPs that the Consul server pods are running on. \ No newline at end of file diff --git a/website/content/docs/k8s/deployment-configurations/consul-enterprise.mdx b/website/content/docs/k8s/deployment-configurations/consul-enterprise.mdx deleted file mode 100644 index 29314c943541..000000000000 --- a/website/content/docs/k8s/deployment-configurations/consul-enterprise.mdx +++ /dev/null @@ -1,138 +0,0 @@ ---- -layout: docs -page_title: Deploy Consul Enterprise on Kubernetes -description: >- - Consul Enterprise features are available when running Consul on Kubernetes. Learn how to apply your license in the Helm chart and return the license information with the `consul license get` command. ---- - -# Deploy Consul Enterprise on Kubernetes - -You can use this Helm chart to deploy Consul Enterprise by following a few extra steps. - -Find the license file that you received in your welcome email. It should have a `.hclic` extension. You will use the contents of this file to create a Kubernetes secret before installing the Helm chart. - --> **Note:** This guide assumes you are storing your license as a Kubernetes Secret. If you would like to store the enterprise license in Vault, please reference [Storing the Enterprise License in Vault](/consul/docs/k8s/deployment-configurations/vault/data-integration/enterprise-license). - -You can use the following commands to create the secret with name `consul-ent-license` and key `key`: - -```bash -secret=$(cat 1931d1f4-bdfd-6881-f3f5-19349374841f.hclic) -kubectl create secret generic consul-ent-license --from-literal="key=${secret}" -``` - --> **Note:** If you cannot find your `.hclic` file, please contact your sales team or Technical Account Manager. - -In your `values.yaml`, change the value of `global.image` to one of the enterprise [release tags](https://hub.docker.com/r/hashicorp/consul-enterprise/tags). - - - -```yaml -global: - image: 'hashicorp/consul-enterprise:1.10.0-ent' -``` - - - -Add the name and key of the secret you just created to `server.enterpriseLicense`, if using Consul version 1.10+. - - - -```yaml -global: - image: 'hashicorp/consul-enterprise:1.10.0-ent' - enterpriseLicense: - secretName: 'consul-ent-license' - secretKey: 'key' -``` - - - -If the version of Consul is < 1.10, use the following config with the name and key of the secret you just created. -(These values are required on top of your normal configuration.) - --> **Note:** The value of `server.enterpriseLicense.enableLicenseAutoload` must be set to `false`. - - - -```yaml -global: - image: 'hashicorp/consul-enterprise:1.8.3-ent' - enterpriseLicense: - secretName: 'consul-ent-license' - secretKey: 'key' - enableLicenseAutoload: false -``` - - - -Now run `helm install`: - -```shell-session -$ helm install --wait hashicorp hashicorp/consul --values values.yaml -``` - -Once the cluster is up, you can verify the nodes are running Consul Enterprise by -using the `consul license get` command. 
- -First, forward your local port 8500 to the Consul servers so you can run `consul` -commands locally against the Consul servers in Kubernetes: - -```shell-session -$ kubectl port-forward service/hashicorp-consul-server 8500:8500 -``` - -In a separate tab, run the `consul license get` command (if using ACLs see below): - -```shell-session -$ consul license get -License is valid -License ID: 1931d1f4-bdfd-6881-f3f5-19349374841f -Customer ID: b2025a4a-8fdd-f268-95ce-1704723b9996 -Expires At: 2020-03-09 03:59:59.999 +0000 UTC -Datacenter: * -Package: premium -Licensed Features: - Automated Backups - Automated Upgrades - Enhanced Read Scalability - Network Segments - Redundancy Zone - Advanced Network Federation -$ consul members -Node Address Status Type Build Protocol DC Segment -hashicorp-consul-server-0 10.60.0.187:8301 alive server 1.10.0+ent 2 dc1 -hashicorp-consul-server-1 10.60.1.229:8301 alive server 1.10.0+ent 2 dc1 -hashicorp-consul-server-2 10.60.2.197:8301 alive server 1.10.0+ent 2 dc1 -``` - -If you get an error: - -```bash -Error getting license: invalid character 'r' looking for beginning of value -``` - -Then you have likely enabled ACLs. You need to specify your ACL token when -running the `license get` command. First, assign the ACL token to the `CONSUL_HTTP_TOKEN` environment variable: - -```shell-session -$ export CONSUL_HTTP_TOKEN=$(kubectl get secrets/hashicorp-consul-bootstrap-acl-token --template='{{.data.token | base64decode }}') -``` - -Now the token will be used when running Consul commands: - -```shell-session -$ consul license get -License is valid -License ID: 1931d1f4-bdfd-6881-f3f5-19349374841f -Customer ID: b2025a4a-8fdd-f268-95ce-1704723b9996 -Expires At: 2020-03-09 03:59:59.999 +0000 UTC -Datacenter: * -Package: premium -Licensed Features: - Automated Backups - Automated Upgrades - Enhanced Read Scalability - Network Segments - Redundancy Zone - Advanced Network Federation -``` diff --git a/website/content/docs/k8s/deployment-configurations/external-service.mdx b/website/content/docs/k8s/deployment-configurations/external-service.mdx deleted file mode 100644 index 711f01640855..000000000000 --- a/website/content/docs/k8s/deployment-configurations/external-service.mdx +++ /dev/null @@ -1,38 +0,0 @@ ---- -layout: docs -page_title: Register services running on external nodes to Consul on Kubernetes -description: >- - Learn how to register a service running on an external node to the Consul catalog when running Consul on Kubernetes. ---- - -# Register services running on external nodes to Consul on Kubernetes - -This page provides an overview for registering services in the Consul catalog when the service runs on a node that is not part of the Kubernetes cluster. - -## Introduction - -Because Kubernetes has built-in service discovery capabilities, Consul on Kubernetes includes components such as [Consul dataplanes](/consul/docs/connect/dataplane) and [service sync](/consul/docs/k8s/service-sync) so that operators can continue to use Kubernetes tools and processes. However, this approach to service networking still requires a service to run on a node that Consul is aware of, either through a local Consul client agent or the Kubernetes cluster. We call services that run on external nodes that Consul cannot automatically recognize _external services_. - -Previously, the only way to register an external service when running Consul on Kubernetes was using Consul's HTTP API. This approach requires additional ACLs and direct access to Consul's HTTP API endpoint. 
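For reference, registering an external node and service through the HTTP API looks roughly like the following. This is a minimal sketch: the node name, address, service definition, and payload file name are placeholders, and with ACLs enabled the request would also need to supply a token authorized to write to the catalog.

```shell-session
$ cat external-node.json
{
  "Node": "external-node-1",
  "Address": "10.20.0.12",
  "Service": {
    "ID": "legacy-db-1",
    "Service": "legacy-db",
    "Port": 5432
  }
}

$ curl --request PUT --data @external-node.json http://localhost:8500/v1/catalog/register
true
```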
Consul now supports registering external services and their associated health checks in the Consul catalog using a [`Registration` custom resource definition (CRD)](/consul/docs/connect/config-entries/registration) that follows the format of [Consul's service definitions](/consul/docs/services/configuration/services-configuration-reference). - -## Workflows - -The process to register an external service in Consul on Kubernetes consists of the following steps: - -1. [Start Consul ESM](/consul/tutorials/connect-services/service-registration-external-services#monitor-the-external-service-with-consul-esm). You must use Consul ESM to run health checks on external services. -1. Define the external service and its health checks in a [`Registration` CRD](/consul/docs/connect/config-entries/registration). -1. Apply the CRD to your Kubernetes cluster. Internally, this action triggers an API call to Consul's [`/catalog/register` endpoint](/consul/api-docs/catalog#register-entity) to register the service. -1. When using Consul's service mesh, you should also: - - Deploy a [terminating gateway](/consul/docs/k8s/connect/terminating-gateways) so that downstream services can communicate with the external service. - - Define [service intentions](/consul/docs/connect/intentions) for the external service and the downstream services that communicate with it. - -## Guidance - -The following resources are available to help you register external services to Consul on Kubernetes. - -### Reference - -- [`Registration` custom resource definition (CRD) configuration reference](/consul/docs/connect/config-entries/registration) -- [`/catalog/register` HTTP API endpoint reference](/consul/api-docs/catalog#register-entity) -- [Service configuration reference](/consul/docs/services/configuration/services-configuration-reference) -- [Health check configuration reference](/consul/docs/services/configuration/checks-configuration-reference) \ No newline at end of file diff --git a/website/content/docs/k8s/deployment-configurations/multi-cluster/index.mdx b/website/content/docs/k8s/deployment-configurations/multi-cluster/index.mdx deleted file mode 100644 index dbffe877c986..000000000000 --- a/website/content/docs/k8s/deployment-configurations/multi-cluster/index.mdx +++ /dev/null @@ -1,79 +0,0 @@ ---- -layout: docs -page_title: WAN Federation Through Mesh Gateways - Overview -description: >- - Federating Consul datacenters through mesh gateways enables agents to engage in WAN communication across runtimes and cloud providers. Learn about multi-cluster federation and its network requirements for Consul on Kubernetes. ---- - -# WAN Federation Through Mesh Gateways Overview - -In Consul, federation is the act of joining two or more Consul datacenters. -When datacenters are joined, Consul servers in each datacenter can communicate -with one another. This enables the following features: - -- Services on all clusters can make calls to each other through Consul Service Mesh. -- [Intentions](/consul/docs/connect/intentions) can be used to enforce rules about which services can communicate across all clusters. -- [L7 Routing Rules](/consul/docs/connect/manage-traffic) can enable multi-cluster failover - and traffic splitting. -- The Consul UI has a drop-down menu that lets you navigate between datacenters. - -## Traditional WAN Federation vs. WAN Federation Via Mesh Gateways - -Consul provides two mechanisms for WAN (Wide Area Network) federation: - -1. Traditional WAN Federation -1. 
WAN Federation Via Mesh Gateways (newly available in Consul 1.8.0) - -### Traditional WAN Federation - -With traditional WAN federation, all Consul servers must be exposed on the wide area -network. In the Kubernetes context this is often difficult to set up. It would require that -each Consul server pod is running on a Kubernetes node with an IP address that is routable from -all other Kubernetes clusters. Often Kubernetes clusters are deployed into private -subnets that other clusters cannot route to without additional network devices and configuration. - -The Kubernetes solution to the problem of exposing pods is load balancer services but these can't be used -with traditional WAN federation because it requires proxying both UDP and TCP and Kubernetes load balancers only proxy TCP. -In addition, each Consul server would need its own load balancer because each -server needs a unique address. This would increase cost and complexity. - -![Traditional WAN Federation](/img/traditional-wan-federation.png 'Traditional WAN Federation') - -### WAN Federation Via Mesh Gateways - -To solve the problems that occurred with traditional WAN federation, -Consul 1.8.0 now supports WAN federation **via mesh gateways**. This mechanism -only requires that mesh gateways are exposed with routable addresses, not Consul servers. We can front -the mesh gateway pods with a single Kubernetes service and all traffic flows between -datacenters through the mesh gateways. - -![WAN Federation Via Mesh Gateway](/img/mesh-gateway-wan-federation.png 'WAN Federation Via Mesh Gateway') - -## Network Requirements - -Clusters/datacenters can be federated even if they have overlapping pod IP spaces or if they're -on different cloud providers or platforms. Kubernetes clusters can even be -federated with Consul datacenters running on virtual machines (and vice versa). -Because the communication between clusters is end-to-end encrypted, mesh gateways -can even be exposed on the public internet. - -There are three networking requirements: -1. When Consul servers in secondary datacenters first start up, they must be able to make calls directly to the - primary datacenter's mesh gateways. -1. Once the Consul servers in secondary datacenters have made that initial call to the primary datacenter's mesh - gateways, the mesh gateways in the secondary datacenter will be able to start. From this point onwards, all - communication between servers will flow first to the local mesh gateways, and then to the remote mesh gateways. - This means all mesh gateways across datacenters must be able to route to one another. - - For example, if using a load balancer service in front of each cluster's mesh gateway pods, the load balancer IP - must be routable from the other mesh gateway pods. - If using a public load balancer, this is guaranteed. If using a private load balancer - then you'll need to make sure that its IP/DNS address is routable from your other clusters. -1. If ACLs are enabled, primary clusters must be able to make requests to the Kubernetes API URLs of secondary clusters. - -## Next Steps - -Now that you have an overview of federation, proceed to either the -[Federation Between Kubernetes Clusters](/consul/docs/k8s/deployment-configurations/multi-cluster/kubernetes) -or [Federation Between VMs and Kubernetes](/consul/docs/k8s/deployment-configurations/multi-cluster/vms-and-kubernetes) -pages depending on your use case. 
diff --git a/website/content/docs/k8s/deployment-configurations/multi-cluster/kubernetes.mdx b/website/content/docs/k8s/deployment-configurations/multi-cluster/kubernetes.mdx deleted file mode 100644 index 406ca258ec60..000000000000 --- a/website/content/docs/k8s/deployment-configurations/multi-cluster/kubernetes.mdx +++ /dev/null @@ -1,471 +0,0 @@ ---- -layout: docs -page_title: WAN Federation Through Mesh Gateways - Multiple Kubernetes Clusters -description: >- - WAN federation through mesh gateways enables federating multiple Kubernetes clusters in Consul. Learn how to configure primary and secondary datacenters, export a federation secret, get the k8s API URL, and verify federation. ---- - -# WAN Federation Between Multiple Kubernetes Clusters Through Mesh Gateways - --> **1.8.0+:** This feature is available in Consul versions 1.8.0 and higher - -~> This topic requires familiarity with [Mesh Gateways](/consul/docs/connect/gateways/mesh-gateway/service-to-service-traffic-wan-datacenters) and [WAN Federation Via Mesh Gateways](/consul/docs/connect/gateways/mesh-gateway/wan-federation-via-mesh-gateways). - --> Looking for a step-by-step guide? Complete the [Secure and Route Service Mesh Communication Across Kubernetes](/consul/tutorials/kubernetes/kubernetes-mesh-gateways?utm_source=docs) tutorial to learn more. - -This page describes how to federate multiple Kubernetes clusters. Refer to [Multi-Cluster Overview](/consul/docs/k8s/deployment-configurations/multi-cluster) -for more information, including [networking requirements](/consul/docs/k8s/deployment-configurations/multi-cluster#network-requirements). - -## Primary Datacenter - -Consul treats each Kubernetes cluster as a separate Consul datacenter. -In order to federate clusters, one cluster must be designated the -primary datacenter. This datacenter will be -responsible for creating the certificate authority that signs the TLS certificates -that Consul service mesh uses to encrypt and authorize traffic. It also handles validating global ACL tokens. All other clusters -that are federated are considered secondaries. - -#### First Time Installation - -If you haven't installed Consul on your cluster, continue reading below. If you've -already installed Consul on a cluster and want to upgrade it to -support federation, see [Upgrading An Existing Cluster](#upgrading-an-existing-cluster). - -You will need to use the following `values.yaml` file for your primary cluster, -with the possible modifications listed below. - - - -```yaml -global: - name: consul - datacenter: dc1 - - # TLS configures whether Consul components use TLS. - tls: - # TLS must be enabled for federation in Kubernetes. - enabled: true - - federation: - enabled: true - # This will cause a Kubernetes secret to be created that - # can be imported by secondary datacenters to configure them - # for federation. - createFederationSecret: true - - acls: - manageSystemACLs: true - # If ACLs are enabled, we must create a token for secondary - # datacenters to replicate ACLs. - createReplicationToken: true - - # Gossip encryption secures the protocol Consul uses to quickly - # discover new nodes and detect failure. - gossipEncryption: - autoGenerate: true - -connectInject: - # Consul Connect service mesh must be enabled for federation. - enabled: true - -meshGateway: - # Mesh gateways are gateways between datacenters. They must be enabled - # for federation in Kubernetes since the communication between datacenters - # goes through the mesh gateways. 
- enabled: true -``` - - - -Modifications: - -1. The Consul datacenter name is `dc1`. The datacenter name in each federated - cluster **must be unique**. -1. ACLs are enabled in the template configuration. When ACLs are enabled, primary clusters must be able to make requests to the Kubernetes API URLs of secondary clusters. To disable ACLs for testing purposes, change the following settings: - - ```yaml - global: - acls: - manageSystemACLs: false - createReplicationToken: false - ``` - - ACLs secure Consul by requiring every API call to present an ACL token that - is validated to ensure it has the proper permissions. -1. Gossip encryption is enabled in the above config file. To disable it, comment - out or delete the `gossipEncryption` key: - - ```yaml - global: - # gossipEncryption: - # autoGenerate: true - ``` - - Gossip encryption encrypts the communication layer used to discover other - nodes in the cluster and report on failure. If you are only testing Consul, - this is not required. - -1. The default mesh gateway configuration - creates a Kubernetes Load Balancer service. If you wish to customize the - mesh gateway, for example using a Node Port service or a custom DNS entry, - see the [Helm reference](/consul/docs/k8s/helm#v-meshgateway) for that setting. - -With your `values.yaml` ready to go, follow our [Installation Guide](/consul/docs/k8s/installation/install) -to install Consul on your primary cluster. - --> **NOTE:** You must be using consul-helm 0.21.0+. To update, run `helm repo update`. - -#### Upgrading An Existing Cluster - -If you have an existing cluster, you will need to upgrade it to ensure it has -the following config: - - - -```yaml -global: - tls: - enabled: true - federation: - enabled: true - createFederationSecret: true - acls: - manageSystemACLs: true - createReplicationToken: true -meshGateway: - enabled: true -``` - - - -1. `global.tls.enabled` must be `true`. See [Configuring TLS on an Existing Cluster](/consul/docs/k8s/operations/tls-on-existing-cluster) - for more information on safely upgrading a cluster to use TLS. - -If you've set `enableAutoEncrypt: true`, this is also supported. - -1. `global.federation.enabled` must be set to `true`. This is a new config setting. -1. If using ACLs, you'll already have `global.acls.manageSystemACLs: true`. For the - primary cluster, you'll also need to set `global.acls.createReplicationToken: true`. - This ensures that an ACL token is created that secondary clusters can use to authenticate - with the primary. -1. Mesh Gateways are enabled with the default configuration. The default configuration - creates a Kubernetes Load Balancer service. If you wish to customize the - mesh gateway, see the [Helm reference](/consul/docs/k8s/helm#v-meshgateway) for that setting. - -With the above settings added to your existing config, follow the [Upgrading](/consul/docs/k8s/upgrade) -guide to upgrade your cluster and then come back to the [Federation Secret](#federation-secret) section. - --> **NOTE:** You must be using consul-helm 0.21.0+. - -#### ProxyDefaults - -If you are using consul-helm 0.30.0+ you must also create a [`ProxyDefaults`](/consul/docs/connect/config-entries/proxy-defaults) -resource to configure Consul to use the mesh gateways for service mesh traffic. - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: ProxyDefaults -metadata: - name: global -spec: - meshGateway: - mode: 'local' -``` - -The `spec.meshGateway.mode` can be set to `local` or `remote`. 
If set to `local`, -traffic from one datacenter to another will egress through the local mesh gateway. -This may be useful if you prefer all your cross-cluster network traffic to egress -from the same locations. -If set to `remote`, traffic will be routed directly from the pod to the remote mesh gateway -(resulting in one less hop). - -Verify that the resource was synced to Consul: - -```shell-session -$ kubectl get proxydefaults global -NAME SYNCED AGE -global True 1s -``` - -Its `SYNCED` status should be `True`. - --> **NOTE:** The `ProxyDefaults` resource can be created in any namespace, but -we recommend creating it in the same namespace that Consul is installed in. -Its name must be `global`. - -## Federation Secret - -The federation secret is a Kubernetes secret containing information needed -for secondary datacenters/clusters to federate with the primary. This secret is created -automatically by setting: - - - -```yaml -global: - federation: - createFederationSecret: true -``` - - - -After the installation into your primary cluster you will need to export -this secret: - -```shell-session -$ kubectl get secret consul-federation --namespace consul --output yaml > consul-federation-secret.yaml -``` - -!> **Security note:** The federation secret makes it possible to gain -full admin privileges in Consul. This secret must be kept securely, i.e. -it should be deleted from your filesystem after importing it to your secondary -cluster and you should use RBAC permissions to ensure only administrators -can read it from Kubernetes. - -~> **Secret doesn't exist?** If you haven't set `global.name` to `consul` then the name of the secret will -be your Helm release name suffixed with `-consul-federation` e.g. `helm-release-consul-federation`. Also ensure you're -using the namespace Consul was installed into. - -Now you're ready to import the secret into your secondary cluster(s). - -Switch `kubectl` context to your secondary Kubernetes cluster. In this example -our context for our secondary cluster is `dc2`: - -```shell-session -$ kubectl config use-context dc2 -Switched to context "dc2". -``` - -And import the secret: - -```shell-session -$ kubectl apply --filename consul-federation-secret.yaml -secret/consul-federation configured -``` - -#### Federation Secret Contents - -The automatically generated federation secret contains: - -- **Server certificate authority certificate** - This is the certificate authority - used to sign Consul server-to-server communication. This is required by secondary - clusters because they must communicate with the Consul servers in the primary cluster. -- **Server certificate authority key** - This is the signing key for the server certificate - authority. This is required by secondary clusters because they need to create - server certificates for each Consul server using the same certificate authority - as the primary. - - !> **Security note:** The certificate authority key would enable an attacker to compromise Consul, - it should be kept securely. - -- **Consul server config** - This is a JSON snippet that must be used as part of the server config for secondary datacenters. - It sets: - - - [`primary_datacenter`](/consul/docs/agent/config/config-files#primary_datacenter) to the name of the primary datacenter. - - [`primary_gateways`](/consul/docs/agent/config/config-files#primary_gateways) to an array of IPs or hostnames - for the mesh gateways in the primary datacenter. 
These are the addresses that - Consul servers in secondary clusters will use to communicate with the primary - datacenter. - - Even if there are multiple secondary datacenters, only the primary gateways - need to be configured. Upon first connection with a primary datacenter, the - addresses for other secondary datacenters will be discovered. - -- **ACL replication token** - If ACLs are enabled, secondary datacenters need - an ACL token in order to authenticate with the primary datacenter. This ACL - token is also used to replicate ACLs from the primary datacenter so that - components in each datacenter can authenticate with one another. -- **Gossip encryption key** - If gossip encryption is enabled, secondary datacenters - need the gossip encryption key in order to be part of the gossip pool. - Gossip is the method by which Consul discovers the addresses and health of other - nodes. - - !> **Security note:** This gossip encryption key would enable an attacker to compromise Consul, - it should be kept securely. - -## Kubernetes API URL - -If ACLs are enabled, you must next determine the Kubernetes API URL for each secondary cluster. The API URL of the secondary cluster must be specified in the config files for each secondary cluster because they need -to create global Consul ACL tokens (tokens that are valid in all datacenters) and these tokens can only be created -by the primary datacenter. By setting the API URL, the secondary cluster will configure a [Consul auth method](/consul/docs/security/acl/auth-methods) -in the primary cluster so that components in the secondary cluster can use their Kubernetes ServiceAccount tokens -to retrieve global Consul ACL tokens from the primary. - -To determine the Kubernetes API URL, first get the cluster name in your kubeconfig for your secondary: - -```shell-session -$ export CLUSTER=$(kubectl config view -o jsonpath="{.contexts[?(@.name == \"$(kubectl config current-context)\")].context.cluster}") -``` - -Then get the API URL: - -```shell-session -$ kubectl config view -o jsonpath="{.clusters[?(@.name == \"$CLUSTER\")].cluster.server}" -https:// -``` - -Keep track of this URL, you'll need it in the next section. - -## Secondary Cluster(s) - -With the primary cluster up and running, and the [federation secret](#federation-secret) imported -into the secondary cluster, we can now install Consul into the secondary -cluster. - -You will need to use the following `values.yaml` file for your secondary cluster(s), -with the modifications listed below. - --> **NOTE: ** You must use a separate Helm config file for each cluster (primary and secondaries) since their -settings are different. - - - -```yaml -global: - name: consul - datacenter: dc2 - tls: - enabled: true - - # Here we're using the shared certificate authority from the primary - # datacenter that was exported via the federation secret. - caCert: - secretName: consul-federation - secretKey: caCert - caKey: - secretName: consul-federation - secretKey: caKey - - acls: - manageSystemACLs: true - - # Here we're importing the replication token that was - # exported from the primary via the federation secret. 
- replicationToken: - secretName: consul-federation - secretKey: replicationToken - - federation: - enabled: true - k8sAuthMethodHost: - primaryDatacenter: dc1 - gossipEncryption: - secretName: consul-federation - secretKey: gossipEncryptionKey -connectInject: - enabled: true -meshGateway: - enabled: true -server: - # Here we're including the server config exported from the primary - # via the federation secret. This config includes the addresses of - # the primary datacenter's mesh gateways so Consul can begin federation. - extraVolumes: - - type: secret - name: consul-federation - items: - - key: serverConfigJSON - path: config.json - load: true -``` - - - -Modifications: - -1. If ACLs are enabled, change the value of `global.federation.k8sAuthMethodHost` to the full URL (including `https://`) of the secondary cluster's Kubernetes API. -1. `global.federation.primaryDatacenter` must be set to the name of the primary datacenter. -1. The Consul datacenter name for the datacenter in this example is `dc2`. The datacenter name in **each** federated cluster **must be unique**. -1. ACLs are enabled in the above config file. They can be disabled by removing - the whole `acls` block: - - ```yaml - acls: - manageSystemACLs: false - replicationToken: - secretName: consul-federation - secretKey: replicationToken - ``` - - If ACLs are enabled in one datacenter, they must be enabled in all datacenters - because in order to communicate with that one datacenter ACL tokens are required. - -1. Gossip encryption is enabled in the above config file. To disable it, don't - set the `gossipEncryption` key: - - ```yaml - global: - # gossipEncryption: - # secretName: consul-federation - # secretKey: gossipEncryptionKey - ``` - - If gossip encryption is enabled in one datacenter, it must be enabled in all datacenters - because in order to communicate with that one datacenter the encryption key is required. - -1. The default mesh gateway configuration - creates a Kubernetes Load Balancer service. If you wish to customize the - mesh gateway, for example using a Node Port service or a custom DNS entry, - see the [Helm reference](/consul/docs/k8s/helm#v-meshgateway) for that setting. - -With your `values.yaml` ready to go, follow our [Installation Guide](/consul/docs/k8s/installation/install) -to install Consul on your secondary cluster(s). - -## Verifying Federation - -To verify that both datacenters are federated, run the -`consul members -wan` command on one of the Consul server pods: - -```shell-session -$ kubectl exec statefulset/consul-server --namespace consul -- consul members -wan -Node Address Status Type Build Protocol DC Segment -consul-server-0.dc1 10.32.4.216:8302 alive server 1.8.0 2 dc1 -consul-server-0.dc2 192.168.2.173:8302 alive server 1.8.0 2 dc2 -consul-server-1.dc1 10.32.5.161:8302 alive server 1.8.0 2 dc1 -consul-server-1.dc2 192.168.88.64:8302 alive server 1.8.0 2 dc2 -consul-server-2.dc1 10.32.1.175:8302 alive server 1.8.0 2 dc1 -consul-server-2.dc2 192.168.35.174:8302 alive server 1.8.0 2 dc2 -``` - -In this example (run from `dc1`), you can see that this datacenter knows about -the servers in dc2 and that they have status `alive`. - -You can also use the `consul catalog services` command with the `-datacenter` flag to ensure -each datacenter can read each other's services. 
In this example, our `kubectl` -context is `dc1` and we're querying for the list of services in `dc2`: - -```shell-session -$ kubectl exec statefulset/consul-server --namespace consul -- consul catalog services -datacenter dc2 -consul -mesh-gateway -``` - -You can switch kubectl contexts and run the same command in `dc2` with the flag -`-datacenter dc1` to ensure `dc2` can communicate with `dc1`. - -### Consul UI - -We can also use the Consul UI to verify federation. -See [Viewing the Consul UI](/consul/docs/k8s/installation/install#viewing-the-consul-ui) -for instructions on how to view the UI. - -~> NOTE: If ACLs are enabled, your kubectl context must be in the primary datacenter -to retrieve the bootstrap token mentioned in the UI documentation. - -With the UI open, you'll be able to switch between datacenters via the dropdown -in the top left: - -![Consul Datacenter Dropdown](/img/data-center-dropdown.png 'Consul Datacenter Dropdown') - -## Next Steps - -With your Kubernetes clusters federated, complete the [Secure and Route Service Mesh Communication Across Kubernetes](/consul/tutorials/kubernetes/kubernetes-mesh-gateways?utm_source=docs#deploy-microservices) tutorial to learn how to use Consul service mesh to -route between services deployed on each cluster. - -You can also read our in-depth documentation on [Consul Service Mesh In Kubernetes](/consul/docs/k8s/connect). - -If you are still considering a move to Kubernetes, or to Consul on Kubernetes specifically, our [Migrate to Microservices with Consul Service Mesh on Kubernetes](/consul/tutorials/microservices?utm_source=docs) -collection uses an example application written by a fictional company to illustrate why and how organizations can -migrate from monolith to microservices using Consul service mesh on Kubernetes. The case study in this collection -should provide information valuable for understanding how to develop services that leverage Consul during any stage -of your microservices journey. diff --git a/website/content/docs/k8s/deployment-configurations/multi-cluster/vms-and-kubernetes.mdx b/website/content/docs/k8s/deployment-configurations/multi-cluster/vms-and-kubernetes.mdx deleted file mode 100644 index 763dcda5a8f3..000000000000 --- a/website/content/docs/k8s/deployment-configurations/multi-cluster/vms-and-kubernetes.mdx +++ /dev/null @@ -1,404 +0,0 @@ ---- -layout: docs -page_title: WAN Federation Through Mesh Gateways - VMs and Kubernetes -description: >- - WAN federation through mesh gateways extends service mesh deployments by enabling Consul on Kubernetes to securely communicate with instances on VMs. Learn how to configure multi-cluster federation with k8s as either the primary or secondary datacenter. ---- - -# WAN Federation Between VMs and Kubernetes Through Mesh Gateways - --> **1.8.0+:** This feature is available in Consul versions 1.8.0 and higher - -~> This topic requires familiarity with [Mesh Gateways](/consul/docs/connect/gateways/mesh-gateway/service-to-service-traffic-wan-datacenters) and [WAN Federation Via Mesh Gateways](/consul/docs/connect/gateways/mesh-gateway/wan-federation-via-mesh-gateways). - -This page describes how to federate Consul clusters separately deployed in VM and Kubernetes runtimes. Refer to [Multi-Cluster Overview](/consul/docs/k8s/deployment-configurations/multi-cluster) -for more information, including [Kubernetes networking requirements](/consul/docs/k8s/deployment-configurations/multi-cluster#network-requirements). 
- -Consul datacenters running on non-kubernetes platforms like VMs or bare metal can -be federated with Kubernetes datacenters. - -## Kubernetes as the Primary - -One Consul datacenter must be the [primary](/consul/docs/k8s/deployment-configurations/multi-cluster/kubernetes#primary-datacenter). If your primary datacenter is running on Kubernetes, use the Helm config from the [Primary Datacenter](/consul/docs/k8s/deployment-configurations/multi-cluster/kubernetes#primary-datacenter) section to install Consul. - -Once installed on Kubernetes, and with the `ProxyDefaults` [resource created](/consul/docs/k8s/deployment-configurations/multi-cluster/kubernetes#proxydefaults), -you'll need to export the following information from the primary Kubernetes cluster: - -- Certificate authority cert and key (in order to create SSL certs for VMs) -- External addresses of Kubernetes mesh gateways -- Replication ACL token -- Gossip encryption key - -The following sections detail how to export this data. - -### Certificates - -1. Retrieve the certificate authority cert: - - ```sh - kubectl get secrets/consul-ca-cert --namespace consul --template='{{index .data "tls.crt" | base64decode }}' > consul-agent-ca.pem - ``` - -1. And the certificate authority signing key: - - ```sh - kubectl get secrets/consul-ca-key --namespace consul --template='{{index .data "tls.key" | base64decode }}' > consul-agent-ca-key.pem - ``` - -1. With the `consul-agent-ca.pem` and `consul-agent-ca-key.pem` files you can - create certificates for your servers and clients running on VMs that share the - same certificate authority as your Kubernetes servers. - - You can use the `consul tls` commands to generate those certificates: - - ```sh - # NOTE: consul-agent-ca.pem and consul-agent-ca-key.pem must be in the current - # directory. - $ consul tls cert create -server -dc=vm-dc -node - ==> WARNING: Server Certificates grants authority to become a - server and access all state in the cluster including root keys - and all ACL tokens. Do not distribute them to production hosts - that are not server nodes. Store them as securely as CA keys. - ==> Using consul-agent-ca.pem and consul-agent-ca-key.pem - ==> Saved vm-dc-server-consul-0.pem - ==> Saved vm-dc-server-consul-0-key.pem - ``` - - -> Note the `-node` option in the above command. This should be same as the node name of the [Consul Agent](/consul/docs/agent#running-an-agent). This is a [requirement](/consul/docs/connect/gateways/mesh-gateway/wan-federation-via-mesh-gateways#tls) for Consul Federation to work. Alternatively, if you plan to use the same certificate and key pair on all your Consul server nodes, or you don't know the nodename in advance, use `-node "*"` instead. - Not satisfying this requirement would result in the following error in the Consul Server logs: - `[ERROR] agent.server.rpc: TLS handshake failed: conn=from= error="remote error: tls: bad certificate"` - - See the help for output of `consul tls cert create -h` to see more options - for generating server certificates. - -1. These certificates can be used in your server config file: - - - - ```hcl - tls { - defaults { - cert_file = "vm-dc-server-consul-0.pem" - key_file = "vm-dc-server-consul-0-key.pem" - ca_file = "consul-agent-ca.pem" - } - } - ``` - - - -1. 
For clients, you can generate TLS certs with: - - ```shell-session - $ consul tls cert create -client - ==> Using consul-agent-ca.pem and consul-agent-ca-key.pem - ==> Saved dc1-client-consul-0.pem - ==> Saved dc1-client-consul-0-key.pem - ``` - - Or use the [auto_encrypt](/consul/docs/agent/config/config-files#auto_encrypt) feature. - -### Mesh Gateway Addresses - -Retrieve the WAN addresses of the mesh gateways: - -```shell-session -$ kubectl exec statefulset/consul-server --namespace consul -- sh -c \ - 'curl --silent --insecure https://localhost:8501/v1/catalog/service/mesh-gateway | jq ".[].ServiceTaggedAddresses.wan"' -{ - "Address": "1.2.3.4", - "Port": 443 -} -{ - "Address": "1.2.3.4", - "Port": 443 -} -``` - -In this example, the addresses are the same because both mesh gateway pods are -fronted by the same Kubernetes load balancer. - -These addresses will be used in the server config for the `primary_gateways` -setting: - -```hcl -primary_gateways = ["1.2.3.4:443"] -``` - -### Replication ACL Token - -If ACLs are enabled, you'll also need the replication ACL token: - -```shell-session -$ kubectl get secrets/consul-acl-replication-acl-token --namespace consul --template='{{.data.token | base64decode}}' -e7924dd1-dc3f-f644-da54-81a73ba0a178 -``` - -This token will be used in the server config for the replication token. - -```hcl -acls { - tokens { - replication = "e7924dd1-dc3f-f644-da54-81a73ba0a178" - } -} -``` - --> **NOTE:** You'll also need to set up additional ACL tokens as needed by the -ACL system. See tutorial [Secure Consul with Access Control Lists (ACLs)](/consul/tutorials/security/access-control-setup-production#apply-individual-tokens-to-agents) -for more information. - -### Gossip Encryption Key - -If gossip encryption is enabled, you'll need the key as well. The command -to retrieve the key will depend on which Kubernetes secret you've stored it in. - -This key will be used in server and client configs for the `encrypt` setting: - -```hcl -encrypt = "uF+GsbI66cuWU21kiXLze5JLEX5j4iDFlDTb0ZWNpDI=" -``` - -### Final Configuration - -A final example server config file might look like: - -```hcl -# From above -tls { - defaults { - cert_file = "vm-dc-server-consul-0.pem" - key_file = "vm-dc-server-consul-0-key.pem" - ca_file = "consul-agent-ca.pem" - } - - internal_rpc { - verify_incoming = true - verify_outgoing = true - verify_server_hostname = true - } -} -primary_gateways = ["1.2.3.4:443"] -acl { - enabled = true - default_policy = "deny" - down_policy = "extend-cache" - tokens { - agent = "e7924dd1-dc3f-f644-da54-81a73ba0a178" - replication = "e7924dd1-dc3f-f644-da54-81a73ba0a178" - } -} -encrypt = "uF+GsbI66cuWU21kiXLze5JLEX5j4iDFlDTb0ZWNpDI=" - -# Other server settings -server = true -datacenter = "vm-dc" -data_dir = "/opt/consul" -enable_central_service_config = true -primary_datacenter = "dc1" -connect { - enabled = true - enable_mesh_gateway_wan_federation = true -} -ports { - https = 8501 - http = -1 - grpc = 8502 -} -``` - -## Kubernetes as the Secondary - -If you're running your primary datacenter on VMs then you'll need to manually -construct the [Federation Secret](/consul/docs/k8s/deployment-configurations/multi-cluster/kubernetes#federation-secret) in order to federate -Kubernetes clusters as secondaries. In addition, primary clusters must be able to make requests to the Kubernetes API URLs of secondary clusters when ACLs are enabled. - --> Your VM cluster must be running mesh gateways, and have mesh gateway WAN -federation enabled. 
See [WAN Federation via Mesh Gateways](/consul/docs/connect/gateways/mesh-gateway/wan-federation-via-mesh-gateways). - -You'll need: - -1. The root certificate authority cert placed in `consul-agent-ca.pem`. -1. The root certificate authority key placed in `consul-agent-ca-key.pem`. -1. The IP addresses of the mesh gateways running in your VM datacenter. These must - be routable from the Kubernetes cluster. -1. If ACLs are enabled you must create an ACL replication token with the following rules: - - ```hcl - acl = "write" - operator = "write" - agent_prefix "" { - policy = "read" - } - node_prefix "" { - policy = "write" - } - service_prefix "" { - policy = "read" - intentions = "read" - } - ``` - - This token is used for ACL replication and for automatic ACL management in Kubernetes. - - If you're running Consul Enterprise you'll need the rules: - - ```hcl - operator = "write" - agent_prefix "" { - policy = "read" - } - node_prefix "" { - policy = "write" - } - namespace_prefix "" { - acl = "write" - service_prefix "" { - policy = "read" - intentions = "read" - } - } - ``` -1. If ACLs are enabled you must also modify the [anonymous token](/consul/docs/security/acl/tokens#anonymous-token) policy to have the following permissions: - - ```hcl - node_prefix "" { - policy = "read" - } - service_prefix "" { - policy = "read" - } - ``` - - With Consul Enterprise, use: - - ```hcl - partition_prefix "" { - namespace_prefix "" { - node_prefix "" { - policy = "read" - } - service_prefix "" { - policy = "read" - } - } - } - ``` - - These permissions are needed to allow cross-datacenter requests. To make a cross-dc request the sidecar proxy in the originating DC needs to know about the - services running in the remote DC. To do so, it needs an ACL token that allows it to look up the services in the remote DC. The way tokens are created in - Kubernetes, the sidecar proxies have local ACL tokens–i.e tokens that are only valid in the local DC. When a request goes from one DC to another, if the - request has a local token, it is stripped from the request because the remote DC won't be able to validate it. When the request lands in the other DC, - it has no ACL token and so will be subject to the anonymous token policy. This is why the anonymous token policy must be configured to allow read access - to all services. When the Kubernetes DC is the primary, this is handled automatically, but when the primary DC is on VMs, this must be configured manually. - - To configure the anonymous token policy, first create a policy with the above rules, then attach it to the anonymous token. For example using the CLI: - - ```sh - echo 'node_prefix "" { - policy = "read" - } - service_prefix "" { - policy = "read" - }' | consul acl policy create -name anonymous -rules - - - consul acl token update -id 00000000-0000-0000-0000-000000000002 -policy-name anonymous - ``` - -1. If gossip encryption is enabled, you'll need the key. - -With that data ready, you can create the Kubernetes federation secret: - -```sh -kubectl create secret generic consul-federation \ - --from-literal=caCert=$(cat consul-agent-ca.pem) \ - --from-literal=caKey=$(cat consul-agent-ca-key.pem) - # If ACLs are enabled uncomment. - # --from-literal=replicationToken="" \ - # If using gossip encryption uncomment. - # --from-literal=gossipEncryptionKey="" -``` - -If ACLs are enabled, you must next determine the Kubernetes API URL for the secondary cluster. 
The API URL of the -must be specified in the config files for all secondary clusters because secondary clusters need -to create global Consul ACL tokens (tokens that are valid in all datacenters) and these tokens can only be created -by the primary datacenter. By setting the API URL, the secondary cluster will configure a [Consul auth method](/consul/docs/security/acl/auth-methods) -in the primary cluster so that components in the secondary cluster can use their Kubernetes ServiceAccount tokens -to retrieve global Consul ACL tokens from the primary. - -To determine the Kubernetes API URL, first get the cluster name in your kubeconfig: - -```shell-session -$ export CLUSTER=$(kubectl config view -o jsonpath="{.contexts[?(@.name == \"$(kubectl config current-context)\")].context.cluster}") -``` - -Then get the API URL: - -```shell-session -$ kubectl config view -o jsonpath="{.clusters[?(@.name == \"$CLUSTER\")].cluster.server}" -https:// -``` - -You'll use this URL when setting `global.federation.k8sAuthMethodHost`. - -Then use the following Helm config file: - -```yaml -global: - name: consul - datacenter: dc2 - tls: - enabled: true - caCert: - secretName: consul-federation - secretKey: caCert - caKey: - secretName: consul-federation - secretKey: caKey - - # Delete this acls section if ACLs are disabled. - acls: - manageSystemACLs: true - replicationToken: - secretName: consul-federation - secretKey: replicationToken - - federation: - enabled: true - k8sAuthMethodHost: - primaryDatacenter: dc1 - - # Delete this gossipEncryption section if gossip encryption is disabled. - gossipEncryption: - secretName: consul-federation - secretKey: gossipEncryptionKey - -connectInject: - enabled: true -meshGateway: - enabled: true -server: - extraConfig: | - { - "primary_gateways": ["", "", ...] - } -``` - -Notes: - -1. You must fill out the `server.extraConfig` section with the IPs of your mesh -gateways running on VMs. -1. Set `global.federation.k8sAuthMethodHost` to the Kubernetes API URL of this cluster (including `https://`). -1. `global.federation.primaryDatacenter` should be set to the name of your primary datacenter. - -With your config file ready to go, follow our [Installation Guide](/consul/docs/k8s/installation/install) -to install Consul on your secondary cluster(s). - -After installation, if you're using consul-helm 0.30.0+, [create the -`ProxyDefaults` resource](/consul/docs/k8s/deployment-configurations/multi-cluster/kubernetes#proxydefaults) -to allow traffic between datacenters. - -## Next Steps - -In both cases (Kubernetes as primary or secondary), after installation, follow the [Verifying Federation](/consul/docs/k8s/deployment-configurations/multi-cluster/kubernetes#verifying-federation) -section to verify that federation is working as expected. diff --git a/website/content/docs/k8s/deployment-configurations/servers-outside-kubernetes.mdx b/website/content/docs/k8s/deployment-configurations/servers-outside-kubernetes.mdx deleted file mode 100644 index a794f3643d5e..000000000000 --- a/website/content/docs/k8s/deployment-configurations/servers-outside-kubernetes.mdx +++ /dev/null @@ -1,170 +0,0 @@ ---- -layout: docs -page_title: Join Kubernetes Clusters to external Consul Servers -description: >- - Kubernetes clusters can be joined to existing Consul clusters in a much simpler way with the introduction of Consul Dataplane. Learn how to add Kubernetes Clusters into an existing Consul cluster and bootstrap ACLs by configuring the Helm chart. 
---- - -# Join Kubernetes Clusters to external Consul Servers - -If you have a Consul cluster already running, you can configure your -Consul on Kubernetes installation to join this existing cluster. - -The below `values.yaml` file shows how to configure the Helm chart to install -Consul so that it joins an existing Consul server cluster. - -The `global.enabled` value first disables all chart components by default -so that each component is opt-in. - -Next, configure `externalServers` to point it to Consul servers. -The `externalServers.hosts` value must be provided and should be set to a DNS, an IP, -or an `exec=` string with a command returning Consul IPs. Please see [this documentation](https://github.com/hashicorp/go-netaddrs) -on how the `exec=` string works. -Other values in the `externalServers` section are optional. Please refer to -[Helm Chart configuration](/consul/docs/k8s/helm#h-externalservers) for more details. - - - -```yaml -global: - enabled: false - -externalServers: - hosts: [] -``` - - - -With the introduction of [Consul Dataplane](/consul/docs/connect/dataplane#what-is-consul-dataplane), Consul installation on Kubernetes is simplified by removing the Consul Client agents. -This requires the Helm installation and rest of the consul-k8s components installed on Kubernetes to talk to Consul Servers directly on various ports. -Before starting the installation, ensure that the Consul Servers are configured to have the gRPC port enabled `8502/tcp` using the [`ports.grpc = 8502`](/consul/docs/agent/config/config-files#grpc) configuration option. - - -## Configuring TLS - --> **Note:** Consul on Kubernetes currently does not support external servers that require mutual authentication -for the HTTPS clients of the Consul servers, that is when servers have either -`tls.defaults.verify_incoming` or `tls.https.verify_incoming` set to `true`. -As noted in the [Security Model](/consul/docs/security#secure-configuration), -that setting isn't strictly necessary to support Consul's threat model as it is recommended that -all requests contain a valid ACL token. - -If the Consul server has TLS enabled, you need to provide the CA certificate so that Consul on Kubernetes can communicate with the server. Save the certificate in a Kubernetes secret and then provide it in your Helm values, as demonstrated in the following example: - - - -```yaml -global: - tls: - enabled: true - caCert: - secretName: - secretKey: -externalServers: - enabled: true - hosts: [] -``` - - - -If your HTTPS port is different from Consul's default `8501`, you must also set -`externalServers.httpsPort`. If the Consul servers are not running TLS enabled, use this config to set the HTTP port the servers are configured with (default `8500`). - -## Configuring ACLs - -If you are running external servers with ACLs enabled, there are a couple of ways to configure the Helm chart -to help initialize ACL tokens for Consul clients and consul-k8s components for you. - -### Manually Bootstrapping ACLs - -If you would like to call the [ACL bootstrapping API](/consul/api-docs/acl#bootstrap-acls) yourself or if your cluster has already been bootstrapped with ACLs, -you can provide the bootstrap token to the Helm chart. The Helm chart will then use this token to configure ACLs -for Consul clients and any consul-k8s components you are enabling. 
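If the ACL system has not been bootstrapped yet, you can do so yourself before handing the token to the Helm chart. The following is only a minimal sketch, assuming your workstation can reach the external servers' HTTP(S) API (set `CONSUL_HTTP_ADDR`, plus `CONSUL_CACERT` if TLS is enabled); the `SecretID` returned is the bootstrap token used in the steps below.

```shell-session
# Sketch: bootstrap ACLs directly against the external Consul servers.
# The SecretID in the output is the bootstrap token referenced below.
$ consul acl bootstrap

# Equivalent call through the HTTP API:
$ curl --request PUT "$CONSUL_HTTP_ADDR/v1/acl/bootstrap"
```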
- -First, create a Kubernetes secret containing your bootstrap token: - -```shell -kubectl create secret generic bootstrap-token --from-literal='token=' -``` - -Then provide that secret to the Helm chart: - - - -```yaml -global: - acls: - manageSystemACLs: true - bootstrapToken: - secretName: bootstrap-token - secretKey: token -``` - - - -The bootstrap token requires the following minimal permissions: - -- `acl:write` -- `operator:write` if enabling Consul namespaces -- `agent:read` if using WAN federation over mesh gateways - -Next, configure external servers. The Helm chart will use this configuration to talk to the Consul server's API -to create policies, tokens, and an auth method. If you are [enabling Consul service mesh](/consul/docs/k8s/connect), -`k8sAuthMethodHost` should be set to the address of your Kubernetes API server -so that the Consul servers can validate a Kubernetes service account token when using the [Kubernetes auth method](/consul/docs/security/acl/auth-methods/kubernetes) -with `consul login`. - --> **Note:** If `externalServers.k8sAuthMethodHost` is set and you are also using WAN federation -(`global.federation.enabled` is set to `true`), ensure that `global.federation.k8sAuthMethodHost` is set to the same -value as `externalServers.k8sAuthMethodHost`. - - - -```yaml -externalServers: - enabled: true - hosts: [] - k8sAuthMethodHost: 'https://kubernetes.example.com:443' -``` - - - -Your resulting Helm configuration will end up looking similar to this: - - - -```yaml -global: - enabled: false - acls: - manageSystemACLs: true - bootstrapToken: - secretName: bootstrap-token - secretKey: token -externalServers: - enabled: true - hosts: [] - k8sAuthMethodHost: 'https://kubernetes.example.com:443' -``` - - - -### Bootstrapping ACLs via the Helm chart - -If you would like the Helm chart to call the bootstrapping API and set the server tokens for you, then the steps are similar. -The only difference is that you don't need to set the bootstrap token. The Helm chart will save the bootstrap token as a Kubernetes secret. - - - -```yaml -global: - enabled: false - acls: - manageSystemACLs: true -externalServers: - enabled: true - hosts: [] - k8sAuthMethodHost: 'https://kubernetes.example.com:443' -``` - - diff --git a/website/content/docs/k8s/deployment-configurations/single-dc-multi-k8s.mdx b/website/content/docs/k8s/deployment-configurations/single-dc-multi-k8s.mdx deleted file mode 100644 index 4f23ccddabe9..000000000000 --- a/website/content/docs/k8s/deployment-configurations/single-dc-multi-k8s.mdx +++ /dev/null @@ -1,320 +0,0 @@ ---- -layout: docs -page_title: Deploy Single Consul Datacenter Across Multiple K8s Clusters -description: >- - A single Consul datacenter can run across multiple Kubernetes pods in a flat network as long as only one pod has server agents. Learn how to configure the Helm chart, deploy pods in sequence, and verify your service mesh. ---- - -# Deploy Single Consul Datacenter Across Multiple Kubernetes Clusters - -~> **Note:** When running Consul across multiple Kubernetes clusters, we recommend using [admin partitions](/consul/docs/enterprise/admin-partitions) for production environments. This Consul Enterprise feature allows you to accommodate multiple tenants without resource collisions when administering a cluster at scale. Admin partitions also enable you to run Consul on Kubernetes clusters across a non-flat network. 
- -This page describes deploying a single Consul datacenter in multiple Kubernetes clusters, -with servers running in one cluster and only Consul on Kubernetes components in the rest of the clusters. -This example uses two Kubernetes clusters, but this approach could be extended to using more than two. - -## Requirements - -* `consul-k8s` v1.0.x or higher, and Consul 1.14.x or higher -* Kubernetes clusters must be able to communicate over LAN on a flat network. -* Either the Helm release name for each Kubernetes cluster must be unique, or `global.name` for each Kubernetes cluster must be unique to prevent collisions of ACL resources with the same prefix. - -## Prepare Helm release name ahead of installs - -The Helm release name must be unique for each Kubernetes cluster. -The Helm chart uses the Helm release name as a prefix for the -ACL resources that it creates, such as tokens and auth methods. If the names of the Helm releases are identical, or if `global.name` for each cluster is identical, subsequent Consul on Kubernetes clusters will overwrite existing ACL resources and cause the clusters to fail. - -Before proceeding with installation, prepare the Helm release names as environment variables for both the server and client install. - -```shell-session - $ export HELM_RELEASE_SERVER=server - $ export HELM_RELEASE_CONSUL=consul - ... - $ export HELM_RELEASE_CONSUL2=consul2 -``` - -## Deploying Consul servers in the first cluster - -First, deploy the first cluster with Consul servers according to the following example Helm configuration. - - - -```yaml -global: - datacenter: dc1 - tls: - enabled: true - enableAutoEncrypt: true - acls: - manageSystemACLs: true - gossipEncryption: - secretName: consul-gossip-encryption-key - secretKey: key -server: - exposeService: - enabled: true - type: NodePort - nodePort: - ## all are random nodePorts and you can set your own - http: 30010 - https: 30011 - serf: 30012 - rpc: 30013 - grpc: 30014 -ui: - service: - type: NodePort -``` - - - -Note that this will deploy a secure configuration with gossip encryption, -TLS for all components and ACLs. In addition, this will enable the Consul Service Mesh and the controller for CRDs -that can be used later to verify the connectivity of services across clusters. - -The UI's service type is set to be `NodePort`. -This is needed to connect to servers from another cluster without using the pod IPs of the servers, -which are likely going to change. - -Other services are exposed as `NodePort` services and configured with random port numbers. In this example, the `grpc` port is set to `30014`, which enables services to discover Consul servers using gRPC when connecting from another cluster. - -To deploy, first generate the Gossip encryption key and save it as a Kubernetes secret. - -```shell-session -$ kubectl create secret generic consul-gossip-encryption-key --from-literal=key=$(consul keygen) -``` - -Now install Consul cluster with Helm: -```shell-session -$ helm install ${HELM_RELEASE_SERVER} --values cluster1-values.yaml hashicorp/consul -``` - -Once the installation finishes and all components are running and ready, the following information needs to be extracted (using the below command) and applied to the second Kubernetes cluster. 
* The CA certificate generated during installation - * The ACL bootstrap token generated during installation - -```shell-session -$ kubectl get secret ${HELM_RELEASE_SERVER}-consul-ca-cert ${HELM_RELEASE_SERVER}-consul-bootstrap-acl-token --output yaml > cluster1-credentials.yaml -``` - -## Deploying Consul Kubernetes in the second cluster -~> **Note:** If you join more than one additional Kubernetes cluster to the Consul datacenter, repeat the following instructions for each additional cluster. - -Switch to the second Kubernetes cluster where Consul clients will be deployed -that will join the first Consul cluster. - -```shell-session -$ kubectl config use-context -``` - -First, apply the credentials extracted from the first cluster to the second cluster: - -```shell-session -$ kubectl apply --filename cluster1-credentials.yaml -``` -To deploy in the second cluster, use the following example Helm configuration: - - - -```yaml -global: - enabled: false - datacenter: dc1 - acls: - manageSystemACLs: true - bootstrapToken: - secretName: cluster1-consul-bootstrap-acl-token - secretKey: token - tls: - enabled: true - caCert: - secretName: cluster1-consul-ca-cert - secretKey: tls.crt -externalServers: - enabled: true - # This should be any node IP of the first k8s cluster or the load balancer IP if using LoadBalancer service type for the UI. - hosts: ["10.0.0.4"] - # The node port of the UI's NodePort service or the load balancer port. - httpsPort: 31557 - # Matches the gRPC port of the Consul servers in the first cluster. - grpcPort: 30014 - tlsServerName: server.dc1.consul - # The address of the kube API server of this Kubernetes cluster - k8sAuthMethodHost: https://kubernetes.example.com:443 -connectInject: - enabled: true -``` - - - -Note the references to the secrets extracted and applied from the first cluster in the ACL and TLS configuration. - -The `externalServers.hosts` and `externalServers.httpsPort` -refer to the IP and port of the UI's NodePort service deployed in the first cluster. -Set `externalServers.hosts` to any node IP of the first cluster, -which you can see by running `kubectl get nodes --output wide`. -Set `externalServers.httpsPort` to the `nodePort` of the `cluster1-consul-ui` service. -In our example, the port is `31557`. - -```shell-session -$ kubectl get service cluster1-consul-ui --context cluster1 -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -cluster1-consul-ui NodePort 10.0.240.80 443:31557/TCP 40h -``` - -The `grpcPort: 30014` configuration refers to the gRPC port number specified in the `NodePort` configuration in the first cluster. - -Set `externalServers.tlsServerName` to `server.dc1.consul`. This is the DNS SAN -(Subject Alternative Name) that is present in the Consul server's certificate. -This is required because the connection to the Consul servers uses the node IP, -but that IP isn't present in the server's certificate. -To make sure that the hostname verification succeeds during the TLS handshake, set the TLS -server name to a DNS name that *is* present in the certificate. - -Next, set `externalServers.k8sAuthMethodHost` to the address of the second Kubernetes API server. -This should be the address that is reachable from the first cluster, so it cannot be the internal DNS -available in each Kubernetes cluster. Consul needs it so that `consul login` with the Kubernetes auth method will work -from the second cluster. 
-More specifically, the Consul server will need to perform the verification of the Kubernetes service account -whenever `consul login` is called, and to verify service accounts from the second cluster, it needs to -reach the Kubernetes API in that cluster. -The easiest way to get it is from the `kubeconfig` by running `kubectl config view` and grabbing -the value of `cluster.server` for the second cluster. - -Now, proceed with the installation of the second cluster. - -```shell-session -$ helm install ${HELM_RELEASE_CONSUL} --values cluster2-values.yaml hashicorp/consul -``` - -## Verifying the Consul Service Mesh works - -~> When Transparent proxy is enabled, services in one Kubernetes cluster that need to communicate with a service in another Kubernetes cluster must have an explicit upstream configured through the ["consul.hashicorp.com/connect-service-upstreams"](/consul/docs/k8s/annotations-and-labels#consul-hashicorp-com-connect-service-upstreams) annotation. - -Now that the Consul cluster spanning across multiple k8s clusters is up and running, deploy two services in separate k8s clusters and verify that they can connect to each other. - -First, deploy `static-server` service in the first cluster: - - - -```yaml ---- -apiVersion: consul.hashicorp.com/v1alpha1 -kind: ServiceIntentions -metadata: - name: static-server -spec: - destination: - name: static-server - sources: - - name: static-client - action: allow ---- -apiVersion: v1 -kind: Service -metadata: - name: static-server -spec: - type: ClusterIP - selector: - app: static-server - ports: - - protocol: TCP - port: 80 - targetPort: 8080 ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: static-server ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: static-server -spec: - replicas: 1 - selector: - matchLabels: - app: static-server - template: - metadata: - name: static-server - labels: - app: static-server - annotations: - "consul.hashicorp.com/connect-inject": "true" - spec: - containers: - - name: static-server - image: hashicorp/http-echo:latest - args: - - -text="hello world" - - -listen=:8080 - ports: - - containerPort: 8080 - name: http - serviceAccountName: static-server -``` - - - -Note that defining a Service intention is required so that our services are allowed to talk to each other. - -Next, deploy `static-client` in the second cluster with the following configuration: - - - -```yaml -apiVersion: v1 -kind: Service -metadata: - name: static-client -spec: - selector: - app: static-client - ports: - - port: 80 ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: static-client ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: static-client -spec: - replicas: 1 - selector: - matchLabels: - app: static-client - template: - metadata: - name: static-client - labels: - app: static-client - annotations: - "consul.hashicorp.com/connect-inject": "true" - "consul.hashicorp.com/connect-service-upstreams": "static-server:1234" - spec: - containers: - - name: static-client - image: curlimages/curl:latest - command: [ "/bin/sh", "-c", "--" ] - args: [ "while true; do sleep 30; done;" ] - serviceAccountName: static-client -``` - - - -Once both services are up and running, try connecting to the `static-server` from `static-client`: - -```shell-session -$ kubectl exec deploy/static-client -- curl --silent localhost:1234 -"hello world" -``` - -A successful installation would return `hello world` for the above curl command output. 
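As an optional extra check, you can confirm that both workloads registered into the same Consul datacenter by querying the catalog on the server cluster. This is only a sketch: the statefulset name below assumes the `server` Helm release used earlier, and because ACLs are enabled the request needs a token (exported locally as `CONSUL_HTTP_TOKEN`, for example the bootstrap token) with read access to services.

```shell-session
# Sketch: list catalog services from the server cluster; both static-server and
# static-client should appear. Assumes HELM_RELEASE_SERVER=server as above.
$ kubectl exec --context cluster1 statefulset/server-consul-server -- \
    curl --silent --insecure \
    --header "X-Consul-Token: $CONSUL_HTTP_TOKEN" \
    https://localhost:8501/v1/catalog/services
```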
diff --git a/website/content/docs/k8s/deployment-configurations/vault/data-integration/connect-ca.mdx b/website/content/docs/k8s/deployment-configurations/vault/data-integration/connect-ca.mdx deleted file mode 100644 index 7550eb444200..000000000000 --- a/website/content/docs/k8s/deployment-configurations/vault/data-integration/connect-ca.mdx +++ /dev/null @@ -1,101 +0,0 @@ ---- -layout: docs -page_title: Vault as the Service Mesh Certificate Provider on Kubernetes -description: >- - Using Vault as the provider for the Service Mesh certificates on Kubernetes. ---- - -# Vault as the Service Mesh Certificate Provider on Kubernetes - -This topic describes how to configure the Consul Helm chart to use TLS certificates issued by Vault for Consul service mesh communication. - --> **Note:** This feature requires Consul 1.11 or higher. As of v1.11, -Consul allows using Kubernetes auth methods to configure the service mesh CA. -This allows for automatic token rotation once the renewal is no longer possible. - -~> **Compatibility note:** If you use Vault 1.11.0+ as Consul's service mesh CA, versions of Consul released before Dec 13, 2022 will develop an issue with Consul control plane or service mesh communication ([GH-15525](https://github.com/hashicorp/consul/pull/15525)). Use or upgrade to a [Consul version that includes the fix](https://support.hashicorp.com/hc/en-us/articles/11308460105491#01GMC24E6PPGXMRX8DMT4HZYTW) to avoid this problem. - -## Overview - -To use Vault as the service mesh certificate provider on Kubernetes, you will complete a modified version of the steps outlined in the [Data Integration](/consul/docs/k8s/deployment-configurations/vault/data-integration) section. - -Complete the following steps once: - 1. Create a Vault policy that authorizes the desired level of access to the secret. - -Repeat the following steps for each datacenter in the cluster: - 1. Create Vault Kubernetes auth roles that link the policy to each Consul on Kubernetes service account that requires access. - 1. Update the Consul on Kubernetes helm chart. - -## Prerequisites -Prior to setting up the data integration between Vault and Consul on Kubernetes, you will need to have: -1. Read and completed the steps in the [Systems Integration](/consul/docs/k8s/deployment-configurations/vault/systems-integration) section of [Vault as a Secrets Backend](/consul/docs/k8s/deployment-configurations/vault). -2. Read the [Data Integration Overview](/consul/docs/k8s/deployment-configurations/vault/data-integration) section of [Vault as a Secrets Backend](/consul/docs/k8s/deployment-configurations/vault). - -## Create Vault policy - -To configure [Vault as the provider](/consul/docs/connect/ca/vault) for the Consul service mesh certificates, -you will first need to decide on the type of policy that is suitable for you. -To see the permissions that Consul would need in Vault, please see [Vault ACL policies](/consul/docs/connect/ca/vault#vault-acl-policies) -documentation. 
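For illustration only, a sketch of such a policy written straight from stdin. The mount paths `connect_root` and `connect_inter` are assumptions for this example; adjust them to your `rootPKIPath` and `intermediatePKIPath`, and treat the linked Vault ACL policies page as the authoritative rule set.

```shell-session
# Sketch only: grant Consul what it needs on the assumed PKI mounts.
$ vault policy write connect-ca - <<EOF
path "sys/mounts/connect_root"  { capabilities = ["read"] }
path "sys/mounts/connect_inter" { capabilities = ["read"] }
path "connect_root/*"           { capabilities = ["read"] }
path "connect_inter/*"          { capabilities = ["create", "read", "update", "delete", "list"] }
EOF
```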
- -## Create Vault Authorization Roles for Consul - -Next, you will create Kubernetes auth roles for the Consul servers: - -```shell-session -$ vault write auth/kubernetes/role/consul-server \ - bound_service_account_names= \ - bound_service_account_namespaces= \ - policies= \ - ttl=1h -``` - -To find out the service account name of the Consul server, -you can run: - -```shell-session -$ helm template --release-name ${RELEASE_NAME} --show-only templates/server-serviceaccount.yaml hashicorp/consul -f values.yaml -``` - -## Update Consul on Kubernetes Helm chart -Now you can configure the Consul Helm chart to use Vault as the service mesh (connect) CA provider: - - - -```yaml -global: - secretsBackend: - vault: - enabled: true - consulServerRole: consul-server - consulClientRole: consul-client - consulCARole: consul-ca - connectCA: - address: - rootPKIPath: - intermediatePKIPath: - ca: - secretName: -``` - - - -The `address` you provide to the `connectCA` configuration can be a Kubernetes DNS -address if the Vault cluster is running the same Kubernetes cluster. -The `rootPKIPath` and `intermediatePKIPath` should be the same as the ones -defined in your service mesh CA policy. Behind the scenes, Consul will authenticate to Vault using a Kubernetes -service account using the [Kubernetes auth method](/vault/docs/auth/kubernetes) and will use the Vault token for any API calls to Vault. If the Vault token can not be renewed, Consul will re-authenticate to -generate a new Vault token. - -The `vaultCASecret` is the Kubernetes secret that stores the CA Certificate that is used for Vault communication. To provide a CA, you first need to create a Kubernetes secret containing the CA. For example, you may create a secret with the Vault CA like so: - -```shell-session -$ kubectl create secret generic vault-ca --from-file vault.ca=/path/to/your/vault/ca -``` - -### Secondary Datacenters - -To configure Vault as the service mesh (connect) CA in secondary datacenters, you need to make sure that the Root CA path is the same, -but the intermediate is different for each datacenter. In the `connectCA` Helm configuration for a secondary datacenter, -you can specify a `intermediatePKIPath` that is, for example, prefixed with the datacenter -for which this configuration is intended (e.g. `dc2/connect-intermediate`). diff --git a/website/content/docs/k8s/deployment-configurations/vault/data-integration/gossip.mdx b/website/content/docs/k8s/deployment-configurations/vault/data-integration/gossip.mdx deleted file mode 100644 index c6c71875a56c..000000000000 --- a/website/content/docs/k8s/deployment-configurations/vault/data-integration/gossip.mdx +++ /dev/null @@ -1,113 +0,0 @@ ---- -layout: docs -page_title: Storing the Gossip Encryption Key in Vault -description: >- - Configuring the Consul Helm chart to use a gossip encryption key stored in Vault. ---- - -# Storing Gossip Encryption Key in Vault - -This topic describes how to configure the Consul Helm chart to use a gossip encryption key stored in Vault. - -## Overview -Complete the steps outlined in the [Data Integration](/consul/docs/k8s/deployment-configurations/vault/data-integration) section to use a gossip encryption key stored in Vault. - -Complete the following steps once: - 1. Store the secret in Vault. - 1. Create a Vault policy that authorizes the desired level of access to the secret. - -Repeat the following steps for each datacenter in the cluster: - 1. 
Create Vault Kubernetes auth roles that link the policy to each Consul on Kubernetes service account that requires access. - 1. Update the Consul on Kubernetes helm chart. - -## Prerequisites -Prior to setting up the data integration between Vault and Consul on Kubernetes, you will need to have: -1. Read and completed the steps in the [Systems Integration](/consul/docs/k8s/deployment-configurations/vault/systems-integration) section of [Vault as a Secrets Backend](/consul/docs/k8s/deployment-configurations/vault). -2. Read the [Data Integration Overview](/consul/docs/k8s/deployment-configurations/vault/data-integration) section of [Vault as a Secrets Backend](/consul/docs/k8s/deployment-configurations/vault). - -## Store the Secret in Vault -First, generate and store the gossip key in Vault. You will only need to perform this action once: - -```shell-session -$ vault kv put consul-kv/secret/gossip key="$(consul keygen)" -``` -## Create Vault policy - -Next, create a policy that allows read access to this secret. - -The path to the secret referenced in the `path` resource is the same value that you will configure in the `global.gossipEncryption.secretName` Helm configuration (refer to [Update Consul on Kubernetes Helm chart](#update-consul-on-kubernetes-helm-chart)). - - - -```HCL -path "consul-kv/data/secret/gossip" { - capabilities = ["read"] -} -``` - - - -Apply the Vault policy by issuing the `vault policy write` CLI command: - -```shell-session -$ vault policy write gossip-policy gossip-policy.hcl -``` - -## Create Vault Authorization Roles for Consul - -Next, we will create Kubernetes auth roles for the Consul server and client: - -```shell-session -$ vault write auth/kubernetes/role/consul-server \ - bound_service_account_names= \ - bound_service_account_namespaces= \ - policies=gossip-policy \ - ttl=1h -``` - -```shell-session -$ vault write auth/kubernetes/role/consul-client \ - bound_service_account_names= \ - bound_service_account_namespaces= \ - policies=gossip-policy \ - ttl=1h -``` - -To find out the service account names of the Consul server and client, -you can run the following `helm template` commands with your Consul on Kubernetes values file: - -- Generate Consul server service account name - ```shell-session - $ helm template --release-name ${RELEASE_NAME} -s templates/server-serviceaccount.yaml hashicorp/consul -f values.yaml - ``` - -- Generate Consul client service account name - ```shell-session - $ helm template --release-name ${RELEASE_NAME} -s templates/client-serviceaccount.yaml hashicorp/consul -f values.yaml - ``` - -## Update Consul on Kubernetes Helm chart - -Now that we've configured Vault, you can configure the Consul Helm chart to -use the gossip key in Vault: - - - -```yaml -global: - secretsBackend: - vault: - enabled: true - consulServerRole: consul-server - consulClientRole: consul-client - gossipEncryption: - secretName: consul-kv/data/secret/gossip - secretKey: key -``` - - - -Note that `global.gossipEncryption.secretName` is the path of the secret in Vault. -This should be the same path as the one you'd include in your Vault policy. -`global.gossipEncryption.secretKey` is the key inside the secret data. This should be the same -as the key we passed when we created the gossip secret in Vault. 
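To sanity-check these values before installing, you can read the key back from Vault. This assumes the `consul-kv` kv-v2 mount and the path used above; the `key` field is what `global.gossipEncryption.secretKey` points at.

```shell-session
# Read the gossip encryption key back from the kv-v2 mount used above.
$ vault kv get -field=key consul-kv/secret/gossip
```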
diff --git a/website/content/docs/k8s/deployment-configurations/vault/data-integration/index.mdx b/website/content/docs/k8s/deployment-configurations/vault/data-integration/index.mdx deleted file mode 100644 index 819d0e4a9751..000000000000 --- a/website/content/docs/k8s/deployment-configurations/vault/data-integration/index.mdx +++ /dev/null @@ -1,141 +0,0 @@ ---- -layout: docs -page_title: Vault as the Secrets Backend Data Integration Overview -description: >- - Overview of the data integration aspects to using Vault as the secrets backend for Consul on Kubernetes. ---- - -# Vault as the Secrets Backend - Data Integration - -This topic describes how to configure Vault and Consul in order to share secrets for use within Consul. - -## Prerequisites - -Before you set up the data integration between Vault and Consul on Kubernetes, read and complete the steps in the [Systems Integration](/consul/docs/k8s/deployment-configurations/vault/systems-integration) section of [Vault as a Secrets Backend](/consul/docs/k8s/deployment-configurations/vault). - -## General integration steps - -For each secret you want to store in Vault, you must complete two multi-step procedures. - -Complete the following steps once: - 1. Store the secret in Vault. - 1. Create a Vault policy that authorizes the desired level of access to the secret. - -Repeat the following steps for each datacenter in the cluster: - 1. Create Vault Kubernetes auth roles that link the policy to each Consul on Kubernetes service account that requires access. - 1. Update the Consul on Kubernetes Helm chart. - -## Secrets-to-service account mapping - -At the most basic level, the goal of this configuration is to authorize a Consul on Kubernetes service account to access a secret in Vault. - -The following table associates Vault secrets and the Consul on Kubernetes service accounts that require access. -(NOTE: `Consul components` refers to all other services and jobs that are not Consul servers or clients. -It includes things like terminating gateways, ingress gateways, etc.) - -### Primary datacenter - -| Secret | Service Account For | Configurable Role in Consul k8s Helm | -| ------ | ------------------- | ------------------------------------ | -|[ACL Bootstrap token](/consul/docs/k8s/deployment-configurations/vault/data-integration/bootstrap-token) | Consul server-acl-init job | [`global.secretsBackend.vault.manageSystemACLsRole`](/consul/docs/k8s/helm#v-global-secretsbackend-vault-managesystemaclsrole)| -|[ACL Partition token](/consul/docs/k8s/deployment-configurations/vault/data-integration/partition-token) | Consul server-acl-init job | [`global.secretsBackend.vault.manageSystemACLsRole`](/consul/docs/k8s/helm#v-global-secretsbackend-vault-managesystemaclsrole)| -|[ACL Replication token](/consul/docs/k8s/deployment-configurations/vault/data-integration/replication-token) | Consul server-acl-init job | [`global.secretsBackend.vault.manageSystemACLsRole`](/consul/docs/k8s/helm#v-global-secretsbackend-vault-managesystemaclsrole)| -|[Enterprise license](/consul/docs/k8s/deployment-configurations/vault/data-integration/enterprise-license) | Consul servers
    Consul clients | [`global.secretsBackend.vault.consulServerRole`](/consul/docs/k8s/helm#v-global-secretsbackend-vault-consulserverrole)
    [`global.secretsBackend.vault.consulClientRole`](/consul/docs/k8s/helm#v-global-secretsbackend-vault-consulclientrole)| -|[Gossip encryption key](/consul/docs/k8s/deployment-configurations/vault/data-integration/gossip) | Consul servers
    Consul clients | [`global.secretsBackend.vault.consulServerRole`](/consul/docs/k8s/helm#v-global-secretsbackend-vault-consulserverrole)
    [`global.secretsBackend.vault.consulClientRole`](/consul/docs/k8s/helm#v-global-secretsbackend-vault-consulclientrole)| -|[Snapshot Agent config](/consul/docs/k8s/deployment-configurations/vault/data-integration/snapshot-agent-config) | Consul servers | [`global.secretsBackend.vault.consulServerRole`](/consul/docs/k8s/helm#v-global-secretsbackend-vault-consulserverrole)| -|[Server TLS credentials](/consul/docs/k8s/deployment-configurations/vault/data-integration/server-tls) | Consul servers
    Consul clients
    Consul components | [`global.secretsBackend.vault.consulServerRole`](/consul/docs/k8s/helm#v-global-secretsbackend-vault-consulserverrole)
    [`global.secretsBackend.vault.consulClientRole`](/consul/docs/k8s/helm#v-global-secretsbackend-vault-consulclientrole)
    [`global.secretsBackend.vault.consulCARole`](/consul/docs/k8s/helm#v-global-secretsbackend-vault-consulcarole)| -|[Service Mesh and Consul client TLS credentials](/consul/docs/k8s/deployment-configurations/vault/data-integration/connect-ca) | Consul servers | [`global.secretsBackend.vault.consulServerRole`](/consul/docs/k8s/helm#v-global-secretsbackend-vault-consulserverrole)| -|[Webhook TLS certificates for controller and connect inject](/consul/docs/k8s/deployment-configurations/vault/data-integration/connect-ca) | Consul controllers
    Consul connect inject | [`global.secretsBackend.vault.controllerRole`](/consul/docs/k8s/helm#v-global-secretsbackend-vault-controllerrole)
    [`global.secretsBackend.vault.connectInjectRole`](/consul/docs/k8s/helm#v-global-secretsbackend-vault-controllerrole)| - -### Secondary datacenters - -The mapping for secondary data centers is similar with the following differences: - -- There is no use of bootstrap token because ACLs would have been bootstrapped in the primary datacenter. -- ACL Partition token is mapped to both the `server-acl-init` job and the `partition-init` job service accounts. -- ACL Replication token is mapped to both the `server-acl-init` job and Consul service accounts. - -| Secret | Service Account For | Configurable Role in Consul k8s Helm | -| ------ | ------------------- | ------------------------------------ | -|[ACL Partition token](/consul/docs/k8s/deployment-configurations/vault/data-integration/partition-token) | Consul server-acl-init job
    Consul partition-init job | [`global.secretsBackend.vault.manageSystemACLsRole`](/consul/docs/k8s/helm#v-global-secretsbackend-vault-managesystemaclsrole)
    [`global.secretsBackend.vault.adminPartitionsRole`](/consul/docs/k8s/helm#v-global-secretsbackend-vault-adminpartitionsrole)| -|[ACL Replication token](/consul/docs/k8s/deployment-configurations/vault/data-integration/replication-token) | Consul server-acl-init job
    Consul servers | [`global.secretsBackend.vault.manageSystemACLsRole`](/consul/docs/k8s/helm#v-global-secretsbackend-vault-managesystemaclsrole)
    [`global.secretsBackend.vault.consulServerRole`](/consul/docs/k8s/helm#v-global-secretsbackend-vault-consulserverrole)| -|[Enterprise license](/consul/docs/k8s/deployment-configurations/vault/data-integration/enterprise-license) | Consul servers
    Consul clients | [`global.secretsBackend.vault.consulServerRole`](/consul/docs/k8s/helm#v-global-secretsbackend-vault-consulserverrole)
    [`global.secretsBackend.vault.consulClientRole`](/consul/docs/k8s/helm#v-global-secretsbackend-vault-consulclientrole)| -|[Gossip encryption key](/consul/docs/k8s/deployment-configurations/vault/data-integration/gossip) | Consul servers
    Consul clients | [`global.secretsBackend.vault.consulServerRole`](/consul/docs/k8s/helm#v-global-secretsbackend-vault-consulserverrole)
    [`global.secretsBackend.vault.consulClientRole`](/consul/docs/k8s/helm#v-global-secretsbackend-vault-consulclientrole)| -|[Snapshot Agent config](/consul/docs/k8s/deployment-configurations/vault/data-integration/snapshot-agent-config) | Consul servers | [`global.secretsBackend.vault.consulServerRole`](/consul/docs/k8s/helm#v-global-secretsbackend-vault-consulserverrole)| -|[Server TLS credentials](/consul/docs/k8s/deployment-configurations/vault/data-integration/server-tls) | Consul servers
    Consul clients
    Consul components | [`global.secretsBackend.vault.consulServerRole`](/consul/docs/k8s/helm#v-global-secretsbackend-vault-consulserverrole)
    [`global.secretsBackend.vault.consulClientRole`](/consul/docs/k8s/helm#v-global-secretsbackend-vault-consulclientrole)
    [`global.secretsBackend.vault.consulCARole`](/consul/docs/k8s/helm#v-global-secretsbackend-vault-consulcarole)| -|[Service Mesh and Consul client TLS credentials](/consul/docs/k8s/deployment-configurations/vault/data-integration/connect-ca) | Consul servers | [`global.secretsBackend.vault.consulServerRole`](/consul/docs/k8s/helm#v-global-secretsbackend-vault-consulserverrole)| -|[Webhook TLS certificates for controller and connect inject](/consul/docs/k8s/deployment-configurations/vault/data-integration/connect-ca) | Consul controllers
    Consul connect inject | [`global.secretsBackend.vault.controllerRole`](/consul/docs/k8s/helm#v-global-secretsbackend-vault-controllerrole)
    [`global.secretsBackend.vault.connectInjectRole`](/consul/docs/k8s/helm#v-global-secretsbackend-vault-controllerrole)| - -### Combining policies within roles - -Depending upon your needs, a Consul on Kubernetes service account may need to request more than one secret. To request multiple secrets, create one role for the Consul on Kubernetes service account that is mapped to multiple policies associated with the required secrets. - -For example, if your Consul on Kubernetes servers need access to [Consul Server TLS credentials](/consul/docs/k8s/deployment-configurations/vault/data-integration/server-tls) and an [Enterprise license](/consul/docs/k8s/deployment-configurations/vault/data-integration/enterprise-license): - -1. Create a policy for each secret. - - 1. Consul Server TLS credentials - - - - ```HCL - path "pki/cert/ca" { - capabilities = ["read"] - } - ``` - - - - ```shell-session - $ vault policy write ca-policy ca-policy.hcl - ``` - - 1. Enterprise License - - - Vault API calls to version 2 of the Key-Value secrets engine require the `data` field in the path configuration. In the following example, the key-value data in `consul-kv/secret/enterpriselicense` becomes accessible for Vault API calls on the `consul-kv/data/secret/enterpriselicense` path. - - - - - ```HCL - path "consul-kv/data/secret/enterpriselicense" { - capabilities = ["read"] - } - ``` - - - - ```shell-session - $ vault policy write license-policy license-policy.hcl - ``` - -1. Create one role that maps the Consul on Kubernetes service account to the 3 policies. - ```shell-session - $ vault write auth/kubernetes/role/consul-server \ - bound_service_account_names= \ - bound_service_account_namespaces= \ - policies=ca-policy,license-policy \ - ttl=1h - ``` - -## Detailed data integration guides - -The following secrets can be stored in Vault KV secrets engine, which is meant to handle arbitrary secrets: - -- [ACL Bootstrap token](/consul/docs/k8s/deployment-configurations/vault/data-integration/bootstrap-token) -- [ACL Partition token](/consul/docs/k8s/deployment-configurations/vault/data-integration/partition-token) -- [ACL Replication token](/consul/docs/k8s/deployment-configurations/vault/data-integration/replication-token) -- [Enterprise license](/consul/docs/k8s/deployment-configurations/vault/data-integration/enterprise-license) -- [Gossip encryption key](/consul/docs/k8s/deployment-configurations/vault/data-integration/gossip) -- [Snapshot Agent config](/consul/docs/k8s/deployment-configurations/vault/data-integration/snapshot-agent-config) - -The following TLS certificates and keys can generated and managed by Vault the Vault PKI Engine, which is meant to handle things like certificate expiration and rotation: - -- [Server TLS credentials](/consul/docs/k8s/deployment-configurations/vault/data-integration/server-tls) -- [Service Mesh and Consul client TLS credentials](/consul/docs/k8s/deployment-configurations/vault/data-integration/connect-ca) -- [Vault as the Webhook Certificate Provider for Consul Controller and Connect Inject on Kubernetes](/consul/docs/k8s/deployment-configurations/vault/data-integration/webhook-certs) - -## Secrets-to-service account mapping - -Read through the [detailed data integration guides](#detailed-data-integration-guides) that are pertinent to your environment. 
diff --git a/website/content/docs/k8s/deployment-configurations/vault/data-integration/server-tls.mdx b/website/content/docs/k8s/deployment-configurations/vault/data-integration/server-tls.mdx deleted file mode 100644 index f20e0e9a00ee..000000000000 --- a/website/content/docs/k8s/deployment-configurations/vault/data-integration/server-tls.mdx +++ /dev/null @@ -1,205 +0,0 @@ ---- -layout: docs -page_title: Vault as the Server TLS Certificate Provider on Kubernetes -description: >- - Configuring the Consul Helm chart to use TLS certificates issued by Vault for the Consul server. ---- - -# Vault as the Server TLS Certificate Provider on Kubernetes - -## Overview -To use Vault as the server TLS certificate provider on Kubernetes, complete a modified version of the steps outlined in the [Data Integration](/consul/docs/k8s/deployment-configurations/vault/data-integration) section. - -Complete the following steps once: - 1. Create a Vault policy that authorizes the desired level of access to the secret. - -Repeat the following steps for each datacenter in the cluster: - 1. (Added) Configure allowed domains for PKI certificates - 1. Create Vault Kubernetes auth roles that link the policy to each Consul on Kubernetes service account that requires access. - 1. Update the Consul on Kubernetes helm chart. - -## Prerequisites -Prior to setting up the data integration between Vault and Consul on Kubernetes, you will need to have: -1. Read and completed the steps in the [Systems Integration](/consul/docs/k8s/deployment-configurations/vault/systems-integration) section of [Vault as a Secrets Backend](/consul/docs/k8s/deployment-configurations/vault). -2. Read the [Data Integration Overview](/consul/docs/k8s/deployment-configurations/vault/data-integration) section of [Vault as a Secrets Backend](/consul/docs/k8s/deployment-configurations/vault). -3. Complete the [Bootstrapping the PKI Engine](#bootstrapping-the-pki-engine) section. - -## Bootstrapping the PKI Engine - -Issue the following commands to enable and configure the PKI Secrets Engine to server -TLS certificates to Consul. - -* Enable the PKI Secrets Engine: - - ```shell-session - $ vault secrets enable pki - ``` - -* Tune the engine to enable longer TTL: - - ```shell-session - $ vault secrets tune -max-lease-ttl=87600h pki - ``` - -* Generate the root CA: - - -> **Note:** The `common_name` value is comprised of combining `global.datacenter` dot `global.domain`. - - ```shell-session - $ vault write -field=certificate pki/root/generate/internal \ - common_name="dc1.consul" \ - ttl=87600h - ``` -## Create Vault policies -To use Vault to issue Server TLS certificates, you will need to create the following: - -1. Create a policy that allows `["create", "update"]` access to the - [certificate issuing URL](/vault/api-docs/secret/pki#generate-certificate) so the Consul servers can - fetch a new certificate/key pair. - - The path to the secret referenced in the `path` resource is the same value that you will configure in the `server.serverCert.secretName` Helm configuration (refer to [Update Consul on Kubernetes Helm chart](#update-consul-on-kubernetes-helm-chart)). - - - - ```HCL - path "pki/issue/consul-server" { - capabilities = ["create", "update"] - } - ``` - - - -1. Apply the Vault policy by issuing the `vault policy write` CLI command: - - ```shell-session - $ vault policy write consul-server consul-server-policy.hcl - ``` - -1. 
Create a policy that allows `["read"]` access to the [CA URL](/vault/api-docs/secret/pki), -this is required for the Consul components to communicate with the Consul servers in order to fetch their auto-encryption certificates. - - The path to the secret referenced in the `path` resource is the same value that you will configure in the `global.tls.caCert.secretName` Helm configuration (refer to [Update Consul on Kubernetes Helm chart](#update-consul-on-kubernetes-helm-chart)). - - - - ```HCL - path "pki/cert/ca" { - capabilities = ["read"] - } - ``` - - - - ```shell-session - $ vault policy write ca-policy ca-policy.hcl - ``` - -1. Configure allowed domains for PKI certificates. - - Next, a Vault role for the PKI engine will set the default certificate issuance parameters: - - ```shell-session - $ vault write pki/roles/consul-server \ - allowed_domains="" \ - allow_subdomains=true \ - allow_bare_domains=true \ - allow_localhost=true \ - max_ttl="720h" - ``` - - To generate the `` use the following script as a template: - - ```shell-session - #!/bin/sh - - # NAME is set to either the value from `global.name` from your Consul K8s value file, or your $HELM_RELEASE_NAME-consul - export NAME=consulk8s - # NAMESPACE is where the Consul on Kubernetes is installed - export NAMESPACE=consul - # DATACENTER is the value of `global.datacenter` from your Helm values config file - export DATACENTER=dc1 - - echo allowed_domains=\"$DATACENTER.consul, $NAME-server, $NAME-server.$NAMESPACE, $NAME-server.$NAMESPACE.svc\" - ``` - -1. Finally, Kubernetes auth roles need to be created for servers, clients, and components. - - Role for Consul servers: - ```shell-session - $ vault write auth/kubernetes/role/consul-server \ - bound_service_account_names= \ - bound_service_account_namespaces= \ - policies=consul-server \ - ttl=1h - ``` - - To find out the service account name of the Consul server, - you can run: - - ```shell-session - $ helm template --release-name ${RELEASE_NAME} --show-only templates/server-serviceaccount.yaml hashicorp/consul -f values.yaml - ``` - - Role for Consul clients: - - ```shell-session - $ vault write auth/kubernetes/role/consul-client \ - bound_service_account_names= \ - bound_service_account_namespaces=default \ - policies=ca-policy \ - ttl=1h - ``` - - To find out the service account name of the Consul client, use the command below. - ```shell-session - $ helm template --release-name ${RELEASE_NAME} --show-only templates/client-serviceaccount.yaml hashicorp/consul -f values.yaml - ``` - - Role for CA components: - ```shell-session - $ vault write auth/kubernetes/role/consul-ca \ - bound_service_account_names="*" \ - bound_service_account_namespaces= \ - policies=ca-policy \ - ttl=1h - ``` - - The above Vault Roles will now be your Helm values for `global.secretsBackend.vault.consulServerRole` and - `global.secretsBackend.vault.consulCARole` respectively. - -## Update Consul on Kubernetes Helm chart - -Next, configure the Consul Helm chart to -use the server TLS certificates from Vault: - - - -```yaml -global: - secretsBackend: - vault: - enabled: true - consulServerRole: consul-server - consulClientRole: consul-client - consulCARole: consul-ca - tls: - enableAutoEncrypt: true - enabled: true - caCert: - secretName: "pki/cert/ca" -server: - serverCert: - secretName: "pki/issue/consul-server" - extraVolumes: - - type: "secret" - name: - load: "false" -``` - - - -The `vaultCASecret` is the Kubernetes secret that stores the CA Certificate that is used for Vault communication. 
To provide a CA, you first need to create a Kubernetes secret containing the CA. For example, you may create a secret with the Vault CA like so: - -```shell-session -$ kubectl create secret generic vault-ca --from-file vault.ca=/path/to/your/vault/ -``` diff --git a/website/content/docs/k8s/deployment-configurations/vault/data-integration/snapshot-agent-config.mdx b/website/content/docs/k8s/deployment-configurations/vault/data-integration/snapshot-agent-config.mdx deleted file mode 100644 index 532b415b0954..000000000000 --- a/website/content/docs/k8s/deployment-configurations/vault/data-integration/snapshot-agent-config.mdx +++ /dev/null @@ -1,103 +0,0 @@ ---- -layout: docs -page_title: Storing the Snapshot Agent Config in Vault -description: >- - Configuring the Consul Helm chart to use a snapshot agent config stored in Vault. ---- - -# Storing the Snapshot Agent Config in Vault - -This topic describes how to configure the Consul Helm chart to use a snapshot agent config stored in Vault. -## Overview -To use an ACL replication token stored in Vault, follow the steps outlined in the [Data Integration](/consul/docs/k8s/deployment-configurations/vault/data-integration) section. - -Complete the following steps once: - 1. Store the secret in Vault. - 1. Create a Vault policy that authorizes the desired level of access to the secret. - -Repeat the following steps for each datacenter in the cluster: - 1. Create Vault Kubernetes auth roles that link the policy to each Consul on Kubernetes service account that requires access. - 1. Update the Consul on Kubernetes helm chart. - -## Prerequisites -Before you set up data integration between Vault and Consul on Kubernetes, complete the following prerequisites: -1. Read and completed the steps in the [Systems Integration](/consul/docs/k8s/deployment-configurations/vault/systems-integration) section of [Vault as a Secrets Backend](/consul/docs/k8s/deployment-configurations/vault). -2. Read the [Data Integration Overview](/consul/docs/k8s/deployment-configurations/vault/data-integration) section of [Vault as a Secrets Backend](/consul/docs/k8s/deployment-configurations/vault). - -## Store the Secret in Vault - -First, store the snapshot agent config in Vault: - -```shell-session -$ vault kv put consul-kv/secret/snapshot-agent-config key="" -``` - -## Create Vault policy - -Next, you will need to create a policy that allows read access to this secret. - -The path to the secret referenced in the `path` resource is the same values that you will configure in the `client.snapshotAgent.configSecret.secretName` Helm configuration (refer to [Update Consul on Kubernetes Helm chart](#update-consul-on-kubernetes-helm-chart)). - - - -```HCL -path "consul-kv/data/secret/snapshot-agent-config" { - capabilities = ["read"] -} -``` - - - -Apply the Vault policy by issuing the `vault policy write` CLI command: - -```shell-session -$ vault policy write snapshot-agent-config-policy snapshot-agent-config-policy.hcl -``` - -## Create Vault Authorization Roles for Consul - -Next, add this policy to your Consul server Kubernetes auth role: - -```shell-session -$ vault write auth/kubernetes/role/consul-server \ - bound_service_account_names= \ - bound_service_account_namespaces= \ - policies=snapshot-agent-config-policy \ - ttl=1h -``` -Note that if you have other policies associated -with the Consul server service account that are not in the example, you need to include those as well. 
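For example, here is a sketch of the same role write with an additional policy attached. The `ca-policy` name is an assumption (it matches the server TLS example elsewhere in these docs); substitute whatever policies your Consul server service account already uses, and treat the angle-bracket values as placeholders.

```shell-session
# Sketch: keep every existing policy on the role, not just the snapshot agent one.
$ vault write auth/kubernetes/role/consul-server \
    bound_service_account_names=<Consul server service account> \
    bound_service_account_namespaces=<Consul installation namespace> \
    policies=snapshot-agent-config-policy,ca-policy \
    ttl=1h
```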
- -To find out the service account name of the Consul snapshot agent, -you can run the following `helm template` command with your Consul on Kubernetes values file: - -```shell-session -$ helm template --release-name ${RELEASE_NAME} -s templates/server-serviceaccount.yaml hashicorp/consul -f values.yaml -``` - -## Update Consul on Kubernetes Helm chart - -Now that you have configured Vault, you can configure the Consul Helm chart to -use the snapshot agent configuration in Vault: - - - -```yaml -global: - secretsBackend: - vault: - enabled: true - consulServerRole: consul-server -client: - snapshotAgent: - configSecret: - secretName: consul-kv/data/secret/snapshot-agent-config - secretKey: key -``` - - - -Note that `client.snapshotAgent.configSecret.secretName` is the path of the secret in Vault. -This should be the same path as the one you included in your Vault policy. -`client.snapshotAgent.configSecret.secretKey` is the key inside the secret data. This should be the same -as the key you passed when creating the snapshot agent config secret in Vault. diff --git a/website/content/docs/k8s/deployment-configurations/vault/data-integration/webhook-certs.mdx b/website/content/docs/k8s/deployment-configurations/vault/data-integration/webhook-certs.mdx deleted file mode 100644 index b615d31fabd7..000000000000 --- a/website/content/docs/k8s/deployment-configurations/vault/data-integration/webhook-certs.mdx +++ /dev/null @@ -1,235 +0,0 @@ ---- -layout: docs -page_title: Vault as the Webhook Certificate Provider for Consul Controller and Connect Inject on Kubernetes -description: >- - Configuring the Consul Helm chart to use TLS certificates issued by Vault for the Consul Controller and Connect Inject webhooks. ---- - -# Vault as the Controller and Connect Inject Webhook Certificate Provider on Kubernetes - -This topic describes how to configure the Consul Helm chart to use TLS certificates issued by Vault in the Consul controller and connect inject webhooks. - -## Overview -In a Consul Helm chart configuration that does not use Vault, `webhook-cert-manager` ensures that a valid certificate is updated to the `mutatingwebhookconfiguration` of either the controller or connect inject to ensure that Kubernetes can communicate with each of these services. - -When Vault is configured as the controller and connect inject Webhook Certificate Provider on Kubernetes: - - `webhook-cert-manager` is no longer deployed to the cluster. - - Controller and connect inject each get their webhook certificates from its own Vault PKI mount via the injected Vault Agent. - - Controller and connect inject each need to be configured with its own Vault Role that has necessary permissions to receive certificates from its respective PKI mount. - - Controller and connect inject each locally update its own `mutatingwebhookconfiguration` so that Kubernetes can relay events. - - Vault manages certificate rotation and rotates certificates to each webhook. - -To use Vault as the controller and connect inject Webhook Certificate Provider, we will need to modify the steps outlined in the [Data Integration](/consul/docs/k8s/deployment-configurations/vault/data-integration) section: - -These following steps will be repeated for each datacenter: - 1. Create a Vault policy that authorizes the desired level of access to the secret. - 1. (Added) Create Vault PKI roles for controller and connect inject that each establish the domains that each is allowed to issue certificates for. - 1. 
Create Vault Kubernetes auth roles that link the policy to each Consul on Kubernetes service account that requires access. - 1. Configure the Vault Kubernetes auth roles in the Consul on Kubernetes helm chart. - -## Prerequisites -Complete the following prerequisites prior to implementing the integration described in this topic: -1. Verify that you have completed the steps described in [Systems Integration](/consul/docs/k8s/deployment-configurations/vault/systems-integration) section of [Vault as a Secrets Backend](/consul/docs/k8s/deployment-configurations/vault). -1. You should be familiar with the [Data Integration Overview](/consul/docs/k8s/deployment-configurations/vault/data-integration) section of [Vault as a Secrets Backend](/consul/docs/k8s/deployment-configurations/vault). -1. Configure [Vault as the Server TLS Certificate Provider on Kubernetes](/consul/docs/k8s/deployment-configurations/vault/data-integration/server-tls) -1. Configure [Vault as the Service Mesh Certificate Provider on Kubernetes](/consul/docs/k8s/deployment-configurations/vault/data-integration/connect-ca) - -## Bootstrapping the PKI Engines -Issue the following commands to enable and configure the PKI Secrets Engine to serve TLS certificates for the controller and connect inject webhooks: - -* Mount the PKI Secrets Engine for each: - - ```shell-session - $ vault secrets enable -path=controller pki - ``` - - ```shell-session - $ vault secrets enable -path=connect-inject pki - ``` - -* Tune the engine mounts to enable longer TTL: - - ```shell-session - $ vault secrets tune -max-lease-ttl=87600h controller - ``` - - ```shell-session - $ vault secrets tune -max-lease-ttl=87600h connect-inject - ``` - -* Generate the root CA for each: - - ```shell-session - $ vault write -field=certificate controller/root/generate/internal \ - common_name="-controller-webhook" \ - ttl=87600h - ``` - - ```shell-session - $ vault write -field=certificate connect-inject/root/generate/internal \ - common_name="-connect-injector" \ - ttl=87600h - ``` -## Create Vault Policies -1. Create a policy that allows `["create", "update"]` access to the -[certificate issuing URL](/vault/api-docs/secret/pki) so Consul controller and connect inject can fetch a new certificate/key pair and provide it to the Kubernetes `mutatingwebhookconfiguration`. - - The path to the secret referenced in the `path` resource is the same value that you will configure in the `global.secretsBackend.vault.controller.tlsCert.secretName` and `global.secretsBackend.vault.connectInject.tlsCert.secretName` Helm configuration (refer to [Update Consul on Kubernetes Helm chart](#update-consul-on-kubernetes-helm-chart)). 
- - ```shell-session - $ vault policy write controller-tls-policy - <` for each use the following script as a template: - - ```shell-session - #!/bin/sh - - # NAME is set to either the value from `global.name` from your Consul K8s value file, or your $HELM_RELEASE_NAME-consul - export NAME=consulk8s - # NAMESPACE is where the Consul on Kubernetes is installed - export NAMESPACE=consul - # DATACENTER is the value of `global.datacenter` from your Helm values config file - export DATACENTER=dc1 - - echo allowed_domains_controller=\"${NAME}-controller-webhook,${NAME}-controller-webhook.${NAMESPACE},${NAME}-controller-webhook.${NAMESPACE}.svc,${NAME}-controller-webhook.${NAMESPACE}.svc.cluster.local\"" - - echo allowed_domains_connect_inject=\"${NAME}-connect-injector,${NAME}-connect-injector.${NAMESPACE},${NAME}-connect-injector.${NAMESPACE}.svc,${NAME}-connect-injector.${NAMESPACE}.svc.cluster.local\"" - ``` - -1. Finally, Kubernetes auth roles need to be created for controller and connect inject webhooks. - - The path to the secret referenced in the `path` resource is the same values that you will configure in the `global.secretsBackend.vault.controllerRole` and `global.secretsBackend.vault.connectInjectRole` Helm configuration (refer to [Update Consul on Kubernetes Helm chart](#update-consul-on-kubernetes-helm-chart)). - - Role for Consul controller webhooks: - - ```shell-session - $ vault write auth/kubernetes/role/controller-role \ - bound_service_account_names= \ - bound_service_account_namespaces= \ - policies=controller-ca-policy \ - ttl=1h - ``` - - To find out the service account name of the Consul controller, - you can run: - - ```shell-session - $ helm template --release-name ${RELEASE_NAME} --show-only templates/controller-serviceaccount.yaml hashicorp/consul -f values.yaml - ``` - - Role for Consul connect inject webhooks: - - ```shell-session - $ vault write auth/kubernetes/role/connect-inject-role \ - bound_service_account_names= \ - bound_service_account_namespaces= \ - policies=connect-inject-ca-policy \ - ttl=1h - ``` - - To find out the service account name of the Consul connect inject, use the command below. - ```shell-session - $ helm template --release-name ${RELEASE_NAME} --show-only templates/connect-inject-serviceaccount.yaml hashicorp/consul -f values.yaml - ``` - -## Update Consul on Kubernetes Helm chart - -Now that we've configured Vault, you can configure the Consul Helm chart to -use the Server TLS certificates from Vault: - - - -```yaml -global: - secretsBackend: - vault: - enabled: true - consulServerRole: "consul-server" - consulClientRole: "consul-client" - consulCARole: "consul-ca" - controllerRole: "controller-role" - connectInjectRole: "connect-inject-role" - connectInject: - caCert: - secretName: "connect-inject/cert/ca" - tlsCert: - secretName: "connect-inject/issue/connect-inject-role" - tls: - enabled: true - enableAutoEncrypt: true - caCert: - secretName: "pki/cert/ca" -server: - serverCert: - secretName: "pki/issue/consul-server" - extraVolumes: - - type: "secret" - name: - load: "false" -connectInject: - enabled: true -``` - - - -The `vaultCASecret` is the Kubernetes secret that stores the CA Certificate that is used for Vault communication. To provide a CA, you first need to create a Kubernetes secret containing the CA. 
For example, you may create a secret with the Vault CA like so: - -```shell-session -$ kubectl create secret generic vault-ca --from-file vault.ca=/path/to/your/vault/ -``` diff --git a/website/content/docs/k8s/deployment-configurations/vault/index.mdx b/website/content/docs/k8s/deployment-configurations/vault/index.mdx deleted file mode 100644 index a3c1f24b7d80..000000000000 --- a/website/content/docs/k8s/deployment-configurations/vault/index.mdx +++ /dev/null @@ -1,54 +0,0 @@ ---- -layout: docs -page_title: Vault as the Secrets Backend Overview -description: >- - Using Vault as the secrets backend for Consul on Kubernetes. ---- - -# Vault as the Secrets Backend Overview - -By default, Consul Helm chart will expect that any credentials it needs are stored as Kubernetes secrets. -As of Consul 1.11 and Consul Helm chart v0.38.0, we integrate more natively with Vault making it easier -to use Consul Helm chart with Vault as the secrets storage backend. - -## Secrets Overview - -By default, Consul on Kubernetes leverages Kubernetes secrets which are base64 encoded and unencrypted. In addition, the following limitations exist with managing sensitive data within Kubernetes secrets: - -- There are no lease or time-to-live properties associated with these secrets. -- Kubernetes can only manage resources, such as secrets, within a cluster boundary. If you have sets of clusters, the resources across them need to be managed separately. - -By leveraging Vault as a secrets backend for Consul on Kubernetes, you can now manage and store Consul related secrets within a centralized Vault cluster to use across one or many Consul on Kubernetes datacenters. - -### Secrets stored in the Vault KV Secrets Engine - -The following secrets can be stored in Vault KV secrets engine, which is meant to handle arbitrary secrets: -- ACL Bootstrap token -- ACL Partition token -- ACL Replication token -- Enterprise license -- Gossip encryption key -- Snapshot Agent config - - -### Secrets generated and managed by the Vault PKI Engine - -The following TLS certificates and keys can be generated and managed by the Vault PKI Engine, which is meant to handle things like certificate expiration and rotation: -- Server TLS credentials -- Service Mesh and Consul client TLS credentials - -## Requirements - -1. Vault 1.9+ and Vault-k8s 0.14+ is required. -1. Vault must be installed and accessible to the Consul on Kubernetes installation. -1. `global.tls.enableAutoencrypt=true` is required if TLS is enabled for the Consul installation when using the Vault secrets backend. -1. The Vault installation must have been initialized, unsealed and the KV2 and PKI secrets engines and the Kubernetes Auth Method enabled. -## Next Steps - -The Vault integration with Consul on Kubernetes has two aspects or phases: -- [Systems Integration](/consul/docs/k8s/deployment-configurations/vault/systems-integration) - Configure Vault and Consul on Kubernetes systems to leverage Vault as the secrets store. -- [Data Integration](/consul/docs/k8s/deployment-configurations/vault/data-integration) - Configure specific secrets to be stored and -retrieved from Vault for use with Consul on Kubernetes. - -As a next step, please proceed to [Systems Integration](/consul/docs/k8s/deployment-configurations/vault/systems-integration) overview to understand how to first setup Vault and Consul on Kubernetes to leverage Vault as a secrets backend. 
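For reference, the requirements listed above translate into a handful of Vault commands on an already initialized and unsealed cluster. The mount paths shown here are the ones used in the Systems Integration steps that follow, and `dc1` stands in for your datacenter name; adjust both to fit your environment:

```shell-session
$ vault secrets enable -path=consul-kv kv-v2
$ vault secrets enable pki
$ vault auth enable -path=kubernetes-dc1 kubernetes
```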
- diff --git a/website/content/docs/k8s/deployment-configurations/vault/systems-integration.mdx b/website/content/docs/k8s/deployment-configurations/vault/systems-integration.mdx deleted file mode 100644 index a35a9969e24a..000000000000 --- a/website/content/docs/k8s/deployment-configurations/vault/systems-integration.mdx +++ /dev/null @@ -1,219 +0,0 @@ ---- -layout: docs -page_title: Vault as the Secrets Backend Systems Integration Overview -description: >- - Overview of the systems integration aspects to using Vault as the secrets backend for Consul on Kubernetes. ---- - -# Vault as the Secrets Backend - Systems Integration - -## Overview -Integrating Vault with Consul on Kubernetes includes a one-time setup on Vault and setting up the secrets backend for each Consul datacenter via Helm. - -Complete the following steps once: - - Enabling Vault KV Secrets Engine - Version 2 to store arbitrary secrets - - Enabling Vault PKI Engine if you are choosing to store and manage either [Consul Server TLS credentials](/consul/docs/k8s/deployment-configurations/vault/data-integration/server-tls) or [Service Mesh and Consul client TLS credentials](/consul/docs/k8s/deployment-configurations/vault/data-integration/connect-ca) - -Repeat the following steps for each datacenter in the cluster: - - Installing the Vault Injector within the Consul datacenter installation - - Configuring a Kubernetes Auth Method in Vault to authenticate and authorize operations from the Consul datacenter - - Enable Vault as the Secrets Backend in the Consul datacenter - -Please read [Run Vault on Kubernetes](/vault/docs/platform/k8s/helm/run) if instructions on setting up a Vault cluster are needed. - -## Vault KV Secrets Engine - Version 2 - -The following secrets can be stored in Vault KV secrets engine, which is meant to handle arbitrary secrets: -- ACL Bootstrap token ([`global.acls.bootstrapToken`](/consul/docs/k8s/helm#v-global-acls-bootstraptoken)) -- ACL Partition token ([`global.acls.partitionToken`](/consul/docs/k8s/helm#v-global-acls-partitiontoken)) -- ACL Replication token ([`global.acls.replicationToken`](/consul/docs/k8s/helm#v-global-acls-replicationtoken)) -- Gossip encryption key ([`global.gossipEncryption`](/consul/docs/k8s/helm#v-global-gossipencryption)) -- Enterprise license ([`global.enterpriseLicense`](/consul/docs/k8s/helm#v-global-enterpriselicense)) -- Snapshot Agent config ([`client.snapshotAgent.configSecret`](/consul/docs/k8s/helm#v-client-snapshotagent-configsecret)) - -In order to store any of these secrets, we must enable the [Vault KV secrets engine - Version 2](/vault/docs/secrets/kv/kv-v2). - -```shell-session -$ vault secrets enable -path=consul-kv kv-v2 -``` - -## Vault PKI Engine - -The Vault PKI Engine must be enabled in order to leverage Vault for issuing Consul Server TLS certificates. More details for configuring the PKI Engine is found in [Bootstrapping the PKI Engine](/consul/docs/k8s/deployment-configurations/vault/data-integration/server-tls#bootstrapping-the-pki-engine) under the Server TLS section. - -```shell-session -$ vault secrets enable pki -``` - -## Set Environment Variables - -Before installing the Vault Injector and configuring the Vault Kubernetes Auth Method, some environment variables need to be set to better ensure consistent mapping between Vault and Consul on Kubernetes. - - - DATACENTER - - We recommend using the value for `global.datacenter` in your Consul Helm values file for this variable. 
- ```shell-session - $ export DATACENTER=dc1 - ``` - - - VAULT_AUTH_METHOD_NAME - - We recommend using a concatenation of a `kubernetes-` prefix (to denote the auth method type) with the `DATACENTER` environment variable for this variable. - ```shell-session - $ export VAULT_AUTH_METHOD_NAME=kubernetes-${DATACENTER} - ``` - - - VAULT_SERVER_HOST - - We recommend using the external IP address of your Vault cluster for this variable. - - If Vault is installed in a Kubernetes cluster, get the external IP or DNS name of the Vault server load balancer. - - - - On EKS, you can get the hostname of the Vault server's load balancer with the following command: - - ```shell-session - $ export VAULT_SERVER_HOST=$(kubectl get svc vault-dc1 -o jsonpath='{.status.loadBalancer.ingress[0].hostname}') - ``` - - - - - - On GKE, you can get the IP address of the Vault server's load balancer with the following command: - - ```shell-session - $ export VAULT_SERVER_HOST=$(kubectl get svc vault-dc1 -o jsonpath='{.status.loadBalancer.ingress[0].ip}') - ``` - - - - - - On AKS, you can get the IP address of the Vault server's load balancer with the following command: - - ```shell-session - $ export VAULT_SERVER_HOST=$(kubectl get svc vault-dc1 --output jsonpath='{.status.loadBalancer.ingress[0].ip}') - ``` - - - - - If Vault is not running on Kubernetes, utilize the `api_addr` as defined in the Vault [High Availability Parameters](/vault/docs/configuration#high-availability-parameters) configuration: - ```shell-session - $ export VAULT_SERVER_HOST= - ``` - - - VAULT_AUTH_METHOD_NAME - - We recommend connecting to port 8200 of the Vault server. - ```shell-session - $ export VAULT_ADDR=http://${VAULT_SERVER_HOST}:8200 - ``` - - If your vault installation is current exposed using SSL, this address will need to use `https` instead of `http`. You will also need to setup the [`VAULT_CACERT`](/vault/docs/commands#vault_cacert) environment variable. - - - VAULT_TOKEN - - We recommend using your allocated Vault token as the value for this variable. If running Vault in dev mode, this can be set to to `root`. - ```shell-session - $ export VAULT_TOKEN= - ``` - -## Install Vault Injector in Consul k8s cluster - -A minimal valid installation of Vault Kubernetes must include the Agent Injector which is utilized for accessing secrets from Vault. Vault servers could be deployed external to Vault on Kubernetes with the [`injector.externalVaultAddr`](/vault/docs/platform/k8s/helm/configuration#externalvaultaddr) value in the Vault Helm Configuration. - -```shell-session -$ cat <> vault-injector.yaml -# vault-injector.yaml -global: - enabled: true - externalVaultAddr: ${VAULT_ADDR} -server: - enabled: false -injector: - enabled: true - authPath: auth/${VAULT_AUTH_METHOD_NAME} -EOF -``` - -Issue the Helm `install` command to install the Vault agent injector using the HashiCorp Vault Helm chart. - -```shell-session -$ helm install vault-${DATACENTER} -f vault-injector.yaml hashicorp/vault --wait -``` - -## Configure the Kubernetes Auth Method in Vault - -Ensure that the Vault Kubernetes Auth method is enabled. - -```shell-session -$ vault auth enable -path=kubernetes-${DATACENTER} kubernetes -``` - -After enabling the Kubernetes auth method, in Vault, ensure that you have configured the Kubernetes Auth method properly as described in [Kubernetes Auth Method Configuration](/vault/docs/auth/kubernetes#configuration). - -First, while targeting your Consul cluster, get the externally reachable address of the Consul Kubernetes cluster. 
- -```shell-session -$ export KUBE_API_URL=$(kubectl config view -o jsonpath="{.clusters[?(@.name == \"$(kubectl config current-context)\")].cluster.server}") -``` - -Next, you will configure the Vault Kubernetes Auth Method for the datacenter. You will need to provide it with: -- `token_reviewer_jwt` - this a JWT token from the Consul datacenter cluster that the Vault Kubernetes Auth Method will use to query the Consul datacenter Kubernetes API when services in the Consul datacenter request data from Vault. -- `kubernetes_host` - this is the URL of the Consul datacenter's Kubernetes API that Vault will query to authenticate the service account of an incoming request from a Consul data center kubernetes service. -- `kubernetes_ca_cert` - this is the CA certification that is currently being used by the Consul datacenter Kubernetes cluster. - -```shell-session -$ vault write auth/kubernetes/config \ - token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \ - kubernetes_host="https://${KUBE_API_URL}:443" \ - kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt -``` - -## Update Vault Helm chart -Finally, you will configure the Consul on Kubernetes helm chart for the datacenter to expect to receive the following values (if you have configured them) to be retrieved from Vault: -- ACL Bootstrap token ([`global.acls.bootstrapToken`](/consul/docs/k8s/helm#v-global-acls-bootstraptoken)) -- ACL Partition token ([`global.acls.partitionToken`](/consul/docs/k8s/helm#v-global-acls-partitiontoken)) -- ACL Replication token ([`global.acls.replicationToken`](/consul/docs/k8s/helm#v-global-acls-replicationtoken)) -- Enterprise license ([`global.enterpriseLicense`](/consul/docs/k8s/helm#v-global-enterpriselicense)) -- Gossip encryption key ([`global.gossipEncryption`](/consul/docs/k8s/helm#v-global-gossipencryption)) -- Snapshot Agent config ([`client.snapshotAgent.configSecret`](/consul/docs/k8s/helm#v-client-snapshotagent-configsecret)) -- TLS CA certificates ([`global.tls.caCert`](/consul/docs/k8s/helm#v-global-tls-cacert)) -- Server TLS certificates ([`server.serverCert`](/consul/docs/k8s/helm#v-server-servercert)) - - - -```yaml -global: - secretsBackend: - vault: - enabled: true -``` - - - -## Next Steps - -As a next step, please proceed to Vault integration with Consul on Kubernetes' [Data Integration](/consul/docs/k8s/deployment-configurations/vault/data-integration). - -## Troubleshooting - -The Vault integration with Consul on Kubernetes makes use of the Vault Agent Injectors. Kubernetes annotations are added to the -deployments of the Consul components which cause the Vault Agent Injector to be added as an init-container that will then attach -Vault secrets to Consul's pods at startup. Additionally the Vault Agent sidecar is added to the Consul component pods which -is responsible for synchronizing and reissuing secrets at runtime. -As a result of these additional sidecar containers the typical location for logging is expanded in the Consul components. - -As a general rule the best way to troubleshoot startup issues for your Consul installation when using the Vault integration -is to establish if the `vault-agent-init` container has completed or not via `kubectl logs -f -c vault-agent-int` -and checking to see if the secrets have completed rendering. 
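For example, to follow the init container logs on a server pod (the pod name below is a placeholder; use the name of the pod you are troubleshooting):

```shell-session
$ kubectl logs --follow consul-server-0 --container vault-agent-init
```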
-* If the secrets are not properly rendered the underlying problem will be logged in `vault-agent-init` init-container - and generally is related to the Vault Kube Auth Role not having the correct policies for the specific secret - e.g. `global.secretsBackend.vault.consulServerRole` not having the correct policies for TLS. -* If the secrets are rendered and the `vault-agent-init` container has completed AND the Consul component has not become `Ready`, - this generally points to an issue with Consul being unable to utilize the Vault secret. This can occur if, for example, the Vault Role - created for the PKI engine does not have the correct `alt_names` or otherwise is not properly configured. The best logs for this - circumstance are the Consul container logs: `kubectl logs -f -c consul`. diff --git a/website/content/docs/k8s/deployment-configurations/vault/wan-federation.mdx b/website/content/docs/k8s/deployment-configurations/vault/wan-federation.mdx deleted file mode 100644 index 3afcbb7072c4..000000000000 --- a/website/content/docs/k8s/deployment-configurations/vault/wan-federation.mdx +++ /dev/null @@ -1,693 +0,0 @@ ---- -layout: docs -page_title: Federation Between Kubernetes Clusters with Vault as Secrets Backend -description: >- - Federating multiple Kubernetes clusters using Vault as secrets backend. ---- - -# Federation Between Kubernetes Clusters with Vault as Secrets Backend - -~> **Note**: This topic requires familiarity with [Mesh Gateways](/consul/docs/connect/gateways/mesh-gateway/service-to-service-traffic-wan-datacenters), [WAN Federation Via Mesh Gateways](/consul/docs/connect/gateways/mesh-gateway/wan-federation-via-mesh-gateways). - -This page describes how you can federate multiple Kubernetes clusters using Vault as the secrets backend. See the [Multi-Cluster Overview](/consul/docs/k8s/deployment-configurations/multi-cluster) for more information on use cases and how it works. - -## Differences Between Using Kubernetes Secrets vs. Vault -The [Federation Between Kubernetes Clusters](/consul/docs/k8s/deployment-configurations/multi-cluster/kubernetes) page provides an overview of WAN Federation using Mesh Gateways with Kubernetes secrets as the secret backend. When using Vault as the secrets backend, there are different systems and data integration configuration that will be explained in the [Usage](#usage) section of this page. The other main difference is that when using Vault, there is no need for you to export and import a [Federation Secret](/consul/docs/k8s/deployment-configurations/multi-cluster/kubernetes#federation-secret) in each datacenter. - -## Usage - -The expected use case is to create WAN Federation on Kubernetes clusters. The following procedure results in a WAN Federation with Vault as the secrets backend between two clusters, dc1 and dc2. dc1 acts as the primary Consul cluster and also contains the Vault server installation. dc2 is the secondary Consul cluster. - -![Consul on Kubernetes with Vault as the Secrets Backend](/img/k8s/consul-vault-wan-federation-topology.svg 'Consul on Kubernetes with Vault as the Secrets Backend') - -The Vault Injectors in each cluster will ensure that every pod in cluster has a Vault agent inject into the pod. - -![Vault Injectors inject Vault agents into pods](/img/k8s/consul-vault-wan-federation-vault-injector.svg 'Vault Injectors inject Vault agents into pods') - -The Vault Agents on each Consul pod will communicate directly with Vault on its externally accessible endpoint. 
Consul pods are also configured with Vault annotations that configure the secrets that the pod needs as well as the path that the Vault agent should locally store those secrets. - -![Vault agent and server communication](/img/k8s/consul-vault-wan-federation-vault-communication.svg 'Vault agent and server communication') - -The two data centers will federated using mesh gateways. This communication topology is also described in the [WAN Federation Via Mesh Gateways](/consul/docs/k8s/deployment-configurations/multi-cluster#wan-federation-via-mesh-gateways) section of [Multi-Cluster Federation Overview](/consul/docs/k8s/deployment-configurations/multi-cluster). - -![Mesh Federation via Mesh Gateways](/img/k8s/consul-vault-wan-federation-mesh-communication.svg 'Mesh Federation via Mesh Gateways') - -### Install Vault - -In this setup, you will deploy Vault server in the primary datacenter (dc1) Kubernetes cluster, which is also the primary Consul datacenter. You will configure your Vault Helm installation in the secondary datacenter (dc2) Kubernetes cluster to use it as an external server. This way there will be a single vault server cluster that will be used by both Consul datacenters. - -~> **Note**: For demonstration purposes, the following example deploys a Vault server in dev mode. Do not use dev mode for production installations. Refer to the [Vault Deployment Guide](/vault/tutorials/day-one-raft/raft-deployment-guide) for guidance on how to install Vault in a production setting. - -1. Change your current Kubernetes context to target the primary datacenter (dc1). - - ```shell-session - $ kubectl config use-context - ``` - -1. Now, use the values files below for your Helm install. - - - - ```yaml - server: - dev: - enabled: true - service: - enabled: true - type: LoadBalancer - ui: - enabled: true - ``` - - - ```shell-session - $ helm install vault-dc1 --values vault-dc1.yaml hashicorp/vault --wait - ``` - -### Configuring your local environment - -1. Install Consul locally so that you can generate the gossip key. Please see the [Precompiled Binaries](/consul/docs/install#precompiled-binaries) section of the [Install Consul page](/consul/docs/install#precompiled-binaries). - -1. Set the VAULT_TOKEN with a default value. - - ```shell-session - $ export VAULT_ADDR=root - ``` - -1. Get the external IP or DNS name of the Vault server's load balancer. - - - - On EKS, you can get the hostname of the Vault server's load balancer with the following command: - - ```shell-session - $ export VAULT_SERVER_HOST=$(kubectl get svc vault-dc1 -o jsonpath='{.status.loadBalancer.ingress[0].hostname}') - ``` - - - - - - On GKE, you can get the IP address of the Vault server's load balancer with the following command: - - ```shell-session - $ export VAULT_SERVER_HOST=$(kubectl get svc vault-dc1 -o jsonpath='{.status.loadBalancer.ingress[0].ip}') - ``` - - - - - - On AKS, you can get the IP address of the Vault server's load balancer with the following command: - - ```shell-session - $ export VAULT_SERVER_HOST=$(kubectl get svc vault-dc1 --output jsonpath='{.status.loadBalancer.ingress[0].ip}') - ``` - - - - -1. Set the VAULT_ADDR environment variable. - - ```shell-session - $ export VAULT_ADDR=http://${VAULT_SERVER_HOST}:8200 - ``` - -## Systems Integration -There are two main procedures to enable Vault as the service mesh certificate provider in Kubernetes. - -Complete the following steps once: - 1. Enabling Vault KV Secrets Engine - Version 2. - 1. Enabling Vault PKI Engine. 
- -Repeat the following steps for each datacenter in the cluster: - 1. Installing the Vault Injector within the Consul datacenter installation - 1. Configuring a Kubernetes Auth Method in Vault to authenticate and authorize operations from the Consul datacenter - 1. Enable Vault as the Secrets Backend in the Consul datacenter - -### Configure Vault Secrets engines -1. Enable [Vault KV secrets engine - Version 2](/vault/docs/secrets/kv/kv-v2) in order to store the [Gossip Encryption Key](/consul/docs/k8s/helm#v-global-acls-replicationtoken) and the ACL Replication token ([`global.acls.replicationToken`](/consul/docs/k8s/helm#v-global-acls-replicationtoken)). - - ```shell-session - $ vault secrets enable -path=consul-kv kv-v2 - ``` - -1. Enable Vault PKI Engine in order to leverage Vault for issuing Consul Server TLS certificates. - - ```shell-session - $ vault secrets enable pki - ``` - - ```shell-session - $ vault secrets tune -max-lease-ttl=87600h pki - ``` - -### Primary Datacenter (dc1) -1. Install the Vault Injector in your Consul Kubernetes cluster (dc1), which is used for accessing secrets from Vault. - - -> **Note**: In the primary datacenter (dc1), you will not have to configure `injector.externalvaultaddr` value because the Vault server is in the same primary datacenter (dc1) cluster. - - - - ```yaml - server: - dev: - enabled: true - service: - enabled: true - type: LoadBalancer - injector: - enabled: true - authPath: auth/kubernetes-dc1 - ui: - enabled: true - ``` - - - Next, install Vault in the Kubernetes cluster. - - ```shell-session - $ helm upgrade vault-dc1 --values vault-dc1.yaml hashicorp/vault --wait - ``` - -1. Configure the Kubernetes Auth Method in Vault for the primary datacenter (dc1). - - ```shell-session - $ vault auth enable -path=kubernetes-dc1 kubernetes - ``` - - Because Consul is in the same datacenter cluster as Vault, the Vault Auth Method can use its own CA Cert and JWT to authenticate Consul dc1 service account requests. Therefore, you do not need to set `token_reviewer` and `kubernetes_ca_cert` on the dc1 Kubernetes Auth Method. - -1. Configure Auth Method with Kubernetes API host - - ```shell-session - $ vault write auth/kubernetes-dc1/config kubernetes_host=https://kubernetes.default.svc - ``` - -1. Enable Vault as the secrets backend in the primary datacenter (dc1). However, you will not yet apply the Helm install command. You will issue the Helm upgrade command after the [Data Integration](/consul/docs/k8s/deployment-configurations/vault/wan-federation#setup-per-consul-datacenter-1) section. - - - - ```yaml - global: - secretsBackend: - vault: - enabled: true - ``` - - - - -### Secondary Datacenter (dc2) -1. Install the Vault Injector in the secondary datacenter (dc2). - - In the secondary datacenter (dc2), you will configure the `externalvaultaddr` value point to the external address of the Vault server in the primary datacenter (dc1). - - Change your Kubernetes context to target the secondary datacenter (dc2): - - ```shell-session - $ kubectl config use-context - ``` - - - - ```yaml - server: - enabled: false - injector: - enabled: true - externalVaultAddr: ${VAULT_ADDR} - authPath: auth/kubernetes-dc2 - ``` - - - - Next, install Vault in the Kubernetes cluster. - ```shell-session - $ helm install vault-dc2 --values vault-dc2.yaml hashicorp/vault --wait - ``` - -1. Configure the Kubernetes Auth Method in Vault for the datacenter - - ```shell-session - $ vault auth enable -path=kubernetes-dc2 kubernetes - ``` - -1. 
Create a service account with access to the Kubernetes API in the secondary datacenter (dc2). For the secondary datacenter (dc2) auth method, you first need to create a service account that allows the Vault server in the primary datacenter (dc1) cluster to talk to the Kubernetes API in the secondary datacenter (dc2) cluster. - - ```shell-session - $ cat <> auth-method-serviceaccount.yaml - # auth-method.yaml - apiVersion: rbac.authorization.k8s.io/v1 - kind: ClusterRoleBinding - metadata: - name: vault-dc2-auth-method - roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: system:auth-delegator - subjects: - - kind: ServiceAccount - name: vault-dc2-auth-method - namespace: default - --- - apiVersion: v1 - kind: ServiceAccount - metadata: - name: vault-dc2-auth-method - namespace: default - EOF - ``` - - ```shell-session - $ kubectl apply --values auth-method-serviceaccount.yaml - ``` - -1. Next, you will need to get the token and CA cert from that service account secret. - - ```shell-session - $ export K8S_DC2_CA_CERT="$(kubectl get secret `kubectl get serviceaccounts vault-dc2-auth-method --output jsonpath='{.secrets[0].name}'` --output jsonpath='{.data.ca\.crt}' | base64 --decode)" - ``` - - ```shell-session - $ export K8S_DC2_JWT_TOKEN="$(kubectl get secret `kubectl get serviceaccounts vault-dc2-auth-method --output jsonpath='{.secrets[0].name}'` --output jsonpath='{.data.token}' | base64 --decode)" - ``` - -1. Configure the auth method with the JWT token of service account. First, get the externally reachable address of the secondary Consul datacenter (dc2) in the secondary Kubernetes cluster. Then set `kubernetes_host` in the auth method configuration. - - ```shell-session - $ export KUBE_API_URL_DC2=$(kubectl config view --output jsonpath="{.clusters[?(@.name == \"$(kubectl config current-context)\")].cluster.server}") - ``` - - ```shell-session - $ vault write auth/kubernetes-dc2/config \ - kubernetes_host="${KUBE_API_URL_DC2}" \ - token_reviewer_jwt="${K8S_DC2_JWT_TOKEN}" \ - kubernetes_ca_cert="${K8S_DC2_CA_CERT}" - ``` - -1. Enable Vault as the secrets backend in the secondary Consul datacenter (dc2). However, you will not yet apply the Helm install command. You will issue the Helm upgrade command after the [Data Integration](/consul/docs/k8s/deployment-configurations/vault/wan-federation#setup-per-consul-datacenter-1) section. - - - - ```yaml - global: - secretsBackend: - vault: - enabled: true - ``` - - - -## Data Integration -There are two main procedures for using Vault as the service mesh certificate provider in Kubernetes. - -Complete the following steps once: - 1. Store the secrets in Vault. - 1. Create a Vault policy that authorizes the desired level of access to the secrets. - -Repeat the following steps for each datacenter in the cluster: - 1. Create Vault Kubernetes auth roles that link the policy to each Consul on Kubernetes service account that requires access. - 1. Update the Consul on Kubernetes helm chart. - -### Secrets and Policies -1. Store the ACL bootstrap and replication tokens, gossip encryption key, and root CA certificate secrets in Vault. 
- - ```shell-session - $ vault kv put consul-kv/secret/gossip key="$(consul keygen)" - ``` - - ```shell-session - $ vault kv put consul-kv/secret/bootstrap token="$(uuidgen | tr '[:upper:]' '[:lower:]')" - ``` - - ```shell-session - $ vault kv put consul-kv/secret/replication token="$(uuidgen | tr '[:upper:]' '[:lower:]')" - ``` - ```shell-session - $ vault write pki/root/generate/internal common_name="Consul CA" ttl=87600h - ``` - -1. Create Vault policies that authorize the desired level of access to the secrets. - - ```shell-session - $ vault policy write gossip - < - ``` -### Primary Datacenter (dc1) -1. Create Server TLS and Service Mesh Cert Policies - - ```shell-session - $ vault policy write consul-cert-dc1 - < - - ```yaml - global: - datacenter: "dc1" - name: consul - secretsBackend: - vault: - enabled: true - consulServerRole: consul-server - consulClientRole: consul-client - consulCARole: consul-ca - manageSystemACLsRole: server-acl-init - connectCA: - address: http://vault-dc1.default:8200 - rootPKIPath: connect_root/ - intermediatePKIPath: dc1/connect_inter/ - authMethodPath: kubernetes-dc1 - tls: - enabled: true - enableAutoEncrypt: true - caCert: - secretName: pki/cert/ca - federation: - enabled: true - createFederationSecret: false - acls: - manageSystemACLs: true - createReplicationToken: true - bootstrapToken: - secretName: consul-kv/data/secret/bootstrap - secretKey: token - replicationToken: - secretName: consul-kv/data/secret/replication - secretKey: token - gossipEncryption: - secretName: consul-kv/data/secret/gossip - secretKey: key - server: - replicas: 1 - serverCert: - secretName: "pki/issue/consul-cert-dc1" - connectInject: - replicas: 1 - enabled: true - meshGateway: - enabled: true - replicas: 1 - ``` - - - - Next, install Consul in the primary Kubernetes cluster (dc1). - ```shell-session - $ helm install consul-dc1 --values consul-dc1.yaml hashicorp/consul - ``` - -### Pre-installation for Secondary Datacenter (dc2) -1. Update the Consul on Kubernetes Helm chart. For secondary datacenter (dc2), you need to get the address of the mesh gateway from the _primary datacenter (dc1)_ cluster. - - Keep your Kubernetes context targeting dc1 and set the `MESH_GW_HOST` environment variable that you will use in the Consul Helm chart for secondary datacenter (dc2). - - ```shell-session - $ kubectl config use-context - ``` - - Next, get mesh gateway address: - - - - - ```shell-session - $ export MESH_GW_HOST=$(kubectl get svc consul-mesh-gateway --output jsonpath='{.status.loadBalancer.ingress[0].hostname}') - ``` - - - - - - ```shell-session - $ export MESH_GW_HOST=$(kubectl get svc consul-mesh-gateway --output jsonpath='{.status.loadBalancer.ingress[0].ip}') - ``` - - - - - - ```shell-session - $ export MESH_GW_HOST=$(kubectl get svc consul-mesh-gateway --output jsonpath='{.status.loadBalancer.ingress[0].ip}') - ``` - - - - -1. Change your Kubernetes context to target the primary datacenter (dc2): - ```shell-session - $ kubectl config use-context - ``` -### Secondary Datacenter (dc2) - -1. Create Server TLS and Service Mesh Cert Policies - - ```shell-session - $ vault policy write consul-cert-dc2 - < **Note**: To configure Vault as the service mesh (connect) CA in secondary datacenters, you need to make sure that the Root CA path is the same. The intermediate path is different for each datacenter. 
In the `connectCA` Helm configuration for a secondary datacenter, you can specify a `intermediatePKIPath` that is, for example, prefixed with the datacenter for which this configuration is intended (e.g. `dc2/connect-intermediate`). - - - - ```yaml - global: - datacenter: "dc2" - name: consul - secretsBackend: - vault: - enabled: true - consulServerRole: consul-server - consulClientRole: consul-client - consulCARole: consul-ca - manageSystemACLsRole: server-acl-init - connectCA: - address: ${VAULT_ADDR} - rootPKIPath: connect_root/ - intermediatePKIPath: dc2/connect_inter/ - authMethodPath: kubernetes-dc2 - tls: - enabled: true - enableAutoEncrypt: true - caCert: - secretName: "pki/cert/ca" - federation: - enabled: true - primaryDatacenter: dc1 - k8sAuthMethodHost: ${KUBE_API_URL_DC2} - primaryGateways: - - ${MESH_GW_HOST}:443 - acls: - manageSystemACLs: true - replicationToken: - secretName: consul-kv/data/secret/replication - secretKey: token - gossipEncryption: - secretName: consul-kv/data/secret/gossip - secretKey: key - server: - replicas: 1 - serverCert: - secretName: "pki/issue/consul-cert-dc2" - connectInject: - replicas: 1 - enabled: true - controller: - enabled: true - meshGateway: - enabled: true - replicas: 1 - ``` - - - - Next, install Consul in the consul Kubernetes cluster (dc2). - - ```shell-session - $ helm install consul-dc2 -f consul-dc2.yaml hashicorp/consul - ``` - -## Next steps -You have completed the process of federating the secondary datacenter (dc2) with the primary datacenter (dc1) using Vault as the Secrets backend. To validate that everything is configured properly, please confirm that all pods within both datacenters are in a running state. - -For additional information about specific Consul secrets that you can store in Vault, refer to [Data Integration](/consul/docs/k8s/deployment-configurations/vault/data-integration) in the [Vault as a Secrets Backend](/consul/docs/k8s/deployment-configurations/vault) documentation. diff --git a/website/content/docs/k8s/dns/enable.mdx b/website/content/docs/k8s/dns/enable.mdx deleted file mode 100644 index 708ca9241c26..000000000000 --- a/website/content/docs/k8s/dns/enable.mdx +++ /dev/null @@ -1,263 +0,0 @@ ---- -layout: docs -page_title: Resolve Consul DNS requests in Kubernetes -description: >- - Use a k8s ConfigMap to configure KubeDNS or CoreDNS so that you can use Consul's `.service.consul` syntax for queries and other DNS requests. In Kubernetes, this process uses either stub-domain or proxy configuration. ---- - -# Resolve Consul DNS requests in Kubernetes - -This topic describes how to configure Consul DNS in -Kubernetes using a -[stub-domain configuration](https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#configure-stub-domain-and-upstream-dns-servers) -if using KubeDNS or a [proxy configuration](https://coredns.io/plugins/forward/) if using CoreDNS. - -Once configured, DNS requests in the form `.service.consul` will -resolve for services in Consul. This works from all Kubernetes namespaces. - --> **Note:** If you want requests to just `` (without the `.service.consul`) to resolve, then you'll need -to turn on [Consul to Kubernetes Service Sync](/consul/docs/k8s/service-sync#consul-to-kubernetes). - -## Consul DNS Cluster IP - -To configure KubeDNS or CoreDNS you'll first need the `ClusterIP` of the Consul -DNS service created by the [Helm chart](/consul/docs/k8s/helm). - -The default name of the Consul DNS service will be `consul-dns`. 
Use -that name to get the `ClusterIP`: - -```shell-session -$ kubectl get svc consul-dns --output jsonpath='{.spec.clusterIP}' -10.35.240.78% -``` - -For this installation the `ClusterIP` is `10.35.240.78`. - --> **Note:** If you've installed Consul using a different helm release name than `consul` -then the DNS service name will be `-consul-dns`. - -## KubeDNS - -If using KubeDNS, you need to create a `ConfigMap` that tells KubeDNS -to use the Consul DNS service to resolve all domains ending with `.consul`: - -Export the Consul DNS IP as an environment variable: - -```bash -export CONSUL_DNS_IP=10.35.240.78 -``` - -And create the `ConfigMap`: - -```shell-session -$ cat < **Note:** The `stubDomain` can only point to a static IP. If the cluster IP -of the Consul DNS service changes, then it must be updated in the config map to -match the new service IP for this to continue -working. This can happen if the service is deleted and recreated, such as -in full cluster rebuilds. - --> **Note:** If using a different zone than `.consul`, change the stub domain to -that zone. - -Now skip ahead to the [Verifying DNS Works](#verifying-dns-works) section. - -## CoreDNS Configuration - -If using CoreDNS instead of KubeDNS in your Kubernetes cluster, you will -need to update your existing `coredns` ConfigMap in the `kube-system` namespace to -include a `forward` definition for `consul` that points to the cluster IP of the -Consul DNS service. - -Edit the `ConfigMap`: - -```shell-session -$ kubectl edit configmap coredns --namespace kube-system -``` - -And add the `consul` block below the default `.:53` block and replace -`` with the DNS Service's IP address you -found previously. - -```diff -apiVersion: v1 -kind: ConfigMap -metadata: - labels: - addonmanager.kubernetes.io/mode: EnsureExists - name: coredns - namespace: kube-system -data: - Corefile: | - .:53 { - - } -+ consul { -+ errors -+ cache 30 -+ forward . -+ } -``` - --> **Note:** The consul proxy can only point to a static IP. If the cluster IP -of the `consul-dns` service changes, then it must be updated to the new IP to continue -working. This can happen if the service is deleted and recreated, such as -in full cluster rebuilds. - --> **Note:** If using a different zone than `.consul`, change the key accordingly. - -## OpenShift DNS Operator - --> **Note:** OpenShift CLI `oc` is utilized below complete the following steps. You can find more details on how to install OpenShift CLI from [Getting started with OpenShift CLI](https://docs.openshift.com/container-platform/latest/cli_reference/openshift_cli/getting-started-cli.html). - -You can use DNS forwarding to override the default forwarding configuration in the `/etc/resolv.conf` file by specifying -the `consul-dns` service for the `consul` subdomain (zone). 
- -Find `consul-dns` service clusterIP: - -```shell-session -$ oc get svc consul-dns --namespace consul --output jsonpath='{.spec.clusterIP}' -172.30.186.254 -``` - -Edit the `default` DNS Operator: - -```shell-session -$ oc edit edit dns.operator/default -``` - -Append the following `servers` section entry to the `spec` section of the DNS Operator configuration: - -```yaml -spec: - servers: - - name: consul-server - zones: - - consul - forwardPlugin: - policy: Random - upstreams: - - 172.30.186.254 # Set to clusterIP of consul-dns service -``` - -Save the configuration changes and verify the `dns-default` configmap has been updated: - -```shell-session -$ oc get configmap/dns-default -n openshift-dns -o yaml -``` - -Example output with updated `consul` forwarding zone: - -```yaml -... -data: - Corefile: | - # consul-server - consul:5353 { - prometheus 127.0.0.1:9153 - forward . 172.30.186.254 { - policy random - } - errors - log . { - class error - } - bufsize 1232 - cache 900 { - denial 9984 30 - } - } -... -``` - -## Verifying DNS Works - -To verify DNS works, run a simple job to query DNS. Save the following -job to the file `job.yaml` and run it: - - - -```yaml -apiVersion: batch/v1 -kind: Job -metadata: - name: dns -spec: - template: - spec: - containers: - - name: dns - image: anubhavmishra/tiny-tools - command: ['dig', 'consul.service.consul'] - restartPolicy: Never - backoffLimit: 4 -``` - - - -```shell-session -$ kubectl apply --filename job.yaml -``` - -Then query the pod name for the job and check the logs. You should see -output similar to the following showing a successful DNS query. If you see -any errors, then DNS is not configured properly. - -```shell-session -$ kubectl get pods --show-all | grep dns -dns-lkgzl 0/1 Completed 0 6m - -$ kubectl logs dns-lkgzl -; <<>> DiG 9.11.2-P1 <<>> consul.service.consul -;; global options: +cmd -;; Got answer: -;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 4489 -;; flags: qr aa rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 4 - -;; OPT PSEUDOSECTION: -; EDNS: version: 0, flags:; udp: 4096 -;; QUESTION SECTION: -;consul.service.consul. IN A - -;; ANSWER SECTION: -consul.service.consul. 0 IN A 10.36.2.23 -consul.service.consul. 0 IN A 10.36.4.12 -consul.service.consul. 0 IN A 10.36.0.11 - -;; ADDITIONAL SECTION: -consul.service.consul. 0 IN TXT "consul-network-segment=" -consul.service.consul. 0 IN TXT "consul-network-segment=" -consul.service.consul. 0 IN TXT "consul-network-segment=" - -;; Query time: 5 msec -;; SERVER: 10.39.240.10#53(10.39.240.10) -;; WHEN: Wed Sep 12 02:12:30 UTC 2018 -;; MSG SIZE rcvd: 206 -``` diff --git a/website/content/docs/k8s/dns/views/enable.mdx b/website/content/docs/k8s/dns/views/enable.mdx deleted file mode 100644 index 3f29eac346c6..000000000000 --- a/website/content/docs/k8s/dns/views/enable.mdx +++ /dev/null @@ -1,102 +0,0 @@ ---- -layout: docs -page_title: Enable Consul DNS proxy for Kubernetes -description: -> - Learn how to schedule a Consul DNS proxy for a Kubernetes Pod so that your services can return Consul DNS results for service discovery. ---- - -# Enable Consul DNS proxy for Kubernetes - -This page describes the process to deploy a Consul DNS proxy in a Kubernetes Pod so that Services can resolve Consul DNS requests. For more information, refer to [Consul DNS views for Kubernetes](/consul/docs/k8s/dns/views). 
- -## Prerequisites - -You must meet the following minimum application versions to enable the Consul DNS proxy for Kubernetes: - -- Consul v1.20.0 or higher -- Either Consul on Kubernetes or the Consul Helm chart, v1.6.0 or higher - -## Update Helm values - -To enable the Consul DNS proxy, add the required [Helm values](/consul/docs/k8s/helm) to your Consul on Kubernetes deployment. - -```yaml -connectInject: - enabled: true -dns: - enabled: true - proxy: true -``` - -### ACLs - -We recommend you create a dedicated [ACL token with DNS permissions](/consul/docs/security/acl/tokens/create/create-a-dns-token) for the Consul DNS proxy. The Consul DNS proxy requires these ACL permissions. - -```hcl -node_prefix "" { - policy = "read" -} - -service_prefix "" { - policy = "read" -} -``` - -You can manage ACL tokens with Consul on Kubernetes, or you can configure the DNS proxy to access a token stored in Kubernetes secret. To use a Kubernetes secret, add the following configuration to your Helm chart. - -```yaml -dns: - proxy: - aclToken: - secretName: - secretKey: -``` - -## Retrieve Consul DNS proxy's address - -To look up the IP address for the Consul DNS proxy in the Kubernetes Pod, run the following command. - -```shell-session -$ kubectl get services –-all-namespaces --selector="app=consul,component=dns-proxy" --output jsonpath='{.spec.clusterIP}' -10.96.148.46 -``` - -Use this address when you update the ConfigMap resource. - -## Update Kubernetes ConfigMap - -Create or update a [ConfigMap object in the Kubernetes cluster](https://kubernetes.io/docs/concepts/configuration/configmap/) so that Kubernetes forwards DNS requests with the `.consul` domain to the IP address of the Consul DNS proxy. - -The following example of a `coredns-custom` ConfigMap configures Kubernetes to forward Consul DNS requests in the cluster to the Consul DNS Proxy running on `10.96.148.46`. This resource modifies the CoreDNS without modifications to the original `Corefile`. - -```yaml -kind: ConfigMap -metadata: - name: coredns-custom - namespace: kube-system -data: - consul.server: | - consul:53 { - errors - cache 30 - forward . 10.96.148.46 - reload - } -``` - -After updating the DNS configuration, perform a rolling restart of the CoreDNS. - -```shell-session -kubectl -n kube-system rollout restart deployment coredns -``` - -For more information about using a `coredns-custom` resource, refer to the [Rewrite DNS guide in the Azure documentation](https://learn.microsoft.com/en-us/azure/aks/coredns-custom#rewrite-dns). For general information about modifying a ConfigMap, refer to [the Kubernetes documentation](https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#coredns). - -## Next steps - -After you enable the Consul DNS proxy, services in the Kubernetes cluster can resolve Consul DNS addresses. - -- To learn more about Consul DNS for service discovery, refer to [DNS usage overview](/consul/docs/services/discovery/dns-overview). -- If your datacenter has ACLs enabled, create a [Consul ACL token](/consul/docs/security/acl/tokens) for the Consul DNS proxy and then restart the DNS proxy. -- To enable service discovery across admin partitions, [export services between partitions](/consul/docs/connect/config-entries/exported-services). -- To use Consul DNS for service discovery with other runtimes, across cloud regions, or between cloud providers, [establish a cluster peering connection](/consul/docs/k8s/connect/cluster-peering/usage/establish-peering). 
diff --git a/website/content/docs/k8s/dns/views/index.mdx b/website/content/docs/k8s/dns/views/index.mdx deleted file mode 100644 index 7d482a9d3e84..000000000000 --- a/website/content/docs/k8s/dns/views/index.mdx +++ /dev/null @@ -1,48 +0,0 @@ ---- -layout: docs -page_title: Consul DNS views for Kubernetes -description: -> - Kubernetes clusters can use the Consul DNS proxy to return service discovery results from the Consul catalog. Learn about how to configure your k8s cluster so that applications can resolve Consul DNS addresses without gossip communication. ---- - -# Consul DNS views for Kubernetes - -This topic describes how to schedule a dedicated Consul DNS proxy in a Kubernetes Pod so that applications in Kubernetes can resolve Consul DNS addresses. You can use the Consul DNS proxy to enable service discovery across admin partitions in Kubernetes deployments without needing to deploy Consul client agents. - -## Introduction - -Kubernetes operators typically choose networking tools such as [kube-dns](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/) or [CoreDNS](https://kubernetes.io/docs/tasks/administer-cluster/coredns/) for their service discovery operations, and choose to bypass Consul DNS entirely. These DNS options are often sufficient for service networking operations within a single Kubernetes cluster. - -Consul on Kubernetes supports [configuring Kubernetes to resolve Consul DNS](/consul/docs/k8s/dns). However, two common challenges result when you rely on these configurations: - -- Kubernetes requires Consul to use gossip communication with agents or dataplanes in order to enable Consul DNS. -- Consul requires that admin partitions be included in the DNS address. Otherwise, DNS queries assume the `default` partition by default. - -The `consul-dns` proxy does not require the presence of Consul client agents or Consul dataplanes, removing gossip communication as a requirement for Consul DNS on Kubernetes. The proxy is also designed for deployment in a Kubernetes cluster with [external servers enabled](/consul/docs/k8s/deployment-configurations/servers-outside-kubernetes). When a cluster runs in a non-default admin partition and uses the proxy to query external servers, Consul automatically recognizes the admin partition that originated the request and returns service discovery results scoped to that specific admin partition. - -To use Consul DNS for service discovery on Kubernetes, deploy a `dns-proxy` service in each Kubernetes Pod that needs to resolve Consul DNS. Kubernetes sends all DNS requests to the Kubernetes controller first. The controller forwards requests for the `.consul` domain to the `dns-proxy` service, which then queries the Consul catalog and returns service discovery results. - -## Workflows - -The process to enable Consul DNS views for service discovery in Kubernetes deployments consists of the following steps: - -1. In a cluster configured to use [external Consul servers](/consul/docs/k8s/deployment-configurations/servers-outside-kubernetes), update the Helm values for your Consul on Kubernetes deployment so that `dns.proxy.enabled=true`. When you apply the updated configuration, Kubernetes deploys the Consul DNS proxy. -1. Look up the IP address for the Consul DNS proxy in the Kubernetes cluster. -1. Update the ConfigMap resource in the Kubernetes cluster so that it forwards requests for the `.consul` domain to the IP address of the Consul DNS proxy. 
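As a rough sketch of the first step in this workflow, the DNS proxy can be switched on during a Helm upgrade. The release name below is a placeholder, and the values required for the external servers configuration are omitted for brevity:

```shell-session
$ helm upgrade consul hashicorp/consul \
    --set dns.enabled=true \
    --set dns.proxy.enabled=true \
    --reuse-values
```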
- -For more information about the underlying concepts described in this workflow, refer to [DNS forwarding overview](/consul/docs/services/discovery/dns-forwarding). - -## Benefits - -Consul on Kubernetes currently uses [Consul dataplanes](/consul/docs/connect/dataplane) by default. These lightweight processes provide Consul access to the sidecar proxies in the service mesh, but leave Kubernetes in charge of most other service discovery and service mesh operations. - -- **Use Kubernetes DNS and Consul DNS in a single deployment**. The Consul DNS proxy enables any application in a Pod to resolve an address through Consul DNS without disrupting the underlying Kubernetes DNS functionality. -- **Consul service discovery using fewer resources**. When you use the Consul DNS proxy for service discovery, you do not need to schedule Consul client agents or dataplanes as sidecars. One Kubernetes Service that uses the same resources as a single Consul dataplane provides Pods access to the Consul service catalog. -- **Consul DNS without gossip communication**. The Consul DNS service runs on both Consul server and Consul client agents, which use [gossip communication](/consul/docs/security/encryption/gossip) to ensure that service discovery results are up-to-date. The Consul DNS proxy provides access to Consul DNS without the security overhead of agent-to-agent gossip. - -## Constraints and limitations - -If you experience issues using the Consul DNS proxy for Kubernetes, refer to the following list of technical constraints and limitations. - -- You must use Kubernetes as your runtime to use the Consul DNS proxy. You cannot schedule the Consul DNS proxy in other container-based environments. -- To perform DNS lookups on other admin partitions, you must [export services between partitions](/consul/docs/connect/config-entries/exported-services) before you can query them. \ No newline at end of file diff --git a/website/content/docs/k8s/helm.mdx b/website/content/docs/k8s/helm.mdx deleted file mode 100644 index 9315109464f0..000000000000 --- a/website/content/docs/k8s/helm.mdx +++ /dev/null @@ -1,2858 +0,0 @@ ---- -layout: docs -page_title: Helm Chart Reference -description: >- - The Helm Chart allows you to schedule Kubernetes clusters with injected Consul sidecars by defining custom values in a YAML configuration. Find stanza hierarchy, the parameters you can set, and their default values in this k8s reference guide. ---- - -# Helm Chart Reference - -The chart is highly customizable using -[Helm configuration values](https://helm.sh/docs/intro/using_helm/#customizing-the-chart-before-installing). -Each value has a reasonable default tuned for an optimal getting started experience -with Consul. - - - - -## Top-Level Stanzas - -Use these links to navigate to a particular top-level stanza. - -- [`global`](#h-global) -- [`server`](#h-server) -- [`externalServers`](#h-externalservers) -- [`client`](#h-client) -- [`dns`](#h-dns) -- [`ui`](#h-ui) -- [`syncCatalog`](#h-synccatalog) -- [`connectInject`](#h-connectinject) -- [`meshGateway`](#h-meshgateway) -- [`ingressGateways`](#h-ingressgateways) -- [`terminatingGateways`](#h-terminatinggateways) -- [`webhookCertManager`](#h-webhookcertmanager) -- [`prometheus`](#h-prometheus) -- [`tests`](#h-tests) -- [`telemetryCollector`](#h-telemetrycollector) - -## All Values - -### global ((#h-global)) - -- `global` ((#v-global)) - Holds values that affect multiple components of the chart. 
- - - `enabled` ((#v-global-enabled)) (`boolean: true`) - The main enabled/disabled setting. If true, servers, - clients, Consul DNS and the Consul UI will be enabled. Each component can override - this default via its component-specific "enabled" config. If false, no components - will be installed by default and per-component opt-in is required, such as by - setting `server.enabled` to true. - - - `logLevel` ((#v-global-loglevel)) (`string: info`) - The default log level to apply to all components which do not otherwise override this setting. - It is recommended to generally not set this below "info" unless actively debugging due to logging verbosity. - One of "debug", "info", "warn", or "error". - - - `logJSON` ((#v-global-logjson)) (`boolean: false`) - Enable all component logs to be output in JSON format. - - - `name` ((#v-global-name)) (`string: null`) - Set the prefix used for all resources in the Helm chart. If not set, - the prefix will be `-consul`. - - - `domain` ((#v-global-domain)) (`string: consul`) - The domain Consul will answer DNS queries for - (Refer to [`-domain`](/consul/docs/agent/config/cli-flags#_domain)) and the domain services synced from - Consul into Kubernetes will have, e.g. `service-name.service.consul`. - - - `peering` ((#v-global-peering)) - Configures the Cluster Peering feature. Requires Consul v1.14+ and Consul-K8s v1.0.0+. - - - `enabled` ((#v-global-peering-enabled)) (`boolean: false`) - If true, the Helm chart enables Cluster Peering for the cluster. This option enables peering controllers and - allows use of the PeeringAcceptor and PeeringDialer CRDs for establishing service mesh peerings. - - - `adminPartitions` ((#v-global-adminpartitions)) - Enabling `adminPartitions` allows creation of Admin Partitions in Kubernetes clusters. - It additionally indicates that you are running Consul Enterprise v1.11+ with a valid Consul Enterprise - license. Admin partitions enables deploying services across partitions, while sharing - a set of Consul servers. - - - `enabled` ((#v-global-adminpartitions-enabled)) (`boolean: false`) - If true, the Helm chart will enable Admin Partitions for the cluster. The clients in the server cluster - must be installed in the default partition. Creation of Admin Partitions is only supported during installation. - Admin Partitions cannot be installed via a Helm upgrade operation. Only Helm installs are supported. - - - `name` ((#v-global-adminpartitions-name)) (`string: default`) - The name of the Admin Partition. The partition name cannot be modified once the partition has been installed. - Changing the partition name would require an un-install and a re-install with the updated name. - Must be "default" in the server cluster ie the Kubernetes cluster that the Consul server pods are deployed onto. - - - `image` ((#v-global-image)) (`string: hashicorp/consul:`) - The name (and tag) of the Consul Docker image for clients and servers. - This can be overridden per component. This should be pinned to a specific - version tag, otherwise you may inadvertently upgrade your Consul version. - - Examples: - - ```yaml - # Consul 1.10.0 - image: "consul:1.10.0" - # Consul Enterprise 1.10.0 - image: "hashicorp/consul-enterprise:1.10.0-ent" - ``` - - - `imagePullSecrets` ((#v-global-imagepullsecrets)) (`array`) - Array of objects containing image pull secret names that will be applied to each service account. - This can be used to reference image pull secrets if using a custom consul or consul-k8s-control-plane Docker image. 
- Refer to https://kubernetes.io/docs/concepts/containers/images/#using-a-private-registry. - - Example: - - ```yaml - imagePullSecrets: - - name: pull-secret-name - - name: pull-secret-name-2 - ``` - - - `imageK8S` ((#v-global-imagek8s)) (`string: hashicorp/consul-k8s-control-plane:`) - The name (and tag) of the consul-k8s-control-plane Docker - image that is used for functionality such as catalog sync. - This can be overridden per component. - - - `imagePullPolicy` ((#v-global-imagepullpolicy)) (`string: ""`) - The image pull policy used globally for images controlled by Consul (consul, consul-dataplane, consul-k8s, consul-telemetry-collector). - One of "IfNotPresent", "Always", "Never", and "". Refer to https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy - - - `datacenter` ((#v-global-datacenter)) (`string: dc1`) - The name of the datacenter that the agents should - register as. This can't be changed once the Consul cluster is up and running - since Consul doesn't support an automatic way to change this value currently: - https://github.com/hashicorp/consul/issues/1858. - - - `enablePodSecurityPolicies` ((#v-global-enablepodsecuritypolicies)) (`boolean: false`) - Controls whether pod security policies are created for the Consul components - created by this chart. Refer to https://kubernetes.io/docs/concepts/policy/pod-security-policy/. - - - `secretsBackend` ((#v-global-secretsbackend)) - secretsBackend is used to configure Vault as the secrets backend for the Consul on Kubernetes installation. - The Vault cluster needs to have the Kubernetes Auth Method, KV2 and PKI secrets engines enabled - and have necessary secrets, policies and roles created prior to installing Consul. - Refer to [Vault as the Secrets Backend](/consul/docs/k8s/deployment-configurations/vault) - documentation for full instructions. - - The Vault cluster _must_ not have the Consul cluster installed by this Helm chart as its storage backend - as that would cause a circular dependency. - Vault can have Consul as its storage backend as long as that Consul cluster is not running on this Kubernetes cluster - and is being managed separately from this Helm installation. - - Note: When using Vault KV2 secrets engines the "data" field is implicitly required for Vault API calls, - secretName should be in the form of "vault-kv2-mount-path/data/secret-name". - secretKey should be in the form of "key". - - - `vault` ((#v-global-secretsbackend-vault)) - - - `vaultNamespace` ((#v-global-secretsbackend-vault-vaultnamespace)) (`string: ""`) - Vault namespace (optional). This sets the Vault namespace for the `vault.hashicorp.com/namespace` - agent annotation and [Vault Connect CA namespace](/consul/docs/connect/ca/vault#namespace). - To override one of these values individually, see `agentAnnotations` and `connectCA.additionalConfig`. - - - `enabled` ((#v-global-secretsbackend-vault-enabled)) (`boolean: false`) - Enabling the Vault secrets backend will replace Kubernetes secrets with referenced Vault secrets. - - - `consulServerRole` ((#v-global-secretsbackend-vault-consulserverrole)) (`string: ""`) - The Vault role for the Consul server. - The role must be connected to the Consul server's service account. 
- The role must also have a policy with read capabilities for the following secrets: - - gossip encryption key defined by the `global.gossipEncryption.secretName` value - - certificate issue path defined by the `server.serverCert.secretName` value - - CA certificate defined by the `global.tls.caCert.secretName` value - - replication token defined by the `global.acls.replicationToken.secretName` value if `global.federation.enabled` is `true` - To discover the service account name of the Consul server, run - ```shell-session - $ helm template --show-only templates/server-serviceaccount.yaml hashicorp/consul - ``` - and check the name of `metadata.name`. - - - `consulClientRole` ((#v-global-secretsbackend-vault-consulclientrole)) (`string: ""`) - The Vault role for the Consul client. - The role must be connected to the Consul client's service account. - The role must also have a policy with read capabilities for the gossip encryption - key defined by the `global.gossipEncryption.secretName` value. - To discover the service account name of the Consul client, run - ```shell-session - $ helm template --show-only templates/client-serviceaccount.yaml hashicorp/consul - ``` - and check the name of `metadata.name`. - - - `manageSystemACLsRole` ((#v-global-secretsbackend-vault-managesystemaclsrole)) (`string: ""`) - A Vault role for the Consul `server-acl-init` job, which manages setting ACLs so that clients and components can obtain ACL tokens. - The role must be connected to the `server-acl-init` job's service account. - The role must also have a policy with read and write capabilities for the bootstrap, replication, or partition tokens. - To discover the service account name of the `server-acl-init` job, run - ```shell-session - $ helm template --show-only templates/server-acl-init-serviceaccount.yaml \ - --set global.acls.manageSystemACLs=true hashicorp/consul - ``` - and check the name of `metadata.name`. - - - `adminPartitionsRole` ((#v-global-secretsbackend-vault-adminpartitionsrole)) (`string: ""`) - A Vault role that allows the Consul `partition-init` job to read a Vault secret for the partition ACL token. - The `partition-init` job bootstraps Admin Partitions on Consul servers. - This role must be bound to the `partition-init` job's service account. - To discover the service account name of the `partition-init` job, run with Helm values for the client cluster: - ```shell-session - $ helm template --show-only templates/partition-init-serviceaccount.yaml -f client-cluster-values.yaml hashicorp/consul - ``` - and check the name of `metadata.name`. - - - `connectInjectRole` ((#v-global-secretsbackend-vault-connectinjectrole)) (`string: ""`) - The Vault role to read Consul connect-injector webhook's CA - and issue a certificate and private key. - A Vault policy must be created which grants issue capabilities to - `global.secretsBackend.vault.connectInject.tlsCert.secretName`. - - - `consulCARole` ((#v-global-secretsbackend-vault-consulcarole)) (`string: ""`) - The Vault role for all Consul components to read the Consul server's CA certificate (unauthenticated). - The role should be connected to the service accounts of all Consul components, or alternatively `*` since it - will be used only against the `pki/cert/ca` endpoint which is unauthenticated. A policy must be created which grants - read capabilities to `global.tls.caCert.secretName`, which is usually `pki/cert/ca`.
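For orientation, here is a sketch of how the Vault role values described above fit together under `global.secretsBackend.vault`. The role names are placeholders; they must match Kubernetes auth roles you have already created in Vault with the policies listed for each value.

```yaml
global:
  secretsBackend:
    vault:
      enabled: true
      consulServerRole: consul-server       # placeholder role names created in Vault
      consulClientRole: consul-client
      manageSystemACLsRole: server-acl-init
      consulCARole: consul-ca
```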
- - - `agentAnnotations` ((#v-global-secretsbackend-vault-agentannotations)) (`string: null`) - This value defines additional annotations for - Vault agent on any pods where it'll be running. - This should be formatted as a multi-line string. - - ```yaml - annotations: | - "sample/annotation1": "foo" - "sample/annotation2": "bar" - ``` - - - `ca` ((#v-global-secretsbackend-vault-ca)) - Configuration for Vault server CA certificate. This certificate will be mounted - to any pod where Vault agent needs to run. - - - `secretName` ((#v-global-secretsbackend-vault-ca-secretname)) (`string: ""`) - The name of the Kubernetes or Vault secret that holds the Vault CA certificate. - A Kubernetes secret must be in the same namespace that Consul is installed into. - - - `secretKey` ((#v-global-secretsbackend-vault-ca-secretkey)) (`string: ""`) - The key within the Kubernetes or Vault secret that holds the Vault CA certificate. - - - `connectCA` ((#v-global-secretsbackend-vault-connectca)) - Configuration for the Vault Connect CA provider. - The provider will be configured to use the Vault Kubernetes auth method - and therefore requires the role provided by `global.secretsBackend.vault.consulServerRole` - to have permissions to the root and intermediate PKI paths. - Please refer to [Vault ACL policies](/consul/docs/connect/ca/vault#vault-acl-policies) - documentation for information on how to configure the Vault policies. - - - `address` ((#v-global-secretsbackend-vault-connectca-address)) (`string: ""`) - The address of the Vault server. - - - `authMethodPath` ((#v-global-secretsbackend-vault-connectca-authmethodpath)) (`string: kubernetes`) - The mount path of the Kubernetes auth method in Vault. - - - `rootPKIPath` ((#v-global-secretsbackend-vault-connectca-rootpkipath)) (`string: ""`) - The path to a PKI secrets engine for the root certificate. - For more details, please refer to [Vault Connect CA configuration](/consul/docs/connect/ca/vault#rootpkipath). - - - `intermediatePKIPath` ((#v-global-secretsbackend-vault-connectca-intermediatepkipath)) (`string: ""`) - The path to a PKI secrets engine for the generated intermediate certificate. - For more details, please refer to [Vault Connect CA configuration](/consul/docs/connect/ca/vault#intermediatepkipath). - - - `additionalConfig` ((#v-global-secretsbackend-vault-connectca-additionalconfig)) (`string: {}`) - Additional Connect CA configuration in JSON format. - Please refer to [Vault Connect CA configuration](/consul/docs/connect/ca/vault#configuration) - for all configuration options available for that provider. - - Example: - - ```yaml - additionalConfig: | - { - "connect": [{ - "ca_config": [{ - "leaf_cert_ttl": "36h" - }] - }] - } - ``` - - - `connectInject` ((#v-global-secretsbackend-vault-connectinject)) - - - `caCert` ((#v-global-secretsbackend-vault-connectinject-cacert)) - Configuration to the Vault Secret that Kubernetes uses on - Kubernetes pod creation, deletion, and update, to get CA certificates - used issued from vault to send webhooks to the ConnectInject. - - - `secretName` ((#v-global-secretsbackend-vault-connectinject-cacert-secretname)) (`string: null`) - The Vault secret path that contains the CA certificate for - Connect Inject webhooks. - - - `tlsCert` ((#v-global-secretsbackend-vault-connectinject-tlscert)) - Configuration to the Vault Secret that Kubernetes uses on - Kubernetes pod creation, deletion, and update, to get TLS certificates - used issued from vault to send webhooks to the ConnectInject. 
- - - `secretName` ((#v-global-secretsbackend-vault-connectinject-tlscert-secretname)) (`string: null`) - The Vault secret path that issues TLS certificates for connect - inject webhooks. - - - `gossipEncryption` ((#v-global-gossipencryption)) - Configures Consul's gossip encryption key. - (Refer to [`-encrypt`](/consul/docs/agent/config/cli-flags#_encrypt)). - By default, gossip encryption is not enabled. The gossip encryption key may be set automatically or manually. - The recommended method is to automatically generate the key. - To automatically generate and set a gossip encryption key, set autoGenerate to true. - Values for secretName and secretKey should not be set if autoGenerate is true. - To manually generate a gossip encryption key, set secretName and secretKey and use Consul to generate - a key, saving this as a Kubernetes secret or Vault secret path and key. - If `global.secretsBackend.vault.enabled=true`, be sure to add the "data" component of the secretName path as required by - the Vault KV-2 secrets engine [refer to example]. - - ```shell-session - $ kubectl create secret generic consul-gossip-encryption-key --from-literal=key=$(consul keygen) - ``` - - Vault CLI Example: - ```shell-session - $ vault kv put consul/secrets/gossip key=$(consul keygen) - ``` - `gossipEncryption.secretName="consul/data/secrets/gossip"` - `gossipEncryption.secretKey="key"` - - - `autoGenerate` ((#v-global-gossipencryption-autogenerate)) (`boolean: false`) - Automatically generate a gossip encryption key and save it to a Kubernetes or Vault secret. - - - `secretName` ((#v-global-gossipencryption-secretname)) (`string: ""`) - The name of the Kubernetes secret or Vault secret path that holds the gossip - encryption key. A Kubernetes secret must be in the same namespace that Consul is installed into. - - - `secretKey` ((#v-global-gossipencryption-secretkey)) (`string: ""`) - The key within the Kubernetes secret or Vault secret key that holds the gossip - encryption key. - - - `logLevel` ((#v-global-gossipencryption-loglevel)) (`string: ""`) - Override global log verbosity level for gossip-encryption-autogenerate-job pods. One of "trace", "debug", "info", "warn", or "error". - - - `recursors` ((#v-global-recursors)) (`array: []`) - A list of addresses of upstream DNS servers that are used to recursively resolve DNS queries. - These values are given as `-recursor` flags to Consul servers and clients. - Refer to [`-recursor`](/consul/docs/agent/config/cli-flags#_recursor) for more details. - If this is an empty array (the default), then Consul DNS will only resolve queries for the Consul top level domain (by default `.consul`). - - - `tls` ((#v-global-tls)) - Enables [TLS](/consul/tutorials/security/tls-encryption-secure) - across the cluster to verify authenticity of the Consul servers and clients. - Requires Consul v1.4.1+. - - - `enabled` ((#v-global-tls-enabled)) (`boolean: false`) - If true, the Helm chart will enable TLS for Consul - servers and clients and all consul-k8s-control-plane components, as well as generate certificate - authority (optional) and server and client certificates. - This setting is required for [Cluster Peering](/consul/docs/connect/cluster-peering/k8s). - - - `logLevel` ((#v-global-tls-loglevel)) (`string: ""`) - Override global log verbosity level. One of "trace", "debug", "info", "warn", or "error". - - - `enableAutoEncrypt` ((#v-global-tls-enableautoencrypt)) (`boolean: false`) - If true, turns on the auto-encrypt feature on clients and servers. 
- It also switches consul-k8s-control-plane components to retrieve the CA from the servers - via the API. Requires Consul 1.7.1+. - - - `serverAdditionalDNSSANs` ((#v-global-tls-serveradditionaldnssans)) (`array: []`) - A list of additional DNS names to set as Subject Alternative Names (SANs) - in the server certificate. This is useful when you need to access the - Consul server(s) externally, for example, if you're using the UI. - - - `serverAdditionalIPSANs` ((#v-global-tls-serveradditionalipsans)) (`array: []`) - A list of additional IP addresses to set as Subject Alternative Names (SANs) - in the server certificate. This is useful when you need to access the - Consul server(s) externally, for example, if you're using the UI. - - - `verify` ((#v-global-tls-verify)) (`boolean: true`) - If true, `verify_outgoing`, `verify_server_hostname`, - and `verify_incoming` for internal RPC communication will be set to `true` for Consul servers and clients. - Set this to false to incrementally roll out TLS on an existing Consul cluster. - Please refer to [TLS on existing clusters](/consul/docs/k8s/operations/tls-on-existing-cluster) - for more details. - - - `httpsOnly` ((#v-global-tls-httpsonly)) (`boolean: true`) - If true, the Helm chart will configure Consul to disable the HTTP port on - both clients and servers and to only accept HTTPS connections. - - - `caCert` ((#v-global-tls-cacert)) - A secret containing the certificate of the CA to use for TLS communication within the Consul cluster. - If you have generated the CA yourself with the consul CLI, you could use the following command to create the secret - in Kubernetes: - - ```shell-session - $ kubectl create secret generic consul-ca-cert \ - --from-file='tls.crt=./consul-agent-ca.pem' - ``` - If you are using Vault as a secrets backend with TLS, `caCert.secretName` must be provided and should reference - the CA path for your PKI secrets engine. This should be of the form `pki/cert/ca` where `pki` is the mount point of your PKI secrets engine. - A read policy must be created and associated with the CA cert path for `global.tls.caCert.secretName`. - This will be consumed by the `global.secretsBackend.vault.consulCARole` role by all Consul components. - When using Vault the secretKey is not used. - - - `secretName` ((#v-global-tls-cacert-secretname)) (`string: null`) - The name of the Kubernetes or Vault secret that holds the CA certificate. - - - `secretKey` ((#v-global-tls-cacert-secretkey)) (`string: null`) - The key within the Kubernetes or Vault secret that holds the CA certificate. - - - `caKey` ((#v-global-tls-cakey)) - A Kubernetes or Vault secret containing the private key of the CA to use for - TLS communication within the Consul cluster. If you have generated the CA yourself - with the consul CLI, you could use the following command to create the secret - in Kubernetes: - - ```shell-session - $ kubectl create secret generic consul-ca-key \ - --from-file='tls.key=./consul-agent-ca-key.pem' - ``` - - Note that we need the CA key so that we can generate server and client certificates. - It is particularly important for the client certificates since they need to have host IPs - as Subject Alternative Names. If you are setting server certs yourself via `server.serverCert` - and you are not enabling clients (or clients are enabled with autoEncrypt) then you do not - need to provide the CA key. - - - `secretName` ((#v-global-tls-cakey-secretname)) (`string: null`) - The name of the Kubernetes or Vault secret that holds the CA key. 
- - - `secretKey` ((#v-global-tls-cakey-secretkey)) (`string: null`) - The key within the Kubernetes or Vault secret that holds the CA key. - - - `annotations` ((#v-global-tls-annotations)) (`string: null`) - This value defines additional annotations for - tls init jobs. This should be formatted as a multi-line string. - - ```yaml - annotations: | - "sample/annotation1": "foo" - "sample/annotation2": "bar" - ``` - - - `enableConsulNamespaces` ((#v-global-enableconsulnamespaces)) (`boolean: false`) - `enableConsulNamespaces` indicates that you are running - Consul Enterprise v1.7+ with a valid Consul Enterprise license and would - like to make use of configuration beyond registering everything into - the `default` Consul namespace. Additional configuration - options are found in the `consulNamespaces` section of both the catalog sync - and connect injector. - - - `acls` ((#v-global-acls)) - Configure ACLs. - - - `manageSystemACLs` ((#v-global-acls-managesystemacls)) (`boolean: false`) - If true, the Helm chart will automatically manage ACL tokens and policies - for all Consul and consul-k8s-control-plane components. - This requires Consul >= 1.4. - - - `logLevel` ((#v-global-acls-loglevel)) (`string: ""`) - Override global log verbosity level. One of "trace", "debug", "info", "warn", or "error". - - - `bootstrapToken` ((#v-global-acls-bootstraptoken)) - A Kubernetes or Vault secret containing the bootstrap token to use for creating policies and - tokens for all Consul and consul-k8s-control-plane components. If `secretName` and `secretKey` - are unset, a default secret name and secret key are used. If the secret is populated, then - we will skip ACL bootstrapping of the servers and will only initialize ACLs for the Consul - clients and consul-k8s-control-plane system components. - If the secret is empty, then we will bootstrap ACLs on the Consul servers, and write the - bootstrap token to this secret. If ACLs are already bootstrapped on the servers, then the - secret must contain the bootstrap token. - - - `secretName` ((#v-global-acls-bootstraptoken-secretname)) (`string: null`) - The name of the Kubernetes or Vault secret that holds the bootstrap token. - If unset, this defaults to `{{ global.name }}-bootstrap-acl-token`. - - - `secretKey` ((#v-global-acls-bootstraptoken-secretkey)) (`string: null`) - The key within the Kubernetes or Vault secret that holds the bootstrap token. - If unset, this defaults to `token`. - - - `createReplicationToken` ((#v-global-acls-createreplicationtoken)) (`boolean: false`) - If true, an ACL token will be created that can be used in secondary - datacenters for replication. This should only be set to true in the - primary datacenter since the replication token must be created from that - datacenter. - In secondary datacenters, the secret needs to be imported from the primary - datacenter and referenced via `global.acls.replicationToken`. - - - `replicationToken` ((#v-global-acls-replicationtoken)) - replicationToken references a secret containing the replication ACL token. - This token will be used by secondary datacenters to perform ACL replication - and create ACL tokens and policies. - This value is ignored if `bootstrapToken` is also set. - - - `secretName` ((#v-global-acls-replicationtoken-secretname)) (`string: null`) - The name of the Kubernetes or Vault secret that holds the replication token. 
- - - `secretKey` ((#v-global-acls-replicationtoken-secretkey)) (`string: null`) - The key within the Kubernetes or Vault secret that holds the replication token. - - - `resources` ((#v-global-acls-resources)) (`map`) - The resource requests (CPU, memory, etc.) for the server-acl-init and server-acl-init-cleanup pods. - This should be a YAML map corresponding to a Kubernetes - [`ResourceRequirements`](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#resourcerequirements-v1-core) - object. - - Example: - - ```yaml - resources: - requests: - memory: '200Mi' - cpu: '100m' - limits: - memory: '200Mi' - cpu: '100m' - ``` - - - `partitionToken` ((#v-global-acls-partitiontoken)) - partitionToken references a Vault secret containing the ACL token to be used in non-default partitions. - This value should only be provided in the default partition and only when setting - the `global.secretsBackend.vault.enabled` value to true. - Consul will use the value of the secret stored in Vault to create an ACL token in Consul with the value of the - secret as the secretID for the token. - In non-default partitions, set this secret as the `bootstrapToken`. - - - `secretName` ((#v-global-acls-partitiontoken-secretname)) (`string: null`) - The name of the Vault secret that holds the partition token. - - - `secretKey` ((#v-global-acls-partitiontoken-secretkey)) (`string: null`) - The key within the Vault secret that holds the partition token. - - - `tolerations` ((#v-global-acls-tolerations)) (`string: ""`) - tolerations configures the taints and tolerations for the server-acl-init - and server-acl-init-cleanup jobs. This should be a multi-line string matching the - [Tolerations](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) array in a Pod spec. - - - `nodeSelector` ((#v-global-acls-nodeselector)) (`string: null`) - This value defines [`nodeSelector`](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) - labels for the server-acl-init and server-acl-init-cleanup jobs' pod assignment, formatted as a multi-line string. - - Example: - - ```yaml - nodeSelector: | - beta.kubernetes.io/arch: amd64 - ``` - - - `annotations` ((#v-global-acls-annotations)) (`string: null`) - This value defines additional annotations for - acl init jobs. This should be formatted as a multi-line string. - - ```yaml - annotations: | - "sample/annotation1": "foo" - "sample/annotation2": "bar" - ``` - - - `argocd` ((#v-global-argocd)) - If argocd.enabled is set to true, the following annotations are added to the - server-acl-init-job: - - argocd.argoproj.io/hook: Sync - argocd.argoproj.io/hook-delete-policy: HookSucceeded - - - `enabled` ((#v-global-argocd-enabled)) (`boolean: false`) - - - `enterpriseLicense` ((#v-global-enterpriselicense)) - This value refers to a Kubernetes or Vault secret that you have created - that contains your enterprise license. It is required if you are using an - enterprise binary. Defining it here applies it to your cluster once a leader - has been elected. If you are not using an enterprise image or if you plan to - introduce the license key via another route, then set these fields to null. - Note: the job to apply the license runs on both Helm installs and upgrades. - - - `secretName` ((#v-global-enterpriselicense-secretname)) (`string: null`) - The name of the Kubernetes or Vault secret that holds the enterprise license. - A Kubernetes secret must be in the same namespace that Consul is installed into.
- - - `secretKey` ((#v-global-enterpriselicense-secretkey)) (`string: null`) - The key within the Kubernetes or Vault secret that holds the enterprise license. - - - `enableLicenseAutoload` ((#v-global-enterpriselicense-enablelicenseautoload)) (`boolean: true`) - Manages license autoload. Required in Consul 1.10.0+, 1.9.7+ and 1.8.12+. - - - `federation` ((#v-global-federation)) - Configure federation. - - - `enabled` ((#v-global-federation-enabled)) (`boolean: false`) - If enabled, this datacenter will be federation-capable. Only federation - via mesh gateways is supported. - Mesh gateways and servers will be configured to allow federation. - Requires `global.tls.enabled`, `connectInject.enabled`, and one of - `meshGateway.enabled` or `externalServers.enabled` to be true. - Requires Consul 1.8+. - - - `createFederationSecret` ((#v-global-federation-createfederationsecret)) (`boolean: false`) - If true, the chart will create a Kubernetes secret that can be imported - into secondary datacenters so they can federate with this datacenter. The - secret contains all the information secondary datacenters need to contact - and authenticate with this datacenter. This should only be set to true - in your primary datacenter. The secret name is - `-federation` (if setting `global.name`), otherwise - `-consul-federation`. - - - `primaryDatacenter` ((#v-global-federation-primarydatacenter)) (`string: null`) - The name of the primary datacenter. - - - `primaryGateways` ((#v-global-federation-primarygateways)) (`array: []`) - A list of addresses of the primary mesh gateways in the form `:` - (e.g. `["1.1.1.1:443", "2.3.4.5:443"]`). - - - `k8sAuthMethodHost` ((#v-global-federation-k8sauthmethodhost)) (`string: null`) - If you are setting `global.federation.enabled` to true and are in a secondary datacenter, - set `k8sAuthMethodHost` to the address of the Kubernetes API server of the secondary datacenter. - This address must be reachable from the Consul servers in the primary datacenter. - This auth method will be used to provision ACL tokens for Consul components and is different - from the one used by the Consul Service Mesh. - Please refer to the [Kubernetes Auth Method documentation](/consul/docs/security/acl/auth-methods/kubernetes). - - If `externalServers.enabled` is set to true, `global.federation.k8sAuthMethodHost` and - `externalServers.k8sAuthMethodHost` should be set to the same value. - - You can retrieve this value from your `kubeconfig` by running: - - ```shell-session - $ kubectl config view \ - -o jsonpath="{.clusters[?(@.name=='')].cluster.server}" - ``` - - - `logLevel` ((#v-global-federation-loglevel)) (`string: ""`) - Override global log verbosity level for the create-federation-secret-job pods. One of "trace", "debug", "info", "warn", or "error". - - - `metrics` ((#v-global-metrics)) - Configures metrics for Consul service mesh - - - `enabled` ((#v-global-metrics-enabled)) (`boolean: false`) - Configures the Helm chart’s components - to expose Prometheus metrics for the Consul service mesh. By default - this includes gateway metrics and sidecar metrics. - - - `enableAgentMetrics` ((#v-global-metrics-enableagentmetrics)) (`boolean: false`) - Configures consul agent metrics. Only applicable if - `global.metrics.enabled` is true. - - - `disableAgentHostName` ((#v-global-metrics-disableagenthostname)) (`boolean: false`) - Set to true to stop prepending the machine's hostname to gauge-type metrics. Default is false. 
- Only applicable if `global.metrics.enabled` and `global.metrics.enableAgentMetrics` is true. - - - `enableHostMetrics` ((#v-global-metrics-enablehostmetrics)) (`boolean: false`) - Configures consul agent underlying host metrics. Default is false. - Only applicable if `global.metrics.enabled` and `global.metrics.enableAgentMetrics` is true. - - - `agentMetricsRetentionTime` ((#v-global-metrics-agentmetricsretentiontime)) (`string: 1m`) - Configures the retention time for metrics in Consul clients and - servers. This must be greater than 0 for Consul clients and servers - to expose any metrics at all. - Only applicable if `global.metrics.enabled` is true. - - - `enableGatewayMetrics` ((#v-global-metrics-enablegatewaymetrics)) (`boolean: true`) - If true, mesh, terminating, and ingress gateways will expose their - Envoy metrics on port `20200` at the `/metrics` path and all gateway pods - will have Prometheus scrape annotations. Only applicable if `global.metrics.enabled` is true. - - - `enableTelemetryCollector` ((#v-global-metrics-enabletelemetrycollector)) (`boolean: false`) - Configures the Helm chart’s components to forward envoy metrics for the Consul service mesh to the - consul-telemetry-collector. This includes gateway metrics and sidecar metrics. - - - `prefixFilter` ((#v-global-metrics-prefixfilter)) - Configures the list of filter rules to apply for allowing or blocking - metrics by prefix in the following format: - - A leading "+" will enable any metrics with the given prefix, and a leading "-" will block them. - If there is overlap between two rules, the more specific rule will take precedence. - Blocking will take priority if the same prefix is listed multiple times. - - - `allowList` ((#v-global-metrics-prefixfilter-allowlist)) (`array: []`) - - - `blockList` ((#v-global-metrics-prefixfilter-blocklist)) (`array: []`) - - - `datadog` ((#v-global-metrics-datadog)) - Configures consul integration configurations for datadog on kubernetes. - Only applicable if `global.metrics.enabled` and `global.metrics.enableAgentMetrics` is true. - - - `enabled` ((#v-global-metrics-datadog-enabled)) (`boolean: false`) - Enables datadog [Consul Autodiscovery Integration](https://docs.datadoghq.com/integrations/consul/?tab=containerized#metric-collection) - by configuring the required `ad.datadoghq.com/consul.checks` annotation. The following _Consul_ agent metrics/health statuses - are monitored by Datadog unless monitoring via OpenMetrics (Prometheus) or DogStatsD: - - Serf events and member flaps - - The Raft protocol - - DNS performance - - API Endpoints scraped: - - `/v1/agent/metrics?format=prometheus` - - `/v1/agent/self` - - `/v1/status/leader` - - `/v1/status/peers` - - `/v1/catalog/services` - - `/v1/health/service` - - `/v1/health/state/any` - - `/v1/coordinate/datacenters` - - `/v1/coordinate/nodes` - - Setting either `global.metrics.datadog.otlp.enabled=true` or `global.metrics.datadog.dogstatsd.enabled=true` disables the above checks - in lieu of metrics data collection via DogStatsD or by a customer OpenMetrics (Prometheus) collection endpoint. - - ~> **Note:** If you have a [dogstatsd_mapper_profile](https://docs.datadoghq.com/integrations/consul/?tab=host#dogstatsd) configured for Consul - residing on either your Datadog NodeAgent or ClusterAgent the default Consul agent metrics/health status checks will fail. If you do not desire - to utilize DogStatsD metrics emission from Consul, remove this configuration file, and restart your Datadog agent to permit the checks to run. 
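As an illustration of the Datadog-related values above, the following is a minimal sketch that enables agent metrics and the default Datadog Autodiscovery checks. All keys appear in this reference; the namespace shown is simply the default value.

```yaml
global:
  metrics:
    enabled: true
    enableAgentMetrics: true
    datadog:
      enabled: true        # adds the ad.datadoghq.com/consul.checks annotation
      namespace: default
```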
- - - `openMetricsPrometheus` ((#v-global-metrics-datadog-openmetricsprometheus)) - Configures Kubernetes Prometheus/OpenMetrics auto-discovery annotations for use with Datadog. - This configuration is less common and more for advanced usage with custom metrics monitoring - configurations. Refer to the [Datadog documentation](https://docs.datadoghq.com/containers/kubernetes/prometheus/?tab=kubernetesadv2) for more details. - - - `enabled` ((#v-global-metrics-datadog-openmetricsprometheus-enabled)) (`boolean: false`) - - - `otlp` ((#v-global-metrics-datadog-otlp)) - - - `enabled` ((#v-global-metrics-datadog-otlp-enabled)) (`boolean: false`) - Enables forwarding of Consul's Telemetry Collector OTLP metrics for - ingestion by Datadog Agent. - - - `protocol` ((#v-global-metrics-datadog-otlp-protocol)) (`string: "http"`) - Protocol used for DataDog Endpoint OTLP ingestion. - - Valid protocol options are one of either: - - - "http": will forward to DataDog HTTP OTLP Node Agent Endpoint default - "0.0.0.0:4318" - - "grpc": will forward to DataDog gRPC OTLP Node Agent Endpoint default - "0.0.0.0:4317" - - - `dogstatsd` ((#v-global-metrics-datadog-dogstatsd)) - Configuration settings for DogStatsD metrics aggregation service - that is bundled with the Datadog Agent. - DogStatsD implements the StatsD protocol and adds a few Datadog-specific extensions: - - Histogram metric type - - Service checks - - Events - - Tagging - - - `enabled` ((#v-global-metrics-datadog-dogstatsd-enabled)) (`boolean: false`) - - - `socketTransportType` ((#v-global-metrics-datadog-dogstatsd-sockettransporttype)) (`string: "UDS"`) - Sets the socket transport type for dogstatsd: - - "UDS" (Unix Domain Socket): prefixes `unix://` to URL and appends path to socket (i.e., "unix:///var/run/datadog/dsd.socket") - If set, this will create the required [hostPath](https://kubernetes.io/docs/concepts/storage/volumes/#hostpath) mount for - managing [DogStatsD with Unix Domain Socket on Kubernetes](https://docs.datadoghq.com/developers/dogstatsd/unix_socket/?tab=kubernetes). - The volume is mounted using the `DirectoryOrCreate` type, thereby setting `0755` permissions with the same kubelet group ownership. - - Applies the following `volumes` and `volumeMounts` to the consul-server stateful set consul containers: - - ```yaml - volumes: - - name: dsdsocket - hostPath: - path: /var/run/datadog - type: DirectoryOrCreate - volumeMounts: - - name: dsdsocket - mountPath: /var/run/datadog - readOnly: true - ``` - - "UDP" (User Datagram Protocol): assigns address to use `hostname/IP:Port` formatted URL for UDP transport to hostIP based - dogstatsd sink (i.e., 127.0.0.1:8125). HostIP of Datadog agent must be reachable and known to Consul server emitting metrics. - - - `dogstatsdAddr` ((#v-global-metrics-datadog-dogstatsd-dogstatsdaddr)) (`string: "/var/run/datadog/dsd.socket"`) - Sets URL path for dogstatsd: - - Can be either a path to unix domain socket or an IP Address or Hostname that's reachable from the - consul-server service, server containers. When using "UDS" the path will be appended. When using "UDP" - the path will be prepended to the specified `dogstatsdPort`. - - - `dogstatsdPort` ((#v-global-metrics-datadog-dogstatsd-dogstatsdport)) (`integer: 0`) - Configures IP based dogstatsd designated port that will be appended to "UDP" based transport socket IP/Hostname URL. 
- - If using a kubernetes service based address (i.e., datadog.default.svc.cluster.local), set this to 0 to - mitigate appending a port value to the dogstatsd address field. Resultant address would be "datadog.default.svc.cluster.local" with - default port setting, while appending a non-zero port would result in "172.10.23.6:8125" with a dogstatsdAddr value - of "172.10.23.6". - - - `dogstatsdTags` ((#v-global-metrics-datadog-dogstatsd-dogstatsdtags)) (`array: ["source:consul","consul_service:consul-server"]`) - Configures datadog [autodiscovery](https://docs.datadoghq.com/containers/kubernetes/log/?tab=operator#autodiscovery) - style [log integration](https://docs.datadoghq.com/integrations/consul/?tab=containerized#log-collection) - configuration for Consul. - - The default settings should handle most Consul Kubernetes deployment schemes. The resultant annotation - will reside on the consul-server statefulset as autodiscovery annotations. - (i.e., ad.datadoghq.com/consul.logs: ["source:consul","consul_service:consul-server", ""]) - - - `namespace` ((#v-global-metrics-datadog-namespace)) (`string: "default"`) - Namespace - - - `imageConsulDataplane` ((#v-global-imageconsuldataplane)) (`string: hashicorp/consul-dataplane:`) - The name (and tag) of the consul-dataplane Docker image used for the - connect-injected sidecar proxies and mesh, terminating, and ingress gateways. - - - `openshift` ((#v-global-openshift)) - Configuration for running this Helm chart on the Red Hat OpenShift platform. - This Helm chart currently supports OpenShift v4.x+. - - - `enabled` ((#v-global-openshift-enabled)) (`boolean: false`) - If true, the Helm chart will create necessary configuration for running - its components on OpenShift. - - - `consulAPITimeout` ((#v-global-consulapitimeout)) (`string: 5s`) - The time in seconds that the consul API client will wait for a response from - the API before cancelling the request. - - - `extraLabels` ((#v-global-extralabels)) (`map`) - Extra labels to attach to all pods, deployments, daemonsets, statefulsets, and jobs. This should be a YAML map. - - Example: - - ```yaml - extraLabels: - labelKey: label-value - anotherLabelKey: another-label-value - ``` - - - `trustedCAs` ((#v-global-trustedcas)) (`array: []`) - Optional PEM-encoded CA certificates that will be added to trusted system CAs. - - Example: - - ```yaml - trustedCAs: [ - | - -----BEGIN CERTIFICATE----- - MIIC7jCCApSgAwIBAgIRAIq2zQEVexqxvtxP6J0bXAwwCgYIKoZIzj0EAwIwgbkx - ... - ] - ``` - - - `experiments` ((#v-global-experiments)) (`array: []`) - Consul feature flags that will be enabled across components. - Supported feature flags: - - Example: - - ```yaml - experiments: [ "" ] - ``` - -### server ((#h-server)) - -- `server` ((#v-server)) - Server, when enabled, configures a server cluster to run. This should - be disabled if you plan on connecting to a Consul cluster external to - the Kube cluster. - - - `enabled` ((#v-server-enabled)) (`boolean: global.enabled`) - If true, the chart will install all the resources necessary for a - Consul server cluster. If you're running Consul externally and want agents - within Kubernetes to join that cluster, this should probably be false. - - - `logLevel` ((#v-server-loglevel)) (`string: ""`) - Override global log verbosity level. One of "trace", "debug", "info", "warn", or "error". - - - `image` ((#v-server-image)) (`string: null`) - The name of the Docker image (including any tag) for the containers running - Consul server agents. 
- - - `replicas` ((#v-server-replicas)) (`integer: 1`) - The number of server agents to run. This determines the fault tolerance of - the cluster. Please refer to the [deployment table](/consul/docs/architecture/consensus#deployment-table) - for more information. - - - `bootstrapExpect` ((#v-server-bootstrapexpect)) (`int: null`) - The number of servers that are expected to be running. - It defaults to server.replicas. - In most cases the default should be used, however if there are more - servers in this datacenter than server.replicas it might make sense - to override the default. This would be the case if two kube clusters - were joined into the same datacenter and each cluster ran a certain number - of servers. - - - `serverCert` ((#v-server-servercert)) - A secret containing a certificate & key for the server agents to use - for TLS communication within the Consul cluster. Cert needs to be provided with - additional DNS name SANs so that it will work within the Kubernetes cluster: - - Kubernetes Secrets backend: - ```bash - consul tls cert create -server -days=730 -domain=consul -ca=consul-agent-ca.pem \ - -key=consul-agent-ca-key.pem -dc={{datacenter}} \ - -additional-dnsname="{{fullname}}-server" \ - -additional-dnsname="*.{{fullname}}-server" \ - -additional-dnsname="*.{{fullname}}-server.{{namespace}}" \ - -additional-dnsname="*.{{fullname}}-server.{{namespace}}.svc" \ - -additional-dnsname="*.server.{{datacenter}}.{{domain}}" \ - -additional-dnsname="server.{{datacenter}}.{{domain}}" - ``` - - If you have generated the server-cert yourself with the consul CLI, you could use the following command - to create the secret in Kubernetes: - - ```bash - kubectl create secret generic consul-server-cert \ - --from-file='tls.crt=./dc1-server-consul-0.pem' - --from-file='tls.key=./dc1-server-consul-0-key.pem' - ``` - - Vault Secrets backend: - If you are using Vault as a secrets backend, a Vault Policy must be created which allows `["create", "update"]` - capabilities on the PKI issuing endpoint, which is usually of the form `pki/issue/consul-server`. - Complete [this tutorial](/consul/tutorials/vault-secure/vault-pki-consul-secure-tls) - to learn how to generate a compatible certificate. - Note: when using TLS, both the `server.serverCert` and `global.tls.caCert` which points to the CA endpoint of this PKI engine - must be provided. - - - `secretName` ((#v-server-servercert-secretname)) (`string: null`) - The name of the Vault secret that holds the PEM encoded server certificate. - - - `exposeGossipAndRPCPorts` ((#v-server-exposegossipandrpcports)) (`boolean: false`) - Exposes the servers' gossip and RPC ports as hostPorts. To enable a client - agent outside of the k8s cluster to join the datacenter, you would need to - enable `server.exposeGossipAndRPCPorts`, `client.exposeGossipPorts`, and - set `server.ports.serflan.port` to a port not being used on the host. Since - `client.exposeGossipPorts` uses the hostPort 8301, - `server.ports.serflan.port` must be set to something other than 8301. - - - `ports` ((#v-server-ports)) - Configures ports for the consul servers. - - - `serflan` ((#v-server-ports-serflan)) - Configures the LAN gossip port for the consul servers. If you choose to - enable `server.exposeGossipAndRPCPorts` and `client.exposeGossipPorts`, - that will configure the LAN gossip ports on the servers and clients to be - hostPorts, so if you are running clients and servers on the same node the - ports will conflict if they are both 8301. 
When you enable - `server.exposeGossipAndRPCPorts` and `client.exposeGossipPorts`, you must - change this from the default to an unused port on the host, e.g. 9301. By - default the LAN gossip port is 8301 and configured as a containerPort on - the consul server Pods. - - - `port` ((#v-server-ports-serflan-port)) (`integer: 8301`) - - - `storage` ((#v-server-storage)) (`string: 10Gi`) - This defines the disk size for configuring the - servers' StatefulSet storage. For dynamically provisioned storage classes, this is the - desired size. For manually defined persistent volumes, this should be set to - the disk size of the attached volume. - - - `storageClass` ((#v-server-storageclass)) (`string: null`) - The StorageClass to use for the servers' StatefulSet storage. It must be - able to be dynamically provisioned if you want the storage - to be automatically created. For example, to use - local(https://kubernetes.io/docs/concepts/storage/storage-classes/#local) - storage classes, the PersistentVolumeClaims would need to be manually created. - A `null` value will use the Kubernetes cluster's default StorageClass. If a default - StorageClass does not exist, you will need to create one. - Refer to the [Read/Write Tuning](/consul/docs/install/performance#read-write-tuning) - section of the Server Performance Requirements documentation for considerations - around choosing a performant storage class. - - ~> **Note:** The [Reference Architecture](/consul/tutorials/production-deploy/reference-architecture#hardware-sizing-for-consul-servers) - contains best practices and recommendations for selecting suitable - hardware sizes for your Consul servers. - - - `persistentVolumeClaimRetentionPolicy` ((#v-server-persistentvolumeclaimretentionpolicy)) (`map`) - The [Persistent Volume Claim (PVC) retention policy](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#persistentvolumeclaim-retention) - controls if and how PVCs are deleted during the lifecycle of a StatefulSet. - WhenDeleted specifies what happens to PVCs created from StatefulSet VolumeClaimTemplates when the StatefulSet is deleted, - and WhenScaled specifies what happens to PVCs created from StatefulSet VolumeClaimTemplates when the StatefulSet is scaled down. - - Example: - - ```yaml - persistentVolumeClaimRetentionPolicy: - whenDeleted: Retain - whenScaled: Retain - ``` - - - `connect` ((#v-server-connect)) (`boolean: true`) - This will enable/disable [service mesh](/consul/docs/connect). Setting this to true - _will not_ automatically secure pod communication, this - setting will only enable usage of the feature. Consul will automatically initialize - a new CA and set of certificates. Additional service mesh settings can be configured - by setting the `server.extraConfig` value or by applying [configuration entries](/consul/docs/connect/config-entries). - - - `enableAgentDebug` ((#v-server-enableagentdebug)) (`boolean: false`) - When set to true, enables Consul to report additional debugging information, including runtime profiling (pprof) data. - This setting is only required for clusters without ACL enabled. Sets `enable_debug` in server agent config to `true`. - If you change this setting, you must restart the agent for the change to take effect. Default is false. - - - `serviceAccount` ((#v-server-serviceaccount)) - - - `annotations` ((#v-server-serviceaccount-annotations)) (`string: null`) - This value defines additional annotations for the server service account. This should be formatted as a multi-line - string. 
- - ```yaml - annotations: | - "sample/annotation1": "foo" - "sample/annotation2": "bar" - ``` - - - `resources` ((#v-server-resources)) (`map`) - The resource requests (CPU, memory, etc.) - for each of the server agents. This should be a YAML map corresponding to a Kubernetes - [`ResourceRequirements``](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#resourcerequirements-v1-core) - object. NOTE: The use of a YAML string is deprecated. - - Example: - - ```yaml - resources: - requests: - memory: '200Mi' - cpu: '100m' - limits: - memory: '200Mi' - cpu: '100m' - ``` - - - `securityContext` ((#v-server-securitycontext)) (`map`) - The security context for the server pods. This should be a YAML map corresponding to a - Kubernetes [SecurityContext](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) object. - By default, servers will run as non-root, with user ID `100` and group ID `1000`, - which correspond to the consul user and group created by the Consul docker image. - Note: if running on OpenShift, this setting is ignored because the user and group are set automatically - by the OpenShift platform. - - - `containerSecurityContext` ((#v-server-containersecuritycontext)) (`map`) - The container securityContext for each container in the server pods. In - addition to the Pod's SecurityContext this can - set the capabilities of processes running in the container and ensure the - root file systems in the container is read-only. - - - `server` ((#v-server-containersecuritycontext-server)) (`map`) - The consul server agent container - - - `aclInit` ((#v-server-containersecuritycontext-aclinit)) (`map`) - The acl-init job - - - `tlsInit` ((#v-server-containersecuritycontext-tlsinit)) (`map`) - The tls-init job - - - `updatePartition` ((#v-server-updatepartition)) (`integer: 0`) - This value is used to carefully - control a rolling update of Consul server agents. This value specifies the - [partition](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#partitions) - for performing a rolling update. Please read the linked Kubernetes - and [Upgrade Consul](/consul/docs/k8s/upgrade#upgrading-consul-servers) - documentation for more information. - - - `disruptionBudget` ((#v-server-disruptionbudget)) - This configures the [`PodDisruptionBudget`](https://kubernetes.io/docs/tasks/run-application/configure-pdb/) - for the server cluster. - - - `enabled` ((#v-server-disruptionbudget-enabled)) (`boolean: true`) - Enables registering a PodDisruptionBudget for the server - cluster. If enabled, it only registers the budget so long as - the server cluster is enabled. To disable, set to `false`. - - - `maxUnavailable` ((#v-server-disruptionbudget-maxunavailable)) (`integer: null`) - The maximum number of unavailable pods. In most cases you should not change this as it is automatically set to - the correct number when left as null. This setting has been kept to preserve backwards compatibility. - - By default, this is set to 1 internally in the chart. When server pods are stopped gracefully, they leave the Raft - consensus pool. When running an odd number of servers, one server leaving the pool does not change the quorum - size, and so fault tolerance is not affected. However, if more than one server were to leave the pool, the quorum - size would change. That's why this is set to 1 internally and should not be changed in most cases. 
- - If you need to set this to `0`, you will need to add a - --set 'server.disruptionBudget.maxUnavailable=0'` flag to the helm chart installation - command because of a limitation in the Helm templating language. - - - `extraConfig` ((#v-server-extraconfig)) (`string: {}`) - A raw string of extra [JSON configuration](/consul/docs/agent/config/config-files) for Consul - servers. This will be saved as-is into a ConfigMap that is read by the Consul - server agents. This can be used to add additional configuration that - isn't directly exposed by the chart. - - Example: - - ```yaml - extraConfig: | - { - "log_level": "DEBUG" - } - ``` - - This can also be set using Helm's `--set` flag using the following syntax: - - ```shell-session - --set 'server.extraConfig="{"log_level": "DEBUG"}"' - ``` - - - `extraVolumes` ((#v-server-extravolumes)) (`array`) - A list of extra volumes to mount for server agents. This - is useful for bringing in extra data that can be referenced by other configurations - at a well known path, such as TLS certificates or Gossip encryption keys. The - value of this should be a list of objects. - - Example: - - ```yaml - extraVolumes: - - type: secret - name: consul-certs - load: false - ``` - - Each object supports the following keys: - - - `type` - Type of the volume, must be one of "configMap" or "secret". Case sensitive. - - - `name` - Name of the configMap or secret to be mounted. This also controls - the path that it is mounted to. The volume will be mounted to `/consul/userconfig/`. - - - `load` - If true, then the agent will be - configured to automatically load HCL/JSON configuration files from this volume - with `-config-dir`. This defaults to false. - - - `extraContainers` ((#v-server-extracontainers)) (`array`) - A list of sidecar containers. - Example: - - ```yaml - extraContainers: - - name: extra-container - image: example-image:latest - command: - - ... - ``` - - - `affinity` ((#v-server-affinity)) (`string`) - This value defines the [affinity](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity) - for server pods. It defaults to allowing only a single server pod on each node, which - minimizes risk of the cluster becoming unusable if a node is lost. If you need - to run more pods per node (for example, testing on Minikube), set this value - to `null`. - - Example: - - ```yaml - affinity: | - podAntiAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - - labelSelector: - matchLabels: - app: {{ template "consul.name" . }} - release: "{{ .Release.Name }}" - component: server - topologyKey: kubernetes.io/hostname - ``` - - - `tolerations` ((#v-server-tolerations)) (`string: ""`) - Toleration settings for server pods. This - should be a multi-line string matching the - [Tolerations](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) - array in a Pod spec. - - - `topologySpreadConstraints` ((#v-server-topologyspreadconstraints)) (`string: ""`) - Pod topology spread constraints for server pods. - This should be a multi-line YAML string matching the - [`topologySpreadConstraints`](https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/) - array in a Pod Spec. - - This requires K8S >= 1.18 (beta) or 1.19 (stable). - - Example: - - ```yaml - topologySpreadConstraints: | - - maxSkew: 1 - topologyKey: topology.kubernetes.io/zone - whenUnsatisfiable: DoNotSchedule - labelSelector: - matchLabels: - app: {{ template "consul.name" . 
}} - release: "{{ .Release.Name }}" - component: server - ``` - - - `nodeSelector` ((#v-server-nodeselector)) (`string: null`) - This value defines [`nodeSelector`](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) - labels for server pod assignment, formatted as a multi-line string. - - Example: - - ```yaml - nodeSelector: | - beta.kubernetes.io/arch: amd64 - ``` - - - `priorityClassName` ((#v-server-priorityclassname)) (`string: ""`) - This value references an existing - Kubernetes [`priorityClassName`](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#pod-priority) - that can be assigned to server pods. - - - `extraLabels` ((#v-server-extralabels)) (`map`) - Extra labels to attach to the server pods. This should be a YAML map. - - Example: - - ```yaml - extraLabels: - labelKey: label-value - anotherLabelKey: another-label-value - ``` - - - `annotations` ((#v-server-annotations)) (`string: null`) - This value defines additional annotations for - server pods. This should be formatted as a multi-line string. - - ```yaml - annotations: | - "sample/annotation1": "foo" - "sample/annotation2": "bar" - ``` - - - `exposeService` ((#v-server-exposeservice)) - Configures a service to expose ports on the Consul servers over a Kubernetes Service. - - - `enabled` ((#v-server-exposeservice-enabled)) (`boolean: -`) - When enabled, deploys a Kubernetes Service to reach the Consul servers. - - - `type` ((#v-server-exposeservice-type)) (`string: LoadBalancer`) - Type of service, supports LoadBalancer or NodePort. - - - `nodePort` ((#v-server-exposeservice-nodeport)) - If service is of type NodePort, configures the nodePorts. - - - `http` ((#v-server-exposeservice-nodeport-http)) (`integer: null`) - Configures the nodePort to expose the Consul server http port. - - - `https` ((#v-server-exposeservice-nodeport-https)) (`integer: null`) - Configures the nodePort to expose the Consul server https port. - - - `serf` ((#v-server-exposeservice-nodeport-serf)) (`integer: null`) - Configures the nodePort to expose the Consul server serf port. - - - `rpc` ((#v-server-exposeservice-nodeport-rpc)) (`integer: null`) - Configures the nodePort to expose the Consul server rpc port. - - - `grpc` ((#v-server-exposeservice-nodeport-grpc)) (`integer: null`) - Configures the nodePort to expose the Consul server grpc port. - - - `annotations` ((#v-server-exposeservice-annotations)) (`string: null`) - This value defines additional annotations for - server pods. This should be formatted as a multi-line string. - - ```yaml - annotations: | - "sample/annotation1": "foo" - "sample/annotation2": "bar" - ``` - - - `service` ((#v-server-service)) - Server service properties. - - - `annotations` ((#v-server-service-annotations)) (`string: null`) - Annotations to apply to the server service. - - ```yaml - annotations: | - "annotation-key": "annotation-value" - ``` - - - `extraEnvironmentVars` ((#v-server-extraenvironmentvars)) (`map`) - A list of extra environment variables to set within the stateful set. - These could be used to include proxy settings required for cloud auto-join - feature, in case kubernetes cluster is behind egress http proxies. Additionally, - it could be used to configure custom consul parameters. - - - `snapshotAgent` ((#v-server-snapshotagent)) - Values for setting up and running - [snapshot agents](/consul/commands/snapshot/agent) - within the Consul clusters. They run as a sidecar with Consul servers. 
- - - `enabled` ((#v-server-snapshotagent-enabled)) (`boolean: false`) - If true, the chart will install resources necessary to run the snapshot agent. - - `interval` ((#v-server-snapshotagent-interval)) (`string: 1h`) - Interval at which to perform snapshots. - Refer to [`interval`](/consul/commands/snapshot/agent#interval) - - `configSecret` ((#v-server-snapshotagent-configsecret)) - A Kubernetes or Vault secret that should be manually created to contain the entire - config to be used on the snapshot agent. - This is the preferred method of configuration since there are usually storage - credentials present. Please refer to the [Snapshot agent config](/consul/commands/snapshot/agent#config-file-options) - for details. - - `secretName` ((#v-server-snapshotagent-configsecret-secretname)) (`string: null`) - The name of the Kubernetes secret or Vault secret path that holds the snapshot agent config. - - `secretKey` ((#v-server-snapshotagent-configsecret-secretkey)) (`string: null`) - The key within the Kubernetes secret or Vault secret key that holds the snapshot agent config. - - `resources` ((#v-server-snapshotagent-resources)) (`map`) - The resource settings for snapshot agent pods. - - `caCert` ((#v-server-snapshotagent-cacert)) (`string: null`) - Optional PEM-encoded CA certificate that will be added to the trusted system CAs. - Useful if using an S3-compatible storage exposing a self-signed certificate. - - Example: - - ```yaml - caCert: | - -----BEGIN CERTIFICATE----- - MIIC7jCCApSgAwIBAgIRAIq2zQEVexqxvtxP6J0bXAwwCgYIKoZIzj0EAwIwgbkx - ... - ``` - - `auditLogs` ((#v-server-auditlogs)) - Added in Consul 1.8, the audit object allows users to enable auditing - and configure a sink and filters for their audit logs. Please refer to the - [audit logs](/consul/docs/enterprise/audit-logging) documentation - for further information. - - `enabled` ((#v-server-auditlogs-enabled)) (`boolean: false`) - Controls whether Consul logs out each time a user performs an operation. - global.acls.manageSystemACLs must be enabled to use this feature. - - `sinks` ((#v-server-auditlogs-sinks)) (`array`) - A single entry of the sink object provides configuration for the destination to which Consul - will log auditing events. - - Example: - - ```yaml - sinks: - - name: My Sink - type: file - format: json - path: /tmp/audit.json - delivery_guarantee: best-effort - rotate_duration: 24h - rotate_max_files: 15 - rotate_bytes: 25165824 - - ``` - - The sink object supports the following keys: - - - `name` - Name of the sink. - - - `type` - Type specifies what kind of sink this is. Currently only file sinks are available. - - - `format` - Format specifies what format the events will be emitted with. Currently only `json` - events are emitted. - - - `path` - The directory and filename to write audit events to. - - - `delivery_guarantee` - Specifies the rules governing how audit events are written. Consul - only supports `best-effort` event delivery. - - - `mode` - The permissions to set on the audit log files. - - - `rotate_duration` - Specifies the interval by which the system rotates to a new log file. - At least one of `rotate_duration` or `rotate_bytes` must be configured to enable audit logging. - - - `rotate_bytes` - Specifies how large an individual log file can grow before Consul rotates to a new file. - At least one of rotate_bytes or rotate_duration must be configured to enable audit logging. - - - `rotate_max_files` - Defines the limit that Consul should follow before it deletes old log files. 
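  Example (a minimal sketch of a values file that enables audit logging with a single file sink, built from the sink keys above; it assumes `global.acls.manageSystemACLs` is already set to `true`):

  ```yaml
  server:
    auditLogs:
      enabled: true
      sinks:
        - name: My Sink
          type: file
          format: json
          path: /tmp/audit.json
          delivery_guarantee: best-effort
          rotate_duration: 24h
  ```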
- - - `limits` ((#v-server-limits)) - Settings for potentially limiting timeouts, rate limiting on clients as well - as servers, and other settings to limit exposure to too many requests, requests - waiting for too long, and other runtime considerations. - - `requestLimits` ((#v-server-limits-requestlimits)) - This object specifies configurations that limit the rate of RPC and gRPC - requests on the Consul server. Limiting the rate of gRPC and RPC requests - also limits HTTP requests to the Consul server. - Refer to [`request_limits`](/consul/docs/agent/config/config-files#request_limits) for more information. - - `mode` ((#v-server-limits-requestlimits-mode)) (`string: disabled`) - Setting for disabling or enabling rate limiting. If not disabled, it - enforces the action that will occur when RequestLimitsReadRate - or RequestLimitsWriteRate is exceeded. The default value of "disabled" will - prevent any rate limiting from occurring. A value of "enforce" will block - the request from processing by returning an error. A value of - "permissive" will not block the request and will allow the request to - continue processing. - - `readRate` ((#v-server-limits-requestlimits-readrate)) (`integer: -1`) - Setting that controls how frequently RPC, gRPC, and HTTP - queries are allowed to happen. In any large enough time interval, rate - limiter limits the rate to RequestLimitsReadRate tokens per second. - - See https://en.wikipedia.org/wiki/Token_bucket for more about token - buckets. - - `writeRate` ((#v-server-limits-requestlimits-writerate)) (`integer: -1`) - Setting that controls how frequently RPC, gRPC, and HTTP - writes are allowed to happen. In any large enough time interval, rate - limiter limits the rate to RequestLimitsWriteRate tokens per second. - - See https://en.wikipedia.org/wiki/Token_bucket for more about token - buckets. - -### externalServers ((#h-externalservers)) - -- `externalServers` ((#v-externalservers)) - Configuration for Consul servers when the servers are running outside of Kubernetes. - When running external servers, configuring these values is recommended - if setting `global.tls.enableAutoEncrypt` to true - or `global.acls.manageSystemACLs` to true. - - `enabled` ((#v-externalservers-enabled)) (`boolean: false`) - If true, the Helm chart will be configured to talk to the external servers. - If setting this to true, you must also set `server.enabled` to false. - - `hosts` ((#v-externalservers-hosts)) (`array: []`) - An array of external Consul server hosts that are used to make - HTTPS connections from the components in this Helm chart. - Valid values include an IP, a DNS name, or an [exec=](https://github.com/hashicorp/go-netaddrs) string. - The port must be provided separately below. - Note: This slice can only contain a single element. - Note: If enabling clients, `client.join` must also be set to the hosts that should be - used to join the cluster. In most cases, the `client.join` values - should be the same; however, they may be different if you - wish to use separate hosts for the HTTPS connections. `tlsServerName` is required if TLS is enabled and 'hosts' is not a DNS name. - - `httpsPort` ((#v-externalservers-httpsport)) (`integer: 8501`) - The HTTPS port of the Consul servers. - - `grpcPort` ((#v-externalservers-grpcport)) (`integer: 8502`) - The GRPC port of the Consul servers. - - `tlsServerName` ((#v-externalservers-tlsservername)) (`string: null`) - The server name to use as the SNI host header when connecting with HTTPS. 
This name also appears as the hostname in the server certificate's subject field. - - - `useSystemRoots` ((#v-externalservers-usesystemroots)) (`boolean: false`) - If true, consul-k8s-control-plane components will ignore the CA set in - `global.tls.caCert` when making HTTPS calls to Consul servers and - will instead use the consul-k8s-control-plane image's system CAs for TLS verification. - If false, consul-k8s-control-plane components will use `global.tls.caCert` when - making HTTPS calls to Consul servers. - **NOTE:** This does not affect Consul's internal RPC communication which will - always use `global.tls.caCert`. - - - `k8sAuthMethodHost` ((#v-externalservers-k8sauthmethodhost)) (`string: null`) - If you are setting `global.acls.manageSystemACLs` and - `connectInject.enabled` to true, set `k8sAuthMethodHost` to the address of the Kubernetes API server. - This address must be reachable from the Consul servers. - Please refer to the [Kubernetes Auth Method documentation](/consul/docs/security/acl/auth-methods/kubernetes). - - If `global.federation.enabled` is set to true, `global.federation.k8sAuthMethodHost` and - `externalServers.k8sAuthMethodHost` should be set to the same value. - - You could retrieve this value from your `kubeconfig` by running: - - ```shell-session - $ kubectl config view \ - -o jsonpath="{.clusters[?(@.name=='')].cluster.server}" - ``` - - - `skipServerWatch` ((#v-externalservers-skipserverwatch)) (`boolean: false`) - If true, setting this prevents the consul-dataplane and consul-k8s components from watching the Consul servers for changes. This is - useful for situations where Consul servers are behind a load balancer. - -### client ((#h-client)) - -- `client` ((#v-client)) - Values that configure running a Consul client on Kubernetes nodes. - - - `enabled` ((#v-client-enabled)) (`boolean: false`) - If true, the chart will install all - the resources necessary for a Consul client on every Kubernetes node. This _does not_ require - `server.enabled`, since the agents can be configured to join an external cluster. - - - `logLevel` ((#v-client-loglevel)) (`string: ""`) - Override global log verbosity level. One of "trace", "debug", "info", "warn", or "error". - - - `image` ((#v-client-image)) (`string: null`) - The name of the Docker image (including any tag) for the containers - running Consul client agents. - - - `join` ((#v-client-join)) (`array: null`) - A list of valid [`-retry-join` values](/consul/docs/agent/config/cli-flags#_retry_join). - If this is `null` (default), then the clients will attempt to automatically - join the server cluster running within Kubernetes. - This means that with `server.enabled` set to true, clients will automatically - join that cluster. If `server.enabled` is not true, then a value must be - specified so the clients can join a valid cluster. - - - `dataDirectoryHostPath` ((#v-client-datadirectoryhostpath)) (`string: null`) - An absolute path to a directory on the host machine to use as the Consul - client data directory. If set to the empty string or null, the Consul agent - will store its data in the Pod's local filesystem (which will - be lost if the Pod is deleted). Security Warning: If setting this, Pod Security - Policies _must_ be enabled on your cluster and in this Helm chart (via the - `global.enablePodSecurityPolicies` setting) to prevent other pods from - mounting the same host path and gaining access to all of Consul's data. - Consul's data is not encrypted at rest. 
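  Example (a sketch only; the host path shown is hypothetical, and `global.enablePodSecurityPolicies` should be enabled as described above):

  ```yaml
  client:
    dataDirectoryHostPath: /opt/consul-client-data
  ```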
- - - `grpc` ((#v-client-grpc)) (`boolean: true`) - If true, agents will enable their GRPC listener on - port 8502 and expose it to the host. This will use slightly more resources, but is - required for Connect. - - - `nodeMeta` ((#v-client-nodemeta)) - nodeMeta specifies an arbitrary metadata key/value pair to associate with the node - (refer to [`-node-meta`](/consul/docs/agent/config/cli-flags#_node_meta)) - - - `pod-name` ((#v-client-nodemeta-pod-name)) (`string: ${HOSTNAME}`) - - - `host-ip` ((#v-client-nodemeta-host-ip)) (`string: ${HOST_IP}`) - - - `exposeGossipPorts` ((#v-client-exposegossipports)) (`boolean: false`) - If true, the Helm chart will expose the clients' gossip ports as hostPorts. - This is only necessary if pod IPs in the k8s cluster are not directly routable - and the Consul servers are outside of the k8s cluster. - This also changes the clients' advertised IP to the `hostIP` rather than `podIP`. - - - `serviceAccount` ((#v-client-serviceaccount)) - - - `annotations` ((#v-client-serviceaccount-annotations)) (`string: null`) - This value defines additional annotations for the client service account. This should be formatted as a multi-line - string. - - ```yaml - annotations: | - "sample/annotation1": "foo" - "sample/annotation2": "bar" - ``` - - - `resources` ((#v-client-resources)) (`map`) - The resource settings for Client agents. - NOTE: The use of a YAML string is deprecated. Instead, set directly as a - YAML map. - - - `securityContext` ((#v-client-securitycontext)) (`map`) - The security context for the client pods. This should be a YAML map corresponding to a - Kubernetes [SecurityContext](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) object. - By default, servers will run as non-root, with user ID `100` and group ID `1000`, - which correspond to the consul user and group created by the Consul docker image. - Note: if running on OpenShift, this setting is ignored because the user and group are set automatically - by the OpenShift platform. - - - `containerSecurityContext` ((#v-client-containersecuritycontext)) (`map`) - The container securityContext for each container in the client pods. In - addition to the Pod's SecurityContext this can - set the capabilities of processes running in the container and ensure the - root file systems in the container is read-only. - - - `client` ((#v-client-containersecuritycontext-client)) (`map`) - The consul client agent container - - - `aclInit` ((#v-client-containersecuritycontext-aclinit)) (`map`) - The acl-init initContainer - - - `tlsInit` ((#v-client-containersecuritycontext-tlsinit)) (`map`) - The tls-init initContainer - - - `extraConfig` ((#v-client-extraconfig)) (`string: {}`) - A raw string of extra [JSON configuration](/consul/docs/agent/config/config-files) for Consul - clients. This will be saved as-is into a ConfigMap that is read by the Consul - client agents. This can be used to add additional configuration that - isn't directly exposed by the chart. - - Example: - - ```yaml - extraConfig: | - { - "log_level": "DEBUG" - } - ``` - - This can also be set using Helm's `--set` flag using the following syntax: - - ```shell-session - --set 'client.extraConfig="{"log_level": "DEBUG"}"' - ``` - - - `extraVolumes` ((#v-client-extravolumes)) (`array`) - A list of extra volumes to mount for client agents. This - is useful for bringing in extra data that can be referenced by other configurations - at a well known path, such as TLS certificates or Gossip encryption keys. 
The - value of this should be a list of objects. - - Example: - - ```yaml - extraVolumes: - - type: secret - name: consul-certs - load: false - ``` - - Each object supports the following keys: - - - `type` - Type of the volume, must be one of "configMap" or "secret". Case sensitive. - - - `name` - Name of the configMap or secret to be mounted. This also controls - the path that it is mounted to. The volume will be mounted to `/consul/userconfig/`. - - - `load` - If true, then the agent will be - configured to automatically load HCL/JSON configuration files from this volume - with `-config-dir`. This defaults to false. - - - `extraContainers` ((#v-client-extracontainers)) (`array`) - A list of sidecar containers. - Example: - - ```yaml - extraContainers: - - name: extra-container - image: example-image:latest - command: - - ... - ``` - - - `tolerations` ((#v-client-tolerations)) (`string: ""`) - Toleration Settings for Client pods - This should be a multi-line string matching the Toleration array - in a PodSpec. - The example below will allow Client pods to run on every node - regardless of taints - - ```yaml - tolerations: | - - operator: Exists - ``` - - - `nodeSelector` ((#v-client-nodeselector)) (`string: null`) - nodeSelector labels for client pod assignment, formatted as a multi-line string. - ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector - - Example: - - ```yaml - nodeSelector: | - beta.kubernetes.io/arch: amd64 - ``` - - - `affinity` ((#v-client-affinity)) (`string: null`) - Affinity Settings for Client pods, formatted as a multi-line YAML string. - ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity - - Example: - - ```yaml - affinity: | - nodeAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - nodeSelectorTerms: - - matchExpressions: - - key: node-role.kubernetes.io/master - operator: DoesNotExist - ``` - - - `priorityClassName` ((#v-client-priorityclassname)) (`string: ""`) - This value references an existing - Kubernetes [`priorityClassName`](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#pod-priority) - that can be assigned to client pods. - - - `annotations` ((#v-client-annotations)) (`string: null`) - This value defines additional annotations for - client pods. This should be formatted as a multi-line string. - - ```yaml - annotations: | - "sample/annotation1": "foo" - "sample/annotation2": "bar" - ``` - - - `extraLabels` ((#v-client-extralabels)) (`map`) - Extra labels to attach to the client pods. This should be a regular YAML map. - - Example: - - ```yaml - extraLabels: - labelKey: label-value - anotherLabelKey: another-label-value - ``` - - - `extraEnvironmentVars` ((#v-client-extraenvironmentvars)) (`map`) - A list of extra environment variables to set within the stateful set. - These could be used to include proxy settings required for cloud auto-join - feature, in case kubernetes cluster is behind egress http proxies. Additionally, - it could be used to configure custom consul parameters. - - - `dnsPolicy` ((#v-client-dnspolicy)) (`string: null`) - This value defines the [Pod DNS policy](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy) - for client pods to use. - - - `hostNetwork` ((#v-client-hostnetwork)) (`boolean: false`) - hostNetwork defines whether or not we use host networking instead of hostPort in the event - that a CNI plugin doesn't support `hostPort`. 
This has security implications and is not recommended - as doing so gives the consul client unnecessary access to all network traffic on the host. - In most cases, pod network and host network are on different networks so this should be - combined with `dnsPolicy: ClusterFirstWithHostNet`. - - `updateStrategy` ((#v-client-updatestrategy)) (`string: null`) - updateStrategy for the DaemonSet. - Refer to the Kubernetes [Daemonset upgrade strategy](https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/#daemonset-update-strategy) - documentation. - This should be a multi-line string mapping directly to the updateStrategy. - - Example: - - ```yaml - updateStrategy: | - rollingUpdate: - maxUnavailable: 5 - type: RollingUpdate - ``` - -### dns ((#h-dns)) - -- `dns` ((#v-dns)) - Configures DNS within the Kubernetes cluster. - This creates a service that routes to all agents (client or server) - for serving DNS requests. This DOES NOT automatically configure kube-dns - today, so you must still manually configure a `stubDomain` with kube-dns - for this to have any effect: - https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#configure-stub-domain-and-upstream-dns-servers - - `enabled` ((#v-dns-enabled)) (`boolean: -`) - - `enableRedirection` ((#v-dns-enableredirection)) (`boolean: -`) - If true, services using Consul service mesh will use Consul DNS - for default DNS resolution. The DNS lookups fall back to the nameserver IPs - listed in /etc/resolv.conf if not found in Consul. - - `type` ((#v-dns-type)) (`string: ClusterIP`) - Used to control the type of service created. For - example, setting this to "LoadBalancer" will create an external load - balancer (for supported K8S installations). - - `clusterIP` ((#v-dns-clusterip)) (`string: null`) - Set a predefined cluster IP for the DNS service. - Useful if you need to reference the DNS service's IP - address in CoreDNS config. - - `annotations` ((#v-dns-annotations)) (`string: null`) - Extra annotations to attach to the dns service. - This should be a multi-line string of - annotations to apply to the dns Service. - - `additionalSpec` ((#v-dns-additionalspec)) (`string: null`) - Additional ServiceSpec values. - This should be a multi-line string mapping directly to a Kubernetes - ServiceSpec object. - -### ui ((#h-ui)) - -- `ui` ((#v-ui)) - Values that configure the Consul UI. - - `enabled` ((#v-ui-enabled)) (`boolean: global.enabled`) - If true, the UI will be enabled. This will - only _enable_ the UI; it doesn't automatically register any service for external - access. The UI will only be enabled on server agents. If `server.enabled` is - false, then this setting has no effect. To expose the UI in some way, you must - configure `ui.service`. - - `service` ((#v-ui-service)) - Configure the service for the Consul UI. - - `enabled` ((#v-ui-service-enabled)) (`boolean: true`) - This will enable/disable registering a - Kubernetes Service for the Consul UI. This value only takes effect if `ui.enabled` is - true. - - `type` ((#v-ui-service-type)) (`string: null`) - The service type to register. - - `port` ((#v-ui-service-port)) - Set the port value of the UI service. - - `http` ((#v-ui-service-port-http)) (`integer: 80`) - HTTP port. - - `https` ((#v-ui-service-port-https)) (`integer: 443`) - HTTPS port. - - `nodePort` ((#v-ui-service-nodeport)) - Optionally set the nodePort value of the ui service if using a NodePort service. 
- If not set and using a NodePort service, Kubernetes will automatically assign - a port. - - `http` ((#v-ui-service-nodeport-http)) (`integer: null`) - HTTP node port - - `https` ((#v-ui-service-nodeport-https)) (`integer: null`) - HTTPS node port - - `annotations` ((#v-ui-service-annotations)) (`string: null`) - Annotations to apply to the UI service. - - Example: - - ```yaml - annotations: | - 'annotation-key': annotation-value - ``` - - `additionalSpec` ((#v-ui-service-additionalspec)) (`string: null`) - Additional ServiceSpec values. - This should be a multi-line string mapping directly to a Kubernetes - ServiceSpec object. - - `ingress` ((#v-ui-ingress)) - Configure Ingress for the Consul UI. - If `global.tls.enabled` is set to `true`, the Ingress will expose - the port 443 on the UI service. Please ensure the Ingress Controller - supports SSL pass-through and it is enabled to ensure traffic forwarded - to port 443 has not been TLS terminated. - - `enabled` ((#v-ui-ingress-enabled)) (`boolean: false`) - This will create an Ingress resource for the Consul UI. - - `ingressClassName` ((#v-ui-ingress-ingressclassname)) (`string: ""`) - Optionally set the ingressClassName. - - `pathType` ((#v-ui-ingress-pathtype)) (`string: Prefix`) - pathType override - refer to: https://kubernetes.io/docs/concepts/services-networking/ingress/#path-types - - `hosts` ((#v-ui-ingress-hosts)) (`array`) - hosts is a list of host names to create Ingress rules. - - ```yaml - hosts: - - host: foo.bar - paths: - - /example - - /test - ``` - - `tls` ((#v-ui-ingress-tls)) (`array`) - tls is a list of hosts and secret name in an Ingress - which tells the Ingress controller to secure the channel. - - ```yaml - tls: - - hosts: - - chart-example.local - secretName: testsecret-tls - ``` - - `annotations` ((#v-ui-ingress-annotations)) (`string: null`) - Annotations to apply to the UI ingress. - - Example: - - ```yaml - annotations: | - 'annotation-key': annotation-value - ``` - - `metrics` ((#v-ui-metrics)) - Configurations for displaying metrics in the UI. - - `enabled` ((#v-ui-metrics-enabled)) (`boolean: global.metrics.enabled`) - Enable displaying metrics in the UI. The default value of "-" - will inherit from the `global.metrics.enabled` value. - - `provider` ((#v-ui-metrics-provider)) (`string: prometheus`) - Provider for metrics. Refer to - [`metrics_provider`](/consul/docs/agent/config/config-files#ui_config_metrics_provider). - This value is only used if `ui.enabled` is set to true. - - `baseURL` ((#v-ui-metrics-baseurl)) (`string: http://prometheus-server`) - baseURL is the URL of the Prometheus server, usually the service URL. - This value is only used if `ui.enabled` is set to true. - - `dashboardURLTemplates` ((#v-ui-dashboardurltemplates)) - Corresponds to [`dashboard_url_templates`](/consul/docs/agent/config/config-files#ui_config_dashboard_url_templates) - configuration. - - `service` ((#v-ui-dashboardurltemplates-service)) (`string: ""`) - Sets [`dashboardURLTemplates.service`](/consul/docs/agent/config/config-files#ui_config_dashboard_url_templates_service). - -### syncCatalog ((#h-synccatalog)) - -- `syncCatalog` ((#v-synccatalog)) - Configure the catalog sync process to sync K8S with Consul - services. This can run bidirectionally (default) or unidirectionally (Consul - to K8S or K8S to Consul only). - - This process assumes that a Consul agent is available on the host IP. - This is done automatically if clients are enabled. 
If clients are not - enabled then set the node selection so that it chooses a node with a - Consul agent. - - - `enabled` ((#v-synccatalog-enabled)) (`boolean: false`) - True if you want to enable the catalog sync. Set to "-" to inherit from - global.enabled. - - - `image` ((#v-synccatalog-image)) (`string: null`) - The name of the Docker image (including any tag) for consul-k8s-control-plane - to run the sync program. - - - `default` ((#v-synccatalog-default)) (`boolean: true`) - If true, all valid services in K8S are - synced by default. If false, the service must be [annotated](/consul/docs/k8s/service-sync#enable-and-disable-sync) - properly to sync. - In either case an annotation can override the default. - - - `priorityClassName` ((#v-synccatalog-priorityclassname)) (`string: ""`) - Optional priorityClassName. - - - `toConsul` ((#v-synccatalog-toconsul)) (`boolean: true`) - If true, will sync Kubernetes services to Consul. This can be disabled to - have a one-way sync. - - - `toK8S` ((#v-synccatalog-tok8s)) (`boolean: true`) - If true, will sync Consul services to Kubernetes. This can be disabled to - have a one-way sync. - - - `k8sPrefix` ((#v-synccatalog-k8sprefix)) (`string: null`) - Service prefix to prepend to services before registering - with Kubernetes. For example "consul-" will register all services - prepended with "consul-". (Consul -> Kubernetes sync) - - - `k8sAllowNamespaces` ((#v-synccatalog-k8sallownamespaces)) (`array: ["*"]`) - List of k8s namespaces to sync the k8s services from. - If a k8s namespace is not included in this list or is listed in `k8sDenyNamespaces`, - services in that k8s namespace will not be synced even if they are explicitly - annotated. Use `["*"]` to automatically allow all k8s namespaces. - - For example, `["namespace1", "namespace2"]` will only allow services in the k8s - namespaces `namespace1` and `namespace2` to be synced and registered - with Consul. All other k8s namespaces will be ignored. - - To deny all namespaces, set this to `[]`. - - Note: `k8sDenyNamespaces` takes precedence over values defined here. - - - `k8sDenyNamespaces` ((#v-synccatalog-k8sdenynamespaces)) (`array: ["kube-system", "kube-public"]`) - List of k8s namespaces that should not have their - services synced. This list takes precedence over `k8sAllowNamespaces`. - `*` is not supported because then nothing would be allowed to sync. - - For example, if `k8sAllowNamespaces` is `["*"]` and `k8sDenyNamespaces` is - `["namespace1", "namespace2"]`, then all k8s namespaces besides `namespace1` - and `namespace2` will be synced. - - - `k8sSourceNamespace` ((#v-synccatalog-k8ssourcenamespace)) (`string: null`) - [DEPRECATED] Use k8sAllowNamespaces and k8sDenyNamespaces instead. For - backwards compatibility, if both this and the allow/deny lists are set, - the allow/deny lists will be ignored. - k8sSourceNamespace is the Kubernetes namespace to watch for service - changes and sync to Consul. If this is not set then it will default - to all namespaces. - - - `consulNamespaces` ((#v-synccatalog-consulnamespaces)) - These settings manage the catalog sync's interaction with - Consul namespaces (requires consul-ent v1.7+). - Also, `global.enableConsulNamespaces` must be true. - - - `consulDestinationNamespace` ((#v-synccatalog-consulnamespaces-consuldestinationnamespace)) (`string: default`) - Name of the Consul namespace to register all - k8s services into. If the Consul namespace does not already exist, - it will be created. This will be ignored if `mirroringK8S` is true. 
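  Example (a sketch that registers all synced services into a single hypothetical Consul namespace; it requires Consul Enterprise with `global.enableConsulNamespaces: true`, and `mirroringK8S` must be set to `false` so this value is not ignored):

  ```yaml
  syncCatalog:
    consulNamespaces:
      consulDestinationNamespace: my-k8s-services
      mirroringK8S: false
  ```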
- - - `mirroringK8S` ((#v-synccatalog-consulnamespaces-mirroringk8s)) (`boolean: true`) - If true, k8s services will be registered into a Consul namespace - of the same name as their k8s namespace, optionally prefixed if - `mirroringK8SPrefix` is set below. If the Consul namespace does not - already exist, it will be created. Turning this on overrides the - `consulDestinationNamespace` setting. - `addK8SNamespaceSuffix` may no longer be needed if enabling this option. - If mirroring is enabled, avoid creating any Consul resources in the following - Kubernetes namespaces, as Consul currently reserves these namespaces for - system use: "system", "universal", "operator", "root". - - - `mirroringK8SPrefix` ((#v-synccatalog-consulnamespaces-mirroringk8sprefix)) (`string: ""`) - If `mirroringK8S` is set to true, `mirroringK8SPrefix` allows each Consul namespace - to be given a prefix. For example, if `mirroringK8SPrefix` is set to "k8s-", a - service in the k8s `staging` namespace will be registered into the - `k8s-staging` Consul namespace. - - - `addK8SNamespaceSuffix` ((#v-synccatalog-addk8snamespacesuffix)) (`boolean: true`) - Appends Kubernetes namespace suffix to - each service name synced to Consul, separated by a dash. - For example, for a service 'foo' in the default namespace, - the sync process will create a Consul service named 'foo-default'. - Set this flag to true to avoid registering services with the same name - but in different namespaces as instances for the same Consul service. - Namespace suffix is not added if 'annotationServiceName' is provided. - - - `consulPrefix` ((#v-synccatalog-consulprefix)) (`string: null`) - Service prefix which prepends itself - to Kubernetes services registered within Consul - For example, "k8s-" will register all services prepended with "k8s-". - (Kubernetes -> Consul sync) - consulPrefix is ignored when 'annotationServiceName' is provided. - NOTE: Updating this property to a non-null value for an existing installation will result in deregistering - of existing services in Consul and registering them with a new name. - - - `k8sTag` ((#v-synccatalog-k8stag)) (`string: null`) - Optional tag that is applied to all of the Kubernetes services - that are synced into Consul. If nothing is set, defaults to "k8s". - (Kubernetes -> Consul sync) - - - `consulNodeName` ((#v-synccatalog-consulnodename)) (`string: k8s-sync`) - Defines the Consul synthetic node that all services - will be registered to. - NOTE: Changing the node name and upgrading the Helm chart will leave - all of the previously sync'd services registered with Consul and - register them again under the new Consul node name. The out-of-date - registrations will need to be explicitly removed. - - - `syncClusterIPServices` ((#v-synccatalog-syncclusteripservices)) (`boolean: true`) - Syncs services of the ClusterIP type, which may - or may not be broadly accessible depending on your Kubernetes cluster. - Set this to false to skip syncing ClusterIP services. - - - `syncLoadBalancerEndpoints` ((#v-synccatalog-syncloadbalancerendpoints)) (`boolean: false`) - If true, LoadBalancer service endpoints instead of ingress addresses will be synced to Consul. - If false, LoadBalancer endpoints are not synced to Consul. - - - `ingress` ((#v-synccatalog-ingress)) - - - `enabled` ((#v-synccatalog-ingress-enabled)) (`boolean: false`) - Syncs the hostname from a Kubernetes Ingress resource to service registrations - when a rule matched a service. 
Currently only supports host-based routing and - not path-based routing. The only supported path on an ingress rule is "/". - Set this to false to skip syncing Ingress services. - - Currently, port 80 is synced if there is no TLS entry for the hostname. Port - 443 is synced if there is a TLS entry that matches the hostname. - - `loadBalancerIPs` ((#v-synccatalog-ingress-loadbalancerips)) (`boolean: false`) - Requires syncIngress to be `true`. Syncs the LoadBalancer IP from a Kubernetes Ingress - resource instead of the hostname to service registrations when a rule matches a service. - - `nodePortSyncType` ((#v-synccatalog-nodeportsynctype)) (`string: ExternalFirst`) - Configures the type of syncing that happens for NodePort - services. The valid options are: ExternalOnly, InternalOnly, ExternalFirst. - - ExternalOnly will only use a node's ExternalIP address for the sync. - - InternalOnly uses the node's InternalIP address. - - ExternalFirst will preferentially use the node's ExternalIP address, but - if it doesn't exist, it will use the node's InternalIP address instead. - - `aclSyncToken` ((#v-synccatalog-aclsynctoken)) - Refers to a Kubernetes secret that you have created that contains - an ACL token for your Consul cluster which allows the sync process the correct - permissions. This is only needed if ACLs are managed manually within the Consul cluster, i.e. `global.acls.manageSystemACLs` is `false`. - - `secretName` ((#v-synccatalog-aclsynctoken-secretname)) (`string: null`) - The name of the Kubernetes secret that holds the acl sync token. - - `secretKey` ((#v-synccatalog-aclsynctoken-secretkey)) (`string: null`) - The key within the Kubernetes secret that holds the acl sync token. - - `nodeSelector` ((#v-synccatalog-nodeselector)) (`string: null`) - This value defines [`nodeSelector`](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) - labels for catalog sync pod assignment, formatted as a multi-line string. - - Example: - - ```yaml - nodeSelector: | - beta.kubernetes.io/arch: amd64 - ``` - - `affinity` ((#v-synccatalog-affinity)) (`string: null`) - Affinity settings. - This should be a multi-line string matching the affinity object. - - `tolerations` ((#v-synccatalog-tolerations)) (`string: null`) - Toleration settings. - This should be a multi-line string matching the Toleration array - in a PodSpec. - - `serviceAccount` ((#v-synccatalog-serviceaccount)) - - `annotations` ((#v-synccatalog-serviceaccount-annotations)) (`string: null`) - This value defines additional annotations for the catalog sync service account. This should be formatted as a - multi-line string. - - ```yaml - annotations: | - "sample/annotation1": "foo" - "sample/annotation2": "bar" - ``` - - `resources` ((#v-synccatalog-resources)) (`map`) - The resource settings for sync catalog pods. - - `logLevel` ((#v-synccatalog-loglevel)) (`string: ""`) - Override global log verbosity level. One of "debug", "info", "warn", or "error". - - `consulWriteInterval` ((#v-synccatalog-consulwriteinterval)) (`string: null`) - Override the default interval for syncing operations that create Consul services. - - `extraLabels` ((#v-synccatalog-extralabels)) (`map`) - Extra labels to attach to the sync catalog pods. This should be a YAML map. 
- - Example: - - ```yaml - extraLabels: - labelKey: label-value - anotherLabelKey: another-label-value - ``` - - - `annotations` ((#v-synccatalog-annotations)) (`string: null`) - This value defines additional annotations for - the catalog sync pods. This should be formatted as a multi-line string. - - ```yaml - annotations: | - "sample/annotation1": "foo" - "sample/annotation2": "bar" - ``` - -### connectInject ((#h-connectinject)) - -- `connectInject` ((#v-connectinject)) - Configures the automatic Connect sidecar injector. - - - `enabled` ((#v-connectinject-enabled)) (`boolean: true`) - True if you want to enable connect injection. Set to "-" to inherit from - global.enabled. - - - `replicas` ((#v-connectinject-replicas)) (`integer: 1`) - The number of deployment replicas. - - - `image` ((#v-connectinject-image)) (`string: null`) - Image for consul-k8s-control-plane that contains the injector. - - - `default` ((#v-connectinject-default)) (`boolean: false`) - If true, the injector will inject the - Connect sidecar into all pods by default. Otherwise, pods must specify the - [injection annotation](/consul/docs/k8s/connect#consul-hashicorp-com-connect-inject) - to opt-in to Connect injection. If this is true, pods can use the same annotation - to explicitly opt-out of injection. - - - `transparentProxy` ((#v-connectinject-transparentproxy)) - Configures Transparent Proxy for Consul Service mesh services. - Using this feature requires Consul 1.10.0-beta1+. - - - `defaultEnabled` ((#v-connectinject-transparentproxy-defaultenabled)) (`boolean: true`) - If true, then all Consul Service mesh will run with transparent proxy enabled by default, - i.e. we enforce that all traffic within the pod will go through the proxy. - This value is overridable via the "consul.hashicorp.com/transparent-proxy" pod annotation. - - - `defaultOverwriteProbes` ((#v-connectinject-transparentproxy-defaultoverwriteprobes)) (`boolean: true`) - If true, we will overwrite Kubernetes HTTP probes of the pod to point to the Envoy proxy instead. - This setting is recommended because with traffic being enforced to go through the Envoy proxy, - the probes on the pod will fail because kube-proxy doesn't have the right certificates - to talk to Envoy. - This value is also overridable via the "consul.hashicorp.com/transparent-proxy-overwrite-probes" annotation. - Note: This value has no effect if transparent proxy is disabled on the pod. - - - `disruptionBudget` ((#v-connectinject-disruptionbudget)) - This configures the [`PodDisruptionBudget`](https://kubernetes.io/docs/tasks/run-application/configure-pdb/) - for the service mesh sidecar injector. - - - `enabled` ((#v-connectinject-disruptionbudget-enabled)) (`boolean: true`) - This will enable/disable registering a PodDisruptionBudget for the - service mesh sidecar injector. If this is enabled, it will only register the budget so long as - the service mesh is enabled. - - - `maxUnavailable` ((#v-connectinject-disruptionbudget-maxunavailable)) (`integer: null`) - The maximum number of unavailable pods. By default, this will be - automatically computed based on the `connectInject.replicas` value to be `(n/2)-1`. - If you need to set this to `0`, you will need to add a - --set 'connectInject.disruptionBudget.maxUnavailable=0'` flag to the helm chart installation - command because of a limitation in the Helm templating language. - - - `minAvailable` ((#v-connectinject-disruptionbudget-minavailable)) (`integer: null`) - The minimum number of available pods. 
- Takes precedence over maxUnavailable if set. - - `apiGateway` ((#v-connectinject-apigateway)) - Configuration settings for the Consul API Gateway integration. - - `manageExternalCRDs` ((#v-connectinject-apigateway-manageexternalcrds)) (`boolean: true`) - Enables Consul on Kubernetes to manage the CRDs used for Gateway API. - Setting this to true will install the CRDs used for the Gateway API when Consul on Kubernetes is installed. - These CRDs can clash with existing Gateway API CRDs if they are already installed in your cluster. - If this setting is false, you will need to install the Gateway API CRDs manually. - - `manageNonStandardCRDs` ((#v-connectinject-apigateway-managenonstandardcrds)) (`boolean: false`) - Enables Consul on Kubernetes to manage only the non-standard CRDs used for Gateway API. If manageExternalCRDs is true - then all CRDs will be installed; otherwise, if manageNonStandardCRDs is true then only TCPRoute, GatewayClassConfig, and MeshService - will be installed. - - `managedGatewayClass` ((#v-connectinject-apigateway-managedgatewayclass)) - Configuration settings for the GatewayClass installed by Consul on Kubernetes. - - `nodeSelector` ((#v-connectinject-apigateway-managedgatewayclass-nodeselector)) (`string: null`) - This value defines [`nodeSelector`](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) - labels for gateway pod assignment, formatted as a multi-line string. - - Example: - - ```yaml - nodeSelector: | - beta.kubernetes.io/arch: amd64 - ``` - - `tolerations` ((#v-connectinject-apigateway-managedgatewayclass-tolerations)) (`string: null`) - Toleration settings for gateway pods created with the managed gateway class. - This should be a multi-line string matching the - [Tolerations](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/) array in a Pod spec. - - `serviceType` ((#v-connectinject-apigateway-managedgatewayclass-servicetype)) (`string: LoadBalancer`) - This value defines the type of Service created for gateways (e.g. LoadBalancer, ClusterIP). - - `copyAnnotations` ((#v-connectinject-apigateway-managedgatewayclass-copyannotations)) - Configuration settings for annotations to be copied from the Gateway to other child resources. - - `service` ((#v-connectinject-apigateway-managedgatewayclass-copyannotations-service)) (`string: null`) - This value defines a list of annotations to be copied from the Gateway to the Service created, formatted as a multi-line string. - - Example: - - ```yaml - service: - annotations: | - - external-dns.alpha.kubernetes.io/hostname - ``` - - `metrics` ((#v-connectinject-apigateway-managedgatewayclass-metrics)) - Metrics settings for gateways created with this gateway class configuration. - - `enabled` ((#v-connectinject-apigateway-managedgatewayclass-metrics-enabled)) (`boolean: -`) - This value enables or disables metrics collection on a gateway, overriding the global gateway metrics collection settings. - - `port` ((#v-connectinject-apigateway-managedgatewayclass-metrics-port)) (`int: null`) - This value sets the port to use for scraping gateway metrics via Prometheus, defaults to 20200 if not set. Must be in the port - range of 1024-65535. - - `path` ((#v-connectinject-apigateway-managedgatewayclass-metrics-path)) (`string: null`) - This value sets the path to use for scraping gateway metrics via Prometheus, defaults to /metrics if not set. 
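  Example (a sketch with illustrative values only; the port must fall within the 1024-65535 range noted above):

  ```yaml
  connectInject:
    apiGateway:
      managedGatewayClass:
        metrics:
          enabled: true
          port: 20201
          path: /gw-metrics
  ```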
- - - `resources` ((#v-connectinject-apigateway-managedgatewayclass-resources)) (`map`) - The resource settings for Pods handling traffic for Gateway API. - - `deployment` ((#v-connectinject-apigateway-managedgatewayclass-deployment)) - This value defines the number of pods to deploy for each Gateway as well as a min and max number of pods for all Gateways. - - `defaultInstances` ((#v-connectinject-apigateway-managedgatewayclass-deployment-defaultinstances)) (`integer: 1`) - - `maxInstances` ((#v-connectinject-apigateway-managedgatewayclass-deployment-maxinstances)) (`integer: 1`) - - `minInstances` ((#v-connectinject-apigateway-managedgatewayclass-deployment-mininstances)) (`integer: 1`) - - `openshiftSCCName` ((#v-connectinject-apigateway-managedgatewayclass-openshiftsccname)) (`string: restricted-v2`) - The name of the OpenShift SecurityContextConstraints resource to use for Gateways. - Only applicable if `global.openshift.enabled` is true. - - `mapPrivilegedContainerPorts` ((#v-connectinject-apigateway-managedgatewayclass-mapprivilegedcontainerports)) (`integer: 0`) - This value defines the amount we will add to privileged container ports on gateways that use this class. - This is useful if you don't want to give your containers extra permissions to run privileged ports. - Example: The gateway listener is defined on port 80, but the underlying value of the port on the container - will be 80 + the number defined below. - - `serviceAccount` ((#v-connectinject-apigateway-serviceaccount)) - Configuration for the ServiceAccount created for the api-gateway component. - - `annotations` ((#v-connectinject-apigateway-serviceaccount-annotations)) (`string: null`) - This value defines additional annotations for the api-gateway service account. This should be formatted as a multi-line - string. - - ```yaml - annotations: | - "sample/annotation1": "foo" - "sample/annotation2": "bar" - ``` - - `cni` ((#v-connectinject-cni)) - Configures the consul-cni plugin for Consul service mesh services. - - `enabled` ((#v-connectinject-cni-enabled)) (`boolean: false`) - If true, then all traffic redirection setup uses the consul-cni plugin. - Requires connectInject.enabled to also be true. - - `logLevel` ((#v-connectinject-cni-loglevel)) (`string: null`) - Log level for the installer and plugin. Overrides global.logLevel. - - `namespace` ((#v-connectinject-cni-namespace)) (`string: null`) - Set the namespace to install the CNI plugin into. Overrides global namespace settings for CNI resources. - Ex: "kube-system" - - `cniBinDir` ((#v-connectinject-cni-cnibindir)) (`string: /opt/cni/bin`) - Location on the Kubernetes node where the CNI plugin is installed. Should be an absolute path and start with a '/'. - Example on GKE: - - ```yaml - cniBinDir: "/home/kubernetes/bin" - ``` - - `cniNetDir` ((#v-connectinject-cni-cninetdir)) (`string: /etc/cni/net.d`) - Location on the Kubernetes node of all CNI configuration. Should be an absolute path and start with a '/'. - - `multus` ((#v-connectinject-cni-multus)) (`string: false`) - Indicates whether the Multus CNI plugin is enabled alongside consul-cni. When enabled, consul-cni will not be installed as a chained - CNI plugin. Instead, a NetworkAttachmentDefinition CustomResourceDefinition (CRD) will be created in the Helm - release namespace. Following Multus plugin standards, an annotation is required in order for the consul-cni plugin - to be executed and for your service to be added to the Consul Service Mesh. 
- - Add the annotation `'k8s.v1.cni.cncf.io/networks': '[{ "name":"consul-cni","namespace": "consul" }]'` to your pod - to use the default installed NetworkAttachmentDefinition CRD. - - Please refer to the [Multus Quickstart Guide](https://github.com/k8snetworkplumbingwg/multus-cni/blob/master/docs/quickstart.md) - for more information about using Multus. - - `resources` ((#v-connectinject-cni-resources)) (`map`) - The resource settings for the CNI installer DaemonSet. - - `resourceQuota` ((#v-connectinject-cni-resourcequota)) - Resource quotas for running the DaemonSet as system-critical pods. - - `pods` ((#v-connectinject-cni-resourcequota-pods)) (`integer: 5000`) - - `securityContext` ((#v-connectinject-cni-securitycontext)) (`map`) - The security context for the CNI installer DaemonSet. This should be a YAML map corresponding to a - Kubernetes [SecurityContext](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) object. - By default, the CNI installer pods will run as root, with user ID `0` and group ID `0`. - Note: if running on OpenShift, this setting is ignored because the user and group are set automatically - by the OpenShift platform. - - `updateStrategy` ((#v-connectinject-cni-updatestrategy)) (`string: null`) - updateStrategy for the CNI installer DaemonSet. - Refer to the Kubernetes [Daemonset upgrade strategy](https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/#daemonset-update-strategy) - documentation. - This should be a multi-line string mapping directly to the updateStrategy. - - Example: - - ```yaml - updateStrategy: | - rollingUpdate: - maxUnavailable: 5 - type: RollingUpdate - ``` - - `consulNode` ((#v-connectinject-consulnode)) - - `meta` ((#v-connectinject-consulnode-meta)) (`map`) - meta specifies an arbitrary metadata key/value pair to associate with the node. - - Example: - - ```yaml - meta: - cluster: test-cluster - persistent: true - ``` - - `metrics` ((#v-connectinject-metrics)) - Configures metrics for Consul service mesh services. All values are overridable - via annotations on a per-pod basis. - - `defaultEnabled` ((#v-connectinject-metrics-defaultenabled)) (`string: -`) - If true, the connect-injector will automatically - add Prometheus annotations to connect-injected pods. It will also - add a listener on the Envoy sidecar to expose metrics. The exposed - metrics will depend on whether metrics merging is enabled: - - If metrics merging is enabled: - the consul-dataplane will run a merged metrics server - combining Envoy sidecar and Connect service metrics, - i.e. if your service exposes its own Prometheus metrics. - - If metrics merging is disabled: - the listener will just expose Envoy sidecar metrics. - This will inherit from `global.metrics.enabled`. - - `defaultEnableMerging` ((#v-connectinject-metrics-defaultenablemerging)) (`boolean: false`) - Configures the consul-dataplane to run a merged metrics server - to combine and serve both Envoy and Connect service metrics. - This feature is available only in Consul v1.10.0 or greater. - - `defaultMergedMetricsPort` ((#v-connectinject-metrics-defaultmergedmetricsport)) (`integer: 20100`) - Configures the port that the consul-dataplane will listen on to return - combined metrics. This port only needs to be changed if it conflicts with - the application's ports. 
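  Example (a sketch that turns on metrics with merging for connect-injected pods, assuming the services expose their own Prometheus metrics; the port shown is the documented default):

  ```yaml
  connectInject:
    metrics:
      defaultEnabled: true
      defaultEnableMerging: true
      defaultMergedMetricsPort: 20100
  ```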
- - - `defaultPrometheusScrapePort` ((#v-connectinject-metrics-defaultprometheusscrapeport)) (`integer: 20200`) - Configures the port Prometheus will scrape metrics from, by configuring - the Pod annotation `prometheus.io/port` and the corresponding listener in - the Envoy sidecar. - NOTE: This is *not* the port that your application exposes metrics on. - That can be configured with the - `consul.hashicorp.com/service-metrics-port` annotation. - - `defaultPrometheusScrapePath` ((#v-connectinject-metrics-defaultprometheusscrapepath)) (`string: /metrics`) - Configures the path Prometheus will scrape metrics from, by configuring the pod - annotation `prometheus.io/path` and the corresponding handler in the Envoy - sidecar. - NOTE: This is *not* the path that your application exposes metrics on. - That can be configured with the - `consul.hashicorp.com/service-metrics-path` annotation. - - `envoyExtraArgs` ((#v-connectinject-envoyextraargs)) (`string: null`) - Used to pass arguments to the injected Envoy sidecar. - Valid arguments to pass to Envoy can be found here: https://www.envoyproxy.io/docs/envoy/latest/operations/cli - e.g. "--log-level debug --disable-hot-restart" - - `priorityClassName` ((#v-connectinject-priorityclassname)) (`string: ""`) - Optional priorityClassName. - - `extraLabels` ((#v-connectinject-extralabels)) (`map`) - Extra labels to attach to the connect inject pods. This should be a YAML map. - - Example: - - ```yaml - extraLabels: - labelKey: label-value - anotherLabelKey: another-label-value - ``` - - `annotations` ((#v-connectinject-annotations)) (`string: null`) - This value defines additional annotations for - connect inject pods. This should be formatted as a multi-line string. - - ```yaml - annotations: | - "sample/annotation1": "foo" - "sample/annotation2": "bar" - ``` - - `imageConsul` ((#v-connectinject-imageconsul)) (`string: null`) - The Docker image for Consul to use when performing Connect injection. - Defaults to global.image. - - `logLevel` ((#v-connectinject-loglevel)) (`string: ""`) - Sets the `logLevel` for the `consul-dataplane` sidecar and the `consul-connect-inject-init` container. When set, this value overrides the global log verbosity level. One of "debug", "info", "warn", or "error". - - `serviceAccount` ((#v-connectinject-serviceaccount)) - - `annotations` ((#v-connectinject-serviceaccount-annotations)) (`string: null`) - This value defines additional annotations for the injector service account. This should be formatted as a - multi-line string. - - ```yaml - annotations: | - "sample/annotation1": "foo" - "sample/annotation2": "bar" - ``` - - `resources` ((#v-connectinject-resources)) (`map`) - The resource settings for connect inject pods. The defaults are optimized for getting-started workflows on developer deployments. The settings should be tweaked for production deployments. 
- - - `requests` ((#v-connectinject-resources-requests)) - - `memory` ((#v-connectinject-resources-requests-memory)) (`string: 200Mi`) - Recommended production default: 500Mi - - `cpu` ((#v-connectinject-resources-requests-cpu)) (`string: 50m`) - Recommended production default: 250m - - `limits` ((#v-connectinject-resources-limits)) - - `memory` ((#v-connectinject-resources-limits-memory)) (`string: 200Mi`) - Recommended production default: 500Mi - - `cpu` ((#v-connectinject-resources-limits-cpu)) (`string: 50m`) - Recommended production default: 250m - - `failurePolicy` ((#v-connectinject-failurepolicy)) (`string: Fail`) - Sets the failurePolicy for the mutating webhook. By default this will cause pods not part of the Consul installation to fail scheduling while the webhook - is offline. This prevents a pod from skipping mutation if the webhook were to be momentarily offline. - Once the webhook is back online the pod will be scheduled. - In some environments such as Kind this may have an undesirable effect as it may prevent volume provisioner pods from running - which can lead to hangs. In these environments it is recommended to use "Ignore" instead. - This setting can be safely disabled by setting to "Ignore". - - `namespaceSelector` ((#v-connectinject-namespaceselector)) (`string`) - Selector for restricting the webhook to only specific namespaces. - Use with `connectInject.default: true` to automatically inject all pods in namespaces that match the selector. This should be set to a multiline string. - Refer to https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#matching-requests-namespaceselector - for more details. - - By default, we exclude kube-system, since users usually won't - want those pods injected, as well as local-path-storage and openebs so that - Kind (Kubernetes In Docker) and [OpenEBS](https://openebs.io/) respectively can provision Pods used to create PVCs. - Note that this exclusion is only supported in Kubernetes v1.21.1+. - - Example: - - ```yaml - namespaceSelector: | - matchLabels: - namespace-label: label-value - ``` - - `k8sAllowNamespaces` ((#v-connectinject-k8sallownamespaces)) (`array: ["*"]`) - List of k8s namespaces to allow Connect sidecar - injection in. If a k8s namespace is not included or is listed in `k8sDenyNamespaces`, - pods in that k8s namespace will not be injected even if they are explicitly - annotated. Use `["*"]` to automatically allow all k8s namespaces. - - For example, `["namespace1", "namespace2"]` will only allow pods in the k8s - namespaces `namespace1` and `namespace2` to have Consul service mesh sidecars injected - and registered with Consul. All other k8s namespaces will be ignored. - - To deny all namespaces, set this to `[]`. - - Note: `k8sDenyNamespaces` takes precedence over values defined here and - `namespaceSelector` takes precedence over both since it is applied first. - `kube-system` and `kube-public` are never injected, even if included here. - - `k8sDenyNamespaces` ((#v-connectinject-k8sdenynamespaces)) (`array: []`) - List of k8s namespaces that should not allow Connect - sidecar injection. This list takes precedence over `k8sAllowNamespaces`. - `*` is not supported because then nothing would be allowed to be injected. - - For example, if `k8sAllowNamespaces` is `["*"]` and `k8sDenyNamespaces` is - `["namespace1", "namespace2"]`, then all k8s namespaces besides "namespace1" - and "namespace2" will be available for injection. 
- - Note: `namespaceSelector` takes precedence over this since it is applied first. - `kube-system` and `kube-public` are never injected. - - `consulNamespaces` ((#v-connectinject-consulnamespaces)) - These settings manage the connect injector's interaction with - Consul namespaces (requires consul-ent v1.7+). - Also, `global.enableConsulNamespaces` must be true. - - `consulDestinationNamespace` ((#v-connectinject-consulnamespaces-consuldestinationnamespace)) (`string: default`) - Name of the Consul namespace to register all - k8s pods into. If the Consul namespace does not already exist, - it will be created. This will be ignored if `mirroringK8S` is true. - - `mirroringK8S` ((#v-connectinject-consulnamespaces-mirroringk8s)) (`boolean: true`) - Causes k8s pods to be registered into a Consul namespace - of the same name as their k8s namespace, optionally prefixed if - `mirroringK8SPrefix` is set below. If the Consul namespace does not - already exist, it will be created. Turning this on overrides the - `consulDestinationNamespace` setting. If mirroring is enabled, avoid creating any Consul - resources in the following Kubernetes namespaces, as Consul currently reserves these - namespaces for system use: "system", "universal", "operator", "root". - - `mirroringK8SPrefix` ((#v-connectinject-consulnamespaces-mirroringk8sprefix)) (`string: ""`) - If `mirroringK8S` is set to true, `mirroringK8SPrefix` allows each Consul namespace - to be given a prefix. For example, if `mirroringK8SPrefix` is set to "k8s-", a - pod in the k8s `staging` namespace will be registered into the - `k8s-staging` Consul namespace. - - `nodeSelector` ((#v-connectinject-nodeselector)) (`string: null`) - Selector labels for connectInject pod assignment, formatted as a multi-line string. - ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector - - Example: - - ```yaml - nodeSelector: | - beta.kubernetes.io/arch: amd64 - ``` - - `affinity` ((#v-connectinject-affinity)) (`string: null`) - Affinity settings. - This should be a multi-line string matching the affinity object. - - `tolerations` ((#v-connectinject-tolerations)) (`string: null`) - Toleration settings. - This should be a multi-line string matching the Toleration array - in a PodSpec. - - `aclBindingRuleSelector` ((#v-connectinject-aclbindingruleselector)) (`string: serviceaccount.name!=default`) - Query that defines which Service Accounts - can authenticate to Consul and receive an ACL token during Connect injection. - The default setting, i.e. serviceaccount.name!=default, prevents the - 'default' Service Account from logging in. - If set to an empty string, all service accounts can log in. - This only has an effect if ACLs are enabled. - - Refer to Auth methods [Binding rules](/consul/docs/security/acl/auth-methods#binding-rules) - and [Trusted identity attributes](/consul/docs/security/acl/auth-methods/kubernetes#trusted-identity-attributes) - for more details. - Requires Consul >= v1.5. - - `overrideAuthMethodName` ((#v-connectinject-overrideauthmethodname)) (`string: ""`) - If you are not using global.acls.manageSystemACLs and instead manually setting up an - auth method for Connect inject, set this to the name of your auth method. - - `aclInjectToken` ((#v-connectinject-aclinjecttoken)) - Refers to a Kubernetes secret that you have created that contains - an ACL token for your Consul cluster which allows the Connect injector the correct - permissions. 
This is only needed if Consul namespaces and ACLs - are enabled on the Consul cluster and you are not setting - `global.acls.manageSystemACLs` to `true`. - This token needs to have `operator = "write"` privileges to be able to - create Consul namespaces. - - - `secretName` ((#v-connectinject-aclinjecttoken-secretname)) (`string: null`) - The name of the Vault secret that holds the ACL inject token. - - - `secretKey` ((#v-connectinject-aclinjecttoken-secretkey)) (`string: null`) - The key within the Vault secret that holds the ACL inject token. - - - `sidecarProxy` ((#v-connectinject-sidecarproxy)) - - - `concurrency` ((#v-connectinject-sidecarproxy-concurrency)) (`string: 2`) - The number of worker threads to be used by the Envoy proxy. - By default the threading model of Envoy will use one thread per CPU core per envoy proxy. This - leads to unnecessary thread and memory usage and leaves unnecessary idle connections open. It is - advised to keep this number low for sidecars and high for edge proxies. - This will control the `--concurrency` flag to Envoy. - For additional information, refer to https://blog.envoyproxy.io/envoy-threading-model-a8d44b922310 - - This setting can be overridden on a per-pod basis via this annotation: - - `consul.hashicorp.com/consul-envoy-proxy-concurrency` - - - `resources` ((#v-connectinject-sidecarproxy-resources)) (`map`) - Set default resources for sidecar proxy. If null, that resource won't - be set. - These settings can be overridden on a per-pod basis via these annotations: - - - `consul.hashicorp.com/sidecar-proxy-cpu-limit` - - `consul.hashicorp.com/sidecar-proxy-cpu-request` - - `consul.hashicorp.com/sidecar-proxy-memory-limit` - - `consul.hashicorp.com/sidecar-proxy-memory-request` - - - `requests` ((#v-connectinject-sidecarproxy-resources-requests)) - - - `memory` ((#v-connectinject-sidecarproxy-resources-requests-memory)) (`string: null`) - Recommended production default: 100Mi - - - `cpu` ((#v-connectinject-sidecarproxy-resources-requests-cpu)) (`string: null`) - Recommended production default: 100m - - - `limits` ((#v-connectinject-sidecarproxy-resources-limits)) - - - `memory` ((#v-connectinject-sidecarproxy-resources-limits-memory)) (`string: null`) - Recommended production default: 100Mi - - - `cpu` ((#v-connectinject-sidecarproxy-resources-limits-cpu)) (`string: null`) - Recommended production default: 100m - - - `lifecycle` ((#v-connectinject-sidecarproxy-lifecycle)) (`map`) - Set default lifecycle management configuration for sidecar proxy. 
- These settings can be overridden on a per-pod basis via these annotations: - - - `consul.hashicorp.com/enable-sidecar-proxy-lifecycle` - - `consul.hashicorp.com/enable-sidecar-proxy-shutdown-drain-listeners` - - `consul.hashicorp.com/sidecar-proxy-lifecycle-shutdown-grace-period-seconds` - - `consul.hashicorp.com/sidecar-proxy-lifecycle-startup-grace-period-seconds` - - `consul.hashicorp.com/sidecar-proxy-lifecycle-graceful-port` - - `consul.hashicorp.com/sidecar-proxy-lifecycle-graceful-shutdown-path` - - `consul.hashicorp.com/sidecar-proxy-lifecycle-graceful-startup-path` - - - `defaultEnabled` ((#v-connectinject-sidecarproxy-lifecycle-defaultenabled)) (`boolean: true`) - - - `defaultEnableShutdownDrainListeners` ((#v-connectinject-sidecarproxy-lifecycle-defaultenableshutdowndrainlisteners)) (`boolean: true`) - - - `defaultShutdownGracePeriodSeconds` ((#v-connectinject-sidecarproxy-lifecycle-defaultshutdowngraceperiodseconds)) (`integer: 30`) - - - `defaultStartupGracePeriodSeconds` ((#v-connectinject-sidecarproxy-lifecycle-defaultstartupgraceperiodseconds)) (`integer: 0`) - - - `defaultGracefulPort` ((#v-connectinject-sidecarproxy-lifecycle-defaultgracefulport)) (`integer: 20600`) - - - `defaultGracefulShutdownPath` ((#v-connectinject-sidecarproxy-lifecycle-defaultgracefulshutdownpath)) (`string: /graceful_shutdown`) - - - `defaultGracefulStartupPath` ((#v-connectinject-sidecarproxy-lifecycle-defaultgracefulstartuppath)) (`string: /graceful_startup`) - - - `defaultStartupFailureSeconds` ((#v-connectinject-sidecarproxy-defaultstartupfailureseconds)) (`integer: 0`) - Configures how long the k8s startup probe will wait before the proxy is considered to be unhealthy and the container is restarted. - A value of zero disables the probe. - - - `defaultLivenessFailureSeconds` ((#v-connectinject-sidecarproxy-defaultlivenessfailureseconds)) (`integer: 0`) - Configures how long the k8s liveness probe will wait before the proxy is considered to be unhealthy and the container is restarted. - A value of zero disables the probe. - - - `initContainer` ((#v-connectinject-initcontainer)) (`map`) - The resource settings for the Connect injected init container. If null, the resources - won't be set for the initContainer. The defaults are optimized for developer instances of - Kubernetes, however they should be tweaked with the recommended defaults as shown below to speed up service registration times. - - - `resources` ((#v-connectinject-initcontainer-resources)) - - - `requests` ((#v-connectinject-initcontainer-resources-requests)) - - - `memory` ((#v-connectinject-initcontainer-resources-requests-memory)) (`string: 25Mi`) - Recommended production default: 150Mi - - - `cpu` ((#v-connectinject-initcontainer-resources-requests-cpu)) (`string: 50m`) - Recommended production default: 250m - - - `limits` ((#v-connectinject-initcontainer-resources-limits)) - - - `memory` ((#v-connectinject-initcontainer-resources-limits-memory)) (`string: 150Mi`) - Recommended production default: 150Mi - - - `cpu` ((#v-connectinject-initcontainer-resources-limits-cpu)) (`string: null`) - Recommended production default: 500m - -### meshGateway ((#h-meshgateway)) - -- `meshGateway` ((#v-meshgateway)) - [Mesh Gateways](/consul/docs/connect/gateways/mesh-gateway) enable Consul Connect to work across Consul datacenters. 
- - - `enabled` ((#v-meshgateway-enabled)) (`boolean: false`) - If [mesh gateways](/consul/docs/connect/gateways/mesh-gateway) are enabled, a Deployment will be created that runs - gateways and Consul service mesh will be configured to use gateways. - This setting is required for [Cluster Peering](/consul/docs/connect/cluster-peering/k8s). - Requirements: consul 1.6.0+ if using `global.acls.manageSystemACLs`. - - - `logLevel` ((#v-meshgateway-loglevel)) (`string: ""`) - Override global log verbosity level for mesh-gateway-deployment pods. One of "trace", "debug", "info", "warn", or "error". - - - `replicas` ((#v-meshgateway-replicas)) (`integer: 1`) - Number of replicas for the Deployment. - - - `wanAddress` ((#v-meshgateway-wanaddress)) - What gets registered as WAN address for the gateway. - - - `source` ((#v-meshgateway-wanaddress-source)) (`string: Service`) - source configures where to retrieve the WAN address (and possibly port) - for the mesh gateway from. - Can be set to either: `Service`, `NodeIP`, `NodeName` or `Static`. - - - `Service` - Determine the address based on the service type. - - - If `service.type=LoadBalancer` use the external IP or hostname of - the service. Use the port set by `service.port`. - - - If `service.type=NodePort` use the Node IP. The port will be set to - `service.nodePort` so `service.nodePort` cannot be null. - - - If `service.type=ClusterIP` use the `ClusterIP`. The port will be set to - `service.port`. - - - `service.type=ExternalName` is not supported. - - - `NodeIP` - The node IP as provided by the Kubernetes downward API. - - - `NodeName` - The name of the node as provided by the Kubernetes downward - API. This is useful if the node names are DNS entries that - are routable from other datacenters. - - - `Static` - Use the address hardcoded in `meshGateway.wanAddress.static`. - - - `port` ((#v-meshgateway-wanaddress-port)) (`integer: 443`) - Port that gets registered for WAN traffic. - If source is set to "Service" then this setting will have no effect. - Refer to the documentation for source as to which port will be used in that - case. - - - `static` ((#v-meshgateway-wanaddress-static)) (`string: ""`) - If source is set to "Static" then this value will be used as the WAN - address of the mesh gateways. This is useful if you've configured a - DNS entry to point to your mesh gateways. - - - `service` ((#v-meshgateway-service)) - The service option configures the Service that fronts the Gateway Deployment. - - - `type` ((#v-meshgateway-service-type)) (`string: LoadBalancer`) - Type of service, ex. LoadBalancer, ClusterIP. - - - `port` ((#v-meshgateway-service-port)) (`integer: 443`) - Port that the service will be exposed on. - The targetPort will be set to meshGateway.containerPort. - - - `nodePort` ((#v-meshgateway-service-nodeport)) (`integer: null`) - Optionally set the nodePort value of the service if using a NodePort service. - If not set and using a NodePort service, Kubernetes will automatically assign - a port. - - - `annotations` ((#v-meshgateway-service-annotations)) (`string: null`) - Annotations to apply to the mesh gateway service. - - Example: - - ```yaml - annotations: | - 'annotation-key': annotation-value - ``` - - - `additionalSpec` ((#v-meshgateway-service-additionalspec)) (`string: null`) - Optional YAML string that will be appended to the Service spec. - - - `hostNetwork` ((#v-meshgateway-hostnetwork)) (`boolean: false`) - If set to true, gateway Pods will run on the host network.
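-
-  As an illustrative sketch (not part of the chart defaults), the `wanAddress` and `service` settings above can be combined as follows to advertise a static, DNS-based WAN address for the gateways. The hostname `mesh-gateway.dc1.example.com` is a placeholder for a record you would create yourself:
-
-  ```yaml
-  meshGateway:
-    enabled: true
-    replicas: 2
-    wanAddress:
-      source: Static
-      static: mesh-gateway.dc1.example.com
-      port: 443
-    service:
-      type: LoadBalancer
-      port: 443
-  ```
-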
- - - `dnsPolicy` ((#v-meshgateway-dnspolicy)) (`string: null`) - dnsPolicy to use. - - - `consulServiceName` ((#v-meshgateway-consulservicename)) (`string: mesh-gateway`) - Consul service name for the mesh gateways. - Cannot be set to anything other than "mesh-gateway" if - global.acls.manageSystemACLs is true since the ACL token - generated is only for the name 'mesh-gateway'. - - - `containerPort` ((#v-meshgateway-containerport)) (`integer: 8443`) - Port that the gateway will run on inside the container. - - - `hostPort` ((#v-meshgateway-hostport)) (`integer: null`) - Optional hostPort for the gateway to be exposed on. - This can be used with wanAddress.port and wanAddress.useNodeIP - to expose the gateways directly from the node. - If hostNetwork is true, this must be null or set to the same port as - containerPort. - NOTE: Cannot set to 8500 or 8502 because those are reserved for the Consul - agent. - - - `serviceAccount` ((#v-meshgateway-serviceaccount)) - - - `annotations` ((#v-meshgateway-serviceaccount-annotations)) (`string: null`) - This value defines additional annotations for the mesh gateways' service account. This should be formatted as a - multi-line string. - - ```yaml - annotations: | - "sample/annotation1": "foo" - "sample/annotation2": "bar" - ``` - - - `resources` ((#v-meshgateway-resources)) (`map`) - The resource settings for mesh gateway pods. - NOTE: The use of a YAML string is deprecated. Instead, set directly as a - YAML map. - - - `initServiceInitContainer` ((#v-meshgateway-initserviceinitcontainer)) (`map`) - The resource settings for the `service-init` init container. - - - `affinity` ((#v-meshgateway-affinity)) (`string: null`) - This value defines the [affinity](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity) - for mesh gateway pods. It defaults to `null` thereby allowing multiple gateway pods on each node. But if one would prefer - a mode which minimizes risk of the cluster becoming unusable if a node is lost, set this value - to the value in the example below. - - Example: - - ```yaml - affinity: | - podAntiAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - - labelSelector: - matchLabels: - app: {{ template "consul.name" . }} - release: "{{ .Release.Name }}" - component: mesh-gateway - topologyKey: kubernetes.io/hostname - ``` - - - `tolerations` ((#v-meshgateway-tolerations)) (`string: null`) - Optional YAML string to specify tolerations. - - - `topologySpreadConstraints` ((#v-meshgateway-topologyspreadconstraints)) (`string: ""`) - Pod topology spread constraints for mesh gateway pods. - This should be a multi-line YAML string matching the - [`topologySpreadConstraints`](https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/) - array in a Pod Spec. - - This requires K8S >= 1.18 (beta) or 1.19 (stable). - - Example: - - ```yaml - topologySpreadConstraints: | - - maxSkew: 1 - topologyKey: topology.kubernetes.io/zone - whenUnsatisfiable: DoNotSchedule - labelSelector: - matchLabels: - app: {{ template "consul.name" . }} - release: "{{ .Release.Name }}" - component: mesh-gateway - ``` - - - `nodeSelector` ((#v-meshgateway-nodeselector)) (`string: null`) - Optional YAML string to specify a nodeSelector config. - - - `priorityClassName` ((#v-meshgateway-priorityclassname)) (`string: ""`) - Optional priorityClassName. - - - `annotations` ((#v-meshgateway-annotations)) (`string: null`) - Annotations to apply to the mesh gateway deployment. 
- - Example: - - ```yaml - annotations: | - 'annotation-key': annotation-value - ``` - -### ingressGateways ((#h-ingressgateways)) - -- `ingressGateways` ((#v-ingressgateways)) - Configuration options for ingress gateways. Default values for all - ingress gateways are defined in `ingressGateways.defaults`. Any of - these values may be overridden in `ingressGateways.gateways` for a - specific gateway with the exception of annotations. Annotations will - include both the default annotations and any additional ones defined - for a specific gateway. - Requirements: consul >= 1.8.0 - - - `enabled` ((#v-ingressgateways-enabled)) (`boolean: false`) - Enable ingress gateway deployment. Requires `connectInject.enabled=true`. - - - `logLevel` ((#v-ingressgateways-loglevel)) (`string: ""`) - Override global log verbosity level for ingress-gateways-deployment pods. One of "trace", "debug", "info", "warn", or "error". - - - `defaults` ((#v-ingressgateways-defaults)) - Defaults sets default values for all gateway fields. With the exception - of annotations, defining any of these values in the `gateways` list - will override the default values provided here. Annotations will - include both the default annotations and any additional ones defined - for a specific gateway. - - - `replicas` ((#v-ingressgateways-defaults-replicas)) (`integer: 1`) - Number of replicas for each ingress gateway defined. - - - `service` ((#v-ingressgateways-defaults-service)) - The service options configure the Service that fronts the gateway Deployment. - - - `type` ((#v-ingressgateways-defaults-service-type)) (`string: ClusterIP`) - Type of service: LoadBalancer, ClusterIP or NodePort. If using NodePort service - type, you must set the desired nodePorts in the `ports` setting below. - - - `ports` ((#v-ingressgateways-defaults-service-ports)) (`array: [{port: 8080, port: 8443}]`) - Ports that will be exposed on the service and gateway container. Any - ports defined as ingress listeners on the gateway's Consul configuration - entry should be included here. The first port will be used as part of - the Consul service registration for the gateway and be listed in its - SRV record. If using a NodePort service type, you must specify the - desired nodePort for each exposed port. - - - `annotations` ((#v-ingressgateways-defaults-service-annotations)) (`string: null`) - Annotations to apply to the ingress gateway service. Annotations defined - here will be applied to all ingress gateway services in addition to any - service annotations defined for a specific gateway in `ingressGateways.gateways`. - - Example: - - ```yaml - annotations: | - 'annotation-key': annotation-value - ``` - - - `additionalSpec` ((#v-ingressgateways-defaults-service-additionalspec)) (`string: null`) - Optional YAML string that will be appended to the Service spec. - - - `serviceAccount` ((#v-ingressgateways-defaults-serviceaccount)) - - - `annotations` ((#v-ingressgateways-defaults-serviceaccount-annotations)) (`string: null`) - This value defines additional annotations for the ingress gateways' service account. This should be formatted - as a multi-line string. 
- - ```yaml - annotations: | - "sample/annotation1": "foo" - "sample/annotation2": "bar" - ``` - - - `resources` ((#v-ingressgateways-defaults-resources)) (`map`) - Resource limits for all ingress gateway pods - - - `affinity` ((#v-ingressgateways-defaults-affinity)) (`string: null`) - This value defines the [affinity](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity) - for ingress gateway pods. It defaults to `null` thereby allowing multiple gateway pods on each node. But if one would prefer - a mode which minimizes risk of the cluster becoming unusable if a node is lost, set this value - to the value in the example below. - - Example: - - ```yaml - affinity: | - podAntiAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - - labelSelector: - matchLabels: - app: {{ template "consul.name" . }} - release: "{{ .Release.Name }}" - component: ingress-gateway - topologyKey: kubernetes.io/hostname - ``` - - - `tolerations` ((#v-ingressgateways-defaults-tolerations)) (`string: null`) - Optional YAML string to specify tolerations. - - - `topologySpreadConstraints` ((#v-ingressgateways-defaults-topologyspreadconstraints)) (`string: ""`) - Pod topology spread constraints for ingress gateway pods. - This should be a multi-line YAML string matching the - [`topologySpreadConstraints`](https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/) - array in a Pod Spec. - - This requires K8S >= 1.18 (beta) or 1.19 (stable). - - Example: - - ```yaml - topologySpreadConstraints: | - - maxSkew: 1 - topologyKey: topology.kubernetes.io/zone - whenUnsatisfiable: DoNotSchedule - labelSelector: - matchLabels: - app: {{ template "consul.name" . }} - release: "{{ .Release.Name }}" - component: ingress-gateway - ``` - - - `nodeSelector` ((#v-ingressgateways-defaults-nodeselector)) (`string: null`) - Optional YAML string to specify a nodeSelector config. - - - `priorityClassName` ((#v-ingressgateways-defaults-priorityclassname)) (`string: ""`) - Optional priorityClassName. - - - `terminationGracePeriodSeconds` ((#v-ingressgateways-defaults-terminationgraceperiodseconds)) (`integer: 10`) - Amount of seconds to wait for graceful termination before killing the pod. - - - `annotations` ((#v-ingressgateways-defaults-annotations)) (`string: null`) - Annotations to apply to the ingress gateway deployment. Annotations defined - here will be applied to all ingress gateway deployments in addition to any - annotations defined for a specific gateway in `ingressGateways.gateways`. - - Example: - - ```yaml - annotations: | - "annotation-key": 'annotation-value' - ``` - - - `consulNamespace` ((#v-ingressgateways-defaults-consulnamespace)) (`string: default`) - `consulNamespace` defines the Consul namespace to register - the gateway into. Requires `global.enableConsulNamespaces` to be true and - Consul Enterprise v1.7+ with a valid Consul Enterprise license. - Note: The Consul namespace MUST exist before the gateway is deployed. - - - `gateways` ((#v-ingressgateways-gateways)) (`array`) - Gateways is a list of gateway objects. The only required field for - each is `name`, though they can also contain any of the fields in - `defaults`. You must provide a unique name for each ingress gateway. These names - must be unique across different namespaces. - Values defined here override the defaults, except in the case of annotations where both will be applied. 
- - - `name` ((#v-ingressgateways-gateways-name)) (`string: ingress-gateway`) - -### terminatingGateways ((#h-terminatinggateways)) - -- `terminatingGateways` ((#v-terminatinggateways)) - Configuration options for terminating gateways. Default values for all - terminating gateways are defined in `terminatingGateways.defaults`. Any of - these values may be overridden in `terminatingGateways.gateways` for a - specific gateway with the exception of annotations. Annotations will - include both the default annotations and any additional ones defined - for a specific gateway. - Requirements: consul >= 1.8.0 - - - `enabled` ((#v-terminatinggateways-enabled)) (`boolean: false`) - Enable terminating gateway deployment. Requires `connectInject.enabled=true`. - - - `logLevel` ((#v-terminatinggateways-loglevel)) (`string: ""`) - Override global log verbosity level. One of "trace", "debug", "info", "warn", or "error". - - - `defaults` ((#v-terminatinggateways-defaults)) - Defaults sets default values for all gateway fields. With the exception - of annotations, defining any of these values in the `gateways` list - will override the default values provided here. Annotations will - include both the default annotations and any additional ones defined - for a specific gateway. - - - `replicas` ((#v-terminatinggateways-defaults-replicas)) (`integer: 1`) - Number of replicas for each terminating gateway defined. - - - `extraVolumes` ((#v-terminatinggateways-defaults-extravolumes)) (`array`) - A list of extra volumes to mount. These will be exposed to Consul in the path `/consul/userconfig//`. - - Example: - - ```yaml - extraVolumes: - - type: secret - name: my-secret - items: # optional items array - - key: key - path: path # secret will now mount to /consul/userconfig/my-secret/path - ``` - - - `resources` ((#v-terminatinggateways-defaults-resources)) (`map`) - Resource limits for all terminating gateway pods - - - `affinity` ((#v-terminatinggateways-defaults-affinity)) (`string: null`) - This value defines the [affinity](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity) - for terminating gateway pods. It defaults to `null` thereby allowing multiple gateway pods on each node. But if one would prefer - a mode which minimizes risk of the cluster becoming unusable if a node is lost, set this value - to the value in the example below. - - Example: - - ```yaml - affinity: | - podAntiAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - - labelSelector: - matchLabels: - app: {{ template "consul.name" . }} - release: "{{ .Release.Name }}" - component: terminating-gateway - topologyKey: kubernetes.io/hostname - ``` - - - `tolerations` ((#v-terminatinggateways-defaults-tolerations)) (`string: null`) - Optional YAML string to specify tolerations. - - - `topologySpreadConstraints` ((#v-terminatinggateways-defaults-topologyspreadconstraints)) (`string: ""`) - Pod topology spread constraints for terminating gateway pods. - This should be a multi-line YAML string matching the - [`topologySpreadConstraints`](https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/) - array in a Pod Spec. - - This requires K8S >= 1.18 (beta) or 1.19 (stable). - - Example: - - ```yaml - topologySpreadConstraints: | - - maxSkew: 1 - topologyKey: topology.kubernetes.io/zone - whenUnsatisfiable: DoNotSchedule - labelSelector: - matchLabels: - app: {{ template "consul.name" . 
}} - release: "{{ .Release.Name }}" - component: terminating-gateway - ``` - - - `nodeSelector` ((#v-terminatinggateways-defaults-nodeselector)) (`string: null`) - Optional YAML string to specify a nodeSelector config. - - - `priorityClassName` ((#v-terminatinggateways-defaults-priorityclassname)) (`string: ""`) - Optional priorityClassName. - - - `annotations` ((#v-terminatinggateways-defaults-annotations)) (`string: null`) - Annotations to apply to the terminating gateway deployment. Annotations defined - here will be applied to all terminating gateway deployments in addition to any - annotations defined for a specific gateway in `terminatingGateways.gateways`. - - Example: - - ```yaml - annotations: | - 'annotation-key': annotation-value - ``` - - - `serviceAccount` ((#v-terminatinggateways-defaults-serviceaccount)) - - - `annotations` ((#v-terminatinggateways-defaults-serviceaccount-annotations)) (`string: null`) - This value defines additional annotations for the terminating gateways' service account. This should be - formatted as a multi-line string. - - ```yaml - annotations: | - "sample/annotation1": "foo" - "sample/annotation2": "bar" - ``` - - - `consulNamespace` ((#v-terminatinggateways-defaults-consulnamespace)) (`string: default`) - `consulNamespace` defines the Consul namespace to register - the gateway into. Requires `global.enableConsulNamespaces` to be true and - Consul Enterprise v1.7+ with a valid Consul Enterprise license. - Note: The Consul namespace MUST exist before the gateway is deployed. - - - `gateways` ((#v-terminatinggateways-gateways)) (`array`) - Gateways is a list of gateway objects. The only required field for - each is `name`, though they can also contain any of the fields in - `defaults`. Values defined here override the defaults except in the - case of annotations where both will be applied. - - - `name` ((#v-terminatinggateways-gateways-name)) (`string: terminating-gateway`) - -### webhookCertManager ((#h-webhookcertmanager)) - -- `webhookCertManager` ((#v-webhookcertmanager)) - Configuration settings for the webhook-cert-manager - `webhook-cert-manager` ensures that cert bundles are up to date for the mutating webhook. - - - `tolerations` ((#v-webhookcertmanager-tolerations)) (`string: null`) - Toleration Settings - This should be a multi-line string matching the Toleration array - in a PodSpec. - - - `nodeSelector` ((#v-webhookcertmanager-nodeselector)) (`string: null`) - This value defines [`nodeSelector`](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector) - labels for the webhook-cert-manager pod assignment, formatted as a multi-line string. - - Example: - - ```yaml - nodeSelector: | - beta.kubernetes.io/arch: amd64 - ``` - -### prometheus ((#h-prometheus)) - -- `prometheus` ((#v-prometheus)) - Configures a demo Prometheus installation. - - - `enabled` ((#v-prometheus-enabled)) (`boolean: false`) - When true, the Helm chart will install a demo Prometheus server instance - alongside Consul. - -### tests ((#h-tests)) - -- `tests` ((#v-tests)) - Control whether a test Pod manifest is generated when running helm template. - When using helm install, the test Pod is not submitted to the cluster so this - is only useful when running helm template. 
- - - `enabled` ((#v-tests-enabled)) (`boolean: true`) - -### telemetryCollector ((#h-telemetrycollector)) - -- `telemetryCollector` ((#v-telemetrycollector)) - - - `enabled` ((#v-telemetrycollector-enabled)) (`boolean: false`) - Enables the consul-telemetry-collector deployment - - - `logLevel` ((#v-telemetrycollector-loglevel)) (`string: ""`) - Override global log verbosity level. One of "trace", "debug", "info", "warn", or "error". - - - `image` ((#v-telemetrycollector-image)) (`string: hashicorp/consul-telemetry-collector:0.0.2`) - The name of the Docker image (including any tag) for the containers running - the consul-telemetry-collector - - - `resources` ((#v-telemetrycollector-resources)) (`map`) - The resource settings for consul-telemetry-collector pods. - - - `replicas` ((#v-telemetrycollector-replicas)) (`integer: 1`) - This value sets the number of consul-telemetry-collector replicas to deploy. - - - `customExporterConfig` ((#v-telemetrycollector-customexporterconfig)) (`string: null`) - This value defines additional configuration for the telemetry collector. It should be formatted as a multi-line - JSON blob string - - ```yaml - customExporterConfig: | - {"http_collector_endpoint": "other-otel-collector"} - ``` - - - `service` ((#v-telemetrycollector-service)) - - - `annotations` ((#v-telemetrycollector-service-annotations)) (`string: null`) - This value defines additional annotations for the telemetry-collector's service. This should be formatted as a multi-line - string. - - ```yaml - annotations: | - "sample/annotation1": "foo" - "sample/annotation2": "bar" - ``` - - - `serviceAccount` ((#v-telemetrycollector-serviceaccount)) - - - `annotations` ((#v-telemetrycollector-serviceaccount-annotations)) (`string: null`) - This value defines additional annotations for the telemetry-collector's service account. This should be formatted - as a multi-line string. - - ```yaml - annotations: | - "sample/annotation1": "foo" - "sample/annotation2": "bar" - ``` - - - `initContainer` ((#v-telemetrycollector-initcontainer)) - - - `resources` ((#v-telemetrycollector-initcontainer-resources)) (`map`) - The resource settings for consul-telemetry-collector initContainer. - - - `nodeSelector` ((#v-telemetrycollector-nodeselector)) (`string: null`) - Optional YAML string to specify a nodeSelector config. - - - `priorityClassName` ((#v-telemetrycollector-priorityclassname)) (`string: ""`) - Optional priorityClassName. - - - `extraEnvironmentVars` ((#v-telemetrycollector-extraenvironmentvars)) (`map`) - A list of extra environment variables to set within the deployment. - These could be used to include proxy settings required for the cloud auto-join - feature, in case the Kubernetes cluster is behind egress HTTP proxies. Additionally, - they could be used to configure custom Consul parameters. - - - -## Helm Chart Examples - -The below `values.yaml` results in a single server Consul cluster with a `LoadBalancer` to allow external access to the UI and API. - -```yaml -# values.yaml -server: - replicas: 1 - bootstrapExpect: 1 - -ui: - service: - type: LoadBalancer -``` - -The below `values.yaml` results in a three server Consul Enterprise cluster with 100GB of storage and automatic connect injection. - -Note, this would require a secret that contains the enterprise license key.
- -```yaml -# values.yaml -global: - image: 'hashicorp/consul-enterprise:1.4.2-ent' - -server: - replicas: 3 - bootstrapExpect: 3 - enterpriseLicense: - secretName: 'consul-license' - secretKey: 'key' - storage: 100Gi - connect: true - -client: - grpc: true - -connectInject: - enabled: true - default: false -``` - -## Customizing the Helm Chart - -Consul within Kubernetes is highly configurable and the Helm chart contains dozens -of the most commonly used configuration options. -If you need to extend the Helm chart with additional options, we recommend using a third-party tool, -such as [kustomize](https://github.com/kubernetes-sigs/kustomize) or [ship](https://github.com/replicatedhq/ship). -Note that the Helm chart heavily relies on Helm lifecycle hooks, and so features like bootstrapping ACLs or TLS -will not work as expected. Additionally, we can make changes to the internal implementation (e.g., renaming template files) that -may be backward incompatible with such customizations. diff --git a/website/content/docs/k8s/index.mdx b/website/content/docs/k8s/index.mdx deleted file mode 100644 index e1a8291abe3e..000000000000 --- a/website/content/docs/k8s/index.mdx +++ /dev/null @@ -1,71 +0,0 @@ ---- -layout: docs -page_title: Consul on Kubernetes -description: >- - Consul supports Kubernetes natively, allowing you to deploy Consul sidecars to a Kubernetes service mesh and sync the k8s service registry with non-k8s services. Learn how to install Consul on Kubernetes with Helm or the Consul K8s CLI and get started with tutorials. ---- - -# Consul on Kubernetes - -Consul has many integrations with Kubernetes. You can deploy Consul -to Kubernetes using the [Helm chart](/consul/docs/k8s/installation/install#helm-chart-installation) or [Consul K8s CLI](/consul/docs/k8s/installation/install-cli#consul-k8s-cli-installation), sync services between Consul and -Kubernetes, run Consul Service Mesh, and more. -This section documents the official integrations between Consul and Kubernetes. - -## Use Cases - -**Consul Service Mesh**: -Consul can automatically inject the [Consul Service Mesh](/consul/docs/connect) -sidecar into pods so that they can accept and establish encrypted -and authorized network connections with mutual TLS. And because Consul Service Mesh -can run anywhere, pods and external services can communicate with each other over a fully encrypted connection. - -**Service sync to enable Kubernetes and non-Kubernetes services to communicate**: -Consul can sync Kubernetes services with its own service registry. This service sync allows -Kubernetes services to use Kubernetes' native service discovery capabilities to discover -and connect to external services registered in Consul, and for external services -to use Consul service discovery to discover and connect to Kubernetes services. - -**Additional integrations**: Consul can run directly on Kubernetes, so in addition to the -native integrations provided by Consul itself, any other tool built for -Kubernetes can leverage Consul. - -## Getting Started With Consul and Kubernetes - -There are several ways to try Consul with Kubernetes in different environments. - -### Tutorials - -- The [Getting Started with Consul Service Mesh track](/consul/tutorials/get-started-kubernetes) - provides guidance for installing Consul as service mesh for Kubernetes using the Helm - chart, deploying services in the service mesh, and using intentions to secure service - communications. 
- -- The [Migrate to Microservices with Consul Service Mesh on Kubernetes](/consul/tutorials/microservices?utm_source=docs) - collection uses an example application written by a fictional company to illustrate why and how organizations can - migrate from monolith to microservices using Consul service mesh on Kubernetes. The case study in this collection - should provide information valuable for understanding how to develop services that leverage Consul during any stage - of your microservices journey. - -- The [Consul and Minikube guide](/consul/tutorials/kubernetes/kubernetes-minikube?utm_source=docs) is a quick step-by-step guide for deploying Consul with the official Helm chart on a local instance of Minikube. - -- Review production best practices and cloud-specific configurations for deploying Consul on managed Kubernetes runtimes. - - - The [Consul on Azure Kubernetes Service (AKS) tutorial](/consul/tutorials/kubernetes/kubernetes-aks-azure?utm_source=docs) is a complete step-by-step guide on how to deploy Consul on AKS. The guide also allows you to practice deploying two microservices. - - The [Consul on Amazon Elastic Kubernetes Service (EKS) tutorial](/consul/tutorials/kubernetes/kubernetes-eks-aws?utm_source=docs) is a complete step-by-step guide on how to deploy Consul on EKS. Additionally, it provides guidance on interacting with your datacenter with the Consul UI, CLI, and API. - - The [Consul on Google Kubernetes Engine (GKE) tutorial](/consul/tutorials/kubernetes/kubernetes-gke-google?utm_source=docs) is a complete step-by-step guide on how to deploy Consul on GKE. Additionally, it provides guidance on interacting with your datacenter with the Consul UI, CLI, and API. - -- The [Consul and Kubernetes Reference Architecture](/consul/tutorials/kubernetes/kubernetes-reference-architecture?utm_source=docs) guide provides recommended practices for production. - -- The [Consul and Kubernetes Deployment](/consul/tutorials/kubernetes/kubernetes-deployment-guide?utm_source=docs) tutorial covers the necessary steps to install and configure a new Consul cluster on Kubernetes in production. - -- The [Secure Consul and Registered Services on Kubernetes](/consul/tutorials/kubernetes/kubernetes-secure-agents?utm_source=docs) tutorial covers - the necessary steps to secure a Consul cluster running on Kubernetes in production. - -- The [Layer 7 Observability with Consul Service Mesh](/consul/tutorials/kubernetes/kubernetes-layer7-observability) tutorial covers monitoring a - Consul service mesh running on Kubernetes with Prometheus and Grafana. - -### Documentation - -- [Installing Consul](/consul/docs/k8s/installation/install) covers how to install Consul using the Helm chart. -- [Helm Chart Reference](/consul/docs/k8s/helm) describes the different options for configuring the Helm chart. diff --git a/website/content/docs/k8s/installation/install-cli.mdx b/website/content/docs/k8s/installation/install-cli.mdx deleted file mode 100644 index 0665641755ed..000000000000 --- a/website/content/docs/k8s/installation/install-cli.mdx +++ /dev/null @@ -1,284 +0,0 @@ ---- -layout: docs -page_title: Install Consul on K8s CLI -description: >- - You can use the Consul K8s CLI tool to schedule Kubernetes deployments instead of using Helm. Learn how to download and install the tool to interact with Consul on Kubernetes using the `consul-k8s` command. ---- - -# Install Consul on Kubernetes from Consul K8s CLI - -This topic describes how to install Consul on Kubernetes using the Consul K8s CLI tool. 
The Consul K8s CLI tool enables you to quickly install and interact with Consul on Kubernetes. Use the Consul K8s CLI tool to install Consul on Kubernetes if you are deploying a single cluster. We recommend using the [Helm chart installation method](/consul/docs/k8s/installation/install) if you are installing Consul on Kubernetes for multi-cluster deployments that involve cross-partition or cross datacenter communication. - -## Introduction - -If it is your first time installing Consul on Kubernetes, then you must first install the Consul K8s CLI tool. You can install Consul on Kubernetes using the Consul K8s tool after installing the CLI. - -## Requirements - -- The `kubectl` client must already be configured to authenticate to the Kubernetes cluster using a valid `kubeconfig` file. -- Install one of the following package managers so that you can install the Consul K8s CLI tool. The installation instructions also provide commands for installing and using the package managers: - - MacOS: [Homebrew](https://brew.sh) - - Ubuntu/Debian: apt - - CentOS/RHEL: yum - -You must install the correct version of the CLI for your Consul on Kubernetes deployment. To deploy a previous version of Consul on Kubernetes, download the specific version of the CLI that matches the version of the control plane that you would like to deploy. Refer to the [compatibility matrix](/consul/docs/k8s/compatibility) for details. - - -## Install the CLI - -The following instructions describe how to install the latest version of the Consul K8s CLI tool, as well as earlier versions, so that you can install an appropriate version of the tool for your control plane. - -### Install the latest version - -Complete the following instructions for a fresh installation of Consul on Kubernetes. - - - - - -The [Homebrew](https://brew.sh) package manager is required to complete the following installation instructions. The Homebrew formula always installs the latest version of a binary. - -1. Install the HashiCorp `tap`, which is a repository of all Homebrew packages for HashiCorp: - ```shell-session - $ brew tap hashicorp/tap - ``` - -1. Install the Consul K8s CLI with the `hashicorp/tap/consul-k8s` formula. - ```shell-session - $ brew install hashicorp/tap/consul-k8s - ``` - -1. (Optional) Issue the `consul-k8s version` command to verify the installation: - - ```shell-session - $ consul-k8s version - consul-k8s 1.0 - ``` - - - - - -1. Add the HashiCorp GPG key. - - ```shell-session - $ curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add - - ``` - -1. Add the HashiCorp apt repository. - - ```shell-session - $ sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main" - ``` - -1. Run apt-get install to install the `consul-k8s` CLI. - - ```shell-session - $ sudo apt-get update && sudo apt-get install consul-k8s - ``` - -1. (Optional) Issue the `consul-k8s version` command to verify the installation. - - ```shell-session - $ consul-k8s version - consul-k8s 1.0 - ``` - - - - - -1. Install `yum-config-manager` to manage your repositories. - - ```shell-session - $ sudo yum install -y yum-utils - ``` - -1. Use `yum-config-manager` to add the official HashiCorp Linux repository. - - ```shell-session - $ sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo - ``` - -1. Install the `consul-k8s` CLI. - - ```shell-session - $ sudo yum -y install consul-k8s - ``` - -1. (Optional) Issue the `consul-k8s version` command to verify the installation.
- - ```shell-session - $ consul-k8s version - consul-k8s 1.0 - ``` - - - - - -### Install a previous version - -Complete the following instructions to install a specific version of the CLI so that your tool is compatible with your Consul on Kubernetes control plane. Refer to the [compatibility matrix](/consul/docs/k8s/compatibility) for additional information. - - - - - -1. Download the appropriate version of Consul K8s CLI using the following `curl` command. Set the `$VERSION` environment variable to the appropriate version for your deployment. - - ```shell-session - $ export VERSION=1.1.1 && \ - curl --location "https://releases.hashicorp.com/consul-k8s/${VERSION}/consul-k8s_${VERSION}_darwin_amd64.zip" --output consul-k8s-cli.zip - ``` - -1. Unzip the zip file output to extract the `consul-k8s` CLI binary. This overwrites existing files and also creates a `.consul-k8s` subdirectory in your `$HOME` folder. - - ```shell-session - $ unzip -o consul-k8s-cli.zip -d ~/consul-k8s - ``` - -1. Add the path to your directory. In order to persist the `$PATH` across sessions, add it to your shellrc (i.e. shell run commands) file for the shell used by your terminal. - - ```shell-session - $ export PATH=$PATH:$HOME/consul-k8s - ``` - -1. (Optional) Issue the `consul-k8s version` command to verify the installation. - - ```shell-session - $ consul-k8s version - consul-k8s 1.0 - ``` - - - - - -1. Add the HashiCorp GPG key. - - ```shell-session - $ curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add - - ``` - -1. Add the HashiCorp apt repository. - - ```shell-session - $ sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main" - ``` - -1. Run apt-get install to install the `consul-k8s` CLI. - - ```shell-session - $ export VERSION=0.39.0 && \ - sudo apt-get update && sudo apt-get install consul-k8s=${VERSION} - ``` - -1. (Optional) Issue the `consul-k8s version` command to verify the installation. - - ```shell-session - $ consul-k8s version - consul-k8s 1.0 - ``` - - - - - -1. Install `yum-config-manager` to manage your repositories. - - ```shell-session - $ sudo yum install -y yum-utils - ``` - -1. Use `yum-config-manager` to add the official HashiCorp Linux repository. - - ```shell-session - $ sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo - ``` - -1. Install the `consul-k8s` CLI. - - ```shell-session - $ export VERSION=1.0 && \ - sudo yum -y install consul-k8s-${VERSION}-1 - ``` - -2. (Optional) Issue the `consul-k8s version` command to verify the installation. - - ```shell-session - $ consul-k8s version - consul-k8s 1.0 - ``` - - - - - -## Install Consul on Kubernetes - -After installing the Consul K8s CLI tool (`consul-k8s`), issue the `install` subcommand and any additional options to install Consul on your existing Kubernetes cluster. Refer to the [Consul K8s CLI reference](/consul/docs/k8s/k8s-cli) for details about all commands and available options. If you do not include any additional options, the `consul-k8s` CLI installs Consul on Kubernetes using the default settings from the Consul Helm chart values. The following example installs Consul on Kubernetes with service mesh and CRDs enabled. - -```shell-session -$ consul-k8s install - -==> Pre-Install Checks -No existing installations found.
 - ✓ No previous persistent volume claims found - ✓ No previous secrets found -==> Consul Installation Summary - Installation name: consul - Namespace: consul - Overrides: - connectInject: - enabled: true - - Proceed with installation? (y/N) y - -==> Running Installation - ✓ Downloaded charts ---> creating 1 resource(s) ---> creating 45 resource(s) ---> beginning wait for 45 resources with timeout of 10m0s - ✓ Consul installed into namespace "consul" -``` - -You can include the `-auto-approve` option set to `true` to proceed with the installation if the pre-install checks pass. - -The pre-install checks may fail if existing `PersistentVolumeClaims` (PVC) are detected. Refer to the [uninstall instructions](/consul/docs/k8s/operations/uninstall#uninstall-consul) for information about removing PVCs. - -## Custom installation - -You can create a values file and specify parameters to overwrite the default Helm chart installation. Add the `-f` flag and specify your values file to implement your configuration, for example: - -```shell-session -$ consul-k8s install -f values.yaml -``` - -### Install Consul on OpenShift clusters - -[Red Hat OpenShift](https://www.redhat.com/en/technologies/cloud-computing/openshift) is a security-conscious, opinionated wrapper for Kubernetes. To install Consul on OpenShift-managed Kubernetes, set `global.openshift.enabled=true` in your [custom installation](#custom-installation) values file: - -```yaml -global: - openshift: - enabled: true -``` - -Refer to [`openshift` in the Helm chart reference](/consul/docs/k8s/helm#v-global-openshift) for additional information. - -## Check the Consul cluster status - -Issue the `consul-k8s status` command to view the status of the installed Consul cluster. - -```shell-session -$ consul-k8s status - -==> Consul-K8s Status Summary - NAME | NAMESPACE | STATUS | CHARTVERSION | APPVERSION | REVISION | LAST UPDATED ----------+-----------+----------+--------------+------------+----------+-------------------------- - consul | consul | deployed | 0.40.0 | 1.14.0 | 1 | 2022/01/31 16:58:51 PST - -✓ Consul servers healthy (3/3) -✓ Consul clients healthy (3/3) -``` diff --git a/website/content/docs/k8s/installation/install.mdx b/website/content/docs/k8s/installation/install.mdx deleted file mode 100644 index 30dab8c19d8f..000000000000 --- a/website/content/docs/k8s/installation/install.mdx +++ /dev/null @@ -1,415 +0,0 @@ ---- -layout: docs -page_title: Install Consul on Kubernetes with Helm -description: >- - You can use Helm to configure Consul on Kubernetes deployments. Learn how to add the official Helm chart to your repository and the parameters that enable the service mesh, CNI plugins, Consul UI, and Consul HTTP API. ---- - -# Install Consul on Kubernetes with Helm - -This topic describes how to install Consul on Kubernetes using the official Consul Helm chart. For instructions on how to install Consul on Kubernetes using the Consul K8s CLI, refer to [Installing the Consul K8s CLI](/consul/docs/k8s/installation/install-cli). - -## Introduction - -We recommend using the Consul Helm chart to install Consul on Kubernetes for multi-cluster installations that involve cross-partition or cross datacenter communication. The Helm chart installs and configures all necessary components to run Consul. - -Consul can run directly on Kubernetes so that you can leverage Consul functionality if your workloads are fully deployed to Kubernetes. For heterogeneous workloads, Consul agents can join a server running inside or outside of Kubernetes.
Refer to the [Consul on Kubernetes architecture](/consul/docs/k8s/architecture) to learn more about its general architecture. - -The Helm chart exposes several useful configurations and automatically sets up complex resources, but it does not automatically operate Consul. You must still become familiar with how to monitor, backup, and upgrade the Consul cluster. - -The Helm chart has no required configuration, so it installs a Consul cluster with default configurations. We strongly recommend that you [learn about the configuration options](/consul/docs/k8s/helm#configuration-values) before going to production. - --> **Security warning**: By default, Helm installs Consul with security configurations disabled so that the out-of-box experience is optimized for new users. We strongly recommend using a properly-secured Kubernetes cluster or making sure that you understand and enable [Consul’s security features](/consul/docs/security) before going into production. Some security features are not supported in the Helm chart and require additional manual configuration. - -Refer to the [architecture](/consul/docs/k8s/installation/install#architecture) section to learn more about the general architecture of Consul on Kubernetes. - -For a hands-on experience with Consul as a service mesh -for Kubernetes, follow the [Getting Started with Consul service -mesh](/consul/tutorials/get-started-kubernetes) tutorial. - -## Requirements - -Using the Helm Chart requires Helm version 3.6+. Visit the [Helm website](https://helm.sh/docs/intro/install/) to download the latest version. - -## Install Consul - -1. Add the HashiCorp Helm Repository: - - ```shell-session - $ helm repo add hashicorp https://helm.releases.hashicorp.com - "hashicorp" has been added to your repositories - ``` - -1. Verify that you have access to the consul chart: - - ```shell-session - $ helm search repo hashicorp/consul - NAME CHART VERSION APP VERSION DESCRIPTION - hashicorp/consul 1.0.1 1.14.1 Official HashiCorp Consul Chart - ``` - -1. Before you install Consul on Kubernetes with Helm, ensure that the `consul` Kubernetes namespace does not exist. We recommend installing Consul on a dedicated namespace. - - ```shell-session - $ kubectl get namespace - NAME STATUS AGE - default Active 18h - kube-node-lease Active 18h - kube-public Active 18h - kube-system Active 18h - ``` - -1. Install Consul on Kubernetes using Helm. The Helm chart does everything to set up your deployment: after installation, agents automatically form clusters, elect leaders, and run the necessary agents. - - - Run the following command to install the latest version of Consul on Kubernetes with its default configuration. - - ```shell-session - $ helm install consul hashicorp/consul --set global.name=consul --create-namespace --namespace consul - ``` - - You can also install Consul on a dedicated namespace of your choosing by modifying the value of the `-n` flag for the Helm install. - - - To install a specific version of Consul on Kubernetes, issue the following command with `--version` flag: - - ```shell-session - $ export VERSION=1.0.1 - $ helm install consul hashicorp/consul --set global.name=consul --version ${VERSION} --create-namespace --namespace consul - ``` - -## Custom installation - -If you want to customize your installation, -create a `values.yaml` file to override the default settings. -To learn what settings are available, run `helm inspect values hashicorp/consul` -or read the [Helm Chart Reference](/consul/docs/k8s/helm). 
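-
-One way to start, if helpful, is to save the chart's default values to a local file and then edit only the settings you want to change. This is a generic Helm workflow rather than a Consul-specific requirement, and the filename `values.yaml` is just a convention:
-
-```shell-session
-$ helm inspect values hashicorp/consul > values.yaml
-```
-
-After editing the file, pass it to `helm install` with the `--values` (or `-f`) flag as shown in the next section.
-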
- -### Minimal `values.yaml` for Consul service mesh - -The following `values.yaml` config file contains the minimum required settings to enable [Consul Service Mesh](/consul/docs/k8s/connect): - - - -```yaml -global: - name: consul -``` - - - -After you create your `values.yaml` file, run `helm install` with the `--values` flag: - -```shell-session -$ helm install consul hashicorp/consul --create-namespace --namespace consul --values values.yaml -NAME: consul -... -``` - -### Install Consul on Red Hat OpenShift - -[Red Hat OpenShift](https://www.redhat.com/en/technologies/cloud-computing/openshift) is a security-conscious, opinionated wrapper for Kubernetes. To install Consul on OpenShift-managed Kubernetes, set `global.openshift.enabled=true` in your [custom installation](#custom-installation) values file: - -```yaml -global: - openshift: - enabled: true -``` - -Refer to [`openshift` in the Helm chart reference](/consul/docs/k8s/helm#v-global-openshift) for additional information regarding the OpenShift stanza. In addition, refer to the [Deploy Consul on RedHat OpenShift tutorial](/consul/tutorials/kubernetes/kubernetes-openshift-red-hat) for a complete working example that deploys Consul Service Mesh using Red Hat Certified UBI images. - -### Install Consul on GKE Autopilot - -GKE Autopilot provides a fully managed environment for containerized workloads and requires the Consul CNI plugin to be installed. Refer to [Enable the Consul CNI plugin](#enable-the-consul-cni-plugin) for a full reference on how to enable the CNI plugin. - -By default, GKE Autopilot also installs [Gateway API resources](https://gateway-api.sigs.k8s.io), so we recommend customizing the `connectInject.apiGateway` stanza to accommodate the pre-installed Gateway API CRDs. - -The following working example enables both Consul Service Mesh and Consul API Gateway on GKE Autopilot. Refer to [`connectInject.apiGateway` in the Helm chart reference](https://developer.hashicorp.com/consul/docs/k8s/helm#v-connectinject-apigateway) for additional information. - - - - ```yaml - global: - name: consul - connectInject: - enabled: true - apiGateway: - manageExternalCRDs: false - manageNonStandardCRDs: true - cni: - enabled: true - logLevel: debug - cniBinDir: "/home/kubernetes/bin" - cniNetDir: "/etc/cni/net.d" - server: - resources: - requests: - memory: "500Mi" - cpu: "500m" - limits: - memory: "500Mi" - cpu: "500m" - ``` - - -### Enable the Consul CNI plugin - -By default, Consul injects a `connect-inject-init` init container as part of the Kubernetes pod startup process when Consul is in [transparent proxy mode](/consul/docs/connect/transparent-proxy). -The container configures traffic redirection in the service mesh through the sidecar proxy. -To configure redirection, the container requires elevated `CAP_NET_ADMIN` privileges, which may not be compatible with security policies in your organization. - -Instead, you can enable the Consul container network interface (CNI) plugin to perform traffic redirection. -Because the plugin is executed by the local Kubernetes kubelet, the plugin already has the elevated privileges necessary to configure the network. - -The Consul Helm Chart is responsible for installing the Consul CNI plugin.
-To configure the plugin to be installed, add the following configuration to your `values.yaml` file: - - - - - - - - ```yaml - global: - name: consul - connectInject: - enabled: true - cni: - enabled: true - logLevel: info - cniBinDir: "/opt/cni/bin" - cniNetDir: "/etc/cni/net.d" - ``` - - - - - - - - - ```yaml - global: - name: consul - connectInject: - enabled: true - cni: - enabled: true - logLevel: info - cniBinDir: "/home/kubernetes/bin" - cniNetDir: "/etc/cni/net.d" - ``` - - - - - - - - ```yaml - global: - name: consul - openshift: - enabled: true - connectInject: - enabled: true - cni: - enabled: true - logLevel: info - multus: true - cniBinDir: "/var/lib/cni/bin" - cniNetDir: "/etc/kubernetes/cni/net.d" -``` - - - - - -The following table describes the available CNI plugin options: - -| Option | Description | Default | -| ---------- | ----------- | ------------- | -| `cni.enabled` | Boolean value that enables or disables the CNI plugin. If `true`, the plugin is responsible for redirecting traffic in the service mesh. If `false`, redirection is handled by the `connect-inject init` container. | `false` | -| `cni.logLevel` | String value that specifies the log level for the installer and plugin. You can specify the following values: `info`, `debug`, `error`. | `info` | -| `cni.namespace` | Set the namespace to install the CNI plugin into. Overrides global namespace settings for CNI resources, for example `kube-system` | namespace used for `consul-k8s` install, for example `consul` | -| `cni.multus` | Boolean value that enables multus CNI plugin support. If `true`, multus will be enabled. If `false`, Consul CNI will operate as a chained plugin. | `false` | -| `cni.cniBinDir` | String value that specifies the location on the Kubernetes node where the CNI plugin is installed. | `/opt/cni/bin` | -| `cni.cniNetDir` | String value that specifies the location on the Kubernetes node for storing the CNI configuration. | `/etc/cni/net.d` | - -### Enable Consul service mesh on select namespaces - -By default, Consul Service Mesh is enabled on almost all namespaces within a Kubernetes cluster, with the exception of `kube-system` and `local-path-storage`. To restrict the service mesh to a subset of namespaces: - -1. specify a `namespaceSelector` that matches a label attached to each namespace where you want to deploy the service mesh. In order to default to enabling service mesh on select namespaces by label, the `connectInject.default` value must be set to `true`. - - - - ```yaml - global: - name: consul - connectInject: - enabled: true - default: true - namespaceSelector: | - matchLabels: - connect-inject : enabled - ``` - - - -1. Label the namespaces where you would like to enable Consul Service Mesh. - - ```shell-session - $ kubectl create ns foo - $ kubectl label namespace foo connect-inject=enabled - ``` - -1. Run `helm install` with the `--values` flag: - - ```shell-session - $ helm install consul hashicorp/consul --create-namespace --namespace consul --values values.yaml - NAME: consul - ``` - -### Update your Consul on Kubernetes configuration - -If you already installed Consul and want to make changes, you need to run -`helm upgrade`. Refer to [Upgrading](/consul/docs/k8s/upgrade) for more details. - -## Usage - -You can view the Consul UI and access the Consul HTTP API after installation. - -### Viewing the Consul UI - -The Consul UI is enabled by default when using the Helm chart. - -For security reasons, it is not exposed through a `LoadBalancer` service by default. 
To visit the UI, you must -use `kubectl port-forward`. - -#### Port forward with TLS disabled - -If running with TLS disabled, the Consul UI is accessible through http on port 8500: - -```shell-session -$ kubectl port-forward service/consul-server --namespace consul 8500:8500 -... -``` - -After you set up the port forward, navigate to [http://localhost:8500](http://localhost:8500). - -#### Port forward with TLS enabled - -If running with TLS enabled, the Consul UI is accessible through https on port 8501: - -```shell-session -$ kubectl port-forward service/consul-server --namespace consul 8501:8501 -... -``` - -After you set up the port forward, navigate to [https://localhost:8501](https://localhost:8501). - -~> You need to click through an SSL warning from your browser because the -Consul certificate authority is self-signed and not in the browser's trust store. - -#### ACLs Enabled - -If ACLs are enabled, you need to input an ACL token to display all resources and make modifications in the UI. - -To retrieve the bootstrap token that has full permissions, run: - -```shell-session -$ kubectl get secrets/consul-bootstrap-acl-token --template='{{.data.token | base64decode }}' -e7924dd1-dc3f-f644-da54-81a73ba0a178% -``` - -Then paste the token into the UI under the ACLs tab (without the `%`). - -~> NOTE: If using multi-cluster federation, your kubectl context must be in the primary datacenter -to retrieve the bootstrap token since secondary datacenters use a separate token -with less permissions. - -#### Exposing the UI through a service - -If you want to expose the UI via a Kubernetes Service, configure -the [`ui.service` chart values](/consul/docs/k8s/helm#v-ui-service). -Because this service allows requests to the Consul servers, it should -not be open to the world. - -### Accessing the Consul HTTP API - -While technically any listening agent can respond to the HTTP API, communicating with the local Consul node has important caching behavior and allows you to use the simpler [`/agent` endpoints for services and checks](/consul/api-docs/agent). - -To find information about a node, you can use the [downward API](https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/). - -An example pod specification is shown below. In addition to pods, anything -with a pod template can also access the downward API and can therefore also -access Consul: StatefulSets, Deployments, Jobs, etc. 
- -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: consul-example -spec: - containers: - - name: example - image: 'hashicorp/consul:latest' - env: - - name: HOST_IP - valueFrom: - fieldRef: - fieldPath: status.hostIP - command: - - '/bin/sh' - - '-ec' - - | - export CONSUL_HTTP_ADDR="${HOST_IP}:8500" - consul kv put hello world - restartPolicy: Never -``` - -An example `Deployment` is also shown below to show how the host IP can -be accessed from nested pod specifications: - - - -```yaml -apiVersion: apps/v1 -kind: Deployment -metadata: - name: consul-example-deployment -spec: - replicas: 1 - selector: - matchLabels: - app: consul-example - template: - metadata: - labels: - app: consul-example - spec: - containers: - - name: example - image: 'hashicorp/consul:latest' - env: - - name: HOST_IP - valueFrom: - fieldRef: - fieldPath: status.hostIP - command: - - '/bin/sh' - - '-ec' - - | - export CONSUL_HTTP_ADDR="${HOST_IP}:8500" - consul kv put hello world -``` - - - -## Next Steps - -If you are still considering a move to Kubernetes, or to Consul on Kubernetes specifically, our [Migrate to Microservices with Consul Service Mesh on Kubernetes](/consul/tutorials/microservices?utm_source=docs) -collection uses an example application written by a fictional company to illustrate why and how organizations can -migrate from monolith to microservices using Consul service mesh on Kubernetes. The case study in this collection -should provide information valuable for understanding how to develop services that leverage Consul during any stage -of your microservices journey. diff --git a/website/content/docs/k8s/k8s-cli.mdx b/website/content/docs/k8s/k8s-cli.mdx deleted file mode 100644 index 005363ec3cff..000000000000 --- a/website/content/docs/k8s/k8s-cli.mdx +++ /dev/null @@ -1,1125 +0,0 @@ ---- -layout: docs -page_title: Consul on Kubernetes CLI Reference -description: >- - The Consul on Kubernetes CLI tool enables you to manage Consul with the `consul-k8s` command instead of direct interaction with Helm, kubectl, or Consul’s CLI. Learn about commands, their flags, and review examples in this reference guide. ---- - -# Consul on Kubernetes CLI Reference - -The Consul on Kubernetes CLI, `consul-k8s`, is a tool for managing Consul -that does not require direct interaction with Helm, the [Consul CLI](/consul/commands), -or `kubectl`. - -For guidance on how to install `consul-k8s`, refer to the -[Installing the Consul K8s CLI](/consul/docs/k8s/installation/install-cli) documentation. - -This topic describes the commands and available options for using `consul-k8s`. - -## Usage - -The Consul on Kubernetes CLI uses the following syntax: - -```shell-session -$ consul-k8s -``` - -## Commands - -You can use the following commands with `consul-k8s`. - - - [`config`](#config): Interact with helm configuration. - - [`config read`](#config-read): Read helm configuration of a Consul installation. - - [`install`](#install): Install Consul on Kubernetes. - - [`proxy`](#proxy): Inspect Envoy proxies managed by Consul. - - [`proxy list`](#proxy-list): List all Pods running proxies managed by Consul. - - [`proxy read`](#proxy-read): Inspect the Envoy configuration for a given Pod. - - [`proxy log`](#proxy-log): Inspect and modify the Envoy logging configuration for a given Pod. - - [`proxy stats`](#proxy-stats): View the Envoy cluster stats for a given Pod. - - [`status`](#status): Check the status of a Consul installation on Kubernetes. 
- - [`troubleshoot`](#troubleshoot): Troubleshoot Consul service mesh and networking issues from a given pod. - - [`uninstall`](#uninstall): Uninstall Consul deployment. - - [`upgrade`](#upgrade): Upgrade Consul on Kubernetes from an existing installation. - - [`version`](#version): Print the version of the Consul on Kubernetes CLI. - -### `config` - -The `config` command exposes the `read` subcommand that allows to read the helm configuration of a Consul installation. - -- [`config read`](#config-read): Read helm configuration of a Consul installation. - -### `config read` - -```shell-session -$ consul-k8s config read -``` - -| Flag | Description | Default | -| ------------------------------------ | ----------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------- | -| `-all-namespaces`, `-A` | `Boolean` List pods in all Kubernetes namespaces. | `false` | -| `-namespace`, `-n` | `String` The Kubernetes namespace to list proxies in. | Current [kubeconfig](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) namespace. | - -Refer to the [Global Options](#global-options) for additional options that you can use -when installing Consul on Kubernetes. - -#### Example Commands - -The following example command reads the Helm configuration in the `myNS` namespace. - -```shell-session -$ consul-k8s config read -namespace=myNS -``` - -``` -global: - cloud: - clientId: - secretKey: client-id - secretName: consul-hcp-client-id - clientSecret: - secretKey: client-secret - secretName: consul-hcp-client-secret - enabled: true - resourceId: - secretKey: resource-id - secretName: consul-hcp-resource-id - image: hashicorp/consul:1.14.7 - name: consul -``` - -### `install` - -The `install` command installs Consul on your Kubernetes cluster. - -```shell-session -$ consul-k8s install -``` - -The following options are available. - -| Flag | Description | Default | -| ----------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------| -| `-auto-approve` | Boolean value that enables you to skip the installation confirmation prompt. | `false` | -| `-dry-run` | Boolean value that validates the installation and returns a summary. | `false` | -| `-config-file` | String value that specifies the path to a file containing custom installation configurations, e.g., Consul Helm chart values file.
    You can use the `-config-file` flag multiple times to specify multiple files. | none | -| `-namespace` | String value that specifies the namespace of the Consul installation. | `consul` | -| `-preset` | String value that installs Consul based on a preset configuration. You can specify the following values:
    `demo`: Installs a single replica server with sidecar injection enabled; useful for testing service mesh functionality.
    `secure`: Installs a single replica server with sidecar injection, ACLs, and TLS enabled; useful for testing service mesh functionality. | Configuration of the Consul Helm chart. | -| `-set` | String value that enables you to set a customizable value. This flag is comparable to the `helm install --set` flag.
    You can use the `-set` flag multiple times to set multiple values.
    Consul Helm chart values are supported. | none | -| `-set-file` | String value that specifies the name of an arbitrary config file. This flag is comparable to the `helm install --set-file`
    flag. The contents of the file will be used to set a customizable value. You can use the `-set-file` flag multiple times to specify multiple files.
    Consul Helm chart values are supported. | none | -| `-set-string` | String value that enables you to set a customizable string value. This flag is comparable to the `helm install --set-string`
    flag. You can use the `-set-string` flag multiple times to specify multiple strings.
    Consul Helm chart values are supported. | none | -| `-timeout` | Specifies how long to wait for the installation process to complete before timing out. The value is specified with an integer and string value indicating a unit of time.
    The following units are supported:
    `ms` (milliseconds)
    `s` (seconds)
    `m` (minutes)
    In the following example, installation will time out after one minute:
    `consul-k8s install -timeout 1m` | `10m` | -| `-wait` | Boolean value that determines if Consul should wait for resources in the installation to be ready before exiting the command. | `true` | -| `-verbose`, `-v` | Boolean value that specifies whether to output verbose logs from the install command with the status of resources being installed. | `false` | -| `-help`, `-h` | Prints usage information for this option. | none | - -See [Global Options](#global-options) for additional commands that you can use when installing Consul on Kubernetes. - -#### Example Commands - -The following example command installs Consul in the `myNS` namespace according to the `secure` preset. - -```shell-session -$ consul-k8s install -preset=secure -namespace=myNS -``` - -The following example commands install Consul on Kubernetes using custom values, files, or strings that are set via flags. The underlying Consul-on-Kubernetes Helm chart uses the flags to customize the installation. The flags are comparable to the `helm install` [flags](https://helm.sh/docs/helm/helm_install/#helm-install). - -```shell-session -$ consul-k8s install -set key=value -``` - -```shell-session -$ consul-k8s install -set key1=value1 -set key2=value2 -``` -```shell-session -$ consul-k8s install -set-file config1=value1.conf -``` - -```shell-session -$ consul-k8s install -set-file config1=value1.conf -set-file config2=value2.conf -``` - -```shell-session -$ consul-k8s install -set-string key=value-bool -``` - -### `proxy` - -The `proxy` command exposes two subcommands for interacting with proxies managed by -Consul in your Kubernetes Cluster. - -- [`proxy list`](#proxy-list): List all Pods running proxies managed by Consul. -- [`proxy read`](#proxy-read): Inspect the Envoy configuration for a given Pod. -- [`proxy log`](#proxy-log): Inspect and modify the Envoy logging configuration for a given Pod. -- [`proxy stats`](#proxy-stats): View the Envoy cluster stats for a given Pod. - -### `proxy list` - -```shell-session -$ consul-k8s proxy list -``` - -| Flag | Description | Default | -| ------------------------------------ | ----------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------- | -| `-all-namespaces`, `-A` | `Boolean` List pods in all Kubernetes namespaces. | `false` | -| `-namespace`, `-n` | `String` The Kubernetes namespace to list proxies in. | Current [kubeconfig](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) namespace. | -| `-output-format`, `-o` | `String` If set to json, outputs the result in json format, else table format | `table` - -Refer to the [Global Options](#global-options) for additional options that you can use -when installing Consul on Kubernetes. - -This command lists proxies and their `Type`. Types of proxies include: - -- `Sidecar`: The majority of pods in the cluster are `Sidecar` types. They run the - proxy as a sidecar to connect the pod as a service in the mesh. -- `API Gateway`: These pods run a proxy to manage connections with networks - outside of the Consul cluster. Read more about [API gateways](/consul/docs/api-gateway). -- `Ingress Gateway`: These pods run a proxy to manage ingress into the - Kubernetes cluster. Read more about [ingress gateways](/consul/docs/k8s/connect/ingress-gateways). -- `Terminating Gateway`: These pods run a proxy to control connections to - external services. 
Read more about [terminating gateways](/consul/docs/k8s/connect/terminating-gateways). -- `Mesh Gateway`: These pods run a proxy to manage connections between - Consul clusters connected using mesh federation. Read more about [Consul Mesh Federation](/consul/docs/k8s/deployment-configurations/multi-cluster/kubernetes). - -#### Example Commands - -Display all pods in the current Kubernetes namespace that run proxies managed -by Consul. - -```shell-session -$ consul-k8s proxy list -``` - -``` -Namespace: default - -Name Type -backend-658b679b45-d5xlb Sidecar -client-767ccfc8f9-6f6gx Sidecar -client-767ccfc8f9-f8nsn Sidecar -client-767ccfc8f9-ggrtx Sidecar -frontend-676564547c-v2mfq Sidecar -``` - -Display all pods in the `consul` Kubernetes namespace that run proxies managed -by Consul. - -```shell-session -$ consul-k8s proxy list -n consul -``` - -``` -Namespace: consul - -Name Type -consul-ingress-gateway-6fb5544485-br6fl Ingress Gateway -consul-ingress-gateway-6fb5544485-m54sp Ingress Gateway -``` - -Display all Pods across all namespaces that run proxies managed by Consul. - -```shell-session -$ consul-k8s proxy list -A -Namespace: All namespaces - -Namespace Name Type -consul consul-ingress-gateway-6fb5544485-br6fl Ingress Gateway -consul consul-ingress-gateway-6fb5544485-m54sp Ingress Gateway -default backend-658b679b45-d5xlb Sidecar -default client-767ccfc8f9-6f6gx Sidecar -default client-767ccfc8f9-f8nsn Sidecar -default client-767ccfc8f9-ggrtx Sidecar -default frontend-676564547c-v2mfq Sidecar -``` - -Display all Pods across all namespaces that run proxies managed by Consul in JSON format - -```shell-session -$ consul-k8s proxy list -A -o json -Namespace: All namespaces - -[ - { - "Name": "frontend-6fd97b8fb5-spqb8", - "Namespace": "default", - "Type": "Sidecar" - }, - { - "Name": "nginx-6d7469694f-p5wrz", - "Namespace": "default", - "Type": "Sidecar" - }, - { - "Name": "payments-667d87bf95-ktb8n", - "Namespace": "default", - "Type": "Sidecar" - }, - { - "Name": "product-api-7c4d77c7c9-g4g2b", - "Namespace": "default", - "Type": "Sidecar" - }, - { - "Name": "product-api-db-685c844cb-k5l8f", - "Namespace": "default", - "Type": "Sidecar" - }, - { - "Name": "public-api-567d949866-cgksl", - "Namespace": "default", - "Type": "Sidecar" - } -] -``` - -### `proxy read` - -The `proxy read` command allows you to inspect the configuration of Envoy proxies running on a given Pod. - -```shell-session -$ consul-k8s proxy read -``` - -The command takes a required value, ``. This should be the full name -of a Kubernetes Pod. If a Pod is running more than one Envoy proxy managed by -Consul, as in the [Multiport configuration](/consul/docs/k8s/connect#kubernetes-pods-with-multiple-ports), -configuration for all proxies in the Pod will be displayed. - -The following options are available. - -| Flag | Description | Default | -| ------------------------------- | ------------------------------------------------------------------------------------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------- | -| `-namespace`, `-n` | `String` The namespace where the target Pod can be found. | Current [kubeconfig](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) namespace. | -| `-output`, `-o` | `String` Output the Envoy configuration as 'table', 'json', or 'raw'. | `'table'` | -| `-clusters` | `Boolean` Filter output to only show clusters. 
| `false` | -| `-endpoints` | `Boolean` Filter output to only show endpoints. | `false` | -| `-listeners` | `Boolean` Filter output to only show listeners. | `false` | -| `-routes` | `Boolean` Filter output to only show routes. | `false` | -| `-secrets` | `Boolean` Filter output to only show secrets. | `false` | -| `-address` | `String` Filter clusters, endpoints, and listeners output to only those with endpoint addresses which contain the given value. | `""` | -| `-fqdn` | `String` Filter cluster output to only clusters with a fully qualified domain name which contains the given value. | `""` | -| `-port` | `Int` Filter endpoints output to only endpoints with the given port number. | `-1` which does not filter by port | - -#### Example commands - -Get the configuration summary for the Envoy proxy running on the Pod -`backend-658b679b45-d5xlb`. - -```shell-session -$ consul-k8s proxy read backend-658b679b45-d5xlb -Envoy configuration for backend-658b679b45-d5xlb in namespace default: - -==> Clusters (5) -Name FQDN Endpoints Type Last Updated -local_agent local_agent 192.168.79.187:8502 STATIC 2022-05-13T04:22:39.553Z -client client.default.dc1.internal.bc3815c2-1a0f-f3ff-a2e9-20d791f08d00.consul 192.168.18.110:20000, 192.168.52.101:20000, 192.168.65.131:20000 EDS 2022-08-08T12:02:07.471Z -frontend frontend.default.dc1.internal.bc3815c2-1a0f-f3ff-a2e9-20d791f08d00.consul 192.168.63.120:20000 EDS 2022-08-08T12:02:07.354Z -local_app local_app 127.0.0.1:8080 STATIC 2022-05-13T04:22:39.655Z -original-destination original-destination ORIGINAL_DST 2022-05-13T04:22:39.743Z - - -==> Endpoints (6) -Address:Port Cluster Weight Status -192.168.79.187:8502 local_agent 1.00 HEALTHY -192.168.18.110:20000 client.default.dc1.internal.bc3815c2-1a0f-f3ff-a2e9-20d791f08d00.consul 1.00 HEALTHY -192.168.52.101:20000 client.default.dc1.internal.bc3815c2-1a0f-f3ff-a2e9-20d791f08d00.consul 1.00 HEALTHY -192.168.65.131:20000 client.default.dc1.internal.bc3815c2-1a0f-f3ff-a2e9-20d791f08d00.consul 1.00 HEALTHY -192.168.63.120:20000 frontend.default.dc1.internal.bc3815c2-1a0f-f3ff-a2e9-20d791f08d00.consul 1.00 HEALTHY -127.0.0.1:8080 local_app 1.00 HEALTHY - -==> Listeners (2) -Name Address:Port Direction Filter Chain Match Filters Last Updated -public_listener 192.168.69.179:20000 INBOUND Any * to local_app/ 2022-08-08T12:02:22.261Z -outbound_listener 127.0.0.1:15001 OUTBOUND 10.100.134.173/32, 240.0.0.3/32 to client.default.dc1.internal.bc3815c2-1a0f-f3ff-a2e9-20d791f08d00.consul 2022-07-18T15:31:03.246Z - 10.100.31.2/32, 240.0.0.5/32 to frontend.default.dc1.internal.bc3815c2-1a0f-f3ff-a2e9-20d791f08d00.consul - Any to original-destination - -==> Routes (1) -Name Destination Cluster Last Updated -public_listener local_app/ 2022-08-08T12:02:22.260Z - -==> Secrets (0) -Name Type Last Updated - - -``` - -Get the Envoy configuration summary for all clusters with a fully qualified -domain name that includes `"default"`. Display only clusters and listeners. 
- -```shell-session -$ consul-k8s proxy read backend-658b679b45-d5xlb -fqdn default -clusters -listeners -==> Filters applied - Fully qualified domain names containing: default - -Envoy configuration for backend-658b679b45-d5xlb in namespace default: - -==> Clusters (2) -Name FQDN Endpoints Type Last Updated -client client.default.dc1.internal.bc3815c2-1a0f-f3ff-a2e9-20d791f08d00.consul 192.168.18.110:20000, 192.168.52.101:20000, 192.168.65.131:20000 EDS 2022-08-08T12:02:07.471Z -frontend frontend.default.dc1.internal.bc3815c2-1a0f-f3ff-a2e9-20d791f08d00.consul 192.168.63.120:20000 EDS 2022-08-08T12:02:07.354Z - - -==> Listeners (2) -Name Address:Port Direction Filter Chain Match Filters Last Updated -public_listener 192.168.69.179:20000 INBOUND Any * to local_app/ 2022-08-08T12:02:22.261Z -outbound_listener 127.0.0.1:15001 OUTBOUND 10.100.134.173/32, 240.0.0.3/32 to client.default.dc1.internal.bc3815c2-1a0f-f3ff-a2e9-20d791f08d00.consul 2022-07-18T15:31:03.246Z - 10.100.31.2/32, 240.0.0.5/32 to frontend.default.dc1.internal.bc3815c2-1a0f-f3ff-a2e9-20d791f08d00.consul - Any to original-destination - -``` - -Get the Envoy configuration summary in a JSON format. Note that this is not the -same as the raw configuration dump from the admin API. This information is the -same as what is displayed in the table output above, but in a JSON format. - -```shell-session -$ consul-k8s proxy read backend-658b679b45-d5xlb -o json -{ - "backend-658b679b45-d5xlb": { - "clusters": [ - { - "Name": "local_agent", - "FullyQualifiedDomainName": "local_agent", - "Endpoints": [ - "192.168.79.187:8502" - ], - "Type": "STATIC", - "LastUpdated": "2022-05-13T04:22:39.553Z" - }, - { - "Name": "client", - "FullyQualifiedDomainName": "client.default.dc1.internal.bc3815c2-1a0f-f3ff-a2e9-20d791f08d00.consul", - "Endpoints": [ - "192.168.18.110:20000", - "192.168.52.101:20000", - "192.168.65.131:20000" - ], - "Type": "EDS", - "LastUpdated": "2022-08-08T12:02:07.471Z" - }, - { - "Name": "frontend", - "FullyQualifiedDomainName": "frontend.default.dc1.internal.bc3815c2-1a0f-f3ff-a2e9-20d791f08d00.consul", - "Endpoints": [ - "192.168.63.120:20000" - ], - "Type": "EDS", - "LastUpdated": "2022-08-08T12:02:07.354Z" - }, - { - "Name": "local_app", - "FullyQualifiedDomainName": "local_app", - "Endpoints": [ - "127.0.0.1:8080" - ], - "Type": "STATIC", - "LastUpdated": "2022-05-13T04:22:39.655Z" - }, - { - "Name": "original-destination", - "FullyQualifiedDomainName": "original-destination", - "Endpoints": [], - "Type": "ORIGINAL_DST", - "LastUpdated": "2022-05-13T04:22:39.743Z" - } - ], - "endpoints": [ - { - "Address": "192.168.79.187:8502", - "Cluster": "local_agent", - "Weight": 1, - "Status": "HEALTHY" - }, - { - "Address": "192.168.18.110:20000", - "Cluster": "client.default.dc1.internal.bc3815c2-1a0f-f3ff-a2e9-20d791f08d00.consul", - "Weight": 1, - "Status": "HEALTHY" - }, - { - "Address": "192.168.52.101:20000", - "Cluster": "client.default.dc1.internal.bc3815c2-1a0f-f3ff-a2e9-20d791f08d00.consul", - "Weight": 1, - "Status": "HEALTHY" - }, - { - "Address": "192.168.65.131:20000", - "Cluster": "client.default.dc1.internal.bc3815c2-1a0f-f3ff-a2e9-20d791f08d00.consul", - "Weight": 1, - "Status": "HEALTHY" - }, - { - "Address": "192.168.63.120:20000", - "Cluster": "frontend.default.dc1.internal.bc3815c2-1a0f-f3ff-a2e9-20d791f08d00.consul", - "Weight": 1, - "Status": "HEALTHY" - }, - { - "Address": "127.0.0.1:8080", - "Cluster": "local_app", - "Weight": 1, - "Status": "HEALTHY" - } - ], - "listeners": [ - { - "Name": 
"public_listener", - "Address": "192.168.69.179:20000", - "FilterChain": [ - { - "Filters": [ - "* to local_app/" - ], - "FilterChainMatch": "Any" - } - ], - "Direction": "INBOUND", - "LastUpdated": "2022-08-08T12:02:22.261Z" - }, - { - "Name": "outbound_listener", - "Address": "127.0.0.1:15001", - "FilterChain": [ - { - "Filters": [ - "to client.default.dc1.internal.bc3815c2-1a0f-f3ff-a2e9-20d791f08d00.consul" - ], - "FilterChainMatch": "10.100.134.173/32, 240.0.0.3/32" - }, - { - "Filters": [ - "to frontend.default.dc1.internal.bc3815c2-1a0f-f3ff-a2e9-20d791f08d00.consul" - ], - "FilterChainMatch": "10.100.31.2/32, 240.0.0.5/32" - }, - { - "Filters": [ - "to original-destination" - ], - "FilterChainMatch": "Any" - } - ], - "Direction": "OUTBOUND", - "LastUpdated": "2022-07-18T15:31:03.246Z" - } - ], - "routes": [ - { - "Name": "public_listener", - "DestinationCluster": "local_app/", - "LastUpdated": "2022-08-08T12:02:22.260Z" - } - ], - "secrets": [] - } -} -``` - -Get the raw Envoy configuration dump and clusters information for the Envoy -proxy running on the Pod `backend-658b679b45-d5xlb`. The example command returns -the raw configuration for each service as JSON. You can use the -[JQ command line tool](https://stedolan.github.io/jq/) to index into -the configuration for the service you want to inspect. - -Refer to the [Envoy config dump documentation](https://www.envoyproxy.io/docs/envoy/latest/api-v3/admin/v3/config_dump.proto) -for more information on the structure of the config dump. - -The following output is truncated for brevity. - -```shell-session -$ consul-k8s proxy read backend-658b679b45-d5xlb -o raw -{ - "backend-658b679b45-d5xlb": { - "clusters": { - // [-- snip 372 lines --] output from the Envoy admin interface's /clusters endpoint. - }, - "config_dump": { - // [-- snip 1816 lines --] output from the Envoy admin interface's /config_dump?include_eds endpoint. - } -} -``` - -### `proxy log` - -The `proxy log` command allows you to inspect and modify the logging configuration of Envoy proxies running on a given Pod. - -```shell-session -$ consul-k8s proxy log -``` - -The command takes a required value, ``. This should be the full name -of a Kubernetes Pod. If a Pod is running more than one Envoy proxy managed by -Consul, as in the [Multiport configuration](/consul/docs/k8s/connect#kubernetes-pods-with-multiple-ports), -the terminal displays configuration information for all proxies in the pod. - -The following options are available. - -| Flag | Description | Default | -| ---------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------- | -| `-namespace`, `-n` | `String` Specifies the namespace containing the target Pod. | Current [kubeconfig](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) namespace. | -| `-update-level`, `-u` | `String` Specifies the logger (optional) and the level to update.

    Use the following format to configure the same level for all loggers: `-update-level <level>`.

    You can also specify a comma-separated list to configure levels for specific loggers, for example: `-update-level grpc:warning,http:info`.

    | none | -| `-reset`, `-r` | `String` Reset the log levels for all loggers back to the default of `info` | `info` | - -#### Example commands -In the following example, Consul returns the log levels for all of an Envoy proxy's loggers in a pod with the ID `server-697458b9f8-4vr29`: - -```shell-session -$ consul-k8s proxy log server-697458b9f8-4vr29 -Envoy log configuration for server-697458b9f8-4vr29 in namespace default: - -==> Log Levels for server-697458b9f8-4vr29 -Name Level -rds info -backtrace info -hc info -http info -io info -jwt info -rocketmq info -matcher info -runtime info -redis info -stats info -tap info -alternate_protocols_cache info -grpc info -init info -quic info -thrift info -wasm info -aws info -conn_handler info -ext_proc info -hystrix info -tracing info -dns info -oauth2 info -connection info -health_checker info -kafka info -mongo info -config info -admin info -forward_proxy info -misc info -websocket info -dubbo info -happy_eyeballs info -main info -client info -lua info -udp info -cache_filter info -filter info -multi_connection info -quic_stream info -router info -http2 info -key_value_store info -secret info -testing info -upstream info -assert info -ext_authz info -rbac info -decompression info -envoy_bug info -file info -pool info -``` - -The following command updates the log levels for all loggers of an Envoy proxy to `warning`. -```shell-session -$ consul-k8s proxy log server-697458b9f8-4vr29 -update-level warning -Envoy log configuration for server-697458b9f8-4vr29 in namespace default: - -==> Log Levels for server-697458b9f8-4vr29 -Name Level -pool warning -rbac warning -tracing warning -aws warning -cache_filter warning -decompression warning -init warning -assert warning -client warning -misc warning -udp warning -config warning -hystrix warning -key_value_store warning -runtime warning -admin warning -dns warning -jwt warning -redis warning -quic warning -alternate_protocols_cache warning -conn_handler warning -ext_proc warning -http warning -oauth2 warning -ext_authz warning -http2 warning -kafka warning -mongo warning -router warning -thrift warning -grpc warning -matcher warning -hc warning -multi_connection warning -wasm warning -dubbo warning -filter warning -upstream warning -backtrace warning -connection warning -io warning -main warning -happy_eyeballs warning -rds warning -tap warning -envoy_bug warning -rocketmq warning -file warning -forward_proxy warning -stats warning -health_checker warning -lua warning -secret warning -quic_stream warning -testing warning -websocket warning -``` -The following command updates the `grpc` log level to `error`, the `http` log level to `critical`, and the `runtime` log level to `debug` for pod ID `server-697458b9f8-4vr29` -```shell-session -$ consul-k8s proxy log server-697458b9f8-4vr29 -update-level grpc:error,http:critical,runtime:debug -Envoy log configuration for server-697458b9f8-4vr29 in namespace default: - -==> Log Levels for server-697458b9f8-4vr29 -Name Level -assert info -dns info -http critical -pool info -thrift info -udp info -grpc error -hc info -stats info -wasm info -alternate_protocols_cache info -ext_authz info -filter info -http2 info -key_value_store info -tracing info -cache_filter info -quic_stream info -aws info -io info -matcher info -rbac info -tap info -connection info -conn_handler info -rocketmq info -hystrix info -oauth2 info -redis info -backtrace info -file info -forward_proxy info -kafka info -config info -router info -runtime debug -testing info -happy_eyeballs info 
-ext_proc info -init info -lua info -health_checker info -misc info -envoy_bug info -jwt info -main info -quic info -upstream info -websocket info -client info -decompression info -mongo info -multi_connection info -rds info -secret info -admin info -dubbo info -``` -The following command resets the log levels for all loggers of an Envoy proxy in pod `server-697458b9f8-4vr29` to the default level of `info`. -```shell-session -$ consul-k8s proxy log server-697458b9f8-4vr29 -r -Envoy log configuration for server-697458b9f8-4vr29 in namespace default: - -==> Log Levels for server-697458b9f8-4vr29 -Name Level -ext_proc info -secret info -thrift info -tracing info -dns info -rocketmq info -happy_eyeballs info -hc info -io info -misc info -conn_handler info -key_value_store info -rbac info -hystrix info -wasm info -admin info -cache_filter info -client info -health_checker info -oauth2 info -runtime info -testing info -grpc info -upstream info -forward_proxy info -matcher info -pool info -aws info -decompression info -jwt info -tap info -assert info -redis info -http info -quic info -rds info -connection info -envoy_bug info -stats info -alternate_protocols_cache info -backtrace info -filter info -http2 info -init info -multi_connection info -quic_stream info -dubbo info -ext_authz info -main info -udp info -websocket info -config info -mongo info -router info -file info -kafka info -lua info -``` - -### `proxy stats` - -The `proxy stats` command allows you to inspect the Envoy cluster stats for Envoy proxies running on a given Pod. - -```shell-session -$ consul-k8s proxy stats -``` -| Flag | Description | Default | -| ------------------------------------ | ----------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------- | -| `-namespace`, `-n` | `String` The Kubernetes namespace to list proxies in. | Current [kubeconfig](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) namespace. | - -Refer to the [Global Options](#global-options) for additional options that you can use -when installing Consul on Kubernetes. - -#### Example Commands - -Display the Envoy cluster stats in a given pod in default namespace. 
- -```shell-session -$ consul-k8s proxy stats product-api-7c4d77c7c9-6slnl -cluster.frontend.default.dc1.internal.4b11bae3-b8ca-ee63-89bc-428cbfa6ef60.consul.version_text: "" -cluster.nginx.default.dc1.internal.4b11bae3-b8ca-ee63-89bc-428cbfa6ef60.consul.version_text: "" -cluster.payments.default.dc1.internal.4b11bae3-b8ca-ee63-89bc-428cbfa6ef60.consul.version_text: "" -cluster.product-api-db.default.dc1.internal.4b11bae3-b8ca-ee63-89bc-428cbfa6ef60.consul.version_text: "" -cluster.public-api.default.dc1.internal.4b11bae3-b8ca-ee63-89bc-428cbfa6ef60.consul.version_text: "" -cluster_manager.cds.version_text: "" -control_plane.identifier: "" -listener_manager.lds.version_text: "" -cluster.consul-dataplane.assignment_stale: 0 -cluster.consul-dataplane.assignment_timeout_received: 0 -cluster.consul-dataplane.bind_errors: 0 -cluster.consul-dataplane.circuit_breakers.default.cx_open: 0 -cluster.consul-dataplane.circuit_breakers.default.cx_pool_open: 0 -cluster.consul-dataplane.circuit_breakers.default.rq_open: 0 -cluster.consul-dataplane.circuit_breakers.default.rq_pending_open: 0 -cluster.consul-dataplane.circuit_breakers.default.rq_retry_open: 0 -cluster.consul-dataplane.circuit_breakers.high.cx_open: 0 -cluster.consul-dataplane.circuit_breakers.high.cx_pool_open: 0 -cluster.consul-dataplane.circuit_breakers.high.rq_open: 0 -cluster.consul-dataplane.circuit_breakers.high.rq_pending_open: 0 -cluster.consul-dataplane.circuit_breakers.high.rq_retry_open: 0 -cluster.consul-dataplane.default.total_match_count: 1 -cluster.consul-dataplane.http2.deferred_stream_close: 0 -cluster.consul-dataplane.http2.dropped_headers_with_underscores: 0 -cluster.consul-dataplane.http2.header_overflow: 0 -cluster.consul-dataplane.http2.headers_cb_no_stream: 0 -cluster.consul-dataplane.http2.inbound_empty_frames_flood: 0 -cluster.consul-dataplane.http2.inbound_priority_frames_flood: 0 -......... -``` - -Display the Envoy cluster stats in a given pod in different namespace. - -```shell-session -$ consul-k8s proxy stats public-api-567d949866-452xc -n consul -cluster.frontend.default.dc1.internal.4b11bae3-b8ca-ee63-89bc-428cbfa6ef60.consul.version_text: "" -cluster.nginx.default.dc1.internal.4b11bae3-b8ca-ee63-89bc-428cbfa6ef60.consul.version_text: "" -cluster.payments.default.dc1.internal.4b11bae3-b8ca-ee63-89bc-428cbfa6ef60.consul.version_text: "" -cluster.product-api-db.default.dc1.internal.4b11bae3-b8ca-ee63-89bc-428cbfa6ef60.consul.version_text: "" -cluster.product-api.default.dc1.internal.4b11bae3-b8ca-ee63-89bc-428cbfa6ef60.consul.version_text: "" -cluster_manager.cds.version_text: "" -control_plane.identifier: "" -listener_manager.lds.version_text: "" -cluster.consul-dataplane.assignment_stale: 0 -cluster.consul-dataplane.assignment_timeout_received: 0 -cluster.consul-dataplane.bind_errors: 0 -cluster.consul-dataplane.circuit_breakers.default.cx_open: 0 -cluster.consul-dataplane.circuit_breakers.default.cx_pool_open: 0 -cluster.consul-dataplane.circuit_breakers.default.rq_open: 0 -cluster.consul-dataplane.circuit_breakers.default.rq_pending_open: 0 -cluster.consul-dataplane.circuit_breakers.default.rq_retry_open: 0 -cluster.consul-dataplane.circuit_breakers.high.cx_open: 0 -cluster.consul-dataplane.circuit_breakers.high.cx_pool_open: 0 -cluster.consul-dataplane.circuit_breakers.high.rq_open: 0 -cluster.consul-dataplane.circuit_breakers.high.rq_pending_open: 0 -cluster.consul-dataplane.circuit_breakers.high.rq_retry_open: 0 -......... 
-``` - -### `status` - -The `status` command provides an overall status summary of the Consul on Kubernetes installation. It also provides the configuration that was used to deploy Consul K8s and information about the health of Consul servers and clients. This command does not take in any flags. - -```shell-session -$ consul-k8s status -``` - -#### Example Command - -```shell-session -$ consul-k8s status - -==> Consul-K8s Status Summary - NAME | NAMESPACE | STATUS | CHARTVERSION | APPVERSION | REVISION | LAST UPDATED ----------+-----------+----------+--------------+------------+----------+-------------------------- - consul | consul | deployed | 0.41.1 | 1.11.4 | 1 | 2022/03/10 07:48:58 MST - -==> Config: - connectInject: - enabled: true - metrics: - defaultEnableMerging: true - defaultEnabled: true - enableGatewayMetrics: true - global: - metrics: - enableAgentMetrics: true - enabled: true - name: consul - prometheus: - enabled: true - server: - replicas: 1 - ui: - enabled: true - service: - enabled: true - - ✓ Consul servers healthy (1/1) - ✓ Consul clients healthy (3/3) -``` - -### `troubleshoot` - -The `troubleshoot` command exposes two subcommands for troubleshooting Consul -service mesh and network issues from a given pod. - -- [`troubleshoot upstreams`](#troubleshoot-upstreams): List all Envoy upstreams in Consul service mesh from the given pod. -- [`troubleshoot proxy`](#troubleshoot-proxy): Troubleshoot Consul service mesh configuration and network issues between the given pod and the given upstream. - -### `troubleshoot upstreams` - -```shell-session -$ consul-k8s troubleshoot upstreams -pod -``` - -| Flag | Description | Default | -| ------------------------------------ | ----------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------- | -| `-namespace`, `-n` | `String` The Kubernetes namespace to list proxies in. | Current [kubeconfig](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) namespace. | - -#### Example Commands - -The following example displays all transparent proxy upstreams in Consul service mesh from the given pod. - - ```shell-session - $ consul-k8s troubleshoot upstreams -pod frontend-767ccfc8f9-6f6gx - - ==> Upstreams (explicit upstreams only) (0) - - ==> Upstreams IPs (transparent proxy only) (1) - [10.4.6.160 240.0.0.3] true map[backend.default.dc1.internal.e08fa6d6-e91e-dfe0-f6e1-ba097a828e31.consul backend2.default.dc1.internal.e08fa6d6-e91e-dfe0-f6e1-ba097a828e31.consul] - - If you cannot find the upstream address or cluster for a transparent proxy upstream: - - Check intentions: Tproxy upstreams are configured based on intentions. Make sure you have configured intentions to allow traffic to your upstream. - - To check that the right cluster is being dialed, run a DNS lookup for the upstream you are dialing. For example, run `dig backend.svc.consul` to return the IP address for the `backend` service. If the address you get from that is missing from the upstream IPs, it means that your proxy may be misconfigured. - ``` - -The following example displays all explicit upstreams from the given pod in the Consul service mesh. 
- - ```shell-session - $ consul-k8s troubleshoot upstreams -pod client-767ccfc8f9-6f6gx - - ==> Upstreams (explicit upstreams only) (1) - server - counting - - ==> Upstreams IPs (transparent proxy only) (0) - - If you cannot find the upstream address or cluster for a transparent proxy upstream: - - Check intentions: Tproxy upstreams are configured based on intentions. Make sure you have configured intentions to allow traffic to your upstream. - - To check that the right cluster is being dialed, run a DNS lookup for the upstream you are dialing. For example, run `dig backend.svc.consul` to return the IP address for the `backend` service. If the address you get from that is missing from the upstream IPs, it means that your proxy may be misconfigured. - ``` - -### `troubleshoot proxy` - -```shell-session -$ consul-k8s troubleshoot proxy -pod -upstream-ip -$ consul-k8s troubleshoot proxy -pod -upstream-envoy-id -``` - -| Flag | Description | Default | -| ------------------------------------ | ----------------------------------------------------------| ---------------------------------------------------------------------------------------------------------------------- | -| `-namespace`, `-n` | `String` The Kubernetes namespace to list proxies in. | Current [kubeconfig](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) namespace. | -| `-upstream-ip` | `String` The IP address of the upstream transparent proxy | | -| `-upstream-envoy-id` | `String` The Envoy identifier of the upstream | | - -#### Example Commands - -The following example troubleshoots the Consul service mesh configuration and network issues between the given pod and the given upstream IP. - - ```shell-session - $ consul-k8s troubleshoot proxy -pod frontend-767ccfc8f9-6f6gx -upstream-ip 10.4.6.160 - - ==> Validation - ✓ certificates are valid - ✓ Envoy has 0 rejected configurations - ✓ Envoy has detected 0 connection failure(s) - ✓ listener for upstream "backend" found - ✓ route for upstream "backend" found - ✓ cluster "backend.default.dc1.internal..consul" for upstream "backend" found - ✓ healthy endpoints for cluster "backend.default.dc1.internal.e08fa6d6-e91e-dfe0-f6e1-ba097a828e31.consul" for upstream "backend" found - ✓ cluster "backend2.default.dc1.internal..consul" for upstream "backend" found - ! no healthy endpoints for cluster "backend2.default.dc1.internal.e08fa6d6-e91e-dfe0-f6e1-ba097a828e31.consul" for upstream "backend" found - ``` - -The following example troubleshoots the Consul service mesh configuration and network issues between the given pod and the given upstream. - - ```shell-session - $ consul-k8s troubleshoot proxy -pod frontend-767ccfc8f9-6f6gx -upstream-envoy-id db - - ==> Validation - ✓ certificates are valid - ✓ Envoy has 0 rejected configurations - ✓ Envoy has detected 0 connection failure(s) - ! no listener for upstream "db" found - ! no route for upstream "backend" found - ! no cluster "db.default.dc1.internal.e08fa6d6-e91e-dfe0-f6e1-ba097a828e31.consul" for upstream "db" found - ! no healthy endpoints for cluster "db.default.dc1.internal.e08fa6d6-e91e-dfe0-f6e1-ba097a828e31.consul" for upstream "db" found - ``` - -### `uninstall` - -The `uninstall` command removes Consul from Kubernetes. - -```shell-session -$ consul-k8s uninstall -``` - -The following options are available. 
- -| Flag | Description | Default | -| ---------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------- | -| `-auto-approve` | Boolean value that enables you to skip the removal confirmation prompt. | `false` | -| `-name` | String value for the name of the installation to remove. | none | -| `-namespace` | String value that specifies the namespace of the Consul installation to remove. | `consul` | -| `-timeout` | Specifies how long to wait for the removal process to complete before timing out. The value is specified with an integer and string value indicating a unit of time.
    The following units are supported:
    `ms` (milliseconds)
    `s` (seconds)
    `m` (minutes)
    `h` (hours)
    In the following example, removal will time out after one minute:
    `consul-k8s uninstall -timeout 1m` | `10m` |
-| `-wipe-data` | Boolean value that deletes PVCs and secrets associated with the Consul installation during uninstallation.
    Data will be removed without a verification prompt if the `-auto-approve` flag is set to `true`. | `false`
    Instructions for removing data will be printed to the console. | -| `--help` | Prints usage information for this option. | none | - -See [Global Options](#global-options) for additional commands that you can use when uninstalling Consul from Kubernetes. - -#### Example Command - -The following example command immediately uninstalls Consul from the `my-ns` namespace with the name `my-consul` and removes PVCs and secrets associated with the installation without asking for verification: - -```shell-session -$ consul-k8s uninstall -namespace=my-ns -name=my-consul -wipe-data=true -auto-approve=true -``` - -### `upgrade` - -The `upgrade` command upgrades the Consul on Kubernetes components to the current version of the `consul-k8s` cli. Prior to running `consul-k8s upgrade`, the `consul-k8s` CLI should first be upgraded to the latest version as described [Upgrade the Consul K8s CLI](#upgrade-the-consul-k8s-cli) - -```shell-session -$ consul-k8s upgrade -``` - -The following options are available. - -| Flag | Description | Default | -| ----------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------- | -| `-auto-approve` | Boolean value that enables you to skip the upgrade confirmation prompt. | `false` | -| `-dry-run` | Boolean value that allows you to run pre-upgrade checks and returns a summary of the upgrade. | `false` | -| `-config-file` | String value that specifies the path to a file containing custom upgrade configurations, e.g., Consul Helm chart values file.
    You can use the `-config-file` flag multiple times to specify multiple files. | none | -| `-namespace` | String value that specifies the namespace of the Consul installation. | `consul` | -| `-preset` | String value that upgrades Consul based on a preset configuration. | Configuration of the Consul Helm chart. | -| `-set` | String value that enables you to set a customizable value. This flag is comparable to the `helm upgrade --set` flag.
    You can use the `-set` flag multiple times to set multiple values.
    Consul Helm chart values are supported. | none | -| `-set-file` | String value that specifies the name of an arbitrary config file. This flag is comparable to the `helm upgrade --set-file`
    flag. The contents of the file will be used to set a customizable value. You can use the `-set-file` flag multiple times to specify multiple files.
    Consul Helm chart values are supported. | none | -| `-set-string` | String value that enables you to set a customizable string value. This flag is comparable to the `helm upgrade --set-string`
    flag. You can use the `-set-string` flag multiple times to specify multiple strings.
    Consul Helm chart values are supported. | none | -| `-timeout` | Specifies how long to wait for the upgrade process to complete before timing out. The value is specified with an integer and string value indicating a unit of time.
    The following units are supported:
    `ms` (milliseconds)
    `s` (seconds)
    `m` (minutes)
    In the following example, the upgrade will time out after one minute:
    `consul-k8s upgrade -timeout 1m` | `10m` | -| `-wait` | Boolean value that determines if Consul should wait for resources in the upgrade to be ready before exiting the command. | `true` | -| `-verbose`, `-v` | Boolean value that specifies whether to output verbose logs from the upgrade command with the status of resources being upgraded. | `false` | -| `--help` | Prints usage information for this option. | none | - -See [Global Options](#global-options) for additional commands that you can use when installing Consul on Kubernetes. - -### `version` - -The `version` command prints the Consul on Kubernetes version. This command does not take any options. - -```shell-session -$ consul-k8s version -``` - -You can also print the version with the `--version` flag. - -```shell-session -$ consul-k8s --version -``` - -## Global Options - -The following global options are available. - -| Flag | Description | Default | -| -------------------------------- | ----------------------------------------------------------------------------------- | ------- | -| `-context` | String value that sets the Kubernetes context to use for Consul K8s CLI operations. | none | -| `-kubeconfig`, `-c` | String value that specifies the path to the `kubeconfig` file.
    | none | diff --git a/website/content/docs/k8s/l7-traffic/failover-tproxy.mdx b/website/content/docs/k8s/l7-traffic/failover-tproxy.mdx deleted file mode 100644 index b5bd4983a27b..000000000000 --- a/website/content/docs/k8s/l7-traffic/failover-tproxy.mdx +++ /dev/null @@ -1,124 +0,0 @@ ---- -layout: docs -page_title: Configure failover services on Kubernetes -description: Learn how to define failover services in Consul on Kubernetes when proxies are in transparent proxy mode. Consul can send traffic to backup service instances if a destination service becomes unhealthy or unresponsive. ---- - -# Configure failover services on Kubernetes - -This topic describes how to configure failover service instances in Consul on Kubernetes when proxies are in transparent proxy mode. If a service becomes unhealthy or unresponsive, Consul can use the service resolver configuration entry to send inbound requests to backup services. Service resolvers are part of the service mesh proxy upstream discovery chain. Refer to [Service mesh traffic management](/consul/docs/connect/manage-traffic) for additional information. - -## Overview - -Complete the following steps to configure failover service instances in Consul on Kubernetes when proxies are in transparent proxy mode: - -- Create a service resolver configuration entry -- Create intentions that allow the downstream service to access the primary and failover service instances. -- Configure your application to call the discovery chain using the Consul DNS or KubeDNS. - -## Requirements - -- `consul-k8s` v1.2.0 or newer. -- Consul service mesh must be enabled. Refer to [How does Consul Service Mesh Work on Kubernetes](/consul/docs/k8s/connect). -- Proxies must be configured to run in transparent proxy mode. -- To query virtual DNS names, you must use Consul DNS. -- To query the discovery chain using KubeDNS, the service resolver must be in the same partition as the running service. - -### ACL requirements - -The default ACLs that the Consul Helm chart configures are suitable for most cases, but if you have more complex security policies for Consul API access, refer to the [ACL documentation](/consul/docs/security/acl) for additional guidance. - -## Create a service resolver configuration entry - -Specify the target failover in the [`spec.failover.targets`](/consul/docs/connect/config-entries/service-resolver#failover-targets-service) field in the service resolver configuration entry. In the following example, the `api-beta` service is configured to failover to the `api` service in any service subset: - - - -```yaml -apiversion: consul.hashicorp.com/v1alpha1 -kind: ServiceResolver -metadata: - name: api-beta -spec: - failover: - '*': - targets: - - service: api -``` - - - -Refer to the [service resolver configuration entry reference](/consul/docs/connect/config-entries/service-resolver) documentation for information about all service resolver configurations. - -You can apply the configuration using the `kubectl apply` command: - -```shell-session -$ kubectl apply -f api-beta-failover.yaml -``` - -## Create service intentions - -If intentions are not already defined, create and apply intentions that allow the appropriate downstream to access the target service and the failover service. In the following examples, the `frontend` service is allowed to send messages to the `api` service, which is allowed to send messages to the `api-beta` failover service. 
- - - - -```yaml -apiVersion: consul.hashicorp.com/v1alpha1 -kind: ServiceIntentions -metadata: - name: api -spec: - destination: - name: api - sources: - - name: frontend - action: allow ---- -apiVersion: consul.hashicorp.com/v1alpha1 -kind: ServiceIntentions -metadata: - name: api-beta -spec: - destination: - name: api-beta - sources: - - name: frontend - action: allow -``` - - - -Refer to the [service intentions configuration entry reference](/consul/docs/connect/config-entries/service-intentions) for additional information about configuring service intentions. - -You can apply the configuration using the `kubectl apply` command: - -```shell-session -$ kubectl apply -f frontend-api-api-beta-allow.yaml -``` - -## Configure your application to call the DNS - -Configure your application to contact the discovery chain in either the Consul DNS or the KubeDNS. - -### Consul DNS - -You can query the Consul DNS using the `.virtual.consul` lookup format. For Consul Enterprise, your query string may need to include the namespace, partition, or both. Refer to the [DNS documentation](/consul/docs/services/discovery/dns-static-lookups#service-virtual-ip-lookups) for details on building virtual service lookups. - -In the following example, the application queries the Consul catalog for `api-beta` over HTTP. By default, the lookup would query the `default` partition and `default` namespace if Consul Enterprise manages the network infrastructure: - -```text -http://api-beta.virtual.consul/ -``` - -### KubeDNS - -You can query the KubeDNS if the failover service is in the same Kubernetes cluster as the primary service. In the following example, the application queries KubeDNS for `api-beta` over HTTP: - -```text -http://api-beta..svc.cluster.local -``` - -Note that you cannot use KubeDNS if a corresponding Kubernetes service and pod do not exist. - - diff --git a/website/content/docs/k8s/l7-traffic/route-to-virtual-services.mdx b/website/content/docs/k8s/l7-traffic/route-to-virtual-services.mdx deleted file mode 100644 index bd8c9e3db319..000000000000 --- a/website/content/docs/k8s/l7-traffic/route-to-virtual-services.mdx +++ /dev/null @@ -1,122 +0,0 @@ ---- -layout: docs -page_title: Route traffic to a virtual service -description: Define virtual services in service resolver config entries so that Consul on Kubernetes can route traffic to virtual services when transparent proxy mode is enabled for Envoy proxies. ---- - -# Route traffic to a virtual service - -This topic describes how to define virtual services so that Consul on Kubernetes can route traffic to virtual services when transparent proxy mode is enabled for Envoy proxies. - -## Overview - -You can define virtual services in service resolver configuration entries so that downstream applications can send requests to a virtual service using a Consul DNS name in peered clusters. Your applications can send requests to virtual services in the same cluster using KubeDNS. Service resolvers are part of the service mesh proxy upstream discovery chain. Refer to [Service mesh traffic management](/consul/docs/connect/manage-traffic) for additional information. - -Complete the following steps to configure failover service instances in Consul on Kubernetes when proxies are in transparent proxy mode: - -1. Create a service resolver configuration entry. -1. Create intentions that allow the downstream service to access the real service and the virtual service. -1. Configure your application to call the discovery chain using the Consul DNS or KubeDNS. 
- -## Requirements - -- `consul-k8s` v1.2.0 or newer. -- Consul service mesh must be enabled. Refer to [How does Consul service mesh work on Kubernetes](/consul/docs/k8s/connect). -- Proxies must be configured to run in transparent proxy mode. -- To query virtual DNS names, you must use Consul DNS. -- To query the discovery chain using KubeDNS, the service resolver must be in the same partition as the running service. - -### ACL requirements - -The default ACLs that the Consul Helm chart configures are suitable for most cases, but if you have more complex security policies for Consul API access, refer to the [ACL documentation](/consul/docs/security/acl) for additional guidance. - -## Create a service resolver configuration entry - -Specify the target failover in the [`spec.redirect.service`](/consul/docs/connect/config-entries/service-resolver#spec-redirect-service) field in the service resolver configuration entry. In the following example, the `virtual-api` service is configured to redirect to the `real-api`: - - - -```yaml -apiversion: consul.hashicorp.com/v1alpha1 -kind: ServiceResolver -metadata: - name: virtual-api -spec: - redirect: - service: real-api -``` - - - -Refer to the [service resolver configuration entry reference](/consul/docs/connect/config-entries/service-resolver) documentation for information about all service resolver configurations. - -You can apply the configuration using the `kubectl apply` command: - -```shell-session -$ kubectl apply -f virtual-api-redirect.yaml -``` - -## Create service intentions - -If intentions are not already defined, create and apply intentions that allow the appropriate downstream to access the real service and the target redirect service. In the following examples, the `frontend` service is allowed to send messages to the `virtual-api` and `real-api` services: - - - - -```yaml -apiversion: consul.hashicorp.com/v1alpha1 -kind: ServiceIntentions -metadata: - name: virtual-api -spec: - destination: - name: virtual-api - sources: - - name: frontend - action: allow ---- -apiversion: consul.hashicorp.com/v1alpha1 -kind: ServiceIntentions -metadata: - name: real-api -spec: - destination: - name: real-api - sources: - - name: frontend - action: allow -``` - - - -Refer to the [service intentions configuration entry reference](/consul/docs/connect/config-entries/service-intentions) for additional information about configuring service intentions. - -You can apply the configuration using the `kubectl apply` command: - -```shell-session -$ kubectl apply -f frontend-api-api-beta-allow.yaml -``` - -## Configure your application to call the DNS - -Configure your application to contact the discovery chain in either the Consul DNS or the KubeDNS. - -### Consul DNS - -You can query the Consul DNS using the `.virtual.consul` lookup format. For Consul Enterprise, your query string may need to include the namespace, partition, or both. Refer to the [DNS documentation](/consul/docs/services/discovery/dns-static-lookups#service-virtual-ip-lookups) for details on building virtual service lookups. - -In the following example, the application queries the Consul catalog for `virtual-api` over HTTP. By default, the lookup would query the `default` partition and `default` namespace if Consul Enterprise manages the network infrastructure: - -```text -http://virtual-api.virtual.consul/ -``` - -### KubeDNS - -You can query the KubeDNS if the real and virtual services are in the same Kubernetes cluster by specifying the name of the service. 
In the following example, the application queries KubeDNS for `virtual-api` over HTTP: - -```text -http://virtual-api..svc.cluster.local -``` - -Note that you cannot use KubeDNS if a corresponding Kubernetes service and pod do not exist. diff --git a/website/content/docs/k8s/operations/certificate-rotation.mdx b/website/content/docs/k8s/operations/certificate-rotation.mdx deleted file mode 100644 index 85f86a7b9cd1..000000000000 --- a/website/content/docs/k8s/operations/certificate-rotation.mdx +++ /dev/null @@ -1,28 +0,0 @@ ---- -layout: docs -page_title: Rotate TLS Certificates for Consul on Kubernetes -description: >- - In Consul Helm version 0.29.0 and later, new server agent TLS certificates are issued every time the Helm version is upgraded. Learn how to manually trigger certificate rotation if they do not rotate automatically. ---- - -# Rotate TLS Certificates for Consul on Kubernetes - -As of Consul Helm version `0.29.0`, if TLS is enabled, new TLS certificates for the Consul Server -are issued every time the Helm chart is upgraded. These certificates are signed by the same CA and will -continue to work as expected in the existing cluster. - -Consul servers read the certificates from Kubernetes secrets during start-up and keep them in memory. In order to ensure the -servers use the newer certificate, the server pods need to be [restarted explicitly](/consul/docs/k8s/upgrade#upgrading-consul-servers) in -a situation where `helm upgrade` does not restart the server pods. - -To explicitly perform server certificate rotation, follow these steps: - -1. Perform a `helm upgrade`: - - ```shell-session - $ helm upgrade consul hashicorp/consul --values /path/to/my/values.yaml - ``` - - This should run the `tls-init` job that will generate new Server certificates. - -1. Restart the Server pods following the steps [here](/consul/docs/k8s/upgrade#upgrading-consul-servers). diff --git a/website/content/docs/k8s/operations/gossip-encryption-key-rotation.mdx b/website/content/docs/k8s/operations/gossip-encryption-key-rotation.mdx deleted file mode 100644 index df49d79b18d8..000000000000 --- a/website/content/docs/k8s/operations/gossip-encryption-key-rotation.mdx +++ /dev/null @@ -1,176 +0,0 @@ ---- -layout: docs -page_title: Rotate Gossip Encryption Keys for Consul on Kubernetes -description: >- - Consul agents use encryption keys to secure their gossip communication, and you must rotate the keys periodically to maintain network security. Learn how to use `keygen` and `keyring` commands to rotate keys for agents on k8s clusters. ---- - -# Rotate Gossip Encryption Keys for Consul on Kubernetes - -The following instructions provides a step-by-step manual process for rotating [gossip encryption](/consul/docs/security/encryption#gossip-encryption) keys on Consul clusters that are deployed onto a Kubernetes cluster with Consul on Kubernetes. - -The following steps need only be performed once in any single datacenter if your Consul clusters are [federated](/consul/docs/k8s/deployment-configurations/multi-cluster/kubernetes). Rotating the gossip encryption key in one datacenter will automatically rotate the gossip encryption key for all the other datacenters. - --> **Note:** Careful precaution should be taken to prohibit new clients from joining during the gossip encryption rotation process, otherwise the new clients will join the gossip pool without knowledge of the new primary gossip encryption key. 
In addition, deletion of a gossip encryption key from the keyring should occur only after clients have safely migrated to utilizing the new gossip encryption key for communication. - -1. (Optional) If Consul is installed in a dedicated namespace, set the kubeConfig context to the consul namespace. Otherwise, subsequent commands will need to include -n consul. - - ```shell-session - $ kubectl config set-context --current --namespace=consul - ``` -1. Generate a new key and store in safe place for retrieval in the future ([Vault KV Secrets Engine](/vault/docs/secrets/kv/kv-v2#usage) is a recommended option). - - ```shell-session - $ consul keygen - ``` - - This should generate a new key which can be used as the gossip encryption key. In this example, we will be using - `Wa6/XFAnYy0f9iqVH2iiG+yore3CqHSemUy4AIVTa/w=` as the replacement gossip encryption key. - -1. Add new key to consul keyring. - - 1. `kubectl exec` into a Consul Agent pod (Server or Client) and add the new key to the Consul Keyring. This can be performed by running the following command: - - ```shell-session - $ kubectl exec -it consul-server-0 -- /bin/sh - ``` - - 1. **Note:** If ACLs are enabled, export the bootstrap token as the CONSUL_HTTP_TOKEN to perform all `consul keyring` operations. The bootstrap token can be found in the Kubernetes secret `consul-bootstrap-acl-token` of the primary datacenter. - - ```shell-session - $ export CONSUL_HTTP_TOKEN= - ``` - - 1. Install the new Gossip encryption key with the `consul keyring` command: - - ```shell-session - $ consul keyring -install="Wa6/XFAnYy0f9iqVH2iiG+yore3CqHSemUy4AIVTa/w=" - ==> Installing new gossip encryption key... - ``` - Consul automatically propagates this encryption key across all clients and servers across the cluster and the federation if Consul federation is enabled. - - 1. List the keys in the keyring to verify the new key has been installed successfully. - - ```shell-session - $ consul keyring -list - ==> Gathering installed encryption keys... - - WAN: - CL6M+jKj3630CZLXI0IRVeyci1jgIAveiZKvdtTybbA= [2/2] - Wa6/XFAnYy0f9iqVH2iiG+yore3CqHSemUy4AIVTa/w= [2/2] - - dc1 (LAN): - CL6M+jKj3630CZLXI0IRVeyci1jgIAveiZKvdtTybbA= [4/4] - Wa6/XFAnYy0f9iqVH2iiG+yore3CqHSemUy4AIVTa/w= [4/4] - - dc2 (LAN): - CL6M+jKj3630CZLXI0IRVeyci1jgIAveiZKvdtTybbA= [4/4] - Wa6/XFAnYy0f9iqVH2iiG+yore3CqHSemUy4AIVTa/w= [4/4] - ``` - -1. Use the new key as the gossip encryption key. - - 1. After the new key has been added to the keychain, you can install it as the new gossip encryption key. Run the following command in the Consul Agent pod using `kubectl exec`: - - ```shell-session - $ consul keyring -use="Wa6/XFAnYy0f9iqVH2iiG+yore3CqHSemUy4AIVTa/w=" - ==> Changing primary gossip encryption key... - ``` - - 1. You can ensure that the key has been propagated to all agents by verifying the number of agents that recognize the key over the number of total agents in the datacenter. Listing them provides that information. - - ```shell-session - $ consul keyring -list - ==> Gathering installed encryption keys... - - WAN: - CL6M+jKj3630CZLXI0IRVeyci1jgIAveiZKvdtTybbA= [2/2] - Wa6/XFAnYy0f9iqVH2iiG+yore3CqHSemUy4AIVTa/w= [2/2] - - dc1 (LAN): - CL6M+jKj3630CZLXI0IRVeyci1jgIAveiZKvdtTybbA= [4/4] - Wa6/XFAnYy0f9iqVH2iiG+yore3CqHSemUy4AIVTa/w= [4/4] - - dc2 (LAN): - CL6M+jKj3630CZLXI0IRVeyci1jgIAveiZKvdtTybbA= [4/4] - Wa6/XFAnYy0f9iqVH2iiG+yore3CqHSemUy4AIVTa/w= [4/4] - ``` - -1. Update the Kubernetes or Vault secrets with the latest gossip encryption key. 
- - - - Update the gossip encryption Kubernetes Secret with the value of the new gossip encryption key to ensure that subsequent `helm upgrades` commands execute successfully. - The name of the secret that stores the value of the gossip encryption key can be found in the Helm values file: - ```yaml - global: - gossipEncryption: - secretName: consul-gossip-encryption-key - secretKey: key - ``` - - ```shell-session - $ kubectl patch secret consul-gossip-encryption-key --patch='{"stringData":{"key": "Wa6/XFAnYy0f9iqVH2iiG+yore3CqHSemUy4AIVTa/w="}}' - ``` - - **Note:** In the case of federated Consul clusters, update the federation-secret value for the gossip encryption key. The name of the secret and key can be found in the values file of the secondary datacenter. - - ```yaml - global: - gossipEncryption: - secretName: consul-federation - secretKey: gossipEncryptionKey - ``` - - ```shell-session - $ kubectl patch secret consul-federation --patch='{"stringData":{"gossipEncryptionKey": "Wa6/XFAnYy0f9iqVH2iiG+yore3CqHSemUy4AIVTa/w="}}' - ``` - - - - -> **Note:** These Vault instructions assume that you have integrated your [Gossip encryption key](/consul/docs/k8s/deployment-configurations/vault/data-integration/gossip) using [Vault as a Secrets Backend](/consul/docs/k8s/deployment-configurations/vault). - - Update the gossip encryption Vault Secret with the value of the new gossip encryption key to ensure that subsequent `helm upgrades` commands execute successfully. - The name of the secret that stores the value of the gossip encryption key can be found in the Helm values file: - ```yaml - global: - gossipEncryption: - secretName: secret/data/consul/gossip-encryption - secretKey: key - ``` - - ```shell-session - $ vault kv put secret/consul/gossip-encryption key="Wa6/XFAnYy0f9iqVH2iiG+yore3CqHSemUy4AIVTa/w=" - ``` - - **Note:** In the case of federated Consul clusters, update the federation-secret value for the gossip encryption key. The name of the secret and key can be found in the values file of the secondary datacenter. - - ```yaml - global: - gossipEncryption: - secretName: consul-federation - secretKey: gossip-key - ``` - - ```shell-session - $ vault kv put secret/consul/consul-federation gossip-key="Wa6/XFAnYy0f9iqVH2iiG+yore3CqHSemUy4AIVTa/w=" - ``` - - - -1. Remove the old key once the new one has been installed successfully. - - 1. `kubectl exec` into a Consul Agent pod (server or client) and add the new key to the Consul Keyring. This can be performed by running the following command: - ```shell-session - $ kubectl exec -it consul-server-0 -- /bin/sh - ``` - 1. **Note:** If ACLs are enabled, export the bootstrap token as the CONSUL_HTTP_TOKEN to perform all `consul keyring` operations. - - ```shell-session - $ export CONSUL_HTTP_TOKEN= - ``` - 1. Remove old Gossip encryption key with the `consul keyring` command: - ```shell-session - $ consul keyring -remove="CL6M+jKj3630CZLXI0IRVeyci1jgIAveiZKvdtTybbA=" - ==> Removing gossip encryption key... - ``` diff --git a/website/content/docs/k8s/operations/tls-on-existing-cluster.mdx b/website/content/docs/k8s/operations/tls-on-existing-cluster.mdx deleted file mode 100644 index 32b89d326cd1..000000000000 --- a/website/content/docs/k8s/operations/tls-on-existing-cluster.mdx +++ /dev/null @@ -1,97 +0,0 @@ ---- -layout: docs -page_title: Rolling Updates to TLS for Existing Clusters on Kubernetes -description: >- - Consul Helm chart 0.16.0 and later supports TLS communication within clusters. 
Follow the instructions to trigger rolling updates for consul-k8s without causing downtime. ---- - -# Rolling Updates to TLS for Existing Clusters on Kubernetes - -As of Consul Helm version `0.16.0`, the chart supports TLS for communication -within the cluster. If you already have a Consul cluster deployed on Kubernetes, -you may want to configure TLS in a way that minimizes downtime to your applications. -Consul already supports rolling out TLS on an existing cluster without downtime. -However, depending on your Kubernetes use case, your upgrade procedure may be different. - -## Gradual TLS Rollout without Consul Service Mesh - -If you do not use a service mesh, follow this process. - -1. Run a Helm upgrade with the following config: - - ```yaml - global: - tls: - enabled: true - # This configuration sets `verify_outgoing`, `verify_server_hostname`, - # and `verify_incoming` to `false` on servers and clients, - # which allows TLS-disabled nodes to join the cluster. - verify: false - server: - updatePartition: - ``` - - This upgrade trigger a rolling update of `consul-k8s` components. - -1. Perform a rolling upgrade of the servers, as described in - [Upgrade Consul Servers](/consul/docs/k8s/upgrade#upgrading-consul-servers). - -1. Repeat steps 1 and 2, turning on TLS verification by setting `global.tls.verify` - to `true`. - -## Gradual TLS Rollout with Consul Service Mesh - -Because the sidecar Envoy proxies need to talk to the Consul client agent regularly -for service discovery, we can't enable TLS on the clients without also re-injecting a -TLS-enabled proxy into the application pods. To perform TLS rollout with minimal -downtime, we recommend instead to add a new Kubernetes node pool and migrate your -applications to it. - -1. Add a new identical node pool. - -1. Cordon all nodes in the old pool by running `kubectl cordon`. - This command ensures Kubernetes does not schedule any new workloads on those nodes, - and instead schedules onto the new TLS-enabled nodes. - -1. Create the following Helm config file for the upgrade: - - ```yaml - global: - tls: - enabled: true - # This configuration sets `verify_outgoing`, `verify_server_hostname`, - # and `verify_incoming` to `false` on servers and clients, - # which allows TLS-disabled nodes to join the cluster. - verify: false - server: - updatePartition: - client: - updateStrategy: | - type: OnDelete - ``` - - In this configuration, we're setting `server.updatePartition` to the number of - server replicas as described in [Upgrade Consul Servers](/consul/docs/k8s/upgrade#upgrading-consul-servers). - -1. Run `helm upgrade` with the above config file. - -1. At this point, all components (e.g., Consul service mesh webhook and sync catalog) should be running - on the new node pool. - -1. Redeploy all your mesh-enabled applications. - One way to trigger a redeploy is to run `kubectl drain` on the nodes in the old pool. - Now that the service mesh webhook is TLS-aware, it adds TLS configuration to - the sidecar proxy. Also, Kubernetes should schedule these applications on the new node pool. - -1. Perform a rolling upgrade of the servers described in - [Upgrade Consul Servers](/consul/docs/k8s/upgrade#upgrading-consul-servers). - -1. If everything is healthy, delete the old node pool. - -1. Finally, set `global.tls.verify` to `true` in your Helm config file, remove the - `client.updateStrategy` property, and perform a rolling upgrade of the servers. - --> **Note:** It is possible to do this upgrade without fully duplicating the node pool. 
-You could drain a subset of the Kubernetes nodes within your existing node pool and treat it -as your "new node pool." Then follow the above instructions. Repeat this process for the rest -of the nodes in the node pool. diff --git a/website/content/docs/k8s/operations/uninstall.mdx b/website/content/docs/k8s/operations/uninstall.mdx deleted file mode 100644 index 3d780df1830b..000000000000 --- a/website/content/docs/k8s/operations/uninstall.mdx +++ /dev/null @@ -1,115 +0,0 @@ ---- -layout: docs -page_title: Uninstall Consul on Kubernetes -description: >- - You can use the Consul-K8s CLI tool to remove all or part of a Consul installation on Kubernetes. You can also use Helm and then manually remove resources that Helm does not delete. ---- - -# Uninstall Consul on Kubernetes - -You can uninstall Consul using Helm commands or the Consul K8s CLI. - -## Consul K8s CLI - -Issue the `consul-k8s uninstall` command to remove Consul on Kubernetes. You can specify the installation name, namespace, and data retention behavior using the applicable options. By default, the uninstall preserves the secrets and PVCs that are provisioned by Consul on Kubernetes. - -```shell-session -$ consul-k8s uninstall -``` - -In the following example, Consul will be uninstalled and the data removed without prompting you to verify the operations: - -```shell-session -$ consul-k8s uninstall -auto-approve=true -wipe-data=true -``` - -Refer to the [Consul K8s CLI reference](/consul/docs/k8s/k8s-cli#uninstall) topic for details. - -## Helm commands - -Run the `helm uninstall` **and** manually remove resources that Helm does not delete. - -1. Although the Helm chart automates the deletion of CRDs upon uninstall, sometimes the finalizers tied to those CRDs may not complete because the deletion of the CRDs rely on the Consul K8s controller running. Ensure that previously created CRDs for Consul on Kubernetes are deleted, so subsequent installs of Consul on Kubernetes on the same Kubernetes cluster do not get blocked. - - ```shell-session - $ kubectl delete crd --selector app=consul - ``` - -1. (Optional) If Consul is installed in a dedicated namespace, set the kubeConfig context to the `consul` namespace. Otherwise, subsequent commands will need to include `--namespace consul`. - - ```shell-session - $ kubectl config set-context --current --namespace=consul - ``` - -1. Run the `helm uninstall ` command and specify the release name you've installed Consul with, e.g.,: - - ```shell-session - $ helm uninstall consul - release "consul" uninstalled - ``` - -1. After deleting the Helm release, you need to delete the `PersistentVolumeClaim`'s - for the persistent volumes that store Consul's data. A [bug](https://github.com/helm/helm/issues/5156) in Helm prevents PVCs from being deleted. 
Issue the following commands: - - ```shell-session - $ kubectl get pvc --selector="chart=consul-helm" - NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE - data-default-hashicorp-consul-server-0 Bound pvc-32cb296b-1213-11ea-b6f0-42010a8001db 10Gi RWO standard 17m - data-default-hashicorp-consul-server-1 Bound pvc-32d79919-1213-11ea-b6f0-42010a8001db 10Gi RWO standard 17m - data-default-hashicorp-consul-server-2 Bound pvc-331581ea-1213-11ea-b6f0-42010a8001db 10Gi RWO standard 17m - - $ kubectl delete pvc --selector="chart=consul-helm" - persistentvolumeclaim "data-default-hashicorp-consul-server-0" deleted - persistentvolumeclaim "data-default-hashicorp-consul-server-1" deleted - persistentvolumeclaim "data-default-hashicorp-consul-server-2" deleted - ``` - - ~> **NOTE:** This will delete **all** data stored in Consul and it can't be - recovered unless you've taken other backups. - -1. If installing with ACLs enabled, you will need to then delete the ACL secrets: - - ```shell-session - $ kubectl get secrets --field-selector="type=Opaque" | grep consul - consul-acl-replication-acl-token Opaque 1 41m - consul-bootstrap-acl-token Opaque 1 41m - consul-client-acl-token Opaque 1 41m - consul-connect-inject-acl-token Opaque 1 37m - consul-controller-acl-token Opaque 1 37m - consul-federation Opaque 4 41m - consul-mesh-gateway-acl-token Opaque 1 41m - ``` - -1. Ensure that the secrets you're about to delete are all created by Consul and not - created by another user with the word `consul`. - - ```shell-session - $ kubectl get secrets --field-selector="type=Opaque" | grep consul | awk '{print $1}' | xargs kubectl delete secret - secret "consul-acl-replication-acl-token" deleted - secret "consul-bootstrap-acl-token" deleted - secret "consul-client-acl-token" deleted - secret "consul-connect-inject-acl-token" deleted - secret "consul-controller-acl-token" deleted - secret "consul-federation" deleted - secret "consul-mesh-gateway-acl-token" deleted - secret "consul-gossip-encryption-key" deleted - ``` - -1. If installing with `tls.enabled` then, run the following commands to delete the `ServiceAccount` left behind: - - ```shell-session - $ kubectl get serviceaccount consul-tls-init - NAME SECRETS AGE - consul-tls-init 1 47m - ``` - - ```shell-session - $ kubectl delete serviceaccount consul-tls-init - serviceaccount "consul-tls-init" deleted - ``` - -1. (Optional) Delete the namespace (i.e. `consul` in the following example) that you have dedicated for installing Consul on Kubernetes. - - ```shell-session - $ kubectl delete ns consul - ``` diff --git a/website/content/docs/k8s/platforms/self-hosted-kubernetes.mdx b/website/content/docs/k8s/platforms/self-hosted-kubernetes.mdx deleted file mode 100644 index e5f1f0031ffc..000000000000 --- a/website/content/docs/k8s/platforms/self-hosted-kubernetes.mdx +++ /dev/null @@ -1,52 +0,0 @@ ---- -layout: docs -page_title: Install Consul on Self-Hosted Kubernetes Clusters -description: >- - The process for installing Consul on Kubernetes is the same as installing it on cloud-hosted k8s platforms, but requires additional configuration. Learn how to pre-define Persistent Volume Claims (PVCs) and a default storage class for server agents. ---- - -# Install Consul on Self-Hosted Kubernetes Clusters - -Except for creating persistent volumes and ensuring there is a storage class -configured (see below), installing Consul on your -self-hosted Kubernetes cluster is the same process as installing Consul on a -cloud-hosted Kubernetes cluster. 
See the [Installation Overview](/consul/docs/k8s/installation/install) -for install instructions. - -## Predefined Persistent Volume Claims (PVCs) - -If running a self-hosted Kubernetes installation, you may need to pre-create -the persistent volumes for the stateful set that the Consul servers run in. - -The only way to use a pre-created PVC is to name them in the format Kubernetes expects: - -```text -data---consul-server- -``` - -The Kubernetes namespace you are installing into, Helm release name, and ordinal -must match between your Consul servers and your pre-created PVCs. You only -need as many PVCs as you have Consul servers. For example, given a Kubernetes -namespace of "vault," a release name of "consul," and 5 servers, you would need -to create PVCs with the following names: - -```text -data-vault-consul-consul-server-0 -data-vault-consul-consul-server-1 -data-vault-consul-consul-server-2 -data-vault-consul-consul-server-3 -data-vault-consul-consul-server-4 -``` - -## Storage Class - -Your Kubernetes installation must either have a default storage class specified -(see https://kubernetes.io/docs/concepts/storage/storage-classes/ and https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/) -or you must specify the storage class for the Consul servers: - -```yaml -server: - storageClass: your-class -``` - -See the [Helm reference](/consul/docs/k8s/helm#v-server-storageclass) for that setting for more information. diff --git a/website/content/docs/k8s/upgrade/index.mdx b/website/content/docs/k8s/upgrade/index.mdx deleted file mode 100644 index 690ed380f36e..000000000000 --- a/website/content/docs/k8s/upgrade/index.mdx +++ /dev/null @@ -1,199 +0,0 @@ ---- -layout: docs -page_title: Upgrading Consul on Kubernetes Components -description: >- - Consul on Kubernetes relies on packages and binaries that have individual upgrade requirements. Learn how to update Helm configurations, Helm versions, Consul versions, and Consul agents, as well as how to determine what will change and its impact on your service mesh. ---- - -# Upgrading Consul on Kubernetes components - -This topic describes considerations and strategies for upgrading Consul deployments running on Kubernetes clusters. In addition to upgrading the version of Consul, you may need to update your Helm chart or the release version of the Helm chart. - -## Version-specific upgrade requirements - -As of Consul v1.14.0 and the corresponding Helm chart version v1.0.0, Kubernetes deployments use [Consul Dataplane](/consul/docs/connect/dataplane) instead of client agents. If you upgrade Consul from a version that uses client agents to a version that uses dataplanes, you must follow specific steps to update your Helm chart and remove client agents from the existing deployment. Refer to [Upgrading to Consul Dataplane](/consul/docs/k8s/upgrade#upgrading-to-consul-dataplane) for more information. - -The v1.0.0 release of the Consul on Kubernetes Helm chart also introduced a change to the [`externalServers[].hosts` parameter](/consul/docs/k8s/helm#v-externalservers-hosts). Previously, you were able to enter a provider lookup as a string in this field. Now, you must include `exec=` at the start of a string containing a provider lookup. Otherwise, the string is treated as a DNS name. Refer to the [`go-netaddrs`](https://github.com/hashicorp/go-netaddrs) library and command line tool for more information. 
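The following sketch shows the distinction in a Helm values file. The `externalServers.hosts` structure matches the parameter described above; the commented-out provider lookup command is a hypothetical placeholder rather than a verified `go-netaddrs` invocation:

```yaml
externalServers:
  enabled: true
  hosts:
    # Plain strings are treated as DNS names or IP addresses in chart v1.0.0 and later.
    - "consul.example.com"
    # A provider lookup must now start with "exec=". The command after "exec=" is a
    # placeholder; substitute your own address lookup command.
    # - "exec=/usr/local/bin/discover-consul-servers --provider=aws"
```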
- -If you configured your Consul agents to use [`ports.grpc_tls`](https://developer.hashicorp.com/consul/docs/agent/config/config-files#grpc_tls_port) instead of [`ports.grpc`](https://developer.hashicorp.com/consul/docs/agent/config/config-files#grpc_port) and you want to upgrade a multi-datacenter deployment with Consul servers running outside of the Kubernetes cluster to v1.0.0 or higher, set [`externalServers.tlsServerName`](/consul/docs/k8s/helm#v-externalservers-tlsservername) to `server..domain`. - -## Upgrade types - -We recommend updating Consul on Kubernetes when: - - - You change your Helm configuration - - A new Helm chart is released - - You want to upgrade your Consul version - -The upgrade procedure you use depends on the type of upgrade you are performing. - -### Helm configuration changes - -If you make a change to your Helm values file, you need to perform a `helm upgrade` -for those changes to take effect. - -1. Determine your current installed chart version. - - ```shell-session - $ helm list --filter consul --namespace consul - NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION - consul consul 2 2022-02-02 21:49:45.647678 -0800 PST deployed consul-0.40.0 1.11.2 - ``` - - In this example, version `0.40.0` (from `consul-k8s:0.40.0`) is being used, and Consul on Kubernetes is installed in the `consul` namespace. - -1. Perform a `helm upgrade` and make sure that you specify the current chart version: - - ```shell-session - $ helm upgrade consul hashicorp/consul --namespace consul --version 0.40.0 --values /path/to/my/values.yaml - ``` - -~> Note: If you don't pass the `--version` flag when upgrading a Helm chart, Helm uses the most up-to-date version of the chart in its local cache, which may result in an unintended version upgrade. - -### Upgrade Helm chart version - -You may wish to upgrade your Helm chart version to take advantage of new features and -bug fixes, or because you want to upgrade your Consul version and it requires a -certain Helm chart version. - -1. Update your local Helm repository cache: - - ```shell-session - $ helm repo update - ``` - -1. List all available versions. The console lists version `0.40.0` in the following example. - - ```shell-session hideClipboard - $ helm search repo hashicorp/consul --versions - NAME CHART VERSION APP VERSION DESCRIPTION - hashicorp/consul 0.40.0 1.11.2 Official HashiCorp Consul Chart - hashicorp/consul 0.39.0 1.11.1 Official HashiCorp Consul Chart - hashicorp/consul 0.38.0 1.10.4 Official HashiCorp Consul Chart - hashicorp/consul 0.37.0 1.10.4 Official HashiCorp Consul Chart - ... - ``` - - 1. To determine which version you have installed, issue the following command: - - ```shell-session - $ helm list --filter consul --namespace consul - NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION - consul consul 2 2022-02-02 21:49:45.647678 -0800 PST deployed consul-0.39.0 1.11.1 - ``` - - In this example, version `0.39.0` (from `consul-k8s:0.39.0`) is being used. - -1. Check the changelog for any breaking changes from that version and any versions in between: [CHANGELOG.md](https://github.com/hashicorp/consul-k8s/blob/main/CHANGELOG.md). - -1. Check the [Consul on Kubernetes Version Compatibility](/consul/docs/k8s/compatibility) matrix. Each 1.x version of - the chart corresponds to a specific 1.x version of Consul. You may need to upgrade your Consul version to match the - chart version you want to upgrade to. For example, chart version `1.3.1` must be used with Consul version `1.17.x`. 
- To set the Consul version, set `global.image` in your `values.yaml` file, for example: - - ``` - global: - image: "hashicorp/consul:1.17.5" - ``` - - You can leave the `global.image` value unset to use the latest supported version of Consul. - version automatically. - -1. Upgrade by performing a `helm upgrade` with the `--version` flag set to the version you want to upgrade to: - - ```shell-session - $ helm upgrade consul hashicorp/consul --namespace consul --version 0.40.0 --values /path/to/my/values.yaml - ``` - -### Upgrade Consul version - -If a new version of Consul is released, you need to perform a Helm upgrade -to update to the new version. Before you upgrade to a new version: - -1. Read the [Upgrading Consul](/consul/docs/upgrading) documentation. -1. Read any [specific instructions](/consul/docs/upgrading/upgrade-specific) for the version you want to upgrade - to, as well as the Consul [changelog](https://github.com/hashicorp/consul/blob/main/CHANGELOG.md) for that version. -1. Read our [Compatibility Matrix](/consul/docs/k8s/compatibility) to ensure - your current Helm chart version supports this Consul version. If it does not, - you may need to also upgrade your Helm chart version at the same time. -1. Set `global.image` in your `values.yaml` to the desired version: - - - - ```yaml - global: - image: "hashicorp/consul:1.11.1" - ``` - - - -1. Determine the version of your existing Helm installation. The following example shows that version `0.39.0` is installed. The version is derived from the `CHART` column. - - ```shell-session - $ helm list --filter consul --namespace consul - NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION - consul consul 2 2022-02-02 21:49:45.647678 -0800 PST deployed consul-0.39.0 1.11.1 - ``` - -1. Check the [Consul on Kubernetes Version Compatibility](/consul/docs/k8s/compatibility) matrix. Each 1.x version of -the chart corresponds to a specific 1.x version of Consul. You may need to upgrade your chart version to match the -Consul version you want to upgrade to. - -1. Perform a `helm upgrade`: - - ```shell-session - $ helm upgrade consul hashicorp/consul --namespace consul --version 0.39.0 --values /path/to/my/values.yaml - ``` - -~> Note: If you don't pass the `--version` flag when upgrading a Helm chart, Helm uses the most up-to-date version of the chart in its local cache, which may result in an unintended version upgrade. - -## Consul server restarts and upgrades - -Note that for versions of Consul on Kubernetes prior to `1.4.0`, we recommended using the `server.updatePartition` setting to gradually upgrade -Consul servers. Refer to an older version of the documentation for instructions on upgrading to a version of the chart older than `v1.4.0`. Use the version drop-down at the top of this page to select a version older than or equal to `v1.17.0`. Consul documentation versions correspond to the Consul version in your chart, not the chart version, that contains the instructions. - -## Upgrading to Consul Dataplane - -In earlier versions, Consul on Kubernetes used client agents in its deployments. As of v1.14.0, Consul uses [Consul Dataplane](/consul/docs/connect/dataplane/) in Kubernetes deployments instead of client agents. - -If you upgrade Consul from a version that uses client agents to a version the uses dataplanes, complete the following steps to upgrade your deployment safely and without downtime. - -1. If ACLs are enabled, you must first upgrade to consul-k8s 0.49.8 or above. 
These versions expose the setting `connectInject.prepareDataplanesUpgrade` - which is required for no-downtime upgrades when ACLs are enabled. - - Set `connectInject.prepareDataplanesUpgrade` to `true` and then perform the upgrade to 0.49.8 or above (whichever is the latest in the 0.49.x series): - - ```yaml filename="values.yaml" - connectInject: - prepareDataplanesUpgrade: true - ``` - -1. Consul Dataplane disables Consul clients by default, but during an upgrade you need to ensure Consul clients continue to run. Edit your Helm chart configuration and set the [`client.enabled`](/consul/docs/k8s/helm#v-client-enabled) field to `true` and specify an action for Consul to take during the upgrade process in the [`client.updateStrategy`](/consul/docs/k8s/helm#v-client-updatestrategy) field: - - ```yaml filename="values.yaml" - client: - enabled: true - updateStrategy: | - type: OnDelete - ``` - -1. Follow our [recommended procedures to upgrade servers](#upgrade-consul-servers) on Kubernetes deployments to upgrade Helm values for the new version of Consul. The latest version of consul-k8s components may be in a CrashLoopBackoff state during the server upgrade from versions <1.14.x until all Consul servers are on versions >=1.14.x. Components in CrashLoopBackoff will not negatively affect the cluster because older versioned components will still be operating. Once all servers have been fully upgraded, the latest consul-k8s components will automatically recover from CrashLoopBackoff and older component versions will be spun down. - -1. Run `kubectl rollout restart` to restart your service mesh applications. Restarting service mesh applications causes Kubernetes to re-inject them with the webhook for dataplanes. - -1. Restart all gateways in your service mesh. - -1. Now that all services and gateways are using Consul dataplanes, disable client agents in your Helm chart by deleting the `client` stanza or setting `client.enabled` to `false` and running a `consul-k8s` or Helm upgrade. - -1. If ACLs are enabled, outdated ACL tokens will persist as a result of the upgrade. You can manually delete the tokens to declutter your Consul environment. An example command for listing these tokens appears at the end of this page. - - Outdated connect-injector tokens have the following description: `token created via login: {"component":"connect-injector"}`. Do not delete - the tokens that have a description where `pod` is a key, for example `token created via login: {"component":"connect-injector","pod":"default/consul-connect-injector-576b65747c-9547x"}`. The dataplane-enabled connect inject pods use these tokens. - - You can also review the creation date for the tokens and only delete the injector tokens created before your upgrade, but do not delete all old tokens without considering if they are still in use. Some tokens, such as the server tokens, are still necessary. - -## Configuring TLS on an existing cluster - -If you already have a Consul cluster deployed on Kubernetes and -would like to turn on TLS for internal Consul communication, -refer to [Configuring TLS on an Existing Cluster](/consul/docs/k8s/operations/tls-on-existing-cluster).
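As referenced in the ACL token cleanup step above, the following sketch lists the accessor IDs of outdated connect-injector login tokens whose description exactly matches the pattern quoted in that step. The `jq` filter is an assumption for illustration; review the output before removing anything with `consul acl token delete -id <accessor-id>`:

```shell-session
$ consul acl token list -format=json | \
    jq -r '.[] | select(.Description == "token created via login: {\"component\":\"connect-injector\"}") | .AccessorID'
```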
diff --git a/website/content/docs/k8s/upgrade/upgrade-cli.mdx b/website/content/docs/k8s/upgrade/upgrade-cli.mdx deleted file mode 100644 index 36b30e4a3ff1..000000000000 --- a/website/content/docs/k8s/upgrade/upgrade-cli.mdx +++ /dev/null @@ -1,82 +0,0 @@ ---- -layout: docs -page_title: Update the Consul K8s CLI -description: >- - The Consul on Kubernetes CLI tool helps you schedule clusters without direct interaction with Helm or Consul’s CLI. Learn how to update the consul-k8s CLI tool to a new version. ---- - -# Update the Consul K8s CLI - -Consul K8s CLI is a tool for quickly installing and interacting with Consul on Kubernetes. Ensure that you are running the correct version of the CLI prior to upgrading your Consul on Kubernetes deployment, as the CLI and the control plane are version dependent. - -## Upgrade the CLI - -These instructions describe how to upgrade the current installed version of the CLI to the latest version. If you are looking to upgrade to a specific version, please follow [Install a specific version of the CLI](/consul/docs/k8s/installation/install-cli#install-a-specific-version-of-the-cli). - - - - - -The [Homebrew](https://brew.sh) package manager is required to complete the following installation instructions. The `brew upgrade` command assumes that Hashicorp `tap` repository has already been installed from a prior install. - -1. Upgrade the Consul K8s CLI with the latest `consul-k8s` package. - ```shell-session - $ brew upgrade consul-k8s - ``` - -1. (Optional) Issue the `consul-k8s version` command to verify the installation: - - ```shell-session - $ consul-k8s version - consul-k8s 0.39.0 - ``` - - - - - -1. Upgrade all the available packages on your system. - - ```shell-session - $ sudo apt-get update - ``` - -1. Upgrade the Consul K8s CLI with the latest `consul-k8s` package. - - ```shell-session - $ sudo apt-get --only-upgrade install consul-k8s - ``` - -1. (Optional) Issue the `consul-k8s version` command to verify the installation: - - ```shell-session - $ consul-k8s version - consul-k8s 0.39.0 - ``` - - - - - -1. Upgrade all the available packages on your system. - - ```shell-session - $ sudo yum update - ``` - -1. Upgrade the Consul K8s CLI with the latest `consul-k8s` package. - - ```shell-session - $ sudo yum update consul-k8s - ``` - -1. (Optional) Issue the `consul-k8s version` command to verify the installation: - - ```shell-session - $ consul-k8s version - consul-k8s 0.39.0 - ``` - - - - \ No newline at end of file diff --git a/website/content/docs/lambda.mdx b/website/content/docs/lambda.mdx new file mode 100644 index 000000000000..529ee1be7935 --- /dev/null +++ b/website/content/docs/lambda.mdx @@ -0,0 +1,36 @@ +--- +layout: docs +page_title: Consul Lambda overview +description: >- + Consul supports Amazon Web Services Lambda functions, which are event-driven programs and scripts. Learn about Consul's requirements for registering and invoking AWS Lambda functions in your service mesh. +--- + +# Consul Lambda overview + +You can configure Consul to allow services in your mesh to invoke Lambda functions, as well as allow Lambda functions to invoke services in your mesh. Lambda functions are programs or scripts that run in AWS Lambda. Refer to the [AWS Lambda website](https://aws.amazon.com/lambda/) for additional information. + +## Register Lambda functions into Consul + +The first step is to register your Lambda functions into Consul. 
We recommend using the [Lambda registrator module](https://github.com/hashicorp/terraform-aws-consul-lambda/tree/main/modules/lambda-registrator) to automatically synchronize Lambda functions into Consul. You can also manually register Lambda functions into Consul if you are unable to use the Lambda registrator. + +Refer to [Lambda Function Registration Requirements](/consul/docs/register/service/lambda) for additional information about registering Lambda functions into Consul. + +## Invoke Lambda functions from Consul service mesh + +After registering AWS Lambda functions, you can invoke Lambda functions from the Consul service mesh through terminating gateways (recommended) or directly from connected proxies. + +Refer to [Invoke Lambda Functions from Services](/consul/docs/connect/lambda/function) for details. + +## Invoke mesh services from Lambda function + +~> **Lambda-to-mesh functionality is currently in beta**: Functionality associated with beta features are subject to change. You should never use the beta release in secure environments or production scenarios. Features in beta may have performance issues, scaling issues, and limited support. + +You can also add the `consul-lambda-extension` plugin as a layer in your Lambda functions, which enables them to send requests to services in the mesh. The plugin starts a lightweight sidecar proxy that directs requests from Lambda functions to [mesh gateways](/consul/docs/connect/gateways#mesh-gateways). The gateways route traffic to the destination service to complete the request. + +![Invoke mesh service from Lambda function](/img/invoke-service-from-lambda-flow.svg) + +Refer to [Invoke Services from Lambda Functions](/consul/docs/connect/lambda/service) for additional information about registering Lambda functions into Consul. + +Consul mesh gateways are required to send requests from Lambda functions to mesh services. Refer to [Mesh Gateways](/consul/docs/east-west/mesh-gateway) for additional information. + +Note that L7 traffic management features are not supported. As a result, requests from Lambda functions ignore service routes and splitters. \ No newline at end of file diff --git a/website/content/docs/lambda/index.mdx b/website/content/docs/lambda/index.mdx deleted file mode 100644 index 50af9ce6f063..000000000000 --- a/website/content/docs/lambda/index.mdx +++ /dev/null @@ -1,36 +0,0 @@ ---- -layout: docs -page_title: Consul and AWS Lambda Overview -description: >- - Consul supports Amazon Web Services Lambda functions, which are event-driven programs and scripts. Learn about Consul's requirements for registering and invoking AWS Lambda functions in your service mesh. ---- - -# Consul and AWS Lambda Overview - -You can configure Consul to allow services in your mesh to invoke Lambda functions, as well as allow Lambda functions to invoke services in your mesh. Lambda functions are programs or scripts that run in AWS Lambda. Refer to the [AWS Lambda website](https://aws.amazon.com/lambda/) for additional information. - -## Register Lambda functions into Consul - -The first step is to register your Lambda functions into Consul. We recommend using the [Lambda registrator module](https://github.com/hashicorp/terraform-aws-consul-lambda/tree/main/modules/lambda-registrator) to automatically synchronize Lambda functions into Consul. You can also manually register Lambda functions into Consul if you are unable to use the Lambda registrator. 
- -Refer to [Lambda Function Registration Requirements](/consul/docs/lambda/registration) for additional information about registering Lambda functions into Consul. - -## Invoke Lambda functions from Consul service mesh - -After registering AWS Lambda functions, you can invoke Lambda functions from the Consul service mesh through terminating gateways (recommended) or directly from connected proxies. - -Refer to [Invoke Lambda Functions from Services](/consul/docs/lambda/invocation) for details. - -## Invoke mesh services from Lambda function - -~> **Lambda-to-mesh functionality is currently in beta**: Functionality associated with beta features are subject to change. You should never use the beta release in secure environments or production scenarios. Features in beta may have performance issues, scaling issues, and limited support. - -You can also add the `consul-lambda-extension` plugin as a layer in your Lambda functions, which enables them to send requests to services in the mesh. The plugin starts a lightweight sidecar proxy that directs requests from Lambda functions to [mesh gateways](/consul/docs/connect/gateways#mesh-gateways). The gateways route traffic to the destination service to complete the request. - -![Invoke mesh service from Lambda function](/img/invoke-service-from-lambda-flow.svg) - -Refer to [Invoke Services from Lambda Functions](/consul/docs/lambda/invoke-from-lambda) for additional information about registering Lambda functions into Consul. - -Consul mesh gateways are required to send requests from Lambda functions to mesh services. Refer to [Mesh Gateways](/consul/docs/connect/gateways/mesh-gateway) for additional information. - -Note that L7 traffic management features are not supported. As a result, requests from Lambda functions ignore service routes and splitters. diff --git a/website/content/docs/lambda/invocation.mdx b/website/content/docs/lambda/invocation.mdx deleted file mode 100644 index cdfe774ef649..000000000000 --- a/website/content/docs/lambda/invocation.mdx +++ /dev/null @@ -1,81 +0,0 @@ ---- -layout: docs -page_title: Invoke AWS Lambda Functions -description: >- - You can invoke an Amazon Web Services Lambda function in your Consul service mesh by configuring terminating gateways or sidecar proxies. Learn how to declare a registered function as an upstream and why we recommend using terminating gateways with Lambda. ---- - -# Invoke Lambda Functions from Mesh Services - -This topic describes how to invoke AWS Lambda functions from the Consul service mesh. - -## Overview - -You can invoke Lambda functions from the Consul service mesh through terminating gateways (recommended) or directly from service mesh proxies. - - -### Terminating Gateway - -We recommend invoking Lambda functions through terminating gateways. This method supports cross-datacenter communication, transparent -proxies, intentions, and all other Consul service mesh features. - -The terminating gateway must have [the appropriate IAM permissions](/consul/docs/lambda/registration#configure-iam-permissions-for-envoy) -to invoke the function. - -The following diagram shows the invocation procedure: - - - -![Terminating Gateway to Lambda](/img/terminating_gateway_to_lambda.svg) - - - -1. Make an HTTP request to the local service mesh proxy. -1. The service mesh proxy forwards the request to the terminating gateway. -1. The terminating gateway invokes the function. - -### Service Mesh Proxy - -You can invoke Lambda functions directly from a service's mesh sidecar proxy. 
-This method has the following limitations: -- Intentions are unsupported. Consul enforces intentions by validating the client certificates presented when a connection is received. Lambda does not support client certificate validation, which prevents Consul from supporting intentions using this method. -- Transparent proxies are unsupported. This is because Lambda services are not - registered to a proxy. - -This method is secure because AWS IAM permissions is required to invoke Lambda functions. Additionally, all communication is encrypted with Amazon TLS when invoking Lambda resources. - -The Envoy sidecar proxy must have the correct AWS IAM credentials to invoke the function. You can define the credentials in environment variables, EC2 metadata, or ECS task metadata. - -The following diagram shows the invocation procedure: - - - -![Service Mesh Proxy to Lambda](/img/connect_proxy_to_lambda.svg) - - - -1. Make an HTTP request to the local service mesh proxy. -2. The service mesh proxy invokes the Lambda. - -## Invoke a Lambda Function -Before you can invoke a Lambda function, register the service used to invoke -the Lambda function and the service running in Lambda with -Consul (refer to [registration](/consul/docs/lambda/registration) for -instructions). The service used to invoke the function must be deployed to the -service mesh. - -1. Update the invoking service to use the Lambda service as an upstream. In the following example, the `destination_name` for the invoking service (`api`) points to a Lambda service called `authentication`: - ```hcl - upstreams { - local_bind_port = 2345 - destination_name = "authentication" - } - ``` -1. Issue the `consul services register` command to store the configuration: - ```shell-session - $ consul services register api-sidecar-proxy.hcl - ``` -1. Call the upstream service to invoke the Lambda function. In the following example, the `api` service invokes the `authentication` service at `localhost:2345`: - ```shell-session - $ curl https://localhost:2345 - ``` diff --git a/website/content/docs/lambda/invoke-from-lambda.mdx b/website/content/docs/lambda/invoke-from-lambda.mdx deleted file mode 100644 index 0582cad20054..000000000000 --- a/website/content/docs/lambda/invoke-from-lambda.mdx +++ /dev/null @@ -1,273 +0,0 @@ ---- -layout: docs -page_title: Invoke Services from Lambda Functions -description: >- - This topic describes how to invoke services in the mesh from Lambda functions registered with Consul. ---- - -# Invoke Services from Lambda Functions - -This topic describes how to invoke services in the mesh from Lambda functions registered with Consul. - -~> **Lambda-to-mesh functionality is currently in beta**: Functionality associated with beta features are subject to change. You should never use the beta release in secure environments or production scenarios. Features in beta may have performance issues, scaling issues, and limited support. - -## Introduction - -The following steps describe the process: - -1. Deploy the destination service and mesh gateway. -1. Deploy the Lambda extension layer. -1. Deploy the Lambda registrator. -1. Write the Lambda function code. -1. Deploy the Lambda function. -1. Invoke the Lambda function. - -You must add the `consul-lambda-extension` extension as a Lambda layer to enable Lambda functions to send requests to mesh services. Refer to the [AWS Lambda documentation](https://docs.aws.amazon.com/lambda/latest/dg/invocation-layers.html) for instructions on how to add layers to your Lambda functions. 
- -The layer runs an external Lambda extension that starts a sidecar proxy. The proxy listens on one port for each upstream service and upgrades the outgoing connections to mTLS. It then proxies the requests through to [mesh gateways](/consul/docs/connect/gateways#mesh-gateways). - -## Prerequisites - -You must deploy the destination services and mesh gateway prior to deploying your Lambda service with the `consul-lambda-extension` layer. - -### Deploy the destination service - -There are several methods for deploying services to Consul service mesh. The following example configuration deploys a service named `static-server` with Consul on Kubernetes. - -```yaml -kind: Service -apiVersion: v1 -metadata: - # Specifies the service name in Consul. - name: static-server -spec: - selector: - app: static-server - ports: - - protocol: TCP - port: 80 - targetPort: 8080 ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: static-server ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: static-server -spec: - replicas: 1 - selector: - matchLabels: - app: static-server - template: - metadata: - name: static-server - labels: - app: static-server - annotations: - 'consul.hashicorp.com/connect-inject': 'true' - spec: - containers: - - name: static-server - image: hashicorp/http-echo:latest - args: - - -text="hello world" - - -listen=:8080 - ports: - - containerPort: 8080 - name: http - serviceAccountName: static-server -``` - -### Deploy the mesh gateway - -The mesh gateway must be running and registered to the Lambda function’s Consul datacenter. Refer to the following documentation and tutorials for instructions: - -- [Mesh Gateways between WAN-Federated Datacenters](/consul/docs/connect/gateways/mesh-gateway/service-to-service-traffic-wan-datacenters) -- [Mesh Gateways between Admin Partitions](/consul/docs/connect/gateways/mesh-gateway/service-to-service-traffic-partitions) -- [Establish cluster peering connections](/consul/docs/connect/cluster-peering/usage/establish-cluster-peering) -- [Connect Services Across Datacenters with Mesh Gateways](/consul/tutorials/developer-mesh/service-mesh-gateways) - -## Deploy the Lambda extension layer - -The `consul-lambda-extension` extension runs during the `Init` phase of the Lambda function execution. The extension retrieves the data that the Lambda registrator has been configured to store from AWS Parameter Store and creates a lightweight TCP proxy. The proxy creates a local listener for each upstream defined in the `CONSUL_SERVICE_UPSTREAMS` environment variable. - -The extension periodically retrieves the data from the AWS Parameter Store so that the function can process requests. When the Lambda function receives a shutdown event, the extension also stops. - -1. Download the `consul-lambda-extension` extension from [releases.hashicorp.com](https://releases.hashicorp.com/): - - ```shell-session - curl -o consul-lambda-extension__linux_amd64.zip https://releases.hashicorp.com/consul-lambda//consul-lambda-extension__linux_amd64.zip - ``` -1. Create the AWS Lambda layer in the same AWS region as the Lambda function. 
You can create the layer manually using the AWS CLI or AWS Console, but we recommend using Terraform: - - - - ```hcl - resource "aws_lambda_layer_version" "consul_lambda_extension" { - layer_name = "consul-lambda-extension" - filename = "consul-lambda-extension__linux_amd64.zip" - source_code_hash = filebase64sha256("consul-lambda-extension__linux_amd64.zip") - description = "Consul service mesh extension for AWS Lambda" - } - ``` - - - -## Deploy the Lambda registrator - -Configure and deploy the Lambda registrator. Refer to the [registrator configuration documentation](/consul/docs/lambda/registration/automate#configuration) and the [registrator deployment documentation](/consul/docs/lambda/registration/automate#deploy-the-lambda-registrator) for instructions. - -## Write the Lambda function code - -Refer to the [AWS Lambda documentation](https://docs.aws.amazon.com/lambda/latest/dg/getting-started.html) for instructions on how to write a Lambda function. In the following example, the function calls an upstream service on port `2345`: - - -```go -package main - -import ( - "context" - "io" - "fmt" - "net/http" - "github.com/aws/aws-lambda-go/lambda" -) - -type Response struct { - StatusCode int `json:"statusCode"` - Body string `json:"body"` -} - -func HandleRequest(ctx context.Context, _ interface{}) (Response, error) { - resp, err := http.Get("http://localhost:2345") - fmt.Println("Got response", resp) - if err != nil { - return Response{StatusCode: 500, Body: "Something bad happened"}, err - } - - if resp.StatusCode != 200 { - return Response{StatusCode: resp.StatusCode, Body: resp.Status}, err - } - - defer resp.Body.Close() - - b, err := io.ReadAll(resp.Body) - if err != nil { - return Response{StatusCode: 500, Body: "Error decoding body"}, err - } - - return Response{StatusCode: 200, Body: string(b)}, nil -} - -func main() { - lambda.Start(HandleRequest) -} -``` - -## Deploy the Lambda function - -1. Create and apply an IAM policy that allows the Lambda function’s role to fetch the Lambda extension’s data from the AWS Parameter Store. The following example, creates an IAM role for the Lambda function, creates an IAM policy with the necessary permissions and attaches the policy to the role: - - - - ```hcl - resource "aws_iam_role" "lambda" { - name = "lambda-role" - - assume_role_policy = < - -1. Configure and deploy the Lambda function. Refer to the [Lambda extension configuration](#lambda-extension-configuration) reference for information about all available options. There are several methods for deploying Lambda functions. The following example uses Terraform to deploy a function that can invoke the `static-server` upstream service using mTLS data stored under the `/lambda_extension_data` prefix: - - - - ```hcl - resource "aws_lambda_function" "example" { - … - function_name = "lambda" - role = aws_iam_role.lambda.arn - tags = { - "serverless.consul.hashicorp.com/v1alpha1/lambda/enabled" = "true" - } - variables = { - environment = { - CONSUL_MESH_GATEWAY_URI = var.mesh_gateway_http_addr - CONSUL_SERVICE_UPSTREAMS = "static-server:2345:dc1" - CONSUL_EXTENSION_DATA_PREFIX = "/lambda_extension_data" - } - } - layers = [aws_lambda_layer_version.consul_lambda_extension.arn] - ``` - - - -1. Run the `terraform apply` command and Consul automatically configures a service for the Lambda function. - -### Lambda extension configuration - -Define the following environment variables in your Lambda functions to configure the Lambda extension. 
The variables apply to each Lambda function in your environment: - -| Variable | Description | Default | -| --- | --- | --- | -| `CONSUL_MESH_GATEWAY_URI` | Specifies the URI where the mesh gateways that the plugin makes requests are running. The mesh gateway should be registered in the same Consul datacenter and partition that the service is running in. For optimal performance, this mesh gateway should run in the same AWS region. | none | -| `CONSUL_EXTENSION_DATA_PREFIX` | Specifies the prefix that the plugin pulls configuration data from. The data must be located in the following directory:
    `"${CONSUL_EXTENSION_DATA_PREFIX}/${CONSUL_SERVICE_PARTITION}/${CONSUL_SERVICE_NAMESPACE}/"` | none | -| `CONSUL_SERVICE_NAMESPACE` | Specifies the Consul namespace the service is registered into. | `default` | -| `CONSUL_SERVICE_PARTITION` | Specifies the Consul partition the service is registered into. | `default` | -| `CONSUL_REFRESH_FREQUENCY` | Specifies the amount of time the extension waits before re-pulling data from the Parameter Store. Use [Go `time.Duration`](https://pkg.go.dev/time@go1.19.1#ParseDuration) string values, for example, `"30s"`.
    The time is added to the duration configured in the Lambda registrator `sync_frequency_in_minutes` configuration. Refer to [Lambda registrator configuration options](/consul/docs/lambda/registration/automate#lambda-registrator-configuration-options). The combined configurations determine how stale the data may become. Lambda functions can run for up to 14 hours, so we recommend configuring a value that results in acceptable staleness for certificates. | `"5m"` | -| `CONSUL_SERVICE_UPSTREAMS` | Specifies a comma-separated list of upstream services that the Lambda function can call. Specify the value as an unlabelled annotation according to the [`consul.hashicorp.com/connect-service-upstreams` annotation format](/consul/docs/k8s/annotations-and-labels#consul-hashicorp-com-connect-service-upstreams) in Consul on Kubernetes. For example, `"[service-name]:[port]:[optional-datacenter]"` | none | - -## Invoke the Lambda function - -If _intentions_ are enabled in the Consul service mesh, you must create an intention that allows the Lambda function's Consul service to invoke all upstream services prior to invoking the Lambda function. Refer to [Service mesh intentions](/consul/docs/connect/intentions) for additional information. - -There are several ways to invoke Lambda functions. In the following example, the `aws lambda invoke` CLI command invokes the function: - -```shell-session -$ aws lambda invoke --function-name lambda /dev/stdout | cat -``` diff --git a/website/content/docs/lambda/registration/automate.mdx b/website/content/docs/lambda/registration/automate.mdx deleted file mode 100644 index 62f4aa700df9..000000000000 --- a/website/content/docs/lambda/registration/automate.mdx +++ /dev/null @@ -1,190 +0,0 @@ ---- -layout: docs -page_title: Automate Lambda function registration -description: >- - Register AWS Lambda functions with Consul service mesh using the Consul Lambda registrator. The Consul Lambda registrator automates Lambda function registration. ---- - -# Automate Lambda function registration - -This topic describes how to automate Lambda function registration using Consul's Lambda registrator module for Terraform. - -## Introduction - -You can deploy the Lambda registrator to your environment to automatically register and deregister Lambda functions with Consul based on the function's tags. Refer to the [AWS Lambda tags documentation](https://docs.aws.amazon.com/lambda/latest/dg/configuration-tags.html) to learn about tags. - -The registrator also stores and periodically updates information required to make mTLS requests to upstream services in the AWS parameter store. This enables Lambda functions to invoke mesh services. Refer to [Invoke Services from Lambda Functions](/consul/docs/lambda/invoke-from-lambda) for additional information. - -The registrator runs as a Lambda function that is invoked by AWS EventBridge. Refer to the [AWS EventBridge documentation](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-what-is.html) for additional information. - -EventBridge invokes the registrator using either [AWS CloudTrail](https://docs.aws.amazon.com/lambda/latest/dg/logging-using-cloudtrail.html) to synchronize with Consul in real-time or in scheduled intervals. - -CloudTrail events typically synchronize updates, registration, and deregistration within one minute, but events may occasionally be delayed. - -Scheduled events fully synchronize functions between Lambda and Consul to prevent entropy. By default, EventBridge triggers a full sync every five minutes. 
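As a quick illustration of the tag-driven workflow described above, the following sketch tags an existing function so that the registrator picks it up during the next CloudTrail event or scheduled sync. The `enabled` tag key matches the tag used in the Lambda function deployment example for the extension; the function ARN is a placeholder:

```shell-session
$ aws lambda tag-resource \
    --resource arn:aws:lambda:us-east-1:123456789012:function:example-function \
    --tags "serverless.consul.hashicorp.com/v1alpha1/lambda/enabled=true"
```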
- -The following diagram shows the flow of events from EventBridge into Consul: - - - -![Lambda Registrator Architecture](/img/lambda_registrator_architecture.svg) - - - -1. EventBridge invokes the Lambda registrator based on CloudTrail Lambda events or a schedule. -1. Lambda registrator determines how to reconcile Lambda's control plane state - with Consul state and ensures they are in sync by registering, updating, and - deregistering Lambda services. - -## Requirements - -Verify that your environment meets the requirements specified in [Lambda Function Registration Requirements](/consul/docs/lambda/registration). - -## Configuration - -The Lambda registrator stores data in the AWS Parameter Store. You can configure the type of data stored and how to store it. - -### Optional: Store the CA certificate in Parameter Store - -When Lambda registrator makes a request to Consul's [HTTP API](/consul/api-docs) over HTTPS and the Consul API is signed by a custom CA, Lambda registrator uses the CA certificate stored in AWS Parameter Store to verify the authenticity of the Consul API. Refer to the [Parameter Store documentation](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html) for additional information. - -You can apply the following Terraform configuration to store Consul's server CA in Parameter Store: - -```hcl -resource "aws_ssm_parameter" "ca-cert" { - name = "/lambda-registrator/ca-cert" - type = "SecureString" - value = -} -``` - -### Optional: Store the ACL token in Parameter Store - -If [Consul access control lists (ACLs)](/consul/docs/security/acl) are enabled, Lambda registrator must present an ACL token stored in Parameter Store to access resources. You can use the Consul CLI, API, or the Terraform provider to facilitate the ACL workflow. The following procedure describes how to create and store a token from the command line: - -1. Create an ACL policy that includes the following rule: - - - - ```hcl - service_prefix "" { - policy = "write" - } - ``` - - - -1. Run `consul acl policy create` to create the policy. The following example creates a policy called `lambda-registrator-policy` containing permissions specified in `rules.hcl`: - ```shell-session - $ consul acl policy create -name "lambda-registrator-policy" -rules @rules.hcl - ``` - -1. Issue the `consul acl token create` command to create the token. The following example creates a token linked to the `lambda-registrator-policy` policy: - ```shell-session - $ consul acl token create -policy-name "lambda-registrator-policy" - ``` - -1. Store the token in Parameter Store by applying the following Terraform: - ```hcl - resource "aws_ssm_parameter" "acl-token" { - name = "/lambda-registrator/acl-token" - type = "SecureString" - value = - } - ``` - -### Optional: Store extension data in Parameter Store - -If you want to enable Lambda functions to invoke services in the mesh, then you must specify a non-empty string in the `consul_extension_data_prefix` configuration. The string represents a path in the AWS Parameter Store so that the registrator can store data necessary for making mTLS requests to upstream services and update the data periodically. If the path does not exist it will be created. - -Lambda registrator encrypts and stores all data for Lambda functions in the AWS Parameter Store according to the [Lambda registrator configuration options](#lambda-registrator-configuration-options). 
The data is stored in the following directory as a `SecureString` type: - -`${consul_extension_data_prefix}/${}/${}/${}` - -The registrator also requires the following IAM permissions to access the parameter store: - -```json -{ - "Version": "2012-10-17", - "Statement": [ - { - "Effect": "Allow", - "Action": ["ssm:PutParameter","ssm:DeleteParameter"], - "Resource": "arn:aws:ssm:us-east-2:123456789012:parameter/${var.consul_extension_data_prefix}/*" - } - ] -} -``` - -### Lambda Registrator configuration options - -| Name | Description | -| - | - | -| `name` | Specifies the name of the Lambda function associated with the Lambda registrator. The name is also used to construct the Identity and Access Management (IAM) role and policy names used by the Lambda function. | -| `sync_frequency_in_minutes` | Specifies the interval in minutes that EventBridge uses to trigger a full synchronization. Default is `10`. | -| `timeout` | The maximum number of seconds Lambda registrator can run per invocation before timing out. | -| `consul_http_addr` | Specifies the address of the Consul API client. | -| `consul_datacenter` | Specifies the Consul datacenter to synchronize with AWS Lambda state data. By default, the Lambda registrator manages Lambda services for all Consul datacenters. When configured for a specific datacenter, Lambda registrator only manages Lambda services with a matching datacenter tag. Refer to [Supported tags](#supported-tags) for additional information. | -| `consul_extension_data_prefix` | Specifies the path prefix in the AWS Parameter Store under which the registrator manages mTLS data. If Lambda functions call mesh services, the value must be set to a non-empty string starting with `/`. | -| `consul_ca_cert_path` | Specifies the path to the CA certificate stored in the AWS Parameter Store. When Lambda registrator makes an HTTPS request to Consul's API and the Consul API is signed by a custom CA, Lambda registrator uses this CA certificate to verify the authenticity of the Consul API. At startup, Lambda registrator pulls the CA certificate at this path from Parameter Store, writes the certificate to the filesystem and stores the path of that file in `CONSUL_CACERT`. Also refer to [Optional: Store the CA Certificate in Parameter Store](#optional-store-the-ca-certificate-in-parameter-store).| -| `consul_http_token_path` | Specifies the path to the ACL token stored in AWS Parameter Store that Lambda registrator presents to access resources. This parameter is only required when ACLs are enabled for the Consul server. It is used to fetch an ACL token from Parameter Store and is stored in the `CONSUL_HTTP_TOKEN` environment variable. Also refer tp [Optional: Store the ACL Token in Parameter Store](#optional-store-the-acl-token-in-parameter-store).| -| `node_name` | The Consul node name that Lambdas are registered to. Defaults to `lambdas`. | -| `enterprise` | Determines if the Consul server at `consul_http_addr` is running Consul Community Edition or Consul Enterprise. | -| `partitions` | The partitions that Lambda registrator manages. | - -## Deploy the Lambda registrator - -1. Create a Terraform configuration and specify the `lambda-registrator` module. In the following example, the Lambda registrator is deployed to `https://consul.example.com:8501`. 
Refer to [the Lambda registrator module documentation](https://registry.terraform.io/modules/hashicorp/consul-lambda-registrator/aws/0.1.0-beta5/submodules/lambda-registrator) for additional usage information: - ```hcl - module "lambda-registrator" { - source = "hashicorp/consul-lambda/consul-lambda-registrator" - version = "x.y.z" - name = "consul-lambda-registrator" - consul_http_addr = "https://aecfe39d629774e348a9844439f5e3c1-1471365273.us-east-1.elb.amazonaws.com:8501" - ca_cert_path = aws_ssm_parameter.ca-cert.name - http_token_path = aws_ssm_parameter.acl-token.name - consul_extension_data_prefix = "/lambda_extension_data" - } - ``` - -1. Deploy Lambda registrator with `terraform apply`. - - -## Register Lambda functions - -Lambda registrator registers Lambda functions into Consul, regardless of how the functions are -deployed. The following procedure describes how to register Lambda functions with the Lambda registrator using Terraform, but you can also deploy a Lambda function with CloudFormation, the AWS user -interface, or Cloud Development Kit (CDK): - -1. Add the `aws_lambda_function` resource to your Terraform configuration and specify the name of the Lambda function. -1. Add a `tags` block to the resource and specify the tags you want to use to register the function (refer to [Supported tags](#supported-tags)). - -In the following example, the `example` Lambda function is registered using the `enabled`, `payload-passthrough`, and `invocation-mode` tags: - -```hcl -resource "aws_lambda_function" "example" { - … - function_name = "lambda" - tags = { - "serverless.consul.hashicorp.com/v1alpha1/lambda/enabled" = "true" - "serverless.consul.hashicorp.com/v1alpha1/lambda/payload-passthrough" = "true" - "serverless.consul.hashicorp.com/v1alpha1/lambda/invocation-mode" = "ASYNCHRONOUS" - } -} -``` - -### Supported tags - -The following tags are supported. The path prefix for all tags is `serverless.consul.hashicorp.com/v1alpha1/lambda`. For example, specify the following tag to enable the Lambda registrator to sync the Lambda with Consul: - -`serverless.consul.hashicorp.com/v1alpha1/lambda/enabled`. - -| Tag | Description | -| - | - | -| `/enabled` | Enables the Lambda registrator to sync the Lambda with Consul. | -| `/payload-passthrough` | Determines if the body Envoy receives is converted to JSON or directly passed to Lambda. This attribute is optional and defaults to `false`. | -| `/invocation-mode` | Specifies the [Lambda invocation mode](https://docs.aws.amazon.com/lambda/latest/operatorguide/invocation-modes.html) Consul uses to invoke the Lambda. The default is `SYNCHRONOUS`, but `ASYNCHRONOUS` invocations are also supported. | -| `/datacenter` | Specifies the Consul datacenter in which to register the service. The default is the datacenter configured for Lambda registrator. | -| `/namespace` | Specifies the Consul namespace the service is registered in. Default is `default` if `enterprise` is enabled. | -| `/partition` | Specifies the Consul partition the service is registered in. Defaults is `default` if `enterprise` is enabled. | -| `/aliases` | Specifies a `+`-separated string of Lambda aliases that are registered into Consul. For example, if set to `dev+staging+prod`, the `dev`, `staging`, and `prod` aliases of the Lambda function are registered into Consul. 
| diff --git a/website/content/docs/lambda/registration/index.mdx b/website/content/docs/lambda/registration/index.mdx deleted file mode 100644 index 16e16cde050b..000000000000 --- a/website/content/docs/lambda/registration/index.mdx +++ /dev/null @@ -1,79 +0,0 @@ ---- -layout: docs -page_title: Lambda Function Registration Requirements -description: >- - This topic provides an overview of how to register AWS Lambda functions with Consul service mesh and describes the requirements and prerequisites for registering Lambda functions with Consul. ---- -# Lambda Function Registration Requirements - -Verify that your environment meets the requirements and that you have completed the prerequisites before registering Lambda functions. - -## Introduction - -You can either manually register AWS Lambda functions with Consul or use the Lambda registrator to automatically synchronize Lambda state into Consul. We recommend using the Lambda registrator when possible so that you can keep the configuration entry up to date. The registrator automatically registers, reconfigures, and deregisters Lambdas based on the Lambda function's tags. - -## Requirements - -Consul v1.12.1 and later - -## Prerequisites - -Complete the following prerequisites prior to registering your Lambda functions. You only need to perform these steps once. - -### Configure IAM Permissions for Envoy - -The Envoy proxy that invokes Lambda must have the `lambda:InvokeFunction` AWS IAM -permissions. In the following example, the IAM policy -enables an IAM user or role to invoke the `example` Lambda function: - -```json -{ - "Version": "2012-10-17", - "Statement": [ - { - "Sid": "Invoke", - "Effect": "Allow", - "Action": [ - "lambda:InvokeFunction" - ], - "Resource": "arn:aws:lambda:us-east-1:123456789012:function:example" - } - ] -} -``` - -Define AWS IAM credentials in environment variables, EC2 metadata, or -ECS metadata. On [AWS EKS](https://aws.amazon.com/eks/), associate an IAM role with the proxy's `ServiceAccount`. Refer to the [AWS IAM roles for service accounts](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) documentation for instructions. - -### Mesh gateway - -A mesh gateway is required in the following scenarios: - -- Invoking mesh services from Lambda functions -- Invoking Lambda functions from a service deployed to a separate Consul datacenter - -Mesh gateways are optional for enabling services to invoke Lambda functions if they are in the same datacenter. - -The mesh gateway must be running and registered in the relevant Consul datacenters and admin partitions. Refer to the following documentation and tutorials for instructions on how to set up mesh gateways: - -- [Mesh gateway documentation](/consul/docs/connect/gateways#mesh-gateways) -- [Connect Services Across Datacenters with Mesh Gateways tutorial](/consul/tutorials/developer-mesh/service-mesh-gateways) -- [Secure Service Mesh Communication Across Kubernetes Clusters tutorial](/consul/tutorials/kubernetes/kubernetes-mesh-gateways?utm_source=docs%3Fin%3Dconsul%2Fkubernetes) - -When using admin partitions, you must add Lambda services to the `Services` -field of [the `exported-services` configuration -entry](/consul/docs/connect/config-entries/exported-services). - -### Optional: Terminating gateway - -A terminating gateway is an access point in a Consul datacenter to an external service or node. 
Terminating gateways are optional when invoking Lambda functions from a mesh service, but they do not play a role when invoking services from Lambda functions. - -Refer to the following documentation and tutorials for instructions on how to set up a terminating gateway: - -- [Terminating gateways documentation](/consul/docs/connect/gateways#terminating-gateways) -- [Terminating gateways on Kubernetes documentation](/consul/docs/k8s/connect/terminating-gateways) -- [Connect External Services to Consul With Terminating Gateways tutorial](/consul/tutorials/developer-mesh/terminating-gateways-connect-external-services) - -To register a Lambda service with a terminating gateway, add the service to the -`Services` field of the terminating gateway's `terminating-gateway` -configuration entry. diff --git a/website/content/docs/lambda/registration/manual.mdx b/website/content/docs/lambda/registration/manual.mdx deleted file mode 100644 index fbb07dba3642..000000000000 --- a/website/content/docs/lambda/registration/manual.mdx +++ /dev/null @@ -1,85 +0,0 @@ ---- -layout: docs -page_title: Manual Lambda Function Registration -description: >- - Register AWS Lambda functions with Consul service mesh using the Consul Lambda registrator. The Consul Lambda registrator automates Lambda function registration. ---- - -# Manual Lambda Function Registration - -This topic describes how to manually register Lambda functions into Consul. Refer to [Automate Lambda Function Registration](/consul/docs/lambda/registration/automate) for information about using the Lambda registrator to automate registration. - -## Requirements - -Verify that your environment meets the requirements specified in [Lambda Function Registration Requirements](/consul/docs/lambda/registration). - -To manually register Lambda functions so that mesh services can invoke them, you must create and apply a service registration configuration for the Lambda function and write a [service defaults configuration entry](/consul/docs/connect/config-entries/service-defaults) for the function. - -## Register a Lambda function - -You can manually register Lambda functions if you are unable to automate the process using the Lambda registrator. - -1. Create a configuration for registering the service. You can copy the following example and replace `` with your Consul service name for the Lambda function: - - - - ```json - { - "Node": "lambdas", - "SkipNodeUpdate": true, - "NodeMeta": { - "external-node": "true", - "external-probe": "true" - }, - "Service": { - "Service": "" - } - } - ``` - - - -1. Save the configuration to `lambda.json`. - -1. Send the configuration to the `catalog/register` API endpoint to register the service, for example: - ```shell-session - $ curl --request PUT --data @lambda.json localhost:8500/v1/catalog/register - ``` - -1. Create the `service-defaults` configuration entry and include the AWS tags used to invoke the Lambda function in the `EnvoyExtensions` configuration. Refer to [Supported `EnvoyExtension` arguments](#supported-envoyextension-arguments) for more information. - -The following example creates a `service-defaults` configuration entry named `lambda`: - - - - ```hcl - Kind = "service-defaults" - Name = "" - Protocol = "http" - EnvoyExtensions = [ - { - "Name": "builtin/aws/lambda", - "Arguments": { - "Region": "us-east-2", - "ARN": "", - "PayloadPassthrough": true - } - } - ] - ``` - - - -1. Issue the `consul config write` command to store the configuration entry. 
For example: - ```shell-session - $ consul config write lambda-service-defaults.hcl - ``` - -### Supported `EnvoyExtension` arguments - -The `lambda` Envoy extension supports the following arguments: - -- `ARN` (`string`) - Specifies the [AWS ARN](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) for the service's Lambda. `ARN` must be set to a valid Lambda function ARN. -- `Region` (`string`) - Specifies the AWS region the Lambda is running in. `Region` must be set to a valid AWS region where the Lambda function exists. -- `PayloadPassthrough` (`boolean: false`) - Determines if the body Envoy receives is converted to JSON or directly passed to Lambda. -- `InvocationMode` (`string: synchronous`) - Determines if Consul configures the Lambda to be invoked using the `synchronous` or `asynchronous` [invocation mode](https://docs.aws.amazon.com/lambda/latest/operatorguide/invocation-modes.html). diff --git a/website/content/docs/manage-traffic/cluster-peering/k8s.mdx b/website/content/docs/manage-traffic/cluster-peering/k8s.mdx new file mode 100644 index 000000000000..8ebcfe93df74 --- /dev/null +++ b/website/content/docs/manage-traffic/cluster-peering/k8s.mdx @@ -0,0 +1,75 @@ +--- +layout: docs +page_title: Manage L7 traffic with cluster peering on Kubernetes +description: >- + Combine service resolver configurations with splitter and router configurations to manage L7 traffic in Consul on Kubernetes deployments with cluster peering connections. Learn how to define dynamic traffic rules to target peers for redirects in k8s. +--- + +# Manage L7 traffic with cluster peering on Kubernetes + +This usage topic describes how to configure the `service-resolver` custom resource definition (CRD) to set up and manage L7 traffic between services that have an existing cluster peering connection in Consul on Kubernetes deployments. + +For general guidance for managing L7 traffic with cluster peering, refer to [Manage L7 traffic with cluster peering](/consul/docs/manage-traffic/cluster-peering/vm). + +## Service resolvers for redirects and failover + +When you use cluster peering to connect datacenters through their admin partitions, you can use [dynamic traffic management](/consul/docs/manage-traffic) to configure your service mesh so that services automatically forward traffic to services hosted on peer clusters. + +However, the `service-splitter` and `service-router` CRDs do not natively support directly targeting a service instance hosted on a peer. Before you can split or route traffic to a service on a peer, you must define the service hosted on the peer as an upstream service by configuring a failover in a `service-resolver` CRD. Then, you can set up a redirect in a second service resolver to interact with the peer service by name. + +For more information about formatting, updating, and managing configuration entries in Consul, refer to [How to use configuration entries](/consul/docs/fundamentals/config-entry). + +## Configure dynamic traffic between peers + +To configure L7 traffic management behavior in deployments with cluster peering connections, complete the following steps in order: + +1. Define the peer cluster as a failover target in the service resolver configuration. + + The following example updates the [`service-resolver` CRD](/consul/docs/reference/config-entry/service-resolver) in `cluster-01` so that Consul redirects traffic intended for the `frontend` service to a backup instance in peer `cluster-02` when it detects multiple connection failures. 
+ + ```yaml + apiVersion: consul.hashicorp.com/v1alpha1 + kind: ServiceResolver + metadata: + name: frontend + spec: + connectTimeout: 15s + failover: + '*': + targets: + - peer: 'cluster-02' + service: 'frontend' + namespace: 'default' + ``` + +1. Define the desired behavior in `service-splitter` or `service-router` CRD. + + The following example splits traffic evenly between `frontend` and `frontend-peer`: + + ```yaml + apiVersion: consul.hashicorp.com/v1alpha1 + kind: ServiceSplitter + metadata: + name: frontend + spec: + splits: + - weight: 50 + ## defaults to service with same name as configuration entry ("frontend") + - weight: 50 + service: frontend-peer + ``` + +1. Create a second `service-resolver` configuration entry on the local cluster that resolves the name of the peer service you used when splitting or routing the traffic. + + The following example uses the name `frontend-peer` to define a redirect targeting the `frontend` service on the peer `cluster-02`: + + ```yaml + apiVersion: consul.hashicorp.com/v1alpha1 + kind: ServiceResolver + metadata: + name: frontend-peer + spec: + redirect: + peer: 'cluster-02' + service: 'frontend' + ``` \ No newline at end of file diff --git a/website/content/docs/manage-traffic/cluster-peering/vm.mdx b/website/content/docs/manage-traffic/cluster-peering/vm.mdx new file mode 100644 index 000000000000..dcdc80c16b14 --- /dev/null +++ b/website/content/docs/manage-traffic/cluster-peering/vm.mdx @@ -0,0 +1,168 @@ +--- +layout: docs +page_title: Manage L7 traffic for VMs with cluster peering connections +description: >- + Combine service resolver configurations with splitter and router configurations to manage L7 traffic in Consul deployments with cluster peering connections. Learn how to define dynamic traffic rules to target peers for redirects and failover. +--- + +# Manage L7 traffic for VMs with cluster peering connections + +This usage topic describes how to configure and apply the [`service-resolver` configuration entry](/consul/docs/reference/config-entry/service-resolver) to set up redirects and failovers between services that have an existing cluster peering connection. + +For Kubernetes-specific guidance for managing L7 traffic with cluster peering, refer to [Manage L7 traffic with cluster peering on Kubernetes](/consul/docs/manage-traffic/cluster-peering/k8s). + +## Service resolvers for redirects and failover + +When you use cluster peering to connect datacenters through their admin partitions, you can use [dynamic traffic management](/consul/docs/manage-traffic) to configure your service mesh so that services automatically forward traffic to services hosted on peer clusters. + +However, the `service-splitter` and `service-router` configuration entry kinds do not natively support directly targeting a service instance hosted on a peer. Before you can split or route traffic to a service on a peer, you must define the service hosted on the peer as an upstream service by configuring a failover in the `service-resolver` configuration entry. Then, you can set up a redirect in a second service resolver to interact with the peer service by name. + +For more information about formatting, updating, and managing configuration entries in Consul, refer to [How to use configuration entries](/consul/docs/fundamentals/config-entry). + +## Configure dynamic traffic between peers + +To configure L7 traffic management behavior in deployments with cluster peering connections, complete the following steps in order: + +1. 
Define the peer cluster as a failover target in the service resolver configuration. + + The following examples update the [`service-resolver` configuration entry](/consul/docs/reference/config-entry/service-resolver) in `cluster-01` so that Consul redirects traffic intended for the `frontend` service to a backup instance in peer `cluster-02` when it detects multiple connection failures. + + + + ```hcl + Kind = "service-resolver" + Name = "frontend" + ConnectTimeout = "15s" + Failover = { + "*" = { + Targets = [ + {Peer = "cluster-02"} + ] + } + } + ``` + + ```json + { + "ConnectTimeout": "15s", + "Kind": "service-resolver", + "Name": "frontend", + "Failover": { + "*": { + "Targets": [ + { + "Peer": "cluster-02" + } + ] + } + }, + "CreateIndex": 250, + "ModifyIndex": 250 + } + ``` + + ```yaml + apiVersion: consul.hashicorp.com/v1alpha1 + kind: ServiceResolver + metadata: + name: frontend + spec: + connectTimeout: 15s + failover: + '*': + targets: + - peer: 'cluster-02' + service: 'frontend' + namespace: 'default' + ``` + + + +1. Define the desired behavior in `service-splitter` or `service-router` configuration entries. + + The following example splits traffic evenly between `frontend` services hosted on peers by defining the desired behavior locally: + + + + ```hcl + Kind = "service-splitter" + Name = "frontend" + Splits = [ + { + Weight = 50 + ## defaults to service with same name as configuration entry ("frontend") + }, + { + Weight = 50 + Service = "frontend-peer" + }, + ] + ``` + + ```json + { + "Kind": "service-splitter", + "Name": "frontend", + "Splits": [ + { + "Weight": 50 + }, + { + "Weight": 50, + "Service": "frontend-peer" + } + ] + } + ``` + + ```yaml + apiVersion: consul.hashicorp.com/v1alpha1 + kind: ServiceSplitter + metadata: + name: frontend + spec: + splits: + - weight: 50 + ## defaults to service with same name as configuration entry ("frontend") + - weight: 50 + service: frontend-peer + ``` + + + +1. Create a local `service-resolver` configuration entry named `frontend-peer` and define a redirect targeting the peer and its service: + + + + ```hcl + Kind = "service-resolver" + Name = "frontend-peer" + Redirect { + Service = frontend + Peer = "cluster-02" + } + ``` + + ```json + { + "Kind": "service-resolver", + "Name": "frontend-peer", + "Redirect": { + "Service": "frontend", + "Peer": "cluster-02" + } + } + ``` + + ```yaml + apiVersion: consul.hashicorp.com/v1alpha1 + kind: ServiceResolver + metadata: + name: frontend-peer + spec: + redirect: + peer: 'cluster-02' + service: 'frontend' + ``` + + \ No newline at end of file diff --git a/website/content/docs/connect/manage-traffic/discovery-chain.mdx b/website/content/docs/manage-traffic/discovery-chain.mdx similarity index 85% rename from website/content/docs/connect/manage-traffic/discovery-chain.mdx rename to website/content/docs/manage-traffic/discovery-chain.mdx index b7ee4e975ef8..b029583c6cc0 100644 --- a/website/content/docs/connect/manage-traffic/discovery-chain.mdx +++ b/website/content/docs/manage-traffic/discovery-chain.mdx @@ -1,22 +1,20 @@ --- layout: docs -page_title: Service Mesh Traffic Management - Discovery Chain +page_title: Configure discovery chain description: >- The discovery chain compiles service router, splitter, and resolver configuration entries to direct traffic to specific instances in a service mesh. Learn about compiled discovery chain results, including discovery graph nodes and targets. 
--- -# Discovery Chain for Service Mesh Traffic Management - --> **1.6.0+:** This feature is available in Consul versions 1.6.0 and newer. +# Configure discovery chain ~> This topic is part of a [low-level API](/consul/api-docs/discovery-chain) primarily targeted at developers building external [Consul service mesh proxy -integrations](/consul/docs/connect/proxies/integrate). +integrations](/consul/docs/connect/proxy/custom). The service discovery process can be modeled as a "discovery chain" which passes through three distinct stages: routing, splitting, and resolution. Each of these stages is controlled by a set of [configuration -entries](/consul/docs/agent/config-entries). By configuring different phases of +entries](/consul/docs/fundamentals/config-entry). By configuring different phases of the discovery chain a user can control how proxy upstreams are ultimately resolved to specific instances for load balancing. @@ -29,34 +27,34 @@ The configuration entries used in the discovery chain are designed to be simple to read and modify for narrowly tailored changes, but at discovery-time the various configuration entries interact in more complex ways. For example: -- If a [`service-resolver`](/consul/docs/connect/config-entries/service-resolver) +- If a [`service-resolver`](/consul/docs/reference/config-entry/service-resolver) is created with a [service - redirect](/consul/docs/connect/config-entries/service-resolver#service) defined, + redirect](/consul/docs/reference/config-entry/service-resolver#service) defined, then all references made to the original service in any other configuration entry is replaced with the redirect destination. -- If a [`service-resolver`](/consul/docs/connect/config-entries/service-resolver) +- If a [`service-resolver`](/consul/docs/reference/config-entry/service-resolver) is created with a [default - subset](/consul/docs/connect/config-entries/service-resolver#defaultsubset) + subset](/consul/docs/reference/config-entry/service-resolver#defaultsubset) defined then all references made to the original service in any other configuration entry that did not specify a subset will be replaced with the default. -- If a [`service-splitter`](/consul/docs/connect/config-entries/service-splitter) +- If a [`service-splitter`](/consul/docs/reference/config-entry/service-splitter) is created with a [service - split](/consul/docs/connect/config-entries/service-splitter#splits), and the target service has its + split](/consul/docs/reference/config-entry/service-splitter#splits), and the target service has its own `service-splitter` then the overall effect is flattened and only a single aggregate traffic split is ultimately configured in the proxy. -- [`service-resolver`](/consul/docs/connect/config-entries/service-resolver) +- [`service-resolver`](/consul/docs/reference/config-entry/service-resolver) redirect loops must be rejected as invalid. 
-- [`service-router`](/consul/docs/connect/config-entries/service-router) and - [`service-splitter`](/consul/docs/connect/config-entries/service-splitter) +- [`service-router`](/consul/docs/reference/config-entry/service-router) and + [`service-splitter`](/consul/docs/reference/config-entry/service-splitter) configuration entries require an L7 compatible protocol be set for the service via either a - [`service-defaults`](/consul/docs/connect/config-entries/service-defaults) or - [`proxy-defaults`](/consul/docs/connect/config-entries/proxy-defaults) config + [`service-defaults`](/consul/docs/reference/config-entry/service-defaults) or + [`proxy-defaults`](/consul/docs/reference/config-entry/proxy-defaults) config entry. Violations must be rejected as invalid. - When an [upstream @@ -156,7 +154,7 @@ A single node in the compiled discovery chain. - `Definition` `(ServiceRoute)` - Relevant portion of underlying `service-router` - [route](/consul/docs/connect/config-entries/service-router#routes). + [route](/consul/docs/reference/config-entry/service-router#routes). - `NextNode` `(string)` - The name of the next node in the chain in [`Nodes`](#nodes). @@ -164,7 +162,7 @@ A single node in the compiled discovery chain. splits. - `Weight` `(float32)` - Copy of underlying `service-splitter` - [`weight`](/consul/docs/connect/config-entries/service-splitter#weight) field. + [`weight`](/consul/docs/reference/config-entry/service-splitter#weight) field. - `NextNode` `(string)` - The name of the next node in the chain in [`Nodes`](#nodes). @@ -175,21 +173,21 @@ A single node in the compiled discovery chain. defined for this node and the default was synthesized. - `ConnectTimeout` `(duration)` - Copy of the underlying `service-resolver` - [`ConnectTimeout`](/consul/docs/connect/config-entries/service-resolver#connecttimeout) + [`ConnectTimeout`](/consul/docs/reference/config-entry/service-resolver#connecttimeout) field. If one is not defined the default of `5s` is returned. - `Target` `(string)` - The name of the target to use found in [`Targets`](#targets). - `Failover` `(DiscoveryFailover: )` - Compiled form of the underlying `service-resolver` - [`Failover`](/consul/docs/connect/config-entries/service-resolver#failover) + [`Failover`](/consul/docs/reference/config-entry/service-resolver#failover) definition to use for this request. - `Targets` `(array)` - List of targets found in [`Targets`](#targets) to failover to in order of preference. - `LoadBalancer` `(LoadBalancer: `) - Copy of the underlying `service-resolver` - [`LoadBalancer`](/consul/docs/connect/config-entries/service-resolver#loadbalancer) field. + [`LoadBalancer`](/consul/docs/reference/config-entry/service-resolver#loadbalancer) field. If a `service-splitter` splits between services with differing `LoadBalancer` configuration the first hash-based load balancing policy is copied. @@ -201,7 +199,7 @@ A single node in the compiled discovery chain. - `Service` `(string)` - The service to query when resolving a list of service instances. - `ServiceSubset` `(string: )` - The - [subset](/consul/docs/connect/config-entries/service-resolver#service-subsets) of + [subset](/consul/docs/reference/config-entry/service-resolver#service-subsets) of the service to resolve. - `Partition` `(string)` - The partition to use when resolving a list of service instances. @@ -212,7 +210,7 @@ A single node in the compiled discovery chain. 
- `Subset` `(ServiceResolverSubset)` - Copy of the underlying `service-resolver` - [`Subsets`](/consul/docs/connect/config-entries/service-resolver#subsets) + [`Subsets`](/consul/docs/reference/config-entry/service-resolver#subsets) definition for this target. - `Filter` `(string: "")` - The @@ -235,7 +233,7 @@ A single node in the compiled discovery chain. - `External` `(bool: false)` - True if this target is outside of this consul cluster. - `ConnectTimeout` `(duration)` - Copy of the underlying `service-resolver` - [`ConnectTimeout`](/consul/docs/connect/config-entries/service-resolver#connecttimeout) + [`ConnectTimeout`](/consul/docs/reference/config-entry/service-resolver#connecttimeout) field. If one is not defined the default of `5s` is returned. - `SNI` `(string)` - This value should be used as the @@ -245,4 +243,4 @@ A single node in the compiled discovery chain. - `Name` `(string)` - The unique name for this target for use when generating load balancer objects. This has a structure similar to [SNI](#sni), but will not be affected by SNI customizations such as - [`ExternalSNI`](/consul/docs/connect/config-entries/service-defaults#externalsni). + [`ExternalSNI`](/consul/docs/reference/config-entry/service-defaults#externalsni). diff --git a/website/content/docs/manage-traffic/failover/index.mdx b/website/content/docs/manage-traffic/failover/index.mdx new file mode 100644 index 000000000000..cbaa6be98c64 --- /dev/null +++ b/website/content/docs/manage-traffic/failover/index.mdx @@ -0,0 +1,54 @@ +--- +layout: docs +page_title: Failover configuration overview +description: Learn about failover strategies and service mesh features you can implement to route traffic if services become unhealthy or unreachable, including sameness groups, prepared queries, and service resolvers. +--- + +# Failover overview + +Services in your mesh may become unhealthy or unreachable for many reasons, but you can mitigate some of the effects associated with infrastructure issues by configuring Consul to automatically route traffic to and from failover service instances. This topic provides an overview of the failover strategies you can implement with Consul. + +## Service failover strategies in Consul + +There are several methods for implementing failover strategies between datacenters in Consul. You can adopt one of the following strategies based on your deployment configuration and network requirements: + +- Configure the `Failover` stanza in a service resolver configuration entry to explicitly define which services should failover and the targeting logic they should follow. +- Make a prepared query for each service that you can use to automate geo-failover. +- Create a sameness group to identify partitions with identical namespaces and service names to establish default failover targets. + +The following table compares these strategies in deployments with multiple datacenters to help you determine the best approach for your service: + +| | `Failover` stanza | Prepared
    query | Sameness groups | +| --- | :---: | :---: | :---: | +| **Supports WAN federation** | ✅ | ✅ | ❌ | +| **Supports cluster peering** | ✅ | ❌ | ✅ | +| **Supports locality-aware routing** | ✅ | ❌ | ✅ | +| **Multi-datacenter failover strength** | ✅ | ❌ | ✅ | +| **Multi-datacenter usage scenario** | Enables more granular logic for failover targeting. | Central policies that can automatically target the nearest datacenter. | Group size changes without edits to existing member configurations. | +| **Multi-datacenter usage scenario** | Configuring failover for a single service or service subset, especially for testing or debugging purposes | WAN-federated deployments where a primary datacenter is configured. Prepared queries are not replicated over peer connections. | Cluster peering deployments with consistently named services and namespaces. | + +Although cluster peering connections support the [`Failover` field of the prepared query request schema](/consul/api-docs/query#failover) when using Consul's service discovery features to [perform dynamic DNS queries](/consul/docs/discover/service/dynamic), they do not support prepared queries for service mesh failover scenarios. + +### Failover configurations for a service mesh with a single datacenter + +You can implement a service resolver configuration entry and specify a pool of failover service instances that other services can exchange messages with when the primary service becomes unhealthy or unreachable. We recommend adopting this strategy as a minimum baseline when implementing Consul service mesh and layering additional failover strategies to build resilience into your application network. + +Refer to the [`Failover` configuration ](/consul/docs/reference/config-entry/service-resolver#failover) for examples of how to configure failover services in the service resolver configuration entry on both VMs and Kubernetes deployments. + +### Failover configuration for WAN-federated datacenters + +If your network has multiple Consul datacenters that are WAN-federated, you can configure your applications to look for failover services with prepared queries. [Prepared queries](/consul/api-docs/) are configurations that enable you to define complex service discovery lookups. This strategy hinges on the secondary datacenter containing service instances that have the same name and residing in the same namespace as their counterparts in the primary datacenter. + +Refer to the [Automate geo-failover with prepared queries tutorial](/consul/tutorials/developer-discovery/automate-geo-failover) for additional information. + +### Failover configuration for peered clusters and partitions + +In networks with multiple datacenters or partitions that share a peer connection, each datacenter or partition functions as an independent unit. As a result, Consul does not correlate services that have the same name, even if they are in the same namespace. + +You can configure sameness groups for this type of network. Sameness groups allow you to define a group of admin partitions where identical services are deployed in identical namespaces. After you configure the sameness group, you can reference the `SamenessGroup` parameter in service resolver, exported service, and service intention configuration entries, enabling you to add or remove cluster peers from the group without making changes to every cluster peer every time. + +You can configure a sameness group so that it functions as the default for failover behavior. 
You can also reference sameness groups in a service resolver's `Failover` stanza or in a prepared query. Refer to [Failover with sameness groups](/consul/docs/manage-traffic/failover/sameness-group) for more information. + +## Locality-aware routing + +By default, Consul balances traffic to all healthy upstream instances in the cluster, even if the instances are in different network regions and zones. You can configure Consul to route requests to upstreams in the same region and zone, which reduces latency and transfer costs. Refer to [Route traffic to local upstreams](/consul/docs/manage-traffic/route-local) for additional information. \ No newline at end of file diff --git a/website/content/docs/manage-traffic/failover/k8s.mdx b/website/content/docs/manage-traffic/failover/k8s.mdx new file mode 100644 index 000000000000..711c9ad3b0ee --- /dev/null +++ b/website/content/docs/manage-traffic/failover/k8s.mdx @@ -0,0 +1,123 @@ +--- +layout: docs +page_title: Configure failover services on Kubernetes +description: Learn how to define failover services in Consul on Kubernetes when proxies are in transparent proxy mode. Consul can send traffic to backup service instances if a destination service becomes unhealthy or unresponsive. +--- + +# Configure failover services on Kubernetes + +This topic describes how to configure failover service instances in Consul on Kubernetes when proxies are in transparent proxy mode. If a service becomes unhealthy or unresponsive, Consul can use the service resolver configuration entry to send inbound requests to backup services. Service resolvers are part of the service mesh proxy upstream discovery chain. Refer to [Service mesh traffic management](/consul/docs/manage-traffic) for additional information. + +## Overview + +Complete the following steps to configure failover service instances in Consul on Kubernetes when proxies are in transparent proxy mode: + +- Create a service resolver configuration entry +- Create intentions that allow the downstream service to access the primary and failover service instances. +- Configure your application to call the discovery chain using the Consul DNS or KubeDNS. + +## Requirements + +- `consul-k8s` v1.2.0 or newer. +- Consul service mesh must be enabled. Refer to [How does Consul Service Mesh Work on Kubernetes](/consul/docs/k8s/connect). +- Proxies must be configured to run in transparent proxy mode. +- To query virtual DNS names, you must use Consul DNS. +- To query the discovery chain using KubeDNS, the service resolver must be in the same partition as the running service. + +### ACL requirements + +The default ACLs that the Consul Helm chart configures are suitable for most cases, but if you have more complex security policies for Consul API access, refer to the [ACL documentation](/consul/docs/secure/acl) for additional guidance. + +## Create a service resolver configuration entry + +Specify the target failover in the [`spec.failover.targets`](/consul/docs/reference/config-entry/service-resolver#failover-targets-service) field in the service resolver configuration entry. 
In the following example, the `api-beta` service is configured to failover to the `api` service in any service subset:
+
+```yaml
+apiVersion: consul.hashicorp.com/v1alpha1
+kind: ServiceResolver
+metadata:
+  name: api-beta
+spec:
+  failover:
+    '*':
+      targets:
+        - service: api
+```
+
+Refer to the [service resolver configuration entry reference](/consul/docs/reference/config-entry/service-resolver) documentation for information about all service resolver configurations.
+
+You can apply the configuration using the `kubectl apply` command:
+
+```shell-session
+$ kubectl apply -f api-beta-failover.yaml
+```
+
+## Create service intentions
+
+If intentions are not already defined, create and apply intentions that allow the appropriate downstream to access the target service and the failover service. In the following examples, the `frontend` service is allowed to send messages to the `api` service, which is allowed to send messages to the `api-beta` failover service.
+
+```yaml
+apiVersion: consul.hashicorp.com/v1alpha1
+kind: ServiceIntentions
+metadata:
+  name: api
+spec:
+  destination:
+    name: api
+  sources:
+    - name: frontend
+      action: allow
+---
+apiVersion: consul.hashicorp.com/v1alpha1
+kind: ServiceIntentions
+metadata:
+  name: api-beta
+spec:
+  destination:
+    name: api-beta
+  sources:
+    - name: frontend
+      action: allow
+```
+
+Refer to the [service intentions configuration entry reference](/consul/docs/reference/config-entry/service-intentions) for additional information about configuring service intentions.
+
+You can apply the configuration using the `kubectl apply` command:
+
+```shell-session
+$ kubectl apply -f frontend-api-api-beta-allow.yaml
+```
+
+## Configure your application to call the DNS
+
+Configure your application to contact the discovery chain in either the Consul DNS or the KubeDNS.
+
+### Consul DNS
+
+You can query the Consul DNS using the `.virtual.consul` lookup format. For Consul Enterprise, your query string may need to include the namespace, partition, or both. Refer to the [DNS documentation](/consul/docs/services/discovery/dns-static-lookups#service-virtual-ip-lookups) for details on building virtual service lookups.
+
+In the following example, the application queries the Consul catalog for `api-beta` over HTTP. By default, the lookup would query the `default` partition and `default` namespace if Consul Enterprise manages the network infrastructure:
+
+```text
+http://api-beta.virtual.consul/
+```
+
+### KubeDNS
+
+You can query the KubeDNS if the failover service is in the same Kubernetes cluster as the primary service. In the following example, the application queries KubeDNS for `api-beta` over HTTP:
+
+```text
+http://api-beta..svc.cluster.local
+```
+
+Note that you cannot use KubeDNS if a corresponding Kubernetes service and pod
+do not exist.
diff --git a/website/content/docs/manage-traffic/failover/prepared-query.mdx b/website/content/docs/manage-traffic/failover/prepared-query.mdx
new file mode 100644
index 000000000000..03566b56db6a
--- /dev/null
+++ b/website/content/docs/manage-traffic/failover/prepared-query.mdx
@@ -0,0 +1,200 @@
+---
+layout: docs
+page_title: Automate geo-failover with prepared queries
+description: Learn about failover strategies and service mesh features you can implement to route traffic if services become unhealthy or unreachable, including sameness groups, prepared queries, and service resolvers.
+--- + +# Automate geo-failover with prepared queries + + + + The content of this tutorial also applies to Consul clusters hosted on HashiCorp Cloud (HCP). + + + +Within a single datacenter, Consul provides automatic failover for services by omitting failed service instances from DNS lookups and by providing service health information in APIs. + +When there are no more instances of a service available in the local datacenter, it can be challenging to implement failover policies to other datacenters because typically that logic would need to be written into each application. Fortunately, Consul has a [prepared query](/consul/api-docs/query) API +that provides the capability to let users define failover policies in a centralized way. It's easy to expose these to applications using Consul's DNS interface and it's also available to applications that consume Consul's APIs. + +Failover policies are flexible and can be applied in a variety of ways including: + +- Fully static lists of alternate datacenters. +- Fully dynamic policies that make use of Consul's [network coordinate](/consul/docs/architecture/coordinates) subsystem. +- Automatically determine the next best datacenter to failover to based on network round trip time. + +Prepared queries can be made with policies specific to certain services and prepared query templates can allow one policy to apply to many, or even all services, with just a small number of templates. + +This tutorial shows how to build geo failover policies using prepared queries through a set of examples. It also includes information on how to use prepared +query templates to simplify the failover process. + +## Prerequisites + +To complete this tutorial, you must have a Consul WAN federation of three +Consul clusters. In this example, the local Consul datacenter is called `dc1`, +and it has been federated with two remote datacenters - `dc2` and `dc3`. For +more information on how to federate clusters, please read the +[Federate Multiple Datacenters tutorial](/consul/docs/east-west/wan-federation/create). + +## Prepared query introduction + +Prepared queries are objects that are defined at the datacenter level. They +only need to be created once and are stored on the Consul servers. This method is similar to the values in Consul's KV store. + +Once created, prepared queries can then be invoked by applications to perform the query and get the latest results. + +Here's an example request to create a prepared query: + +```shell-session +$ curl http://127.0.0.1:8500/v1/query \ + --request POST \ + --data @- << EOF +{ + "Name": "banking-app", + "Service": { + "Service": "banking-app", + "Tags": ["v1.2.3"] + } +} +EOF +``` + +This creates a prepared query called "banking-app" that does a lookup for all instances of the "banking-app" service with the tag "v1.2.3". This policy could be used to control which version of a "banking-app" applications should be using in a centralized way. By [updating this prepared query](/consul/api-docs/query#update-prepared-query) to look for the tag "v1.2.4" applications could start to find the newer version of the service without having to reconfigure anything. + +Applications can make use of this query in two ways. + +1. Since we gave the prepared query a name, they can simply do a DNS lookup for "banking-app.query.consul" instead of "banking-app.service.consul". Now with the prepared query, there's the additional filter policy working behind the scenes that the application doesn't have to know about. + +1. 
Queries can also be executed using the [prepared query execute API](/consul/api-docs/query#execute-prepared-query) for applications that integrate with Consul's APIs directly. + +## Failover policy types + +Using the techniques in this section you will develop prepared queries with failover policies where simply changing application configurations to look up "banking-app.query.consul" instead of "banking-app.service.consul" via DNS will result in automatic geo failover to the next closest [federated](/consul/docs/east-west/wan-federation/create) Consul datacenters, in order of increasing network round trip time. + +Failover is just another policy choice for a prepared query, it works in the same manner as the previous example and is similarly transparent to applications. The failover policy is configured using the `Failover` structure, which contains two fields, both of which are optional, and determine what happens if no healthy nodes are available in the local datacenter when the query is executed. + +- `NearestN` `(int: 0)` - Specifies that the query will be forwarded to up to `NearestN` other datacenters based on their estimated network round trip time using [network coordinates](/consul/docs/architecture/coordinates). + +- `Datacenters` `(array: nil)` - Specifies a fixed list of remote datacenters to forward the query to if there are no healthy nodes in the local datacenter. Datacenters are queried in the order given in the list. + +The following examples use those fields to implement different geo failover policies methods. + +### Static policy + +A static failover policy includes a fixed list of datacenters to contact once there are no healthy instances in the local datacenter. + +Here's the example from the introduction, expanded with a static failover policy: + +```shell-session +$ curl http://127.0.0.1:8500/v1/query \ + --request POST \ + --data @- << EOF +{ + "Name": "banking-app", + "Service": { + "Service": "banking-app", + "Tags": ["v1.2.3"], + "Failover": { + "Datacenters": ["dc2", "dc3"] + } + } +} +EOF +``` + +When this query is executed, such as with a DNS lookup to "banking-app.query.consul", the following actions will occur: + +1. Consul servers in the local datacenter will attempt to find healthy instances of the "banking-app" service with the required tag. +2. If none are available locally, the Consul servers will make an RPC request to the Consul servers in "dc2" to perform the query there. +3. If none are available in "dc2", then an RPC will be made to the Consul servers in "dc3" to perform the query there. +4. Finally an error will be returned if none of these datacenters had any instances available. + +### Dynamic policy + +In a complex federated environment with many Consul datacenters, it can be cumbersome to set static failover policies, so Consul offers a dynamic option based on Consul's [network coordinate](/consul/docs/architecture/coordinates) subsystem. + +Consul continuously maintains an estimate of the network round trip time from the local datacenter to the servers in other datacenters it is federated with. Each server uses the median round trip time from itself to the servers in the remote datacenter. This means that failover can simply try other remote datacenters in order of increasing network round trip time, and if datacenters come and go, or experience network issues, this order will adjust automatically. 
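+
+The order that `NearestN` follows comes from these continuously updated round trip estimates. As an optional sanity check before you rely on a dynamic policy, you can inspect the estimates with the `consul rtt` command; the node and datacenter names below are placeholders for servers in your own federation:
+
+```shell-session
+$ consul rtt -wan consul-server-1.dc1 consul-server-1.dc2
+```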
+
+Here's the example from the introduction, expanded with a dynamic failover policy:
+
+```shell-session
+$ curl http://127.0.0.1:8500/v1/query \
+    --request POST \
+    --data @- << EOF
+{
+  "Name": "banking-app",
+  "Service": {
+    "Service": "banking-app",
+    "Tags": ["v1.2.3"],
+    "Failover": {
+      "NearestN": 2
+    }
+  }
+}
+EOF
+```
+
+This query is resolved in a similar fashion to the previous example, except the choice of "dc2" or "dc3", or possibly some other datacenter, is made automatically.
+
+### Hybrid policy
+
+It is possible to combine `Datacenters` and `NearestN` in the same policy. The `NearestN` queries will be performed first, followed by the list given by `Datacenters`.
+
+```shell-session
+$ curl http://127.0.0.1:8500/v1/query \
+    --request POST \
+    --data @- << EOF
+{
+  "Name": "banking-app",
+  "Service": {
+    "Service": "banking-app",
+    "Tags": ["v1.2.3"],
+    "Failover": {
+      "NearestN": 2,
+      "Datacenters": ["dc2", "dc3"]
+    }
+  }
+}
+EOF
+```
+
+Note that a given datacenter will only be queried one time during a failover, even if it is selected by `NearestN` and also listed in `Datacenters`. This is useful for allowing a limited number of round trip-based attempts, followed by a static configuration for some known datacenter to failover to.
+
+### Prepared query template
+
+For datacenters with many services, it can be challenging to define a geo failover policy for each service. To relieve this challenge, Consul provides a [prepared query template](/consul/api-docs/query#prepared-query-templates) that allows one prepared query to apply to many, and even all, services.
+
+Templates can match on prefixes or use full regular expressions to determine which services they match.
+
+Below is an example request to create a prepared query template that applies a catch-all policy of dynamic geo failover to all services accessed by query lookup (`*.query.consul`). By specifying the `name_prefix_match` type and an empty name, this query template's policy will be applied to any name (`.query.consul`) that doesn't [match a higher-precedence query](/consul/api-docs/query#type).
+
+```shell-session
+$ curl http://127.0.0.1:8500/v1/query \
+    --request POST \
+    --data @- << EOF
+{
+  "Name": "",
+  "Template": {
+    "Type": "name_prefix_match"
+  },
+  "Service": {
+    "Service": "${name.full}",
+    "Failover": {
+      "NearestN": 2
+    }
+  }
+}
+EOF
+```
+
+If multiple queries are registered, the most specific one will be selected, so it's possible to have a template like this as a catch-all, and then apply more specific policies to certain services.
+
+With this one prepared query template in place, simply changing application configurations to look up `banking-app.query.consul` instead of `banking-app.service.consul` via DNS will result in automatic geo failover to the next closest federated Consul datacenters, in order of increasing network round trip time.
+
+## Next steps
+
+In this tutorial, you learned how to use prepared queries for failover when integrating Consul with other applications. You can now configure your policies to failover to the nearest federated datacenter or to a list of secondary datacenters. You can also create a prepared query template which will help you reduce some complexity of creating policies for each individual service.
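+
+To confirm that the catch-all template resolves services the way you expect, you can look up any registered service through Consul's DNS interface from one of your agents. This is a minimal sketch that assumes a service named `banking-app` is registered and that the agent's DNS server listens on the default port `8600`:
+
+```shell-session
+$ dig @127.0.0.1 -p 8600 banking-app.query.consul SRV
+```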
diff --git a/website/content/docs/manage-traffic/failover/sameness-group.mdx b/website/content/docs/manage-traffic/failover/sameness-group.mdx
new file mode 100644
index 000000000000..ec6cc4a41739
--- /dev/null
+++ b/website/content/docs/manage-traffic/failover/sameness-group.mdx
@@ -0,0 +1,203 @@
---
layout: docs
page_title: Failover with sameness groups
description: You can configure sameness groups so that when a service instance fails, traffic automatically routes to an identical service instance. Learn how to use sameness groups to create a failover strategy for deployments with multiple datacenters and cluster peering connections.
---

# Failover with sameness groups

This page describes how to use sameness groups to automatically redirect service traffic to healthy instances in failover scenarios. Sameness groups are user-defined sets of Consul admin partitions with identical registered services. These admin partitions typically belong to Consul datacenters in different cloud regions, which enables sameness groups to participate in several service failover configuration strategies.

To create a sameness group and configure each Consul datacenter to allow traffic from other members of the group, refer to [create sameness groups](/consul/docs/multi-tenant/sameness-group/vm).

## Failover strategies

You can edit a sameness group configuration entry so that all services fail over to healthy instances on other members of a sameness group by default. You can also reference the sameness group in other configuration entries to enact other failover strategies for your network.

You can establish a failover strategy by configuring sameness group behavior in the following locations:

- Sameness group configuration entry
- Service resolver configuration entry
- Prepared queries

You can also configure service instances to route to upstreams in the same availability region during a failover. Refer to [Route traffic to local upstreams](/consul/docs/manage-traffic/route-local) for additional information.

### Failover with a sameness group configuration entry

To define failover behavior using a sameness group configuration entry, set `DefaultForFailover=true` and then apply the updated configuration to all clusters that are members of the group.

In the following example configuration entry, datacenter `dc1` has two partitions, `partition-1` and `partition-2`. A second datacenter, `dc2`, has a single partition named `partition-1`. All three partitions have identically configured services and established cluster peering connections. The configuration entry defines a sameness group, `example-sg`, in `dc1`. When redirecting traffic during a failover scenario, Consul attempts to find a healthy instance in a specific order: `dc1-partition-1`, then `dc1-partition-2`, then `dc2-partition-1`.
+ + + + + +```hcl +Kind = "sameness-group" +Name = "example-sg" +Partition = "partition-1" +DefaultForFailover = true +Members = [ + {Partition = "partition-1"}, + {Partition = "partition-2"}, + {Peer = "dc2-partition-1"} + ] +``` + + + + + +``` +{ + "Kind": "sameness-group", + "Name": "example-sg", + "Partition": "partition-1", + "DefaultForFailover": true, + "Members": [ + { + "Partition": "partition-1" + }, + { + "Partition": "partition-2" + }, + { + "Peer": "dc2-partition-1" + } + ] +} +``` + + + + + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: SamenessGroup +metadata: + name: example-sg +spec: + defaultForFailover: true + members: + - partition: partition-1 + - partition: partition-2 + - peer: dc2-partition-1 +``` + + + + +When a sameness group is configured as the failover default, sameness group failover takes place when a service resolver configuration entry does not implement more specific failover behavior. When a service resolver is defined for an upstream, it is used instead of the sameness group for default failover behavior. + +All services registered in the admin partition must failover to another member of the sameness group. You cannot choose subsets of services to use the sameness group as the failover default. If groups do not have identical services, or if a service is registered to some group members but not all members, this failover strategy may produce errors. + +For more information about specifying sameness group members and failover, refer to [sameness group configuration entry reference](/consul/docs/reference/config-entry/sameness-group). + +### Failover with a service resolver configuration entry + +When the sameness group is not configured as the failover default, you can reference the sameness group in a service resolver configuration entry. This approach enables you to use the sameness group as the failover destination for some services registered to group members. + +In the following example configuration, a database service called `db` is filtered into subsets based on a user-defined `version` tag. Services with a `v1` tag belong to the default subset, which uses the `product-group` sameness group for its failover. Instances of `db` with the `v2` tag, meanwhile, fail over to a service named `canary-db`. + + + + + +```hcl +Kind = "service-resolver" +Name = "db" +DefaultSubset = "v1" +Subsets = { + v1 = { + Filter = "Service.Meta.version == v1" + } + v2 = { + Filter = "Service.Meta.version == v2" + } +} +Failover { + v1 = { + SamenessGroup = "product-group" + } + v2 = { + Service = "canary-db" + } +} +``` + + + + + +``` +{ + "Kind": "service-resolver", + "Name": "db", + "DefaultSubset": "v1", + "Subsets": { + "v1": { + "Filter": "Service.Meta.version == v1" + }, + "v2": { + "Filter": "Service.Meta.version == v2" + } + }, + "Failover": { + "v1": { + "SamenessGroup": "product-group" + }, + "v2": { + "Service": "canary-db" + } + } +} +``` + + + + + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: ServiceResolver +metadata: + name: db +spec: + defaultSubset: v1 + subsets: + v1: + filter: 'Service.Meta.version == v1' + v2: + filter: 'Service.Meta.version == v2' + failover: + v1: + samenessGroup: "product-group" + v2: + service: "canary-db" +``` + + + + +For more information, including additional examples, refer to [service resolver configuration entry reference](/consul/docs/reference/config-entry/service-resolver). 
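On VMs, you can apply the HCL or JSON configuration entry with the [`consul config write`](/consul/commands/config/write) command; on Kubernetes, apply the custom resource with `kubectl apply`. The following sketch assumes the HCL example above is saved to a hypothetical file named `db-resolver.hcl`:

```shell-session
$ consul config write db-resolver.hcl
```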
+ +### Failover with a prepared query + +You can specify a sameness group in a prepared query to return service instances from the first member that has healthy instances. When a member does not have healthy instances, Consul queries group members in the order defined in the list of members in the sameness group configuration entry. + +The following example demonstrates a prepared query that can be referenced with the name `query-1`. It queries members of the sameness group for healthy instances of `db` that are registered to the `store-ns` namespace on partitions named `partition-1`. + +```json +{ + "Name": "query-1", + "Service": { + "Service": "db", + "SamenessGroup": "product-group", + "Partition": "partition-1", + "Namespace": "store-ns" + } +} +``` + +In prepared queries, the sameness group is mutually exclusive with the [`Failover`](/consul/api-docs/query#failover) field because the sameness group includes failover targets based on the sameness group’s members. For more information about using prepared queries, refer to [Enable dynamic DNS queries](/consul/docs/discover/service/dynamic). \ No newline at end of file diff --git a/website/content/docs/manage-traffic/index.mdx b/website/content/docs/manage-traffic/index.mdx new file mode 100644 index 000000000000..4483a5f3fc1b --- /dev/null +++ b/website/content/docs/manage-traffic/index.mdx @@ -0,0 +1,82 @@ +--- +layout: docs +page_title: Manage application traffic +description: >- + Consul can route, split, and resolve Layer 7 traffic in a service mesh to support workflows like canary testing and blue/green deployments. Learn about the three configuration entry kinds that define L7 traffic management behavior in Consul. +--- + +# Manage application traffic + +This topic provides overview information about the application layer traffic management capabilities available in Consul service mesh. These capabilities are also referred to as *Layer 7* or *L7 traffic management*. + +## Introduction + +Consul service mesh allows you to divide application layer traffic between different subsets of service instances. You can leverage L7 traffic management capabilities to perform complex processes, such as configuring backup services for failover scenarios, canary and A-B testing, blue-green deployments, and soft multi-tenancy in which production, QA, and staging environments share compute resources. L7 traffic management with Consul service mesh allows you to designate groups of service instances in the Consul catalog smaller than all instances of single service and configure when that subset should receive traffic. + +You cannot manage L7 traffic with the [built-in proxy](/consul/docs/reference/proxy/built-in), +[native proxies](/consul/docs/automate/native), or some [Envoy proxy escape hatches](/consul/docs/reference/proxy/envoy#escape-hatch-overrides). + +## Discovery chain + +Consul uses a series of stages to discover service mesh proxy upstreams. Each stage represents different ways of managing L7 traffic. They are referred to as the _discovery chain_: + +- routing +- splitting +- resolution + +For information about integrating service mesh proxy upstream discovery using the discovery chain, refer to [Discovery Chain for Service Mesh Traffic Management](/consul/docs/manage-traffic/discovery-chain). 
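To see the discovery chain that Consul compiles for a specific service, you can read it back from the HTTP API. This is a low-level debugging aid; the sketch below assumes a local agent listening on the default HTTP port and a registered service named `web` (a hypothetical name):

```shell-session
$ curl http://127.0.0.1:8500/v1/discovery-chain/web
```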
The Consul UI shows discovery chain stages in the **Routing** tab of the **Services** page:

![screenshot of L7 traffic visualization in the UI](/img/l7-routing/full.png)

You can define how Consul manages each stage of the discovery chain in a Consul _configuration entry_. [Configuration entries](/consul/docs/fundamentals/config-entry) modify the default behavior of the Consul service mesh.

When managing L7 traffic with cluster peering, there are additional configuration requirements to resolve peers in the discovery chain. Refer to [Cluster peering L7 traffic management](/consul/docs/manage-traffic/cluster-peering/vm) for more information.

### Routing

The first stage of the discovery chain is the service router. Routers intercept traffic according to a set of L7 attributes, such as path prefixes and HTTP headers, and route the traffic to a different service or service subset.

Apply a [service router configuration entry](/consul/docs/reference/config-entry/service-router) to implement a router. Service router configuration entries can only reference service splitter or service resolver configuration entries.

![screenshot of service router in the UI](/img/l7-routing/Router.png)

### Splitting

The second stage of the discovery chain is the service splitter. Service splitters split incoming requests and route them to different services or service subsets. Splitters enable staged canary rollouts, versioned releases, and similar use cases.

Apply a [service splitter configuration entry](/consul/docs/reference/config-entry/service-splitter) to implement a splitter. Service splitter configuration entries can only reference other service splitters or service resolver configuration entries.

![screenshot of service splitter in the UI](/img/l7-routing/Splitter.png)

If multiple service splitters are chained, Consul flattens the splits so that they behave as a single service splitter. In the following example, `splitter[B]` references `splitter[A]`:

```text
splitter[A]: A_v1=50%, A_v2=50%
splitter[B]: A=50%, B=50%
---------------------
splitter[effective_B]: A_v1=25%, A_v2=25%, B=50%
```

### Resolution

The third stage of the discovery chain is the service resolver. Service resolvers specify which instances of a service satisfy discovery requests for the provided service name. Service resolvers enable several use cases, including:

- Designate failovers when service instances become unhealthy or unreachable.
- Configure service subsets based on DNS values.
- Route traffic to the latest version of a service.
- Route traffic to specific Consul datacenters.
- Create virtual services that route traffic to instances of the actual service in specific Consul datacenters.

Apply a [service resolver configuration entry](/consul/docs/reference/config-entry/service-resolver) to implement a resolver. Service resolver configuration entries can only reference other service resolvers.

![screenshot of service resolver in the UI](/img/l7-routing/Resolver.png)

If no resolver is configured for a service, Consul sends all traffic to healthy instances of the service with the same name in the current datacenter or specified namespace and ends the discovery chain.

Service resolver configuration entries can also process network layer, also called Layer 4 (L4), traffic. As a result, you can implement service resolvers for services that communicate over `tcp` and other non-HTTP protocols.
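As an illustration, the following minimal service resolver sketch defines two subsets for a hypothetical `api` service based on a `version` metadata field and makes `v1` the default target:

```hcl
Kind          = "service-resolver"
Name          = "api"
DefaultSubset = "v1"
Subsets = {
  v1 = {
    Filter = "Service.Meta.version == v1"
  }
  v2 = {
    Filter = "Service.Meta.version == v2"
  }
}
```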
+ +## Locality-aware routing + +By default, Consul balances traffic to all healthy upstream instances in the cluster, even if the instances are in different network regions and zones. You can configure Consul to route requests to upstreams in the same region and zone, which reduces latency and transfer costs. Refer to [Route traffic to local upstreams](/consul/docs/manage-traffic/route-local) for additional information. \ No newline at end of file diff --git a/website/content/docs/manage-traffic/k8s.mdx b/website/content/docs/manage-traffic/k8s.mdx new file mode 100644 index 000000000000..fb085ee5a057 --- /dev/null +++ b/website/content/docs/manage-traffic/k8s.mdx @@ -0,0 +1,75 @@ +--- +layout: docs +page_title: Manage traffic for peered clusters on Kubernetes +description: >- + Combine service resolver configurations with splitter and router configurations to manage L7 traffic in Consul on Kubernetes deployments with cluster peering connections. Learn how to define dynamic traffic rules to target peers for redirects in k8s. +--- + +# Manage traffic for peered clusters on Kubernetes + +This usage topic describes how to configure the `service-resolver` custom resource definition (CRD) to set up and manage L7 traffic between services that have an existing cluster peering connection in Consul on Kubernetes deployments. + +For general guidance for managing L7 traffic with cluster peering, refer to [Manage L7 traffic with cluster peering](/consul/docs/manage-traffic/cluster-peering/k8s). + +## Service resolvers for redirects and failover + +When you use cluster peering to connect datacenters through their admin partitions, you can use [dynamic traffic management](/consul/docs/manage-traffic) to configure your service mesh so that services automatically forward traffic to services hosted on peer clusters. + +However, the `service-splitter` and `service-router` CRDs do not natively support directly targeting a service instance hosted on a peer. Before you can split or route traffic to a service on a peer, you must define the service hosted on the peer as an upstream service by configuring a failover in a `service-resolver` CRD. Then, you can set up a redirect in a second service resolver to interact with the peer service by name. + +For more information about formatting, updating, and managing configuration entries in Consul, refer to [How to use configuration entries](/consul/docs/fundamentals/config-entry). + +## Configure dynamic traffic between peers + +To configure L7 traffic management behavior in deployments with cluster peering connections, complete the following steps in order: + +1. Define the peer cluster as a failover target in the service resolver configuration. + + The following example updates the [`service-resolver` CRD](/consul/docs/reference/config-entry/service-resolver) in `cluster-01` so that Consul redirects traffic intended for the `frontend` service to a backup instance in peer `cluster-02` when it detects multiple connection failures. + + ```yaml + apiVersion: consul.hashicorp.com/v1alpha1 + kind: ServiceResolver + metadata: + name: frontend + spec: + connectTimeout: 15s + failover: + '*': + targets: + - peer: 'cluster-02' + service: 'frontend' + namespace: 'default' + ``` + +1. Define the desired behavior in `service-splitter` or `service-router` CRD. 
+ + The following example splits traffic evenly between `frontend` and `frontend-peer`: + + ```yaml + apiVersion: consul.hashicorp.com/v1alpha1 + kind: ServiceSplitter + metadata: + name: frontend + spec: + splits: + - weight: 50 + ## defaults to service with same name as configuration entry ("frontend") + - weight: 50 + service: frontend-peer + ``` + +1. Create a second `service-resolver` configuration entry on the local cluster that resolves the name of the peer service you used when splitting or routing the traffic. + + The following example uses the name `frontend-peer` to define a redirect targeting the `frontend` service on the peer `cluster-02`: + + ```yaml + apiVersion: consul.hashicorp.com/v1alpha1 + kind: ServiceResolver + metadata: + name: frontend-peer + spec: + redirect: + peer: 'cluster-02' + service: 'frontend' + ``` diff --git a/website/content/docs/k8s/deployment-configurations/argo-rollouts-configuration.mdx b/website/content/docs/manage-traffic/progressive-rollouts/argo.mdx similarity index 100% rename from website/content/docs/k8s/deployment-configurations/argo-rollouts-configuration.mdx rename to website/content/docs/manage-traffic/progressive-rollouts/argo.mdx diff --git a/website/content/docs/manage-traffic/rate-limit.mdx b/website/content/docs/manage-traffic/rate-limit.mdx new file mode 100644 index 000000000000..5dcf0d571302 --- /dev/null +++ b/website/content/docs/manage-traffic/rate-limit.mdx @@ -0,0 +1,144 @@ +--- +layout: docs +page_title: Limit request rates to services in the mesh +description: Learn how to limit the rate of requests to services in a Consul service mesh. Rate limits on requests improves network resilience and availability. +--- + +# Limit request rates to services in the mesh + +This topic describes how to configure Consul to limit the request rate to services in the mesh. + + This feature is available in Consul Enterprise. + +## Introduction + +Consul allows you to configure settings to limit the rate of HTTP requests a service receives from sources in the mesh. Limiting request rates is one strategy for building a resilient and highly-available network. + +Consul applies rate limits per service instance. As an example, if you specify a rate limit of 100 requests per second (RPS) for a service and five instances of the service are available, the service accepts a total of 500 RPS, which equals 100 RPS per instance. + +You can limit request rates for all traffic to a service, as well as set rate limits for specific URL paths on a service. When multiple rate limits are configured on a service, Consul applies the limit configured for the first matching path. As a result, the maximum RPS for a service is equal to the number of service instances deployed for a service multiplied by either the rate limit configured for that service or the rate limit for the path. + +## Requirements + +Consul Enterprise v1.17.0 or later + +## Limit request rates to a service on all paths + +Specify request rate limits in the service defaults configuration entry. Create or edit the existing service defaults configuration entry for your service and specify the following fields: + + + + +1. `RateLimits.InstanceLevel.RequestPerSecond`: Set an average number of requests per second that Consul should allow to the service. The number of requests may momentarily exceed this value up to the value specified in the `RequestsMaxBurst` parameter, but Consul temporarily lowers the speed of the transactions. +1. 
`RateLimits.InstanceLevel.RequestsMaxBurst`: Set the maximum number of concurrent requests that Consul momentarily allows to the service. Consul blocks any additional requests over this limit. + +The following example configures the default behavior for a service named `billing`. This configuration limits each instance of the billing service to an average of 1000 requests per second, but allows the service to accept up to 1500 concurrent requests. + +```hcl +Kind = "service-defaults" +Name = "billing" +Protocol = "http" + +RateLimit { + InstanceLevel { + RequestsPerSecond = 1000 + RequestsMaxBurst = 1500 + } +} +``` + + + + +1. `spec.rateLimits.instanceLevel.requestPerSecond`: Set an average number of requests per second that Consul should allow to the service. The number of requests may momentarily exceed this value up to the value specified in the `requestsMaxBurst` parameter, but Consul temporarily lowers the speed of the transactions. +1. `spec.rateLimits.instanceLevel.requestsMaxBurst`: Set the maximum number of concurrent requests that Consul momentarily allows to the service. Consul blocks any additional requests over this limit. + +The following example configures the default behavior for a service named `billing`. This configuration limits each instance of the billing service to an average of 1000 requests per second, but allows the service to accept up to 1500 concurrent requests. + +```yaml +kind: ServiceDefaults +name: billing +protocol: http +rateLimit: + instanceLevel: + requestsPerSecond: 1000 + requestsMaxBurst: 1500 +``` + + + + +Refer to the [service defaults configuration entry reference](/consul/docs/reference/config-entry/service-defaults) for additional specifications and example configurations. + +## Specify request rate limits for specific paths + +Specify request rate limits in the service defaults configuration entry. Create or edit the existing service defaults configuration entry for your service and configure the following parameters: + + + + +1. Add a `RateLimits.InstanceLevel.Routes` block to the configuration entry. The block contains the limits and matching criteria for determining which paths to set limits on. +1. In the `Routes` block, configure one of the following match criteria to determine which path to set the limits on: + - `PathExact`: Specifies the exact path to match on the request path. + - `PathPrefix`: Specifies the path prefix to match on the request path. + - `PathRegex`: Specifies a regular expression to match on the request path. +1. Configure the limits you want to enforce in the `Routes` block as well. You can configure the following parameters: + - `RequestsPerSecond`: Set an average number of requests per second that Consul should allow to the service through the matching path. The number of requests may momentarily exceed this value up to the value specified in the `RequestsMaxBurst` parameter, but Consul temporarily lowers the speed of the transactions. This configuration overrides the value specified in `RateLimits.InstanceLevel.RequestPerSecond` field of the configuration entry. + - `RequestsMaxBurst`: Set the maximum number of concurrent requests that Consul momentarily allows to the service through the matching path. Consul blocks any additional requests over this limit. This configuration overrides the value specified in `RateLimits.InstanceLevel.RequestsMaxBurst` field of the configuration entry. + +The following example configures the default behavior for a service named `billing`. 
This configuration limits each instance of the billing service depending on the path it received the request on. The service is limited to an average of 500 requests when the request is made on an HTTP path with the `/api` prefix. When an instance of the billing service receives a request from the `/login` path, it is limited to an average of 100 requests per second and 500 concurrent connections. + +```hcl +Kind = "service-defaults" +Name = "billing" +Protocol = "http" + +RateLimit { + InstanceLevel { + Routes = [ + { + PathPrefix = "/api" + RequestsPerSecond = 500 + }, + { + PathPrefix = "/login" + RequestsPerSecond = 100 + RequestsMaxBurst = 500 + } + ] + } +} +``` + + + + +1. Add a `spec.rateLimits.instanceLevel.routes` block to the configuration entry. The block contains the limits and matching criteria for determining which paths to set limits on. +1. In the `routes` block, configure one of the following match criteria for enabling Consul to determine which path to set the limits on: + - `pathExact`: Specifies the exact path to match on the request path. When using this field. + - `pathPrefix`: Specifies the path prefix to match on the request path. + - `pathRegex`: Specifies a regular expression to match on the request path. +1. Configure the limits you want to enforce in the `routes` block as well. You can configure the following parameters: + - `requestsPerSecond`: Set an average number of requests per second that Consul should allow to the service through the matching path. The number of requests may momentarily exceed this value up to the value specified in the `requestsMaxBurst` parameter, but Consul temporarily lowers the speed of the transactions. This configuration overrides the value specified in `spec.rateLimits.instanceLevel.requestPerSecond` field of the CRD. + - `requestsMaxBurst`: Set the maximum number of concurrent requests that Consul momentarily allows to the service through the matching path. Consul blocks any additional requests over this limit. This configuration overrides the value specified in `spec.rateLimits.instanceLevel.requestsMaxBurst` field of the CRD. + +The following example configures the default behavior for a service named `billing`. This configuration limits each instance of the billing service depending on the path it received the request on. The service is limited to an average of 500 requests when the request is made on an HTTP path with the `/api` prefix. When an instance of the billing service receives a request from the `/login` path, it is limited to an average of 100 requests per second and 500 concurrent connections. + +```yaml +kind: service-defaults +name: billing +protocol: http +rateLimit: + instanceLevel: + routes: + - pathPrefix: /api + requestsPerSecond: 500 + - pathPrefix: /login + requestsPerSecond: 100 + requestsMaxBurst: 500 +``` + + + + +Refer to the [service defaults configuration entry reference](/consul/docs/reference/config-entry/service-defaults) for additional specifications and example configurations. \ No newline at end of file diff --git a/website/content/docs/manage-traffic/route-local.mdx b/website/content/docs/manage-traffic/route-local.mdx new file mode 100644 index 000000000000..ef8498c46b5b --- /dev/null +++ b/website/content/docs/manage-traffic/route-local.mdx @@ -0,0 +1,360 @@ +--- +layout: docs +page_title: Route traffic to local upstreams +description: Learn how to enable locality-aware routing in Consul so that proxies can send traffic to upstreams in the same region and zone as the downstream service. 
Routing traffic based on locality can reduce latency and cost. +--- + +# Route traffic to local upstreams + +This topic describes how to enable locality-aware routing so that Consul can prioritize sending traffic to upstream services that are in the same region and zone as the downstream service. + + This feature is available in Consul Enterprise. + +## Introduction + +By default, Consul balances traffic to all healthy upstream instances in the cluster, even if the instances are in different network zones. You can specify the cloud service provider (CSP) locality for Consul server agents and services registered to the service mesh, which enables several benefits: + +- Consul prioritizes the nearest upstream instances when routing traffic through the mesh. +- When upstream service instances becomes unhealthy, Consul prioritizes failing over to instances that are in the same region as the downstream service. Refer to [Failover](/consul/docs/manage-traffic/failover) for additional information about failover strategies in Consul. + +When properly implemented, routing traffic to local upstreams can reduce latency and transfer costs associated with sending requests to other regions. + +### Workflow + +For networks deployed to virtual machines, complete the following steps to route traffic to local upstream services: + +1. Specify the region and zone for your Consul client agents. This allows services to inherit the region and zone configured for the Consul agent that the services are registered with. +1. Specify the localities of your service instances. This step is optional and is only necessary when defining a custom network topology or when your deployed environment requires explicitly set localities for certain service's instances. +1. Configure service mesh proxies to route traffic locally within the partition. + +#### Container orchestration platforms + +If you deployed Consul to a Kubernetes or ECS environment using `consul-k8s` or `consul-ecs`, service instance locality information is inherited from the host machine. As a result, you do not need to specify the regions and zones on containerized platforms unless you are implementing a custom deployment. + +On Kubernetes, Consul automatically populates geographic information about service instances using the `topology.kubernetes.io/region` and `topology.kubernetes.io/zone` labels from the Kubernetes nodes. On AWS ECS, Consul uses the `AWS_REGION` environment variable and `AvailabilityZone` attribute of the ECS task meta. + +### Requirements + +You should only enable locality-aware routing when each set of external upstream instances within the same zone and region have enough capacity to handle requests from downstream service instances in their respective zones. Locality-aware routing is an advanced feature that may adversely impact service capacity if used incorrectly. When enabled, Consul routes all traffic to the nearest set of service instances and only fails over when no healthy instances are available in the nearest set. + +## Specify the locality of your Consul agents + +The `locality` configuration on a Consul client applies to all services registered to the client. + +1. Configure the `locality` block in your Consul client agent configuration files. The `locality` block is a map containing the `region` and `zone` parameters. + + The parameters should match the values for regions and zones defined in your network. 
Refer to [`locality`](/consul/docs/reference/agent/configuration-file/service-mesh#locality) in the agent configuration reference for additional information. + +1. Start or restart the agent to apply the configuration. Refer to [Starting a Consul agent](/consul/docs/agent#starting-the-consul-agent) for instructions. + +In the following example, the agent is running in the `us-west-1` region and `us-west-1a` zone on AWS: + +```hcl +locality = { + region = "us-west-1" + zone = "us-west-1a" +} +``` + +## Specify the localities of your service instances (optional) + +This step is optional in most scenarios. Refer to [Workflow](#workflow) for additional information. + +1. Configure the `locality` block in your service definition for both downstream (client) and upstream services. The `locality` block is a map containing the `region` and `zone` parameters. When you start a proxy for the service, Consul passes the locality to the proxy so that it can route traffic accordingly. + + The parameters should match the values for regions and zones defined in your network. Refer to [`locality`](/consul/docs/reference/service#locality) in the services configuration reference for additional information. + +1. Verify that your service is also configured with a proxy. Refer to [Define service mesh proxy](/consul/docs/reference/proxy/sidecar#define-service-mesh-proxy) for additional information. +Register or re-register the service to apply the configuration. Refer to [Register services and health checks](/consul/docs/register/service/vm) for instructions. + +In the following example, the `web` service is available in the `us-west-1` region and `us-west-1a` zone on AWS: + +```hcl +service { + id = "web" + locality = { + region = "us-west-1" + zone = "us-west-1a" + } + connect = { sidecar_service = {} } +} +``` + +If registering services manually via the `/agent/service/register` API endpoint, you can specify the `locality` configuration in the payload. Refer to [Register Service](/consul/api-docs/agent/service#register-service) in the API documentation for additional information. + +## Enable service mesh proxies to route traffic locally + +You can configure the default routing behavior for all proxies in the mesh as well as configure the routing behavior for specific services. + +### Configure default routing behavior + +Configure the `PrioritizeByLocality` block in the proxy defaults configuration entry and specify the `failover` mode. This configuration enables proxies in the mesh to use the region and zone defined in the service configuration to route traffic. Refer to [`PrioritizeByLocality`](/consul/docs/reference/config-entry/proxy-defaults#prioritizebylocality) in the proxy defaults reference for details about the configuration. + + + + + +```hcl +Kind = "proxy-defaults" +Name = "global" +PrioritizeByLocality = { + Mode = "failover" +} +``` + + + + + + +```json +{ + "kind": "proxy-defaults", + "name": "global", + "prioritizeByLocality": { + "mode": "failover" + } +} +``` + + + + + +```yaml +apiversion: consul.hashicorp.com/v1alpha1 +kind: ProxyDefaults +metadata: + name: global +spec: + prioritizeByLocality: + mode: failover +``` + + + + + +Apply the configuration by either running the [`consul config write` CLI command](/consul/commands/config/write), applying the Kubernetes CRD, or calling the [`/config` HTTP API endpoint](/consul/api-docs/config). 
+ + + + + ```shell-session + $ consul config write proxy-defaults.hcl + ``` + + + + + + ```shell-session + $ kubectl apply -f proxy-defaults.yaml + ``` + + + + + ```shell-session + $ curl --request PUT --data @proxy-defaults.hcl http://127.0.0.1:8500/v1/config + ``` + + + + +### Configure routing behavior for individual service + +1. Create a service resolver configuration entry and specify the following fields: + - `Name`: The name of the target upstream service for which downstream clients should use locality-aware routing. + - `PrioritizeByLocality`: This block enables proxies in the mesh to use the region and zone defined in the service configuration to route traffic. Set the `mode` inside the block to `failover`. Refer to [`PrioritizeByLocality`](/consul/docs/reference/config-entry/service-resolver#prioritizebylocality) in the service resolver reference for details about the configuration. + + + + + + ```hcl + Kind = "service-resolver" + Name = "api" + PrioritizeByLocality = { + Mode = "failover" + } + ``` + + + + + + + ```json + { + "kind": "service-resolver", + "name": "api", + "prioritizeByLocality": { + "mode": "failover" + } + } + ``` + + + + + + ```yaml + apiversion: consul.hashicorp.com/v1alpha1 + kind: ServiceResolver + metadata: + name: api + spec: + prioritizeByLocality: + mode: failover + ``` + + + + + +1. Apply the configuration by either running the [`consul config write` CLI command](/consul/commands/config/write), applying the Kubernetes CRD, or calling the [`/config` HTTP API endpoint](/consul/api-docs/config). + + + + + ```shell-session + $ consul config write api-resolver.hcl + ``` + + + + + + ```shell-session + $ kubectl apply -f api-resolver.yaml + ``` + + + + + ```shell-session + $ curl --request PUT --data @api-resolver.hcl http://127.0.0.1:8500/v1/config + ``` + + + + +### Configure locality on Kubernetes test clusters explicitly + +You can explicitly configure locality for each Kubernetes node so that you can test locality-aware routing with a local Kubernetes cluster or in an environment where `topology.kubernetes.io` labels are not set. + +Run the `kubectl label node` command and specify the locality as arguments. The following example specifies the `us-east-1` region and `us-east-1a` zone for the node: + +```shell-session +kubectl label node $K8S_NODE topology.kubernetes.io/region="us-east-1" topology.kubernetes.io/zone="us-east-1a" +``` + +After setting these values, subsequent service and proxy registrations in your cluster inherit the values from their local Kubernetes node. + +## Verify routes + +The routes from each downstream service instance to the nearest set of healthy upstream instances are the most immediately observable routing changes. + +Consul configures Envoy's built-in [`overprovisioning_factor`](https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/endpoint/v3/endpoint.proto#config-endpoint-v3-clusterloadassignment) and [outlier detection](https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/cluster/v3/outlier_detection.proto#config-cluster-v3-outlierdetection) settings to enforce failover behavior. However, Envoy does not provide granular metrics specific to failover or endpoint traffic within a cluster. As a result, using external observability tools that expose network traffic within your environment is the best method for observing route changes. + +To verify that locality-aware routing and failover configurations, you can inspect Envoy's xDS configuration dump for a downstream proxy. 
Refer to the [consul-k8s CLI docs](/consul/docs/reference/cli/consul-k8s#proxy-read) for details on how to obtain the xDS configuration dump on Kubernetes. For other workloads, use the Envoy [admin interface](https://www.envoyproxy.io/docs/envoy/latest/operations/admin) and ensure that you [include EDS](https://www.envoyproxy.io/docs/envoy/latest/operations/admin#get--config_dump?include_eds). + +Inspect the [priority](https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/load_balancing/priority#arch-overview-load-balancing-priority-levels) on each set of endpoints under the upstream `ClusterLoadAssignment` in the `EndpointsConfigDump`. Alternatively, the same priorities should be visibile within the output of the [`/clusters?format=json`](https://www.envoyproxy.io/docs/envoy/latest/operations/admin#get--clusters?format=json) admin endpoint. + +```json +{ + "@type": "type.googleapis.com/envoy.config.endpoint.v3.ClusterLoadAssignment", + "cluster_name": "web.default.dc1.internal.161d7b5a-bb5f-379c-7d5a-1fc7504f95da.consul", + "endpoints": [ + { + "lb_endpoints": [ + { + "endpoint": { + "address": { + "socket_address": { + "address": "10.42.2.6", + "port_value": 20000 + } + }, + "health_check_config": {} + }, + ... + }, + ... + ], + "locality": {} + }, + { + "lb_endpoints": [ + { + "endpoint": { + "address": { + "socket_address": { + "address": "10.42.3.6", + "port_value": 20000 + } + }, + "health_check_config": {} + }, + ... + }, + ... + ], + "locality": {}, + "priority": 1 + }, + { + "lb_endpoints": [ + { + "endpoint": { + "address": { + "socket_address": { + "address": "10.42.0.6", + "port_value": 20000 + } + }, + "health_check_config": {} + }, + ... + }, + ... + ], + "locality": {}, + "priority": 2 + } + ], + ... +} +``` + +### Force an observable failover + +To force a failover for testing purposes, scale the upstream service instances in the downstream's local zone or region, if no local zone instances are available, to `0`. + +Note the following behaviors: + + - Consul prioritizes failovers in ascending order starting with `0`. The highest priority, `0`, is not explicitly visible in xDS output. This is because `0` is the default value for that field. + - After Envoy failover configuration is in place, the specific timing of failover is determined by the downstream Envoy proxy, not Consul. Consul health status may not directly correspond to Envoy's failover behavior, which is also dependent on outlier detection. + +Refer to [Troubleshooting](#troubleshooting) if you do not observe the expected behavior. + +## Adjust load balancing and failover behavior + +You can adjust the global or per-service load balancing and failover behaviors by applying the property override Envoy extension. The property override extension allows you to set and remove individual properties on the Envoy resources Consul generates. Refer to [Configure Envoy proxy properties](/consul/docs/reference/proxy/extensions/property-override) for additional information. + +1. Add the `EnvoyExtensions` configuration block to the service defaults or proxy defaults configuration entry. +1. Configure the following settings in the `EnvoyExtensions` configuration: + - [`overprovisioning_factor`](https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/endpoint/v3/endpoint.proto#config-endpoint-v3-clusterloadassignment) + - [outlier detection](https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/cluster/v3/outlier_detection.proto#config-cluster-v3-outlierdetection) configuration. +1. 
Apply the configuration. Refer to [Apply the configuration entry](/consul/docs/reference/proxy/extensions/property-override#apply-the-configuration-entry) for details. + +By default, Consul sets `overprovisioning_factor` to `100000`, which enforces total failover, and `max_ejection_percent` to `100`. Refer to the Envoy documentation about these fields before attempting to modify them. + +## Troubleshooting + +If you do not see the expected priorities, verify that locality is configured in the Consul agent and that `PrioritizeByLocality` is enabled in your proxy defaults or service resolver configuration entry. When `PrioritizeByLocality` is enabled but the local proxy lacks locality configuration, Consul emits a warning log to indicate that the policy could not be applied: + +``` +`no local service locality provided, skipping locality failover policy` +``` diff --git a/website/content/docs/manage-traffic/virtual-service.mdx b/website/content/docs/manage-traffic/virtual-service.mdx new file mode 100644 index 000000000000..d9bdd3cec244 --- /dev/null +++ b/website/content/docs/manage-traffic/virtual-service.mdx @@ -0,0 +1,123 @@ +--- +layout: docs +page_title: Route traffic to a virtual service +description: >- + Define virtual services in service resolver config entries so that Consul on Kubernetes can route traffic to virtual services when transparent proxy mode is enabled for Envoy proxies. +--- + +# Route traffic to a virtual service + +This topic describes how to define virtual services so that Consul on Kubernetes can route traffic to virtual services when transparent proxy mode is enabled for Envoy proxies. + +## Overview + +You can define virtual services in service resolver configuration entries so that downstream applications can send requests to a virtual service using a Consul DNS name in peered clusters. Your applications can send requests to virtual services in the same cluster using KubeDNS. Service resolvers are part of the service mesh proxy upstream discovery chain. Refer to [Service mesh traffic management](/consul/docs/manage-traffic) for additional information. + +Complete the following steps to configure failover service instances in Consul on Kubernetes when proxies are in transparent proxy mode: + +1. Create a service resolver configuration entry. +1. Create intentions that allow the downstream service to access the real service and the virtual service. +1. Configure your application to call the discovery chain using the Consul DNS or KubeDNS. + +## Requirements + +- `consul-k8s` v1.2.0 or newer. +- Consul service mesh must be enabled. Refer to [How does Consul service mesh work on Kubernetes](/consul/docs/k8s/connect). +- Proxies must be configured to run in transparent proxy mode. +- To query virtual DNS names, you must use Consul DNS. +- To query the discovery chain using KubeDNS, the service resolver must be in the same partition as the running service. + +### ACL requirements + +The default ACLs that the Consul Helm chart configures are suitable for most cases, but if you have more complex security policies for Consul API access, refer to the [ACL documentation](/consul/docs/secure/acl) for additional guidance. + +## Create a service resolver configuration entry + +Specify the target failover in the [`spec.redirect.service`](/consul/docs/reference/config-entry/service-resolver#spec-redirect-service) field in the service resolver configuration entry. 
In the following example, the `virtual-api` service is configured to redirect to the `real-api`: + + + +```yaml +apiversion: consul.hashicorp.com/v1alpha1 +kind: ServiceResolver +metadata: + name: virtual-api +spec: + redirect: + service: real-api +``` + + + +Refer to the [service resolver configuration entry reference](/consul/docs/reference/config-entry/service-resolver) documentation for information about all service resolver configurations. + +You can apply the configuration using the `kubectl apply` command: + +```shell-session +$ kubectl apply -f virtual-api-redirect.yaml +``` + +## Create service intentions + +If intentions are not already defined, create and apply intentions that allow the appropriate downstream to access the real service and the target redirect service. In the following examples, the `frontend` service is allowed to send messages to the `virtual-api` and `real-api` services: + + + + +```yaml +apiversion: consul.hashicorp.com/v1alpha1 +kind: ServiceIntentions +metadata: + name: virtual-api +spec: + destination: + name: virtual-api + sources: + - name: frontend + action: allow +--- +apiversion: consul.hashicorp.com/v1alpha1 +kind: ServiceIntentions +metadata: + name: real-api +spec: + destination: + name: real-api + sources: + - name: frontend + action: allow +``` + + + +Refer to the [service intentions configuration entry reference](/consul/docs/reference/config-entry/service-intentions) for additional information about configuring service intentions. + +You can apply the configuration using the `kubectl apply` command: + +```shell-session +$ kubectl apply -f frontend-api-api-beta-allow.yaml +``` + +## Configure your application to call the DNS + +Configure your application to contact the discovery chain in either the Consul DNS or the KubeDNS. + +### Consul DNS + +You can query the Consul DNS using the `.virtual.consul` lookup format. For Consul Enterprise, your query string may need to include the namespace, partition, or both. Refer to the [DNS documentation](/consul/docs/services/discovery/dns-static-lookups#service-virtual-ip-lookups) for details on building virtual service lookups. + +In the following example, the application queries the Consul catalog for `virtual-api` over HTTP. By default, the lookup would query the `default` partition and `default` namespace if Consul Enterprise manages the network infrastructure: + +```text +http://virtual-api.virtual.consul/ +``` + +### KubeDNS + +You can query the KubeDNS if the real and virtual services are in the same Kubernetes cluster by specifying the name of the service. In the following example, the application queries KubeDNS for `virtual-api` over HTTP: + +```text +http://virtual-api..svc.cluster.local +``` + +Note that you cannot use KubeDNS if a corresponding Kubernetes service and pod do not exist. diff --git a/website/content/docs/manage-traffic/vm.mdx b/website/content/docs/manage-traffic/vm.mdx new file mode 100644 index 000000000000..1b1c317f8928 --- /dev/null +++ b/website/content/docs/manage-traffic/vm.mdx @@ -0,0 +1,168 @@ +--- +layout: docs +page_title: Manage traffic for peered clusters on VMs +description: >- + Combine service resolver configurations with splitter and router configurations to manage L7 traffic in Consul deployments with cluster peering connections. Learn how to define dynamic traffic rules to target peers for redirects and failover. 
+--- + +# Manage traffic for peered clusters on VMs + +This usage topic describes how to configure and apply the [`service-resolver` configuration entry](/consul/docs/reference/config-entry/service-resolver) to set up redirects and failovers between services that have an existing cluster peering connection. + +For Kubernetes-specific guidance for managing L7 traffic with cluster peering, refer to [Manage L7 traffic with cluster peering on Kubernetes](/consul/docs/manage-traffic/cluster-peering/k8s). + +## Service resolvers for redirects and failover + +When you use cluster peering to connect datacenters through their admin partitions, you can use [dynamic traffic management](/consul/docs/manage-traffic) to configure your service mesh so that services automatically forward traffic to services hosted on peer clusters. + +However, the `service-splitter` and `service-router` configuration entry kinds do not natively support directly targeting a service instance hosted on a peer. Before you can split or route traffic to a service on a peer, you must define the service hosted on the peer as an upstream service by configuring a failover in the `service-resolver` configuration entry. Then, you can set up a redirect in a second service resolver to interact with the peer service by name. + +For more information about formatting, updating, and managing configuration entries in Consul, refer to [How to use configuration entries](/consul/docs/fundamentals/config-entry). + +## Configure dynamic traffic between peers + +To configure L7 traffic management behavior in deployments with cluster peering connections, complete the following steps in order: + +1. Define the peer cluster as a failover target in the service resolver configuration. + + The following examples update the [`service-resolver` configuration entry](/consul/docs/reference/config-entry/service-resolver) in `cluster-01` so that Consul redirects traffic intended for the `frontend` service to a backup instance in peer `cluster-02` when it detects multiple connection failures. + + + + ```hcl + Kind = "service-resolver" + Name = "frontend" + ConnectTimeout = "15s" + Failover = { + "*" = { + Targets = [ + {Peer = "cluster-02"} + ] + } + } + ``` + + ```json + { + "ConnectTimeout": "15s", + "Kind": "service-resolver", + "Name": "frontend", + "Failover": { + "*": { + "Targets": [ + { + "Peer": "cluster-02" + } + ] + } + }, + "CreateIndex": 250, + "ModifyIndex": 250 + } + ``` + + ```yaml + apiVersion: consul.hashicorp.com/v1alpha1 + kind: ServiceResolver + metadata: + name: frontend + spec: + connectTimeout: 15s + failover: + '*': + targets: + - peer: 'cluster-02' + service: 'frontend' + namespace: 'default' + ``` + + + +1. Define the desired behavior in `service-splitter` or `service-router` configuration entries. 
+ + The following example splits traffic evenly between `frontend` services hosted on peers by defining the desired behavior locally: + + + + ```hcl + Kind = "service-splitter" + Name = "frontend" + Splits = [ + { + Weight = 50 + ## defaults to service with same name as configuration entry ("frontend") + }, + { + Weight = 50 + Service = "frontend-peer" + }, + ] + ``` + + ```json + { + "Kind": "service-splitter", + "Name": "frontend", + "Splits": [ + { + "Weight": 50 + }, + { + "Weight": 50, + "Service": "frontend-peer" + } + ] + } + ``` + + ```yaml + apiVersion: consul.hashicorp.com/v1alpha1 + kind: ServiceSplitter + metadata: + name: frontend + spec: + splits: + - weight: 50 + ## defaults to service with same name as configuration entry ("frontend") + - weight: 50 + service: frontend-peer + ``` + + + +1. Create a local `service-resolver` configuration entry named `frontend-peer` and define a redirect targeting the peer and its service: + + + + ```hcl + Kind = "service-resolver" + Name = "frontend-peer" + Redirect { + Service = frontend + Peer = "cluster-02" + } + ``` + + ```json + { + "Kind": "service-resolver", + "Name": "frontend-peer", + "Redirect": { + "Service": "frontend", + "Peer": "cluster-02" + } + } + ``` + + ```yaml + apiVersion: consul.hashicorp.com/v1alpha1 + kind: ServiceResolver + metadata: + name: frontend-peer + spec: + redirect: + peer: 'cluster-02' + service: 'frontend' + ``` + + diff --git a/website/content/docs/manage/disaster-recovery/backup-restore.mdx b/website/content/docs/manage/disaster-recovery/backup-restore.mdx new file mode 100644 index 000000000000..02c59f09d0b1 --- /dev/null +++ b/website/content/docs/manage/disaster-recovery/backup-restore.mdx @@ -0,0 +1,142 @@ +--- +layout: docs +page_title: Backup and restore a Consul datacenter +description: >- + Learn how to backup Consul data and state and restore a valid backup using the snapshot tool. +--- + +# Backup and restore a Consul datacenter + +This describes the process to backup and restore a Consul datacenter with a snapshot. + +## Overview + +Creating server backups is an important step in production deployments. Backups +provide a mechanism for the server to recover from an outage (network loss, +operator error, or a corrupted data directory). All servers write to the +`-data-dir` before commit on write requests. The same directory is used on +client agents to persist local state too, but this is not critical and can be +rebuilt when recreating an agent. Local client state is not backed up in this +tutorial, and doesn't need to be in general, only the server's Raft store state. + +Consul provides the [snapshot](/consul/commands/snapshot) +command which can be run using the CLI or the API. The `snapshot` command saves +a point-in-time snapshot of the state of the Consul servers which includes, but +is not limited to: + +- Key-Value entries +- the service catalog +- prepared queries +- sessions +- ACLs + +With [Consul Enterprise](/consul/commands/snapshot/agent), +the `snapshot agent` command runs periodically and writes to local or remote +storage, including Amazon S3, Azure Blob Storage, and Google Cloud Storage. + +By default, all snapshots are taken using `consistent` mode where requests are +forwarded to the leader which verifies that it is still in power before taking +the snapshot. Snapshots will not be saved if the Consul datacenter is degraded +or if no leader is available. 
To reduce the burden on the leader, it is possible +to [run the snapshot](/consul/commands/snapshot/save) on +any non-leader server using `stale` consistency mode. + +This spreads the load across nodes at the possible expense of losing full +consistency guarantees. Typically this means that a very small number of recent +writes may not be included. The omitted writes are typically limited to data +written in the last `100ms` or less from the recovery point. This is usually +suitable for disaster recovery. However, the system can't guarantee how stale +this may be if executed against a partitioned server. + +## Create your first backup + +The `snapshot save` command for backing up the Consul cluster state has many +configuration options. In a production environment, you will want to configure +ACL tokens and client certificates for security. The configuration options also +allow you to specify the Consul cluster and server to collect the backup data +from. Below are several examples. + +First, you will run the basic snapshot command on one of the servers using the +default configuration, including `consistent` mode. + +```shell-session +$ consul snapshot save backup.snap +Saved and verified snapshot to index 1176 +``` + +The backup will be saved locally in the directory where you ran the command. + +You can view metadata about the backup with the `inspect` subcommand. + +```shell-session +$ consul snapshot inspect backup.snap +ID 2-1182-1542056499724 +Size 4115 +Index 1182 +Term 2 +Version 1 +``` + +To understand each field review the inspect +[documentation](/consul/commands/snapshot/inspect). +Notably, the `Version` field does not correspond to the version of the data. +Rather it is the snapshot format version. + +Next, you will collect the Consul cluster data from a non-leader by specifying +stale mode. + +```shell-session +$ consul snapshot save -stale backup.snap +Saved and verified snapshot to index 2276 +``` + +Once ACLs and agent certificates are configured, they can be passed in as +environment variables or flags. + +```shell-session +$ export CONSUL_HTTP_TOKEN= +``` + +```shell-session +$ consul snapshot save -stale -ca-file= backup.snap +Saved and verified snapshot to index 2287 +``` + +In the above example, you set the token as an ENV and the ca-file with a +command line flag. + +For production use, the `snapshot save` command or +[API](/consul/api-docs/snapshot) should be scripted and run +frequently. In addition to frequently backing up the Consul cluster state, there +are several use cases when you would also want to manually execute `snapshot save`. +First, you should always backup the Consul cluster before upgrading. If the +upgrade does not go according to plan it is often not possible to downgrade due +to changes in the state store format. Restoring from a backup is the only option +so taking one before the upgrade will ensure you have the latest data. Second, +if the Consul cluster loses quorum it may be beneficial to save the state before +the servers become divergent. Finally, you can manually snapshot a Consul +datacenter and use that to bootstrap a new Consul datacenter with the same state. + +Operationally, the backup process does not need to be executed on every server. +Additionally, you can use the configuration options to save the backups to a +mounted filesystem. The mounted filesystem can even be cloud storage, such as +Amazon S3. The enterprise command `snapshot agent` automates this process. 
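If you do not use the enterprise snapshot agent, a minimal sketch of this automation is a scheduled job that writes timestamped snapshots to a mounted backup directory. The path below is a hypothetical example:

```shell-session
$ consul snapshot save /mnt/consul-backups/backup-$(date +%Y%m%d-%H%M%S).snap
```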
+ +## Restore from backup + +Running the `restore` process should be straightforward. However, there are a +couple of actions you can take to ensure the process goes smoothly. First, make +sure the Consul datacenter you are restoring is stable and has a leader. You can +verify this using `consul operator raft list-peers` and checking server logs and +telemetry for signs of leader elections or network issues. + +You will only need to run the process once, on the leader. The Raft consensus +protocol ensures that all servers restore the same state. + +```shell-session +$ consul snapshot restore backup.snap +Restored snapshot +``` + +Like the `save` subcommand, restore has many configuration options. In +production, you would again want to use ACLs and certificates for security. diff --git a/website/content/docs/manage/disaster-recovery/federation.mdx b/website/content/docs/manage/disaster-recovery/federation.mdx new file mode 100644 index 000000000000..1184ca7767ff --- /dev/null +++ b/website/content/docs/manage/disaster-recovery/federation.mdx @@ -0,0 +1,360 @@ +--- +layout: docs +page_title: Disaster recovery for WAN-federated datacenters +description: >- + Recover from losing the primary datacenter in a federated Consul environment. Prepare for Consul disaster recovery using best practice recommendations. Implement a backup plan and a disaster recovery plan (DRP). +--- + +# Disaster recovery for WAN-federated datacenters + +This describes the process to backup and restore a federated Consul datacenter. + +## Overview + +Operating a multi-datacenter federated environment requires you to prepare for +many possibilities, including a complete outage of one of your physical +datacenters, or a cloud provider outage that might make one of the +components of your environment temporarily or permanently unavailable. + +This can have different implications depending on your configuration, but it is +something you want to be prepared for, in case the unthinkable happens. + +This tutorial guides you through the process of creating a recovery strategy +from a quorum loss to the complete loss of your primary datacenter. + +The recommended logical steps are listed in three main sections, the first will help you [prepare for an outage event](#disaster-preparation-strategy), the second will help you [restore your primary datacenter](#restore-primary-datacenter), and the third will help you [validate the restore was successful](#restore-and-validate-federation) and ensure federation is re-established. + +## Prerequisites + +### Base configuration + +To complete this tutorial, you will need two wide area network (WAN) joined +Consul datacenters with access control list (ACL) replication enabled. If you +are starting from scratch, follow these tutorials to set up your datacenters, +or use them to check that you have the proper configuration in place: + +- [Deployment Guide](/consul/tutorials/production-deploy/deployment-guide) +- [Securing Consul with ACLs](/consul/tutorials/security/access-control-setup-production) +- [Basic Federation with WAN Gossip](/consul/tutorials/networking/federation-gossip-wan) + +You will also need to enable ACL replication, which you can do by following the steps in the +[ACL Replication for Multiple +Datacenters](/consul/tutorials/security-operations/access-control-replication-multiple-datacenters) +tutorial. 
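+
+For reference, token replication is typically enabled on the servers of the secondary datacenter with an `acl` stanza similar to the following. This is a minimal sketch; the replication token is a placeholder that you must create with an appropriate policy, and `primary_datacenter` must match the name of your primary datacenter.
+
+```hcl
+# Name of the primary (authoritative) datacenter
+primary_datacenter = "primary"
+
+acl {
+  enabled                  = true
+  default_policy           = "deny"
+  enable_token_replication = true
+  tokens {
+    # Placeholder: replace with the replication token you created
+    replication = "<replication-token>"
+  }
+}
+```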
+
+### Mesh gateways
+
+If you want to make use of the mesh gateway functionality, you can follow [Connect Services Across Datacenters with Mesh Gateways](/consul/tutorials/developer-mesh/service-mesh-gateways) to set them up for the two datacenters.
+
+### Central service configuration
+
+Lastly, you should set [`enable_central_service_config = true`](/consul/docs/reference/agent/configuration-file/general#enable_central_service_config)
+on your Consul clients, which allows you to centrally configure the sidecar
+and mesh gateway proxies.
+
+## Disaster recovery process
+
+The process can be summarized in the following phases:
+1. Formulate a disaster preparation strategy
+1. Outage event occurs for the primary datacenter
+1. Restore primary datacenter
+1. Restore and validate federation
+
+## Disaster preparation strategy
+
+### Use a recommended architecture
+
+Not every outage has the same level of impact. A lot of the resiliency of
+your Consul federation relies on proper configuration and the
+adoption of a recommended architecture.
+
+Refer to [Consul Reference Architecture](/consul/tutorials/production-deploy/reference-architecture) to learn the recommended configurations for your
+Consul datacenter.
+
+### Automate your deployment
+
+Re-deploying an entire datacenter after an outage is a non-trivial operation
+that might require a considerable amount of time. You can reduce the time to recover
+by following Infrastructure as Code (IaC) principles and using
+tools such as Terraform and Vault to help you in the deployment and recovery process.
+
+The amount of downtime you experience from the loss of your primary datacenter is
+directly proportional to the amount of time it takes you to deploy a new
+primary datacenter. Reducing the deployment time through automation not only
+helps you standardize the process and reduce errors, but also reduces the
+downtime you experience after a datacenter loss.
+
+### Have a backup strategy
+
+A Consul datacenter's state is more than just the initial configuration; it
+includes data that is generated during normal operations, such as KV entries, ACL
+tokens, and intentions. When your datacenter fails, this information is lost and
+cannot be manually recreated without a backup. For example, you may keep your
+Consul agent and service configuration in version control; however, you cannot
+recover security credentials as easily, because credentials generated
+during the installation process are different from the ones that were
+present in the datacenter that was lost in the disaster.
+
+Updating every secret in Consul to reflect the new values
+is tedious, error-prone, and hard to debug.
+
+Restoring from a snapshot ensures that all the intentions, KV entries, and ACL tokens
+are reintroduced.
+
+You can follow [Backup Consul Data and State](/consul/tutorials/production-deploy/backup-and-restore) to learn how to perform a snapshot of your Consul
+datacenter to use in case of disaster.
+
+### Have a TLS certificate distribution process in place
+
+Certificates are stored on the agent's disk and are not saved in a snapshot. This
+means you will have to re-generate them.
+
+Consul includes a command, `consul tls`, that lets you generate
+TLS certificates for the agents. This simplifies the automation of deployment by
+giving you the ability to generate a CA and TLS certificates as part of the
+process.
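+
+For example, the following commands sketch how the built-in CLI can recreate a CA and a server certificate as part of an automated rebuild. Adjust the `-dc` value to your datacenter name, and distribute the generated files to the servers through your deployment tooling.
+
+```shell-session
+$ consul tls ca create
+$ consul tls cert create -server -dc primary
+```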
+
+As an alternative, we suggest you use Vault as a CA and TLS certificate generator to help you automate
+the process. You can follow [Generate mTLS Certificates for Consul with Vault](/consul/tutorials/vault-secure/vault-pki-consul-secure-tls) to learn how to
+automate certificate generation and distribution for your Consul server agents.
+
+### Adopt an adequate ACL down policy
+
+When your primary datacenter is down, you lose the ability to validate
+ACL policies. To mitigate this, Consul has a configuration parameter,
+`down_policy`, that tells Consul which strategy to follow if ACLs
+cannot be validated against the primary datacenter.
+
+By default, Consul adopts the `extend-cache` approach, meaning that in case of an
+outage Consul allows cached ACL objects to be used, ignoring their TTL
+values. If a non-cached ACL is used, `extend-cache` acts like `deny`.
+
+If you changed the `down_policy` to the more restrictive value of `deny`, you
+will be impacted more severely by the outage, because
+all ACL-protected operations in the secondary datacenter are denied until the primary datacenter is restored.
+
+## Outage event in the primary datacenter
+
+Depending on your configuration, multiple levels of outage can occur.
+
+This tutorial covers how to handle outages that render the primary
+datacenter unable to serve requests, including service discovery requests to Consul DNS and intention validation. This can happen in many possible ways, but
+the two extremes of the spectrum are the following:
+
+* **Loss of quorum in the primary datacenter**
+This is an outage where the datacenter has fewer than *(N/2)+1* servers available, where N is the total number
+of servers. While some of the nodes are unaffected, not enough are available to form a quorum.
+* **Complete loss of the primary datacenter**
+This is either a disaster event that completely wipes an entire facility or a
+major outage of your cloud provider.
+
+### Loss of quorum in the primary datacenter
+
+In the scenario where you have lost enough servers in your primary datacenter that a quorum
+cannot be reached, you will experience the same level of service failure as if the whole
+datacenter were unavailable.
+
+If the outage is limited to the server nodes, or did not severely impact your
+client fleet, it might not be practical to rebuild the whole datacenter only to
+re-establish a quorum.
+
+In this scenario, you can follow the [Outage Recovery](/consul/tutorials/datacenter-operations/recovery-outage)
+tutorial to restore your server nodes and make sure they are able to re-form
+the Raft cluster and elect a leader.
+
+Once the Raft cluster is re-formed and the datacenter is operational again,
+you can restart any client agents that might have been affected by the
+outage.
+
+### Complete loss of the primary datacenter
+
+The worst-case scenario in a federated environment is the loss of the primary
+datacenter.
+
+In this scenario, the course of action should aim to restore the lost datacenter as
+quickly as possible. The following sections in the tutorial guide you
+through the necessary steps to perform a restore and to make sure all
+functionality is re-established.
+
+## Restore primary datacenter
+
+The steps necessary to restore your environment after a full outage of your
+primary datacenter are the following:
+
+* Restore your primary datacenter
+* Restore the last snapshot to the newly recovered datacenter
+* Apply agent tokens to the servers
+* Perform a rolling restart of the servers in the primary datacenter
+* Perform a rolling restart of the clients in the primary datacenter
+* Restore and validate federation
+
+### Restore datacenter nodes
+
+You can use the same process you used for the initial deploy of your Consul
+datacenter.
+
+To make sure the new deployment is able to communicate with the secondary
+datacenter, you will need to ensure that you are using the same
+
+* CA certificate
+* CA key
+* Gossip encryption key
+
+as the ones you used for the pre-existing deployment.
+
+### Restore snapshot
+
+After the datacenter has been restored, you can restore the snapshot to it using
+the `consul snapshot` command.
+
+```shell-session
+$ consul snapshot restore -token= backup.snap
+
+Restored snapshot
+```
+
+
+
+ The `token` used for the snapshot restore procedure needs to be a
+token valid for the newly restored datacenter. If your restored datacenter does
+not have ACLs enabled, you can restore the snapshot without a token.
+
+
+
+### Set Consul ACL agent tokens
+
+The newly restarted nodes will now be missing the ACL tokens needed to
+successfully join the datacenter. Once you have restored the ACL system with the
+snapshot restore, you can set the tokens for the different nodes
+using the `consul acl` command.
+
+
+
+ The following operations require a token with `acl = "write"`
+privileges. Also, after the snapshot is restored, the ACL system contains the
+tokens created before the outage. You can use any
+management token you had in your datacenter before the outage. To set the
+token for the request, either use the `CONSUL_HTTP_TOKEN` environment
+variable or pass it directly using the `-token=` command parameter.
+
+
+
+First retrieve the tokens available in the datacenter.
+
+```shell-session
+$ consul acl token list
+
+...
+
+AccessorID: 694f15e2-b8f9-c5dd-cb92-8d7bc529df9f
+Description: server-1 agent token
+Local: false
+Create Time: 2021-01-07 21:38:25.331288761 +0000 UTC
+Legacy: false
+Policies:
+   a660e45f-3f6e-501c-1303-3d39a83f6ff9 - acl-policy-server-node
+
+...
+```
+
+Then retrieve the token using the `AccessorID`.
+
+```shell-session
+$ consul acl token read -id 694f15e2-b8f9-c5dd-cb92-8d7bc529df9f
+
+AccessorID: 694f15e2-b8f9-c5dd-cb92-8d7bc529df9f
+SecretID: 237f1a27-3399-ebf0-29f1-9828d89159b1
+Description: server-1 agent token
+Local: false
+Create Time: 2021-01-07 21:38:25.331288761 +0000 UTC
+Policies:
+   a660e45f-3f6e-501c-1303-3d39a83f6ff9 - acl-policy-server-node
+```
+
+
+
+ You must set the token on all the server nodes
+so that they can re-join the datacenter successfully. Depending on
+your configuration, you might have different tokens for each server or re-use the
+same token for all server agents. In both cases, the token needs to be set on
+all agents; otherwise, they will not be able to successfully join the datacenter.
+
+
+
+Finally, apply the token to the server.
+ +```shell-session +$ consul acl set-agent-token agent 237f1a27-3399-ebf0-29f1-9828d89159b1 + +ACL token "agent" set successfully +``` + +### Perform a rolling restart of the servers + +After the snapshot is restored and the tokens have been set on the nodes, you +will observe errors in the server logs about duplicate node ids. This happens +because the servers received new node ids when they were reinstalled, and these node ids are different from +the ones stored in the snapshot. + +```log hideClipboard +... +[WARN] agent.fsm: EnsureRegistration failed: error="failed inserting node: Error while renaming Node ID: "835aa73d-78e9-ba63-d2da-8bbfba1329a7": Node name server-1 is reserved by node 4d2d0bfa-6d2d-c373-fff2-16ac612dca7a with name server-1 (172.19.0.3)" +[WARN] agent.fsm: EnsureRegistration failed: error="failed inserting node: Error while renaming Node ID: "49be2708-2f0b-0a1b-0069-9e1420fa251b": Node name server-2 is reserved by node ea086564-8d58-2918-682d-9092651f2157 with name server-2 (172.19.0.4)" +[WARN] agent.fsm: EnsureRegistration failed: error="failed inserting node: Error while renaming Node ID: "3398cdca-dc18-a786-e3a6-6f7deb5dcd0d": Node name server-3 is reserved by node d7529d1e-6924-41d8-fc40-fa0f131cc558 with name server-3 (172.19.0.5)" +... +``` + +To resolve these errors, you need to perform a `consul leave` on each server and +then start the server again. + +Once servers are restarted, the node ids will be set to the expected value +and this will resolve the errors in the logs. + +### Perform a rolling restart of the clients + +The same log errors will be present on the clients. After completing the server +restarts, you must perform the same operations on the clients to resolve the log +errors. + +## Restore and validate federation + +Once restored, it is possible that the new servers will have a different IP than +the one used to wan join them from the secondary datacenter. In that scenario, the +federation will be broken. + +To verify the federation you can use `consul members` on the primary datacenter. + +```shell-session +$ consul members -wan +Node Address Status Type Build Protocol DC Segment +server-1.primary 172.19.0.3:8302 alive server 1.9.0 2 primary +server-2.primary 172.19.0.4:8302 alive server 1.9.0 2 primary +server-3.primary 172.19.0.5:8302 alive server 1.9.0 2 primary +``` + +To restore the federation you must perform a rolling restart of the secondary +datacenter servers using the new primary datacenter servers' IP in the +`retry-join-wan` parameter. + +After the rolling restart, you should be able to observe all the servers in the +`consul members` command output. + +```shell-session +$ consul members -wan +Node Address Status Type Build Protocol DC Segment +server-1.secondary 172.19.0.7:8302 alive server 1.9.0 2 secondary +server-2.secondary 172.19.0.8:8302 alive server 1.9.0 2 secondary +server-3.secondary 172.19.0.9:8302 alive server 1.9.0 2 secondary +server-1.primary 172.19.0.3:8302 alive server 1.9.0 2 primary +server-2.primary 172.19.0.4:8302 alive server 1.9.0 2 primary +server-3.primary 172.19.0.5:8302 alive server 1.9.0 2 primary +``` + +## Next steps + +In this tutorial, you learned how to restore your federated environment in the +event of a full outage of your primary datacenter. 
+
+You can follow [Provide Fault Tolerance with Redundancy Zones](/consul/tutorials/operate-consul/redundancy-zones) to learn how Consul
+Enterprise helps you in making the deployment more robust by using different
+zones (such as AWS Availability Zones) for your Consul deployment.
diff --git a/website/content/docs/manage/disaster-recovery/index.mdx b/website/content/docs/manage/disaster-recovery/index.mdx
new file mode 100644
index 000000000000..746d8ab0673f
--- /dev/null
+++ b/website/content/docs/manage/disaster-recovery/index.mdx
@@ -0,0 +1,103 @@
+---
+layout: docs
+page_title: Disaster recovery overview
+description: >-
+  Prepare for Consul disaster recovery using best practice recommendations. Implement a backup plan and a disaster recovery plan (DRP) to minimize downtime in case a disaster event happens in your deployment.
+---
+
+# Disaster recovery overview
+
+This topic provides an overview of the best practices for preparing a disaster recovery strategy for your Consul cluster.
+
+## Introduction
+
+Disaster recovery is an important part of business continuity planning. Your strategy depends on the following considerations:
+
+- **Recovery point objective (RPO)** - The maximum amount of data loss that can be incurred from a disaster, failure, or comparable event. RPO is measured as a unit of time and there is usually a 1-to-1 correlation between RPO and backup frequency.
+- **Recovery time objective (RTO)** - The amount of time that passes between application failure and full availability, which includes how much time it takes to recover from the disaster or failure. RTO can be relatively short for an organization that already has another datacenter location available for disaster recovery purposes and replicates services and data on a regular basis. You can also leverage automation technologies to recover more quickly from a disaster.
+
+When a Consul cluster loses quorum, or if you lose the server agents completely, restoration requires you to build a new Consul cluster from the latest snapshot. You may also need to reinstall and configure Consul client agents. Automation technologies can greatly reduce the amount of time that it takes to perform these steps. Keep these facts in mind when deciding between RPO and RTO.
+
+## Snapshot recommendations
+
+Our recommended method for backing up Consul state uses the [built-in Consul snapshot feature](/consul/commands/snapshot), which is available through the HTTP API or CLI. The tutorial [Backup Consul Data and State](/consul/tutorials/production-deploy/backup-and-restore) covers this in further detail.
+
+
+Consul snapshots contain extremely sensitive data, such as credentials in recoverable form. Store snapshots on an encrypted medium with sufficiently strict access controls in place.
+
+
+You should take snapshots of Consul clusters on a regular basis and store them on mounted or external storage. We suggest the use of object storage rather than block or file-based storage. For example, use Azure Blob Storage, Google Cloud Storage, or Amazon S3. Avoid local or ephemeral storage.
+
+You can automate this process using the [Consul Enterprise snapshot agent](/consul/commands/snapshot/agent).
+
+We recommend that you regularly test and validate the restore process for critical systems to ensure that everything works as expected.
This testing process is typically defined in a _Disaster recovery plan (DRP)_, which is a formal document created by an organization that contains the processes used to recover access to systems and data after a catastrophic event. DRPs typically also include a set of processes for testing and validating disaster recovery procedures and establish a regular cadence for these events.
+
+## ACL considerations
+
+When you restore a snapshot to a new Consul cluster, note the following behavior regarding tokens and the ACL system.
+
+| Token persistence enabled | ACL token provided in Consul client config | Consul client requires a re-configuration |
+| --- | --- | --- |
+| Yes | No | No |
+| No | Yes | No |
+| Yes | Yes | No |
+| No | No | Yes |
+
+- If [token persistence was enabled on client agents](/consul/docs/reference/agent/configuration-file/acl#acl_enable_token_persistence) when the snapshot was captured, then the client agents will resume function after you restore the cluster's server agents.
+- If [the ACL token is specified directly in the client agent configuration](/consul/docs/reference/agent/configuration-file/acl#acl_tokens_agent) when the snapshot was captured, then the client agents resume function after you restore the cluster's server agents.
+- If client agents were not configured in a way that persists access to a token, then client agents will not resume function after the restore because they do not have permissions to register with the new Consul cluster. This situation applies when the ACL token was set using the API or CLI, or if the ACL token was set in an environment variable. Use the [`consul acl set-agent-token` command](/consul/commands/acl/set-agent-token#agent) or the `CONSUL_HTTP_TOKEN` variable to update the token on client agents before you restore a cluster with a snapshot.
+
+## Service failure recommendations
+
+To architect against outages caused by disasters that impact services registered with Consul, use [cluster peering failover with sameness groups](/consul/docs/multi-tenant/sameness-group/vm). With this setup, Consul can transparently fail over requests for an unhealthy service to the same service in a different region and datacenter.
+
+## Region failure recommendations
+
+In the event of a total region failure, Consul and your services are likely down. To architect against this situation, [deploy Consul and your services in multiple regions with a global failover policy](/consul/tutorials/operate-consul/redundancy-zones) so that Consul reroutes network traffic to the alternate region during a disaster. Deploying identical Consul servers and services across multiple cloud regions satisfies datacenter latency requirements and limits the blast radius during large-scale disasters.
+
+## Multi-cluster disaster recovery considerations
+
+The disaster recovery considerations for single Consul cluster deployments apply to Consul multi-cluster deployments as well. However, there are a few additional considerations that are specific to Consul multi-cluster deployments using WAN federation.
+
+When you design and architect your WAN-federated Consul environment, it is important to consider the critical role of the primary datacenter in the multi-cluster deployment. The primary Consul datacenter serves as the source of truth for the following data.
+
+1. Certificate Authority management, if you use the built-in Consul CA. The root CA resides in the primary Consul datacenter and must sign the certificates for the additional Consul datacenters.
+1. ACLs
+1. Intentions
+
+It is important to consider both the placement of the primary Consul datacenter and the steps required to recover from a disaster. The recommended approach is reviewed in detail below.
+
+### Clientless primary Consul datacenter
+
+Once you establish and federate a primary Consul datacenter, you cannot migrate, change, or move it. An effective pattern for large Consul multi-cluster deployments is to have a dedicated primary Consul datacenter with the sole purpose of serving as a primary. You would only include Consul servers in this primary datacenter and not connect any client nodes or services. This primary Consul datacenter can then be federated normally with other Consul datacenters, which will each contain both servers and clients.
+
+This approach provides two distinct advantages.
+
+- It becomes easier to move the primary Consul datacenter. For example, you may want to migrate it from an on-premises datacenter to a cloud environment. Typically, this would entail performing a backup and restore of the primary Consul datacenter to the alternate location. Review the [Disaster Recovery for the Primary Datacenter tutorial](/consul/tutorials/datacenter-operations/recovery-outage-primary) for guidance on restoring a Consul cluster.
+- In the event of a disaster, the additional Consul datacenters can still continue to function independently of the primary Consul datacenter, although functionality will be reduced until the primary Consul datacenter is brought back online. See the table below for more details.
+
+### Primary Consul datacenter outage behaviors
+
+The table below assumes that the primary Consul datacenter is offline. References to 'any Consul datacenter' do not include the primary Consul datacenter.
+
+| Consul Cluster Functionality | Within local Consul datacenter | Within any Consul datacenter | Comments |
+| --- | --- | --- | --- |
+| Read ACLs | ✔ | ✔ | Assumes that the default `extend-cache` setting is used for the ACL down policy |
+| Create/Update/Delete ACLs | ✖ | ✖ | |
+| Read Intentions | ✔ | ✔ | Assumes that intentions were created when the primary datacenter was online |
+| Create/Update/Delete Intentions | ✖ | ✖ | |
+| Create/Read/Update/Delete KV Store items | ✔ | ✔ | |
+| Create/Read/Update/Delete Services | ✔ | ✔ | |
+| Certificate Generation & Renewal | ✖ | ✖ | Certificates must be signed by the primary Consul datacenter |
+
+## Additional guidance
+
+For more information on disaster recovery, including detailed instructions on how to back up and restore Consul datacenters, refer to the following resources:
+
+- [Consul Disaster Recovery for the Primary Datacenter](/consul/tutorials/datacenter-operations/recovery-outage-primary)
+- [Consul Outage Recovery](/consul/tutorials/datacenter-operations/recovery-outage)
+- [Consul Redundancy Zones](/consul/tutorials/operate-consul/redundancy-zones)
+- [Consul Backup & Restore](/consul/tutorials/production-deploy/backup-and-restore)
+- [Disaster Recovery for Consul on
+  Kubernetes](/consul/tutorials/kubernetes-production/kubernetes-disaster-recovery)
diff --git a/website/content/docs/manage/dns/forwarding/enable.mdx b/website/content/docs/manage/dns/forwarding/enable.mdx
new file mode 100644
index 000000000000..06b4040a314d
--- /dev/null
+++ b/website/content/docs/manage/dns/forwarding/enable.mdx
@@ -0,0 +1,452 @@
+---
+layout: docs
+page_title: Enable DNS forwarding
+description: >-
+  Learn how to configure different DNS servers to perform DNS forwarding to Consul servers.
+---
+
+# Enable DNS forwarding
+
+This page describes the process to enable DNS forwarding to Consul servers.
+
+You can apply these operations on every node where a Consul agent is running.
+
+## Requirements
+
+To enable DNS forwarding, your deployment must have the following:
+
+- A running Consul server instance
+- One or more Consul client nodes with registered services in the Consul catalog
+- The `iptables` command available, or one of the following local DNS servers:
+  - [systemd-resolved](#systemd-resolved)
+  - [BIND](#bind)
+  - [Dnsmasq](#dnsmasq)
+  - [Unbound](#unbound)
+  - [macOS system resolver](#macos)
+
+### Network address configuration
+
+The example configurations on this page assume that Consul's DNS server is listening on the loopback interface of the same node as the local DNS server.
+
+If Consul is not listening on the loopback IP, replace the references to `localhost` and `127.0.0.1` in the configuration and commands with the appropriate IP address for your environment.
+
+## systemd-resolved
+
+[`systemd-resolved`](https://www.freedesktop.org/software/systemd/man/latest/systemd-resolved.service.html) is a system service that provides network name resolution to local applications. It is the default local DNS server for many Linux distributions.
+
+To configure the `systemd-resolved` service so that it sends `.consul` domain queries to Consul, create a `consul.conf` file located in the `/etc/systemd/resolved.conf.d/` directory.
+
+
+
+
+Add a `[Resolve]` section to your resolved configuration.
+
+
+
+```ini
+[Resolve]
+DNS=127.0.0.1
+DNSSEC=false
+Domains=~consul
+```
+
+
+
+### Define port for Consul DNS server
+
+When using systemd 245 and older, you cannot specify port numbers in the `DNS` configuration field.
systemd-resolved only uses port `53`, which is a privileged port. + +When you cannot specify ports for the system's configuration, there are two workarounds: + - [Configure Consul DNS service to listen on port `53`](/consul/docs/reference/agent/configuration-file/general#dns_port) instead of `8600`. + - Map port `53` to `8600` using `iptables`. + +Binding to port `53` usually requires running Consul as a privileged user or running Linux with the `CAP_NET_BIND_SERVICE` capability. +When using the Consul Docker image, add the following to the environment to allow Consul to use the port: `CONSUL_ALLOW_PRIVILEGED_PORTS=yes`. + +To avoid running Consul as a privileged user, the following `iptables` commands are sufficient to map port `53` to `8600` and redirect DNS queries to Consul. + +```shell-session +# iptables --table nat --append OUTPUT --destination localhost --protocol udp --match udp --dport 53 --jump REDIRECT --to-ports 8600 && \ + iptables --table nat --append OUTPUT --destination localhost --protocol tcp --match tcp --dport 53 --jump REDIRECT --to-ports 8600 +``` + + + + + +Systemd 246 and newer allow you to specify the DNS port directly in the `systemd-resolved` configuration file. +Previous versions of systemd required iptables rules to direct DNS traffic to Consul. + +Add a `[Resolve]` section to your resolved configuration. + + + +```ini +[Resolve] +DNS=127.0.0.1:8600 +DNSSEC=false +Domains=~consul +``` + + + + + + +PTR record queries are still sent to the other configured resolvers, in addition to Consul. + +After creating the resolved configuration, restart `systemd-resolved`. + +```shell-session +# systemctl restart systemd-resolved +``` + +The command produces no output. + +### Validate the systemd-resolved configuration + +Validate that `systemd-resolved` is active. + +```shell-session +# systemctl is-active systemd-resolved +active +``` + +Verify that `systemd-resolved` is configured to forward queries for the `consul` domain to Consul. + +```shell-session +# resolvectl domain +Global: ~consul +Link 2 (eth0): +``` + +Verify that `systemd-resolved` is able to resolve the Consul server address. + +```shell-session +# resolvectl query consul.service.consul +consul.service.consul: 127.0.0.1 + +-- Information acquired via protocol DNS in 6.6ms. +-- Data is authenticated: no +``` + +Confirm that `/etc/resolv.conf` points to the `stub-resolv.conf` file managed by `systemd-resolved`. + +```shell-session +$ ls -l /etc/resolv.conf +lrwxrwxrwx 1 root root 37 Jul 14 10:10 /etc/resolv.conf -> /run/systemd/resolve/stub-resolv.conf +``` + +Confirm that the IP address for `systemd-resolved`'s stub resolver is the configured `nameserver`. + + + +```shell-session +$ cat /etc/resolv.conf +## This file is managed by man:systemd-resolved(8). Do not edit. +## +## This is a dynamic resolv.conf file for connecting local clients to the +## internal DNS stub resolver of systemd-resolved. This file lists all +## configured search domains. +## +## Run "resolvectl status" to see details about the uplink DNS servers +## currently in use. +## +## Third party programs must not access this file directly, but only through the +## symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a different way, +## replace this symlink by a static file or a different symlink. +## +## See man:systemd-resolved.service(8) for details about the supported modes of +## operation for /etc/resolv.conf. 
+ +nameserver 127.0.0.53 +options edns0 +``` + + + +Ensure that the operating system can resolve DNS queries to the `.consul` domain. + +```shell-session +$ host consul.service.consul +consul.service.consul has address 127.0.0.1 +``` + +### Using any local resolver with systemd + +By default, the local resolver stub in the `resolved.conf` file is configured to listen for UDP and TCP requests at `127.0.0.53:53`. However, you can set the `DNSStubListener` option to `false` so that your system can use any DNS configuration, as long as it loads earlier than `resolved`. + + + +```plaintext +DNSStubListener=false +``` + + + +### Configuring systemd-resolved for Docker + +By default, Docker replaces host files that use `localhost` addresses with its own DNS settings. This behavior may cause issues with `systemd-resolved` and Docker containers. + +To prevent Docker from replacing the host DNS settings, configure a stub resolver address in `systemd-resolved` and set the `dns` option in the `/etc/docker/daemon.json` file to the corresponding address. + + + +```plaintext +[Resolve] +DNS=127.0.0.1:8600 +DNSSEC=false +Domains=~consul +DNSStubListener=yes +DNSStubListenerExtra=172.17.0.1 +``` + + + + + +```json +{ + "dns": ["172.17.0.1"] + +} +``` + + + +After you make these changes, restart the `systemd-resolved` and `docker` services. + +## Dnsmasq + +Use [dnsmasq](http://www.thekelleys.org.uk/dnsmasq/doc.html) if you have a small network and need a lightweight DNS solution. + + + +If your distribution uses systemd, disable `systemd-resolved` before you follow these steps. + + + +Configure the `dnsmasq.conf` file or a series of files in the `/etc/dnsmasq.d` directory. Add server settings to your configuration file so that requests for the `consul` domain are forwarded to Consul DNS. + + + +```plaintext +# Enable forward lookup of the 'consul' domain: +server=/consul/127.0.0.1#8600 + +# Uncomment and modify as appropriate to enable reverse DNS lookups for +# common netblocks found in RFC 1918, 5735, and 6598: +#rev-server=0.0.0.0/8,127.0.0.1#8600 +#rev-server=10.0.0.0/8,127.0.0.1#8600 +#rev-server=100.64.0.0/10,127.0.0.1#8600 +#rev-server=127.0.0.1/8,127.0.0.1#8600 +#rev-server=169.254.0.0/16,127.0.0.1#8600 +#rev-server=172.16.0.0/12,127.0.0.1#8600 +#rev-server=192.168.0.0/16,127.0.0.1#8600 +#rev-server=224.0.0.0/4,127.0.0.1#8600 +#rev-server=240.0.0.0/4,127.0.0.1#8600 +# Accept DNS queries only from hosts whose address is on a local subnet. +#local-service +# Don't poll /etc/resolv.conf for changes. +#no-poll +# Don't read /etc/resolv.conf. Get upstream servers only from the command +# line or the dnsmasq configuration file (see the "server" directive below). +#no-resolv +# Specify IP address(es) of other DNS servers for queries not handled +# directly by consul. There is normally one 'server' entry set for every +# 'nameserver' parameter found in '/etc/resolv.conf'. See dnsmasq(8)'s +# 'server' configuration option for details. +#server=1.2.3.4 +#server=208.67.222.222 +#server=8.8.8.8 +# Set the size of dnsmasq's cache. The default is 150 names. Setting the +# cache size to zero disables caching. +#cache-size=65536 +``` + + + +Restart the `dnsmasq` service after creating the configuration. + +Refer to [`dnsmasq(8)`](http://www.thekelleys.org.uk/dnsmasq/docs/dnsmasq-man.html) for additional configuration settings such as specifying IP addresses for queries not handled directly by Consul. + + +## BIND + +[BIND](https://www.isc.org/bind/) is a robust DNS system. 
Its most prominent component, `named`, performs both of the main DNS server roles, acts as an authoritative name server for DNS zones, and is a recursive resolver in the network. + + + +If your distribution uses systemd, disable `systemd-resolved` before you follow these steps. + + + +To configure the BIND service to send `.consul` domain queries to Consul: + +1. Create a `named` configuration file with `DNSSEC` disabled. +1. Create a zone configuration file to manage the `.consul` domain. + +### Named configuration file + +Edit the `/etc/named.conf` to configure your BIND instance. Remember to disable `DNSSEC` so that Consul and BIND can communicate. Add an `include` section to include the zone file that you create in the next step. + +The following example shows a BIND configuration with `DNSSEC` disabled. + + + +```plaintext +options { + listen-on port 53 { 127.0.0.1; }; + listen-on-v6 port 53 { ::1; }; + directory "/var/named"; + dump-file "/var/named/data/cache_dump.db"; + statistics-file "/var/named/data/named_stats.txt"; + memstatistics-file "/var/named/data/named_mem_stats.txt"; + allow-query { localhost; }; + recursion yes; + + dnssec-enable no; + dnssec-validation no; + + /* Path to ISC DLV key */ + bindkeys-file "/etc/named.iscdlv.key"; + + managed-keys-directory "/var/named/dynamic"; +}; + +include "/etc/named/consul.conf"; +``` + + + +### Zone configuration file + +Set up a zone for your Consul-managed records in `consul.conf`. + + + +```dns-zone-file +zone "consul" IN { + type forward; + forward only; + forwarders { 127.0.0.1 port 8600; }; +}; +``` + + + +## Unbound + +Use [Unbound](https://www.unbound.net/) when you need a fast and lean DNS resolver for Linux and macOS. + + + +If your distribution uses systemd, disable `systemd-resolved` before you follow these steps. + + + + +The following example demonstrates a configuration for the `consul.conf` file in the `/etc/unbound/unbound.conf.d` directory. + +Add `server` and `stub-zone` settings to your Unbound configuration file. + + + +```plaintext +#Allow insecure queries to local resolvers +server: + do-not-query-localhost: no + domain-insecure: "consul" + +#Add consul as a stub-zone +stub-zone: + name: "consul" + stub-addr: 127.0.0.1@8600 +``` + + + +You may have to add the following line to the bottom of your `/etc/unbound/unbound.conf` file for the new configuration to be included. + + + +```plaintext +include: "/etc/unbound/unbound.conf.d/*.conf" +``` + + + +## iptables + +[iptables](https://www.netfilter.org/projects/iptables/index.html) is a generic firewalling software that allows you to define traffic rules for your system. + +If you do not have a local DNS server on the Consul agent node, use `iptables` to forward DNS requests on port `53` to the Consul agent running on the same machine without using a secondary service. + +This configuration realizes full DNS forwarding, which means that all DNS queries for the host are forwarded to Consul, not just the ones for the `.consul` top level domain. Consul's default configuration resolves only the `.consul` top level domain, so you must set the [`recursors`](/consul/docs/reference/agent/configuration-file/general#recursors) flag if you want your node to be able to resolve other domains when using `iptables` configuration. + +If you use DNS relay hosts in your network, do not place them on the same host as Consul. The redirects may intercept the traffic. + +### Configure Consul recursors + +Add recursors to your Consul configuration. 
+ + + +```hcl +# DNS recursors +recursors = [ "1.1.1.1" ] +``` + + + +Recursors should not include the `localhost` address because the `iptables` redirects would intercept the requests. + +You can replace the `1.1.1.1` address in the example with another DNS server address. This is suitable for situations where an external DNS +service is already running in your infrastructure and is used as the recursor. + +### Create iptables rules + +After you configure Consul to use a valid recursor, add rules to `iptables` to redirect traffic from port `53` to port `8600`. + +```shell-session +# iptables -t nat -A PREROUTING -p udp -m udp --dport 53 -j REDIRECT --to-ports 8600 \ + iptables -t nat -A PREROUTING -p tcp -m tcp --dport 53 -j REDIRECT --to-ports 8600 \ + iptables -t nat -A OUTPUT -d localhost -p udp -m udp --dport 53 -j REDIRECT --to-ports 8600 \ + iptables -t nat -A OUTPUT -d localhost -p tcp -m tcp --dport 53 -j REDIRECT --to-ports 8600 +``` + +## macOS + +On macOS systems, use the macOS system resolver to point all `.consul` requests to Consul. +The `man 5 resolver` command describes this feature in more detail. + +The following instructions require `sudo` or root access. + +To configure the macOS system resolver to forward DNS queries to Consul, add a resolver entry in the `/etc/resolver/` directory that points at the Consul agent. + +If you do not have this folder, create it. + +```shell-session +# mkdir -p /etc/resolver +``` + +Create a new file `/etc/resolver/consul` with `nameserver` and `port` entries. + + + +```plaintext +nameserver 127.0.0.1 +port 8600 +``` + + + +The configuration informs the macOS resolver daemon to forward all `.consul` TLD requests to `127.0.0.1` on port `8600`. + +## Next steps + +This instructions on this page helped you configure your node to forward DNS requests to Consul. + +To learn more on how to query Consul DNS once forwarding is enabled, refer to [DNS forwarding workflow](/consul/docs/services/discovery/dns-forwarding#workflow). + +For more information on other DNS features and configurations available in Consul, refer to [DNS usage overview](/consul/docs/discover/dns). diff --git a/website/content/docs/manage/dns/forwarding/index.mdx b/website/content/docs/manage/dns/forwarding/index.mdx new file mode 100644 index 000000000000..ae262ea605ef --- /dev/null +++ b/website/content/docs/manage/dns/forwarding/index.mdx @@ -0,0 +1,190 @@ +--- +layout: docs +page_title: DNS forwarding +description: -> + Learn how to configure your local DNS servers to perform DNS forwarding to Consul servers. +--- + +# DNS forwarding + +This page describes the process to configure different DNS servers to enable DNS forwarding to Consul servers. + +You may apply these operations on every node where a Consul agent is running. + +## Introduction + +You deployed a Consul datacenter and want to use Consul DNS interface for name resolution. + +When configured with default values, Consul exposes the DNS interface on port +8600. By default, Consul serves DNS from port 53. On most operating systems, +this requires elevated privileges. It is also common, for most operating +systems, to have a local DNS server already running on port 53. + +Instead of running Consul with an administrative or root account, you may forward appropriate queries to Consul, running on an unprivileged port, from another DNS server or using port redirect. 
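+
+For example, before you configure any forwarding, you can verify that the Consul DNS interface answers on its default port by querying the agent directly. The addresses returned depend on your environment.
+
+```shell-session
+$ dig @127.0.0.1 -p 8600 consul.service.consul +short
+10.0.4.140
+10.0.4.121
+10.0.4.9
+```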
+ +There are two configurations for a node's DNS forwarding behavior: + + - **Conditional DNS forwarding**: The local DNS servers are configured to forward to Consul only queries relative to the `.consul` zone. All other queries are still served via the default DNS server in the node. + - **Full DNS forwarding**: Consul serves all DNS queries and forwards to a remote DNS server the ones outside `.consul` domain. + +### Conditional DNS forwarding + +We recommend the conditional DNS forwarding approach. This configuration lowers the Consul agent's resource consumption by limiting the number of DNS requests it handles. + +![Consul DNS conditional forwarding - Only .consul requests are routed to Consul](/img/consul-dns-conditional-forwarding.png#light-theme-only) +![Consul DNS conditional forwarding - Only .consul requests are routed to Consul](/img/consul-dns-conditional-forwarding-dark.png#dark-theme-only) + +In this configuration, Consul only serves queries relative to the `.consul` domain. There is no unnecessary load on Consul servers to serve queries from different domains. + +This behavior is not enabled by default. + +### Full DNS forwarding + +This approach can be useful in scenarios where the Consul agent's node is allocated limited resources and you want to avoid the overhead of running a local DNS server. In this configuration, Consul serves all DNS queries for all domains and forwards the ones outside the `.consul` domain to one or more configured forwarder servers. + +![Consul DNS forwarding - All requests are routed to Consul](/img/consul-dns-forwarding.png#light-theme-only) +![Consul DNS forwarding - All requests are routed to Consul](/img/consul-dns-forwarding-dark.png#dark-theme-only) + +This behavior is not enabled by default. Consul standard configuration only resolves DNS records inside the `.consul` zone. To enable DNS forwarding, you need to set the [recursors](/consul/docs/reference/agent/configuration-file/general#recursors) option in your Consul configuration. + +In this scenario, if a Consul DNS reply includes a `CNAME` record pointing outside the `.consul` top level domain, then the DNS reply only includes `CNAME` records by default. + +When `recursors` is set and the upstream resolver is functioning correctly, Consul tries to resolve CNAMEs and include any records (for example, A, AAAA, PTR) for them in its DNS reply. In these scenarios, Consul is used for full DNS forwarding and is able to serve queries for all domains. + +## Workflow + +To use DNS forwarding in Consul deployments, complete the following steps: + +1. Configure the local DNS service to enable DNS forwarding to Consul. Follow the instructions for one of the following services: + + - [systemd-resolved](/consul/docs/services/discovery/dns-forwarding/enable#systemd-resolved) + - [BIND](/consul/docs/services/discovery/dns-forwarding/enable#bind) + - [Dnsmasq](/consul/docs/services/discovery/dns-forwarding/enable#dnsmasq) + - [Unbound](/consul/docs/services/discovery/dns-forwarding/enable#unbound) + - [iptables](/consul/docs/services/discovery/dns-forwarding/enable#iptables) + - [macOS system resolver](/consul/docs/services/discovery/dns-forwarding/enable#macOS) + +1. Query the Consul DNS to confirm that DNS forwarding functions correctly. 
+ + ```shell-session + $ dig consul.service.consul A + + ; <<>> DiG 9.16.48-Debian <<>> consul.service.consul A + ;; global options: +cmd + ;; Got answer: + ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 51736 + ;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1 + + ;; OPT PSEUDOSECTION: + ; EDNS: version: 0, flags:; udp: 65494 + ;; QUESTION SECTION: + ;consul.service.consul. IN A + + ;; ANSWER SECTION: + consul.service.consul. 0 IN A 10.0.4.140 + consul.service.consul. 0 IN A 10.0.4.121 + consul.service.consul. 0 IN A 10.0.4.9 + + ;; Query time: 4 msec + ;; SERVER: 127.0.0.53#53(127.0.0.53) + ;; WHEN: Wed Jun 26 20:47:05 UTC 2024 + ;; MSG SIZE rcvd: 98 + + ``` + +1. Optionally, verify reverse DNS. + + ```shell-session + $ dig 140.4.0.10.in-addr.arpa. PTR + + ; <<>> DiG 9.16.48-Debian <<>> 140.4.0.10.in-addr.arpa. PTR + ;; global options: +cmd + ;; Got answer: + ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 35085 + ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1 + + ;; OPT PSEUDOSECTION: + ; EDNS: version: 0, flags:; udp: 65494 + ;; QUESTION SECTION: + ;140.4.0.10.in-addr.arpa. IN PTR + + ;; ANSWER SECTION: + 140.4.0.10.in-addr.arpa. 0 IN PTR consul-server-0.node.dc1.consul. + + ;; Query time: 0 msec + ;; SERVER: 127.0.0.53#53(127.0.0.53) + ;; WHEN: Wed Jun 26 20:47:57 UTC 2024 + ;; MSG SIZE rcvd: 97 + + ``` + + Use the `short` option for `dig` to only get the node name instead of the full output. + + ```shell-session + $ dig +short -x 10.0.4.140 + consul-server-0.node.dc1.consul. + ``` + +## Troubleshooting + +If your DNS server does not respond but you do get an answer from Consul, turn on your DNS server's query log to check for errors. + +### systemd-resolved + +Enable query logging for `systemd-resolved`: + +```shell-session +# resolvectl log-level debug +``` + +Check query log: + +```shell-session +# journalctl -r -u systemd-resolved +``` + +Disable query logging: + +```shell-session +# resolvectl log-level info +``` + +DNS forwarding may fail if you use the default `systemd-resolved` configuration and attempt to bind to `0.0.0.0`. The default configuration uses a DNS stub that listens for UDP and TCP requests at `127.0.0.53`. As a result, attempting to bind to `127.0.0.53` conflicts with the running stub. You can disable the stub as described in the [Using any local resolver with systemd](/consul/docs/services/discovery/dns-forwarding/enable#using-any-local-resolver-with-systemd) section to troubleshoot this problem. + +### Dnsmasq + +To enable query log refer to [Dnsmasq documentation](https://thekelleys.org.uk/dnsmasq/docs/dnsmasq-man.html). + +In particular, look for the `log-queries` and `log-facility` configuration option. + +When query log is enabled, it is possible to force Dnsmasq to emit a full cache dump using the `SIGUSR1` signal. + +### BIND + +Enable query log: + +```shell-session +$ rndc querylog +``` + +Check logs: + +```shell-session +$ tail -f /var/log/messages +``` + +The log may show errors like this: + + + +```plaintext +error (no valid RRSIG) resolving +error (no valid DS) resolving +``` + + + +This error indicates that `DNSSEC` is not disabled properly. + +If you receive errors about network connections, verify that there are no firewall +or routing problems between the servers running BIND and Consul. 
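+
+To separate BIND problems from network problems, query the Consul agent directly from the node that runs BIND, over both UDP and TCP. If these queries succeed while queries through BIND fail, the issue is in the BIND configuration rather than on the path to Consul. Adjust the address if Consul is not listening on the loopback interface.
+
+```shell-session
+$ dig @127.0.0.1 -p 8600 consul.service.consul A
+$ dig @127.0.0.1 -p 8600 +tcp consul.service.consul A
+```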
diff --git a/website/content/docs/manage/dns/forwarding/k8s.mdx b/website/content/docs/manage/dns/forwarding/k8s.mdx new file mode 100644 index 000000000000..324697a2f66e --- /dev/null +++ b/website/content/docs/manage/dns/forwarding/k8s.mdx @@ -0,0 +1,259 @@ +--- +layout: docs +page_title: Resolve Consul DNS requests in Kubernetes +description: >- + Use a k8s ConfigMap to configure KubeDNS or CoreDNS so that you can use Consul's `.service.consul` syntax for queries and other DNS requests. In Kubernetes, this process uses either stub-domain or proxy configuration. +--- + +# Resolve Consul DNS requests in Kubernetes + +This topic describes how to configure Consul DNS in +Kubernetes using a +[stub-domain configuration](https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#configure-stub-domain-and-upstream-dns-servers) +if using KubeDNS or a [proxy configuration](https://coredns.io/plugins/forward/) if using CoreDNS. + +Once configured, DNS requests in the form `.service.consul` will +resolve for services in Consul. This works from all Kubernetes namespaces. + +-> **Note:** If you want requests to just `` (without the `.service.consul`) to resolve, then you'll need +to turn on [Consul to Kubernetes Service Sync](/consul/docs/k8s/service-sync#consul-to-kubernetes). + +## Consul DNS Cluster IP + +To configure KubeDNS or CoreDNS you'll first need the `ClusterIP` of the Consul +DNS service created by the [Helm chart](/consul/docs/reference/k8s/helm). + +The default name of the Consul DNS service will be `consul-dns`. Use +that name to get the `ClusterIP`: + +```shell-session +$ kubectl get svc consul-dns --output jsonpath='{.spec.clusterIP}' +10.35.240.78% +``` + +For this installation the `ClusterIP` is `10.35.240.78`. + +-> **Note:** If you've installed Consul using a different helm release name than `consul` +then the DNS service name will be `-consul-dns`. + +## KubeDNS + +If using KubeDNS, you need to create a `ConfigMap` that tells KubeDNS +to use the Consul DNS service to resolve all domains ending with `.consul`: + +Export the Consul DNS IP as an environment variable: + +```bash +export CONSUL_DNS_IP=10.35.240.78 +``` + +And create the `ConfigMap`: + +```shell-session +$ cat < **Note:** The `stubDomain` can only point to a static IP. If the cluster IP +of the Consul DNS service changes, then it must be updated in the config map to +match the new service IP for this to continue +working. This can happen if the service is deleted and recreated, such as +in full cluster rebuilds. + +-> **Note:** If using a different zone than `.consul`, change the stub domain to +that zone. + +Now skip ahead to the [Verifying DNS Works](#verifying-dns-works) section. + +## CoreDNS Configuration + +If using CoreDNS instead of KubeDNS in your Kubernetes cluster, you will +need to update your existing `coredns` ConfigMap in the `kube-system` namespace to +include a `forward` definition for `consul` that points to the cluster IP of the +Consul DNS service. + +Edit the `ConfigMap`: + +```shell-session +$ kubectl edit configmap coredns --namespace kube-system +``` + +And add the `consul` block below the default `.:53` block and replace +`` with the DNS Service's IP address you +found previously. + +```diff +apiVersion: v1 +kind: ConfigMap +metadata: + labels: + addonmanager.kubernetes.io/mode: EnsureExists + name: coredns + namespace: kube-system +data: + Corefile: | + .:53 { + + } ++ consul { ++ errors ++ cache 30 ++ forward . 
++ } +``` + +-> **Note:** The consul proxy can only point to a static IP. If the cluster IP +of the `consul-dns` service changes, then it must be updated to the new IP to continue +working. This can happen if the service is deleted and recreated, such as +in full cluster rebuilds. + +-> **Note:** If using a different zone than `.consul`, change the key accordingly. + +## OpenShift clusters + +-> **Note:** OpenShift CLI `oc` is utilized below complete the following steps. You can find more details on how to install OpenShift CLI from [Getting started with OpenShift CLI](https://docs.openshift.com/container-platform/latest/cli_reference/openshift_cli/getting-started-cli.html). + +You can use DNS forwarding to override the default forwarding configuration in the `/etc/resolv.conf` file by specifying the `consul-dns` service for the `consul` subdomain (zone). + +First, find out the `consul-dns` service clusterIP: + +```shell-session +$ oc get svc consul-dns --namespace consul --output jsonpath='{.spec.clusterIP}' +172.30.186.254 +``` + +Take note of that IP address, and then edit the `default` DNS Operator: + +```shell-session +$ oc edit edit dns.operator/default +``` + +Append the following `servers` section entry to the `spec` section of the DNS Operator configuration: + +```yaml +spec: + servers: + - name: consul-server + zones: + - consul + forwardPlugin: + policy: Random + upstreams: + - 172.30.186.254 # Set to clusterIP of consul-dns service +``` + +Save the configuration changes. Next, verify the `dns-default` configmap has been updated with a `consul` forwarding zone: + +```shell-session +$ oc get configmap/dns-default -n openshift-dns -o yaml + + +... +data: + Corefile: | + # consul-server + consul:5353 { + prometheus 127.0.0.1:9153 + forward . 172.30.186.254 { + policy random + } + errors + log . { + class error + } + bufsize 1232 + cache 900 { + denial 9984 30 + } + } +... +``` + +## Verifying DNS Works + +To verify DNS works, run a simple job to query DNS. Save the following +job to the file `job.yaml` and run it: + + + +```yaml +apiVersion: batch/v1 +kind: Job +metadata: + name: dns +spec: + template: + spec: + containers: + - name: dns + image: anubhavmishra/tiny-tools + command: ['dig', 'consul.service.consul'] + restartPolicy: Never + backoffLimit: 4 +``` + + + +```shell-session +$ kubectl apply --filename job.yaml +``` + +Then query the pod name for the job and check the logs. You should see +output similar to the following showing a successful DNS query. If you see +any errors, then DNS is not configured properly. + +```shell-session +$ kubectl get pods --show-all | grep dns +dns-lkgzl 0/1 Completed 0 6m + +$ kubectl logs dns-lkgzl +; <<>> DiG 9.11.2-P1 <<>> consul.service.consul +;; global options: +cmd +;; Got answer: +;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 4489 +;; flags: qr aa rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 4 + +;; OPT PSEUDOSECTION: +; EDNS: version: 0, flags:; udp: 4096 +;; QUESTION SECTION: +;consul.service.consul. IN A + +;; ANSWER SECTION: +consul.service.consul. 0 IN A 10.36.2.23 +consul.service.consul. 0 IN A 10.36.4.12 +consul.service.consul. 0 IN A 10.36.0.11 + +;; ADDITIONAL SECTION: +consul.service.consul. 0 IN TXT "consul-network-segment=" +consul.service.consul. 0 IN TXT "consul-network-segment=" +consul.service.consul. 
0 IN TXT "consul-network-segment=" + +;; Query time: 5 msec +;; SERVER: 10.39.240.10#53(10.39.240.10) +;; WHEN: Wed Sep 12 02:12:30 UTC 2018 +;; MSG SIZE rcvd: 206 +``` diff --git a/website/content/docs/manage/dns/views/enable.mdx b/website/content/docs/manage/dns/views/enable.mdx new file mode 100644 index 000000000000..113724768b65 --- /dev/null +++ b/website/content/docs/manage/dns/views/enable.mdx @@ -0,0 +1,106 @@ +--- +layout: docs +page_title: Enable Consul DNS proxy for Kubernetes +description: -> + Learn how to schedule a Consul DNS proxy for a Kubernetes Pod so that your services can return Consul DNS results for service discovery. +--- + +# Enable Consul DNS proxy for Kubernetes + +This page describes the process to deploy a Consul DNS proxy in a Kubernetes pod so that services can resolve Consul DNS requests. For more information, refer to [Consul DNS views for Kubernetes](/consul/docs/manage/dns/views). + +## Prerequisites + +You must meet the following minimum application versions to enable the Consul DNS proxy for Kubernetes: + +- Consul v1.20.0 or higher +- Either Consul on Kubernetes or the Consul Helm chart, v1.6.0 or higher + +## Update Helm values + +To enable the Consul DNS proxy, add the required [Helm values](/consul/docs/reference/k8s/helm) to your Consul on Kubernetes deployment. + +```yaml +connectInject: + enabled: true +dns: + enabled: true + proxy: true +``` + +### ACLs + +We recommend that you create a dedicated [ACL token with DNS +permissions](/consul/docs/secure/acl/token/dns) for the Consul DNS proxy. The +Consul DNS proxy requires these ACL permissions. + +```hcl +node_prefix "" { + policy = "read" +} + +service_prefix "" { + policy = "read" +} +``` + +Manage ACL tokens with Consul on Kubernetes, or configure the DNS proxy to +access a token stored in Kubernetes secret. To use a Kubernetes secret, add the +following configuration to your Helm chart. + +```yaml +dns: + proxy: + aclToken: + secretName: + secretKey: +``` + +## Retrieve Consul DNS proxy's address + +To look up the IP address for the Consul DNS proxy in the Kubernetes pod, run the following command. + +```shell-session +$ kubectl get services –-all-namespaces --selector="app=consul,component=dns-proxy" --output jsonpath='{.spec.clusterIP}' +10.96.148.46 +``` + +Use this address when you update the ConfigMap resource. + +## Update Kubernetes ConfigMap + +Create or update a [ConfigMap object in the Kubernetes cluster](https://kubernetes.io/docs/concepts/configuration/configmap/) so that Kubernetes forwards DNS requests with the `.consul` domain to the IP address of the Consul DNS proxy. + +The following example of a `coredns-custom` ConfigMap configures Kubernetes to forward Consul DNS requests in the cluster to the Consul DNS Proxy running on `10.96.148.46`. This resource modifies the CoreDNS without modifications to the original `Corefile`. + +```yaml +kind: ConfigMap +metadata: + name: coredns-custom + namespace: kube-system +data: + consul.server: | + consul:53 { + errors + cache 30 + forward . 10.96.148.46 + reload + } +``` + +After updating the DNS configuration, perform a rolling restart of the CoreDNS service. + +```shell-session +kubectl -n kube-system rollout restart deployment coredns +``` + +For more information about using a `coredns-custom` resource, refer to the [Rewrite DNS guide in the Azure documentation](https://learn.microsoft.com/en-us/azure/aks/coredns-custom#rewrite-dns). 
For general information about modifying a ConfigMap, refer to [the Kubernetes documentation](https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#coredns). + +## Next steps + +After you enable the Consul DNS proxy, services in the Kubernetes cluster can resolve Consul DNS addresses. + +- To learn more about Consul DNS for service discovery, refer to [DNS usage overview](/consul/docs/discover/dns). +- If your datacenter has ACLs enabled, create a [Consul ACL token](/consul/docs/secure/acl/token) for the Consul DNS proxy and then restart the DNS proxy. +- To enable service discovery across admin partitions, [export services between partitions](/consul/docs/reference/config-entry/exported-services). +- To use Consul DNS for service discovery with other runtimes, across cloud regions, or between cloud providers, [establish a cluster peering connection](/consul/docs/east-west/cluster-peering/establish/k8s). diff --git a/website/content/docs/manage/dns/views/index.mdx b/website/content/docs/manage/dns/views/index.mdx new file mode 100644 index 000000000000..71186db4172a --- /dev/null +++ b/website/content/docs/manage/dns/views/index.mdx @@ -0,0 +1,58 @@ +--- +layout: docs +page_title: Consul DNS views for Kubernetes +description: -> + Kubernetes clusters can use the Consul DNS proxy to return service discovery results from the Consul catalog. Learn about how to configure your Kubernetes cluster so that applications can resolve Consul DNS addresses without gossip communication. +--- + +# Consul DNS views for Kubernetes + +This page describes how to schedule a dedicated Consul DNS proxy in a Kubernetes Pod so that applications in Kubernetes can resolve Consul DNS addresses. You can use the Consul DNS proxy to enable service discovery across admin partitions in Kubernetes deployments without needing to deploy Consul client agents. + +## Introduction + +Kubernetes operators typically choose networking tools such as [kube-dns](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/) or [CoreDNS](https://kubernetes.io/docs/tasks/administer-cluster/coredns/) for their service discovery operations, and choose to bypass Consul DNS entirely. These DNS options are often sufficient for service networking operations within a single Kubernetes cluster. + +Consul on Kubernetes supports [configuring Kubernetes to resolve Consul DNS](/consul/docs/k8s/dns). However, two common challenges result when you rely on these configurations: + +- Kubernetes requires Consul to use gossip communication with agents or dataplanes in order to enable Consul DNS. +- Consul requires that admin partitions be included in the DNS address. Otherwise, DNS queries assume the `default` partition by default. + +The `consul-dns` proxy does not require the presence of Consul client agents or Consul dataplanes, removing gossip communication as a requirement for Consul DNS on Kubernetes. The proxy is also designed for deployment in a Kubernetes cluster with [external servers enabled](/consul/docs/deploy/server/k8s/external). When a cluster runs in a non-default admin partition and uses the proxy to query external servers, Consul automatically recognizes the admin partition that originated the request and returns service discovery results scoped to that specific admin partition. + +To use Consul DNS for service discovery on Kubernetes, deploy a `dns-proxy` service in each Kubernetes Pod that needs to resolve Consul DNS. Kubernetes sends all DNS requests to the Kubernetes controller first. 
The controller forwards requests for the `.consul` domain to the `dns-proxy` service, which then queries the Consul catalog and returns service discovery results. + +## Workflows + +The process to enable Consul DNS views for service discovery in Kubernetes deployments consists of the following steps: + +1. In a cluster configured to use [external Consul servers](/consul/docs/deploy/server/k8s/external), update the Helm values for your Consul on Kubernetes deployment so that `dns.proxy.enabled=true`. When you apply the updated configuration, Kubernetes deploys the Consul DNS proxy. +1. Look up the IP address for the Consul DNS proxy in the Kubernetes cluster. +1. Update the ConfigMap resource in the Kubernetes cluster so that it forwards requests for the `.consul` domain to the IP address of the Consul DNS proxy. + +For more information about the underlying concepts described in this workflow, refer to [DNS forwarding overview](/consul/docs/manage/dns/forwarding). + +### OpenShift clusters + +The process to enable Consul DNS views for service discovery in OpenShift deployments is similar to the process on Kubernetes clusters. Complete the following steps: + +1. Look up the IP address for the Consul DNS proxy in the OpenShift cluster. +1. Update the default DNS Operator configuration in the OpenShift cluster so that it forwards requests for the `.consul` domain to the K8s cluster IP address of the Consul DNS service. +1. Optionally, [test the DNS resolver](/consul/docs/manage/dns/forwarding/k8s#verifying-dns-works) to make sure it was configured correctly. + +## Benefits + +Consul on Kubernetes currently uses [Consul dataplanes](/consul/docs/architecture/control-plane/dataplane) by default. These lightweight processes provide Consul access to the sidecar proxies in the service mesh but leave Kubernetes in charge of most other service discovery and service mesh operations. + +- **Use Kubernetes DNS and Consul DNS in a single deployment**. The Consul DNS proxy enables any application in a Pod to resolve an address through Consul DNS without disrupting the underlying Kubernetes DNS functionality. +- **Consul service discovery using fewer resources**. When you use the Consul DNS proxy for service discovery, you do not need to schedule Consul client agents or dataplanes as sidecars. One Kubernetes Service that uses the same resources as a single Consul dataplane provides Pods access to the Consul service catalog. +- **Consul DNS without gossip communication**. The Consul DNS service runs on both Consul server and Consul client agents, which use [gossip communication](/consul/docs/secure/encryption/gossip/enable) to ensure that service discovery results are up-to-date. The Consul DNS proxy provides access to Consul DNS without the security overhead of agent-to-agent gossip. + +## Constraints and limitations + +If you experience issues using the Consul DNS proxy for Kubernetes, refer to the following list of technical constraints and limitations: + +- You must use Kubernetes as your runtime to use the Consul DNS proxy. You cannot schedule the Consul DNS proxy in other container-based environments. +- To perform DNS lookups on other admin partitions, you must [export services + between partitions](/consul/docs/reference/config-entry/exported-services) + before you can query them. 
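For reference, the following `exported-services` configuration entry is a minimal sketch of the export step described in the last constraint. The `api` service and the `web` consumer partition are placeholder names; replace them with the services and partitions in your environment.

```hcl
Kind = "exported-services"
Name = "default" # Name of the admin partition that owns the exported services

Services = [
  {
    Name = "api" # Placeholder service to share with another partition
    Consumers = [
      {
        Partition = "web" # Placeholder partition that is allowed to discover the service
      }
    ]
  }
]
```

Apply the entry with `consul config write`, or with the equivalent CRD on Kubernetes, before issuing cross-partition DNS queries.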
diff --git a/website/content/docs/manage/index.mdx b/website/content/docs/manage/index.mdx new file mode 100644 index 000000000000..c5d93bd5d2b3 --- /dev/null +++ b/website/content/docs/manage/index.mdx @@ -0,0 +1,50 @@
---
layout: docs
page_title: Manage Consul
description: >-
  You can control several cluster-wide operations for all Consul agents. Learn about Consul operations and how to configure their underlying infrastructure, including DNS forwarding and operations at scale.
---

# Manage Consul

This topic provides an overview of the processes involved in managing a group of server agents running in a Consul cluster. [Consul Enterprise](/consul/docs/enterprise) provides additional features to make deployments more resilient and help you manage cluster operations.

## Introduction

After you start the Consul servers, you can control several cluster-wide operations for all agents.

Additional operations can improve a cluster's resiliency, help you prepare for recovery in case of a disaster, or fine-tune agents when operating at scale.

## DNS forwarding

Consul exposes its DNS interface on port `8600`. Consul DNS runs alongside a system's default DNS server, commonly exposed on port `53`. To use Consul DNS in your network, you must configure the local service to forward DNS requests to Consul.

Refer to the following topics for more information.

- [DNS forwarding overview](/consul/docs/manage/dns/forwarding)
- [Enable DNS forwarding on VMs](/consul/docs/manage/dns/forwarding/enable)
- [Enable DNS forwarding on Kubernetes](/consul/docs/manage/dns/forwarding/k8s)

### DNS views

Kubernetes users also have the option to run a lightweight Consul DNS proxy so that services can resolve Consul DNS addresses without client agents or dataplanes. For more information, refer to [Consul DNS views](/consul/docs/manage/dns/views).

## Consul at scale

Consul datacenters can support thousands of nodes as members. When operating Consul at this scale, there are additional considerations for you to take into account as you architect your network. For more information, refer to [Recommendations for operating Consul at scale](/consul/docs/manage/scale).

### Enterprise features

Consul Enterprise supports the following features to improve the resiliency of your Consul datacenter and its operations.

- [Automated backups](/consul/docs/manage/scale/automated-backup). Run an automated Consul snapshot agent to regularly back up your datacenter to a cloud storage bucket.
- [Read replicas](/consul/docs/manage/scale/read-replica). Deploy additional Consul server agents to improve catalog response time at scale.
- [Redundancy zones](/consul/docs/manage/scale/redundancy-zone). Deploy Consul servers across availability zones in a single cloud region in case a single physical data center fails.

## Agent rate limits

You can configure rate limits on RPC and gRPC traffic for Consul agents to mitigate the risks to Consul servers when client agents or services send excessive read or write requests to Consul resources. For more information, refer to [Agent traffic rate limiting](/consul/docs/manage/rate-limit).

## Disaster recovery

If a datacenter goes down, you can use snapshots to restore a Consul cluster's operations and registered services. For more information, refer to [disaster recovery](/consul/docs/manage/disaster-recovery).
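As a quick illustration of the snapshot workflow referenced above, the following commands sketch taking and restoring a backup. The `backup.snap` filename is arbitrary, and the commands assume the CLI can reach a Consul server with a token that has sufficient ACL permissions.

```shell-session
$ consul snapshot save backup.snap
$ consul snapshot restore backup.snap
```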
\ No newline at end of file diff --git a/website/content/docs/manage/rate-limit/global.mdx b/website/content/docs/manage/rate-limit/global.mdx new file mode 100644 index 000000000000..0a2daea9308b --- /dev/null +++ b/website/content/docs/manage/rate-limit/global.mdx @@ -0,0 +1,62 @@ +--- +layout: docs +page_title: Set a global limit on traffic rates +description: Use global rate limits to prevent excessive rates of requests to Consul servers. +--- + +# Set a global limit on traffic rates + +This topic describes how to configure rate limits for RPC and gRPC traffic to the Consul server. + +## Introduction + +Rate limits apply to each Consul server separately and limit the number of read requests or write requests to the server on the RPC and internal gRPC endpoints. + +Because all requests coming to a Consul server eventually perform an RPC or an internal gRPC request, global rate limits apply to Consul's user interfaces, such as the HTTP API interface, the CLI, and the external gRPC endpoint for services in the service mesh. + +Refer to [Initialize Rate Limit Settings](/consul/docs/agent/limits/init-rate-limits) for additional information about right-sizing your gRPC request configurations. + +## Set a global rate limit for a Consul server + +Configure the following settings in your Consul server configuration to limit the RPC and gRPC traffic rates. + +- Set the rate limiter [`mode`](/consul/docs/reference/agent/configuration-file/general#mode-1) +- Set the [`read_rate`](/consul/docs/reference/agent/configuration-file/general#read_rate) +- Set the [`write_rate`](/consul/docs/reference/agent/configuration-file/general#write_rate) + +In the following example, the Consul server is configured to prevent more than `500` read and `200` write RPC calls: + + + +```hcl +limits = { + rate_limit = { + mode = "enforcing" + read_rate = 500 + write_rate = 200 + } +} +``` + +```json +{ + "limits" : { + "rate_limit" : { + "mode" : "enforcing", + "read_rate" : 500, + "write_rate" : 200 + } + } +} + +``` + + + +## Monitor request rate traffic + +You should continue to monitor request traffic to ensure that request rates remain within the threshold you defined. Refer to [Monitor traffic rate limit data](/consul/docs/manage/rate-limit/monitor) for instructions about checking metrics and log entries, as well as troubleshooting information. + +## Disable request rate limits + +Set the [`limits.request_limits.mode`](/consul/docs/reference/agent/configuration-file/general#mode-1) to `disabled` to allow services to exceed the specified read and write requests limits, even limits specified in the [control plane request limits configuration entry](/consul/docs/reference/config-entry/control-plane-request-limit). Note that any other mode specified in the agent configuration only applies to global traffic rate limits. diff --git a/website/content/docs/manage/rate-limit/index.mdx b/website/content/docs/manage/rate-limit/index.mdx new file mode 100644 index 000000000000..2c4dee8c7ce0 --- /dev/null +++ b/website/content/docs/manage/rate-limit/index.mdx @@ -0,0 +1,60 @@ +--- +layout: docs +page_title: Agent traffic rate limiting +description: Rate limiting is a set of Consul server agent configurations that you can use to mitigate the risks to Consul servers when clients send excessive requests to Consul resources. +--- + +# Agent traffic rate limiting + +This topic provides overview information about the traffic rates limits you can configure for Consul datacenters. 
+ +## Introduction + +Configuring rate limits on RPC and gRPC traffic mitigates the risks to Consul servers when client agents or services send excessive read or write requests to Consul resources. A _read_ request is defined as any request that does not modify Consul internal state. A _write_ request is defined as any request that modifies Consul internal state. Configure read and write request limits independently. + +## Workflow + +You can set global limits on the rate of read and write requests that affect individual servers in the datacenter. You can set limits for all source IP addresses, which enables you to specify a budget for read and write requests to prevent any single source IP from overwhelming the Consul server and negatively affecting the network. The following steps describe the general process for setting global read and write rate limits: + +1. Set arbitrary limits to begin understanding the upper boundary of RPC and gRPC loads in your network. Refer to [Initialize rate limit settings](/consul/docs/manage/rate-limit/initialize) for additional information. + +1. Monitor the metrics and logs and readjust the initial configurations as necessary. Refer to [Monitor rate limit data](/consul/docs/manage/rate-limit/monitor) + +1. Define your final operational limits based on your observations. If you are defining global rate limits, refer to [Set global traffic rate limits](/consul/docs/manage/rate-limit/global) for additional information. For information about setting limits per source IP address, refer to [Limit traffic rates for a source IP](/consul/docs/manage/rate-limit/source). + + +Setting limits per source IP requires Consul Enterprise. + + +### Order of operations + +You can define request rate limits in the agent configuration and in the control plane request limit configuration entry. The configuration entry also supports rate limit configurations for Consul resources. Consul performs the following order of operations when determining request rate limits: + +![Flowchart that describes the order of operations for determining request rate limits.](/img/agent-rate-limiting-ops-order.jpg#light-theme-only) +![Flowchart that describes the order of operations for determining request rate limits.](/img/agent-rate-limiting-ops-order-dark.jpg#dark-theme-only) + + + +## Kubernetes + +To define global rate limits, configure the `request_limits` settings in the Consul Helm chart. Refer to the [Helm chart reference](/consul/docs/reference/k8s/helm) for additional information. Refer to the [control plane request limit configuration entry reference](/consul/docs/reference/config-entry/control-plane-request-limit) for information about applying a CRD for limiting traffic rates from source IPs. diff --git a/website/content/docs/manage/rate-limit/initialize.mdx b/website/content/docs/manage/rate-limit/initialize.mdx new file mode 100644 index 000000000000..8b163d56cf46 --- /dev/null +++ b/website/content/docs/manage/rate-limit/initialize.mdx @@ -0,0 +1,31 @@ +--- +layout: docs +page_title: Initialize rate limit settings +description: Learn how to determine regular and peak loads in your network so that you can set the initial global rate limit configurations. +--- + +# Initialize rate limit settings + +Because each network has different needs and application, you need to find out what the regular and peak loads in your network are before you set traffic limits. 
We recommend completing the following steps to benchmark request rates in your environment so that you can implement limits appropriate for your applications.

1. In the agent configuration file, specify a global rate limit with arbitrary values based on the following conditions:

   - Environment where Consul servers are running
   - Number of servers and the projected load
   - Existing metrics expressing requests per second

1. Set the [`limits.request_limits.mode`](/consul/docs/reference/agent/configuration-file/general#mode) parameter in the agent configuration to `permissive`. In the following example, the configuration allows up to 1000 reads and 500 writes per second for each Consul agent:

   ```hcl
   request_limits {
     mode = "permissive"
     read_rate = 1000.0
     write_rate = 500.0
   }
   ```
1. Observe the logs and metrics for your application's typical cycle, such as a 24-hour period. Refer to [Monitor traffic rate limit data](/consul/docs/manage/rate-limit/monitor) for additional information. Call the [`/agent/metrics`](/consul/api-docs/agent#view-metrics) HTTP API endpoint and check the data for the following metrics:

   - `rpc.rate_limit.exceeded` with value `global/read` for label `limit_type`
   - `rpc.rate_limit.exceeded` with value `global/write` for label `limit_type`

1. If the limits are not reached, set the `mode` configuration to `enforcing`. Otherwise, continue to adjust and iterate until you find your network's unique limits.

diff --git a/website/content/docs/manage/rate-limit/monitor.mdx b/website/content/docs/manage/rate-limit/monitor.mdx new file mode 100644 index 000000000000..52e90bcaed40 --- /dev/null +++ b/website/content/docs/manage/rate-limit/monitor.mdx @@ -0,0 +1,79 @@
---
layout: docs
page_title: Monitor traffic rate limit data
description: Learn about the metrics and logs you can use to monitor server rate limiting activity, including rate limits for read operations and write operations
---

# Monitor traffic rate limit data

This topic describes Consul functionality that enables you to monitor read and write request operations taking place in your network. Use the functionality to help you understand normal workloads and set safe limits on the number of requests Consul client agents and services can make to Consul servers.

## Access rate limit logs

Consul prints a log line for each rate-limited request. The log provides the information necessary for identifying the source of the request and the configured limit. Consul prints the log at the `DEBUG` log level and may drop log lines to avoid affecting server health. Dropping a log line increments the `rpc.rate_limit.log_dropped` metric.

The following example log shows that an RPC request from `127.0.0.1:53562` to `KVS.Apply` exceeded the limit:

```text
2023-02-17T10:01:15.565-0500 [DEBUG] agent.server.rpc-rate-limit: RPC
exceeded allowed rate limit: rpc=KVS.Apply source_addr=127.0.0.1:53562
limit_type=global/write limit_enforced=false
```

Refer to [`log_file`](/consul/docs/reference/agent/configuration-file/log#log_file) for information about where to retrieve log files.

## Review rate limit metrics

Consul captures the following metrics associated with rate limits:

- Type of limit
- Operation
- Rate limit mode

Call the `/agent/metrics` API endpoint to view the metrics associated with rate limits.
Refer to [View Metrics](/consul/api-docs/agent#view-metrics) for API usage information. In the following example, Consul dropped a call to the consul service because it exceeded the limit by one call: + +```shell-session +$ curl http://127.0.0.1:8500/v1/agent/metrics +{ + . . . + "Counters": [ + { + "Name": "consul.rpc.rate_limit.exceeded", + "Count": 1, + "Sum": 1, + "Min": 1, + "Max": 1, + "Mean": 1, + "Stddev": 0, + "Labels": { + "service": "consul" + } + }, + { + "Name": "consul.rpc.rate_limit.log_dropped", + "Count": 1, + "Sum": 1, + "Min": 1, + "Max": 1, + "Mean": 1, + "Stddev": 0, + "Labels": {} + } + ], + . . . +} +``` + +Refer to [Telemetry](/consul/docs/reference/agent/telemetry) for additional information. + +## Request denials + +When an HTTP request is denied for rate limiting reason, Consul returns one of the following errors: + +- **429 Resource Exhausted**: Indicates that a server is not able to perform the request but that another server could potentially fulfill it. This error is most common on stale reads because any server may fulfill stale read requests. To resolve this type of error, we recommend immediately retrying the request to another server. If the request came from a Consul client agent, the agent automatically retries the request up to the limit set in the [`rpc_hold_timeout`](/consul/docs/reference/agent/configuration-file/general#rpc_hold_timeout) configuration . + +- **503 Service Unavailable**: Indicates that server is unable to perform the request and that no other server can fulfill the request, either. This usually occurs on consistent reads or for writes. In this case we recommend retrying according to an exponential backoff schedule. If the request came from a Consul client agent, the agent automatically retries the request according to the [`rpc_hold_timeout`](/consul/docs/reference/agent/configuration-file/general#rpc_hold_timeout) configuration. + +Refer to [Rate limit reached on the +server](/consul/docs/troubleshoot/common-errors#rate-limit-reached-on-the-server) +for additional information. diff --git a/website/content/docs/manage/rate-limit/source.mdx b/website/content/docs/manage/rate-limit/source.mdx new file mode 100644 index 000000000000..59d4fc6417aa --- /dev/null +++ b/website/content/docs/manage/rate-limit/source.mdx @@ -0,0 +1,72 @@ +--- +layout: docs +page_title: Limit traffic rates for a source IP address +description: Learn how to set read and request rate limits on RPC and gRPC traffic from all source IP addresses to a Consul resource. +--- + +# Limit traffic rates from source IP addresses + +This topic describes how to configure RPC and gRPC traffic rate limits for source IP addresses. This enables you to specify a budget for read and write requests to prevent any single source IP from overwhelming the Consul server and negatively affecting the network. For information about setting global traffic rate limits, refer to [Set a global limit on traffic rates](/consul/docs/manage/rate-limit/global). For an overview of Consul's server rate limiting capabilities, refer to [Limit traffic rates overview](/consul/docs/manage/rate-limit). + + + +This feature requires Consul Enterprise. Refer to the [feature compatibility matrix](/consul/docs/enterprise#consul-enterprise-feature-availability) for additional information. 
## Overview

You can set limits on the rate of read and write requests from source IP addresses to specific resources, which mitigates the risks to Consul servers when Consul clients send excessive requests to a specific resource type. Before configuring traffic rate limits, you should complete the initialization process to understand normal traffic loads in your network. Refer to [Initialize rate limit settings](/consul/docs/manage/rate-limit/initialize) for additional information.

Complete the following steps to configure traffic rate limits from a source IP address:

1. Define rate limits in a control plane request limit configuration entry. You can set limits for different types of resource calls.

1. Apply the configuration entry to enact the limits.

You should also monitor read and write rate activity and make any necessary adjustments. Refer to [Monitor rate limit data](/consul/docs/manage/rate-limit/monitor) for additional information.

## Define rate limits

Create a control plane request limit configuration entry in the `default` partition. The configuration entry applies to all client requests targeting any partition. Refer to the [control plane request limit configuration entry](/consul/docs/reference/config-entry/control-plane-request-limit) reference documentation for details about the available configuration parameters.

Specify the following parameters:

- `kind`: This must be set to `control-plane-request-limit`.
- `name`: Specify the name of the service that you want to limit read and write operations to.
- `read_rate`: Specify the overall number of read operations per second allowed from the service.
- `write_rate`: Specify the overall number of write operations per second allowed from the service.

You can also configure limits on calls to the key-value store, ACL system, and Consul catalog.

## Apply the configuration entry

If your network is deployed to virtual machines, use the `consul config write` command and specify the control plane request limit configuration entry to apply the configuration. For Kubernetes-orchestrated networks, use the `kubectl apply` command.

```shell-session
$ consul config write control-plane-request-limit.hcl
```

```shell-session
$ consul config write control-plane-request-limit.json
```

```shell-session
$ kubectl apply -f control-plane-request-limit.yaml
```

## Disable request rate limits

Set the [limits.request_limits.mode](/consul/docs/reference/agent/configuration-file/general#mode) in the agent configuration to `disabled` to allow services to exceed the specified read and write request limits. The `disabled` mode applies to all request rate limits, even limits specified in the [control plane request limits configuration entry](/consul/docs/reference/config-entry/control-plane-request-limit). Note that any other mode specified in the agent configuration only applies to global traffic rate limits.

diff --git a/website/content/docs/manage/scale/automated-backup.mdx b/website/content/docs/manage/scale/automated-backup.mdx new file mode 100644 index 000000000000..203c5900c685 --- /dev/null +++ b/website/content/docs/manage/scale/automated-backup.mdx @@ -0,0 +1,27 @@
---
layout: docs
page_title: Automated Backups (Enterprise)
description: >-
  Learn about launching the snapshot agent to automatically back up files to a cloud storage provider so that you can restore Consul servers. Supported providers include Amazon S3, Google Cloud Storage, and Azure Blob Storage.
+--- + +# Automated Backups + + + +This feature requires HashiCorp Cloud Platform (HCP) or self-managed Consul Enterprise. Refer to the [enterprise feature matrix](/consul/docs/enterprise#consul-enterprise-feature-availability) for additional information. + + + +Consul Enterprise enables you to run the snapshot agent within your environment as a service (Systemd as an example) or scheduled through other means. Once running, the snapshot agent service operates as a highly available process that integrates with the snapshot API to automatically manage taking snapshots, backup rotation, and sending backup files offsite to Amazon S3 (or another S3-compatible endpoint), Google Cloud Storage, or Azure Blob Storage. + +This capability provides an enterprise solution for backup and restoring the state of Consul servers within an environment in an automated manner. These snapshots are atomic and point-in-time. Consul datacenter backups include (but are not limited to): + +- Key/Value Store Entries +- Service Catalog Registrations +- Prepared Queries +- Sessions +- Access Control Lists (ACLs) +- Namespaces + +For more experience leveraging Consul's snapshot functionality, complete the [Datacenter Backups in Consul](/consul/tutorials/production-deploy/backup-and-restore?utm_source=docs) tutorial. For detailed configuration information on configuring the Consul Enterprise's snapshot agent, review the [Consul Snapshot Agent documentation](/consul/commands/snapshot/agent). diff --git a/website/content/docs/manage/scale/autopilot.mdx b/website/content/docs/manage/scale/autopilot.mdx new file mode 100644 index 000000000000..da8cb1e11309 --- /dev/null +++ b/website/content/docs/manage/scale/autopilot.mdx @@ -0,0 +1,298 @@ +--- +layout: docs +page_title: Consul autopilot +description: >- + Use Autopilot features to monitor the Raft cluster, introduce stable servers, and clean up dead servers. +--- + +# Consul autopilot + +This page describes Consul autopilot, which supports automatic, operator-friendly management of Consul +servers. It includes cleanup of dead servers, monitoring the state of the Raft +cluster, and stable server introduction. + +To use autopilot features (with the exception of dead server cleanup), the +[`raft_protocol`](/consul/docs/reference/agent/configuration-file/raft#raft_protocol) +setting in the Consul agent configuration must be set to 3 or higher on all +servers. In Consul `0.8` this setting defaults to 2; in Consul `1.0` it will +default to 3. For more information, check the [Version Upgrade +section](/consul/docs/upgrade/version-specific) on Raft protocol +versions in Consul `1.0`. + +In this tutorial you will learn how Consul tracks the stability of servers, how +to tune those conditions, and get some details on the other autopilot's features. + +- Server Stabilization +- Dead server cleanup +- Redundancy zones (only available in Consul Enterprise) +- Automated upgrades (only available in Consul Enterprise) + +Note, in this tutorial we are using examples from a Consul `1.7` datacenter, we +are starting with Autopilot enabled by default. + +## Default configuration + +The configuration of Autopilot is loaded by the leader from the agent's +[autopilot settings](/consul/docs/reference/agent/configuration-file/general#autopilot) +when initially bootstrapping the datacenter. Since autopilot and its features +are already enabled, you only need to update the configuration to disable them. 
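For reference, these settings can also be declared in the server agent configuration file before the datacenter is bootstrapped. The following stanza is a sketch that restates the default values rather than a recommended tuning.

```hcl
autopilot {
  cleanup_dead_servers      = true    # Automatically reap failed servers
  last_contact_threshold    = "200ms" # Maximum allowed time since last contact with the leader
  max_trailing_logs         = 250     # Maximum number of Raft logs a server may trail the leader by
  min_quorum                = 0       # Minimum number of servers autopilot keeps in the quorum
  server_stabilization_time = "10s"   # How long a new server must be stable before it can vote
}
```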
+ +All Consul servers should have Autopilot and its features either enabled or +disabled to ensure consistency across servers in case of a failure. +Additionally, Autopilot must be enabled to use any of the features, but the +features themselves can be configured independently. Meaning you can enable or +disable any of the features separately, at any time. + +You can check the default values using the `consul operator` CLI command or +using the [`/v1/operator/autopilot` +endpoint](/consul/api-docs/operator/autopilot) + + + + +```shell-session +$ consul operator autopilot get-config +``` + +```plaintext hideClipboard +CleanupDeadServers = true +LastContactThreshold = 200ms +MaxTrailingLogs = 250 +MinQuorum = 0 +ServerStabilizationTime = 10s +RedundancyZoneTag = "" +DisableUpgradeMigration = false +UpgradeVersionTag = "" +``` + + + + +```shell-session +$ curl http://127.0.0.1:8500/v1/operator/autopilot/configuration +``` + +```json +{ + "CleanupDeadServers": true, + "LastContactThreshold": "200ms", + "MaxTrailingLogs": 250, + "MinQuorum": 0, + "ServerStabilizationTime": "10s", + "RedundancyZoneTag": "", + "DisableUpgradeMigration": false, + "UpgradeVersionTag": "", + "CreateIndex": 5, + "ModifyIndex": 5 +} +``` + + + + +### Autopilot and Consul snapshots + +Changes to the autopilot configuration are persisted in the Raft database +maintained by the Consul servers. This means that autopilot configuration will +be included in the Consul snapshot data. Any snapshot taken prior to autopilot +configuration changes will contain the old configuration, and should be +considered unsafe to restore since they will remove the change and cause +unpredictable behaviors for the automations that might rely on the new +configuration. + +We recommend that you take a snapshot after any changes to the autopilot +configuration, and consider that as the last safe point in time to roll-back in +case a restore is needed. + +## Server health checking + +An internal health check runs on the leader to track the stability of servers. + +A server is considered healthy if all of the following conditions are true. + +- It has a SerfHealth status of 'Alive'. +- The time since its last contact with the current leader is below + `LastContactThreshold` (that by default is `200ms`). +- Its latest Raft term matches the leader's term. +- The number of Raft log entries it trails the leader by does not exceed + `MaxTrailingLogs` (that by default is `250`). + +The status of these health checks can be viewed through the +`/v1/operator/autopilot/health` HTTP endpoint, with a top level `Healthy` field +indicating the overall status of the datacenter: + +```shell-session +$ curl localhost:8500/v1/operator/autopilot/health | jq . +``` + +```json +{ + "Healthy": true, + "FailureTolerance": 1, + "Servers": [ + { + # ... + "Name": "server-dc1-1", + "Address": "10.20.10.11:8300", + "SerfStatus": "alive", + "Version": "1.7.2", + "Leader": false, + # ... + "Healthy": true, + "Voter": true, + # ... + }, + { + # ... + "Name": "server-2", + "Address": "10.20.10.12:8300", + "SerfStatus": "alive", + "Version": "1.7.2", + "Leader": false, + # ... + "Healthy": true, + "Voter": true, + # ... + }, + { + # ... + "Name": "server-3", + "Address": "10.20.10.13:8300", + "SerfStatus": "alive", + "Version": "1.7.2", + "Leader": false, + # ... + "Healthy": true, + "Voter": false, + # ... 
+ } + ] +} +``` + +## Server stabilization time + +When a new server is added to the datacenter, there is a waiting period where it +must be healthy and stable for a certain amount of time before being promoted to +a full, voting member. This is defined by the `ServerStabilizationTime` +autopilot's parameter and by default is 10 seconds. + +In case your configuration require a different amount of time for the node to +get ready, for example in case you have some extra VM checks at startup that +might affect node resource availability, you can tune the parameter and assign +it a different duration. + +```shell-session +$ consul operator autopilot set-config -server-stabilization-time=15s +``` + +```plaintext hideClipboard +Configuration updated! +``` + +Use the `get-config` command to check the configuration. + +```shell-session +$ consul operator autopilot get-config +``` + +```plaintext hideClipboard +CleanupDeadServers = true +LastContactThreshold = 200ms +MaxTrailingLogs = 250 +MinQuorum = 0 +ServerStabilizationTime = 15s +RedundancyZoneTag = "" +DisableUpgradeMigration = false +UpgradeVersionTag = "" +``` + +## Dead server cleanup + +If autopilot is disabled, it will take 72 hours for dead servers to be +automatically reaped or an operator must write a script to `consul force-leave`. +If another server failure occurred it could jeopardize the quorum, even if the +failed Consul server had been automatically replaced. Autopilot helps prevent +these kinds of outages by quickly removing failed servers as soon as a +replacement Consul server comes online. When servers are removed by the cleanup +process they will enter the "left" state. + +With Autopilot's dead server cleanup enabled, dead servers will periodically be +cleaned up and removed from the Raft peer set to prevent them from interfering +with the quorum size and leader elections. The cleanup process will also be +automatically triggered whenever a new server is successfully added to the +datacenter. + +We suggest leaving the feature enabled to avoid introducing manual steps in +the Consul management to make sure the faulty nodes are not remaining in the +Raft pool for too long without the need for manual pruning. In test scenarios or +in environments where you want to delegate the faulty node pruning to an +external tool or system you can disable the dead server cleanup feature using +the `consul operator` command. + +```shell-session +$ consul operator autopilot set-config -cleanup-dead-servers=false +``` + +```plaintext hideClipboard +Configuration updated! +``` + +Use the `get-config` command to check the configuration. + +```shell-session +$ consul operator autopilot get-config +``` + +```plaintext hideClipboard +CleanupDeadServers = false +LastContactThreshold = 200ms +MaxTrailingLogs = 250 +MinQuorum = 0 +ServerStabilizationTime = 10s +RedundancyZoneTag = "" +DisableUpgradeMigration = false +UpgradeVersionTag = "" +``` + +## Enterprise features + +Consul Enterprise customer can take advantage of two more features of autopilot +to further strengthen and automate Consul operations. + +### Redundancy zones + +Consul’s redundancy zones provide high availability in the case of server +failure through the Enterprise feature of autopilot. Autopilot allows you to add +read replicas to your datacenter that will be promoted to the "voting" status in +case of voting server failure. 
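As a sketch of how this is typically wired up, each server advertises its zone through node metadata, and autopilot is told which metadata key defines the zone. The `zone` key and `us-east-1a` value below are placeholders; this capability requires Consul Enterprise.

```hcl
node_meta {
  zone = "us-east-1a" # Placeholder availability zone for this server
}

autopilot {
  redundancy_zone_tag = "zone" # The node_meta key that autopilot reads to group servers into zones
}
```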
You can use this tutorial to implement isolated failure domains such as AWS Availability Zones (AZ) to obtain redundancy within an AZ without having to sustain the overhead of a large quorum.

Check [provide fault tolerance with redundancy zones](/consul/tutorials/operate-consul/redundancy-zones) to learn more about this functionality.

### Automated upgrades

Consul’s automatic upgrades provide a simplified way to upgrade existing Consul datacenters. This functionality is provided through the Enterprise feature of autopilot. Autopilot allows you to add new servers directly to the datacenter and waits until you have enough servers running the new version to perform a leadership change and demote the old servers to "non-voters".

Check [automate upgrades with Consul Enterprise](/consul/tutorials/datacenter-operations/upgrade-automation) to learn more about this functionality.

## Next steps

In this tutorial you got an overview of the autopilot features and saw examples of how and when to tune the default values.

To learn more about the autopilot settings you did not configure in this tutorial, [last_contact_threshold](/consul/docs/reference/agent/configuration-file/general#last_contact_threshold) and [max_trailing_logs](/consul/docs/reference/agent/configuration-file/general#max_trailing_logs), either read the agent configuration documentation or run `consul operator autopilot set-config -h` to view the command help.

diff --git a/website/content/docs/manage/scale/index.mdx b/website/content/docs/manage/scale/index.mdx new file mode 100644 index 000000000000..5e1e348675d3 --- /dev/null +++ b/website/content/docs/manage/scale/index.mdx @@ -0,0 +1,285 @@
---
layout: docs
page_title: Recommendations for operating Consul at scale
description: >-
  When using Consul for large scale deployments, you can ensure network resilience by tailoring your network to your needs. Learn more about HashiCorp's recommendations for deploying Consul at scale.
---

# Recommendations for operating Consul at scale

This page describes how Consul's architecture impacts its performance with large scale deployments and shares recommendations for operating Consul in production at scale.

## Overview

Consul is a distributed service networking system deployed as a centralized set of servers that coordinate network activity using sidecars that are located alongside user workloads. When Consul is used for its service mesh capabilities, servers also generate configurations for Envoy proxies that run alongside service instances. These proxies support service mesh capabilities like end-to-end mTLS and progressive deployments.

Consul can be deployed in either a single datacenter or across multiple datacenters by establishing WAN federation or peering connections. In this context, a datacenter refers to a named environment whose hosts can communicate with low networking latency. Typically, users map a Consul datacenter to a cloud provider region such as AWS `us-east-1` or Azure `East US`.

To ensure consistency and high availability, Consul servers share data using the [Raft consensus protocol](/consul/docs/concept/consensus). When persisting data, Consul uses BoltDB to store Raft logs and a custom file format for state snapshots. For more information, refer to [Consul architecture](/consul/docs/architecture/control-plane).

## General deployment recommendations

This section provides general configuration and monitoring recommendations for operating Consul at scale.
+ +### Data plane resiliency + +To make service-to-service communication resilient against outages and failures, we recommend spreading multiple service instances for a service across fault domains. Resilient deployments spread services across multiples of the following: + +- Infrastructure-level availability zones +- Runtime platform instances, such as Kubernetes clusters +- Consul datacenters + +In the event that any individual domain experiences a failure, service failover ensures that healthy instances in other domains remain discoverable. Consul automatically provides service failover between instances within a single [admin partition](/consul/docs/multi-tenant/admin-partition) or datacenter. + +Service failover across Consul datacenters must be configured in the datacenters before you can use it. Use one of the following methods to configure failover across datacenters: + +- **If you are using Consul service mesh**: Implement failover using [service-resolver configuration entries](/consul/docs/reference/config-entry/service-resolver#failover). +- **If you are using Consul service discovery without service mesh**: Implement [geo-redundant failover using prepared queries](/consul/tutorials/developer-discovery/automate-geo-failover). + +### Control plane resiliency + +When a large number services are deployed to a single datacenter, the Consul servers may experience slower network performance. To make the control plane more resilient against slowdowns and outages, limit the size of individual datacenters by spreading deployments across availability zones, runtimes, and datacenters. + +#### Datacenter size + +To ensure resiliency, we recommend limiting deployments to a maximum of 5,000 Consul client agents per Consul datacenter. There are two reasons for this recommendation: + +1. **Blast radius reduction**: When Consul suffers a server outage in a datacenter or region, _blast radius_ refers to the number of Consul clients or dataplanes attached to that datacenter that can no longer communicate as a result. We recommend limiting the total number of clients attached to a single Consul datacenter in order to reduce the size of its blast radius. Even though Consul is able to run clusters with 10,000 or more nodes, it takes longer to bring larger deployments back online after an outage, which impacts time to recovery. +1. **Agent gossip management**: Consul agents use the [gossip protocol](/consul/docs/concept/gossip) to share membership information in a gossip pool. By default, all client agents in a single Consul datacenter are in a single gossip pool. Whenever an agent joins or leaves the gossip pool, the other agents propagate that event throughout the pool. If a Consul datacenter experiences _agent churn_, or a consistently high rate of agents joining and leaving a single pool, cluster performance may be affected by gossip messages being generated faster than they can be transmitted. The result is an ever-growing message queue. + +To mitigate these risks, we recommend a maximum of 5,000 Consul client agents in a single gossip pool. There are several strategies for making gossip pools smaller: + +1. Run exactly one Consul agent per host in the infrastructure. +1. Break up the single Consul datacenter into multiple smaller datacenters. +1. Enterprise users can define [network segments](/consul/docs/multi-tenant/network-segment) to divide the single gossip pool in the Consul datacenter into multiple smaller pools. 
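To illustrate the network segment strategy from the list above (Consul Enterprise only), servers define the segments they accept and each client agent joins a single segment. The segment name `alpha` and port `8303` below are placeholders; refer to the network segments documentation for complete requirements.

```hcl
# Server agent configuration (sketch): expose an additional gossip segment
segments = [
  {
    name = "alpha"
    bind = "0.0.0.0"
    port = 8303
  }
]
```

```hcl
# Client agent configuration (sketch): join the segment defined above
segment = "alpha"
```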
+ +If appropriate for your use case, we recommend breaking up a single Consul datacenter into multiple smaller datacenters. Running multiple datacenters reduces your network’s blast radius more than applying network segments. + +Be aware that the number 5,000 is a heuristic for deployments. The number of agents you deploy per datacenter is limited by performance, not Consul itself. Because gossip stability risk is determined by _the rate of agent churn_ rather than _the number of nodes_, a gossip pool with mostly static nodes may be able to operate effectively with more than 5,000 agents. Meanwhile, a gossip pool with highly dynamic agents, such as spot fleet instances and serverless functions where 10% of agents are replaced each day, may need to be smaller than 5,000 agents. + +For additional information about the specific tests we conducted on Consul deployments at scale in order to generate these recommendations, refer to [Consul Scale Test Report to Observe Gossip Stability](https://www.hashicorp.com/blog/consul-scale-test-report-to-observe-gossip-stability) on the HashiCorp blog. + +For most use cases, a limit of 5,000 agents is appropriate. When the `consul.serf.queue.Intent` metric is consistently high, it is an indication that the gossip pool cannot keep up with the sustained level of churn. In this situation, reduce the churn by lowering the number agents per datacenter. + +#### Kubernetes-specific guidance + +In Kubernetes, even though it is possible to deploy Consul agents inside pods alongside services running in the same pod, this unsupported deployment pattern has known performance issues at scale. At large volumes, pod registration and deregistration in Kubernetes causes gossip instability that can lead to cascading failures as services are marked unhealthy, resulting in further cluster churn. + +In Consul v1.14 and higher, Consul on Kubernetes does not need to run client agents on every node in a cluster for service discovery and service mesh. This deployment configuration lowers Consul’s resource usage in the data plane, but requires additional resources in the control plane to process [xDS resources](/consul/docs/reference/agent/configuration-file/xds#xds-server-parameters). To learn more, refer to [simplified service mesh with Consul Dataplane](/consul/docs/architecture/control-plane/dataplane). + +**If you use Kubernetes and Consul as a backend for Vault**: Use Vault’s integrated storage backend instead of Consul. A runtime dependency conflict prevents Consul dataplanes from being compatible with Vault. If you need to use Consul v1.14 and higher as a backend for Vault in your Kubernetes deployment, create a separate Consul datacenter that is not federated or peered to your other Consul servers. You can size this datacenter according to your needs and use it exclusively for backend storage for Vault. + +## Consul server deployment recommendations + +Consul server agents are an important part of Consul’s architecture. This section summarizes the differences between running managed and self-managed servers, as well as recommendations on the number of servers to run, how to deploy servers across redundancy zones, hardware requirements, and cloud provider integrations. + +### Consul server runtimes + +Consul servers can be deployed on a few different runtimes: + +- **HashiCorp Cloud Platform (HCP) Consul (Managed)**. These Consul servers are deployed in a hosted environment managed by HCP. 
To get started with HCP Consul servers in Kubernetes or VM deployments, refer to the [Deploy HCP Consul tutorial](/consul/tutorials/get-started-hcp/hcp-gs-deploy).
- **VMs or bare metal servers (Self-managed)**. To get started with Consul on VMs or bare metal servers, refer to the [Deploy Consul server tutorial](/consul/tutorials/get-started-vms/virtual-machine-gs-deploy). For a full list of configuration options, refer to [Agents Overview](/consul/docs/fundamentals/agent).
- **Kubernetes (Self-managed)**. To get started with Consul on Kubernetes, refer to the [Deploy Consul on Kubernetes tutorial](/consul/tutorials/get-started-kubernetes/kubernetes-gs-deploy).
- **Other container environments, including Docker, Rancher, and Mesos (Self-managed)**.

@include 'alerts/hcp-dedicated-eol.mdx'

When operating Consul at scale, self-managed VM or bare metal server deployments offer the most flexibility. Some Consul Enterprise features that can enhance fault tolerance and read scalability, such as [redundancy zones](/consul/docs/manage/scale/redundancy-zone) and [read replicas](/consul/docs/manage/scale/read-replica), are not available to server agents on Kubernetes runtimes. To learn more, refer to [Consul Enterprise feature availability by runtime](/consul/docs/enterprise#feature-availability-by-runtime).

### Number of Consul servers

Determining the number of Consul servers to deploy on your network has two key considerations:

1. **Fault tolerance**: The number of server outages your deployment can tolerate while maintaining quorum. Additional servers increase a network’s fault tolerance.
1. **Performance scalability**: Although additional servers can help serve more read requests, each server added to the quorum also increases replication latency and slows the consensus process. Having too many servers impedes your network instead of helping it.

Fault tolerance should determine your initial decision for how many Consul server agents to deploy. Our recommendation for the number of servers to deploy depends on whether you have access to Consul Enterprise redundancy zones:

- **With redundancy zones**: Deploy 6 Consul servers across 3 availability zones. This deployment provides the performance of a 3 server deployment with the fault tolerance of a 7 server deployment.
- **Without redundancy zones**: Deploy 5 Consul servers across 3 availability zones. All 5 servers should be voting servers, not [read replicas](/consul/docs/manage/scale/read-replica).

For more details, refer to [Improving Consul Resilience](/consul/docs/concept/reliability).

### Server requirements

To ensure your server nodes are a sufficient size, we recommend reviewing [hardware sizing for Consul servers](/consul/tutorials/production-deploy/reference-architecture#hardware-sizing-for-consul-servers). If your network needs to handle heavy workloads, refer to our recommendations in [read-heavy workload sources and solutions](#read-heavy-workload-sources-and-solutions) and [write-heavy workload sources and solutions](#write-heavy-workload-sources-and-solutions).

#### File descriptors

Consul's agents use network sockets for gossip communication with other nodes and agents. As a result, servers create file descriptors for connections from clients, connections from other servers, watch handlers, health checks, and log files. For write-heavy clusters, you must increase the size limit for the number of file descriptors from the default value, 1024. We recommend using a number that is two times higher than your expected number of clients in the cluster.
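One common way to raise this limit is through the service manager that runs Consul. The following sketch assumes Consul runs as a systemd unit named `consul.service`; adjust the unit name and limit value to your environment.

```shell-session
$ sudo mkdir -p /etc/systemd/system/consul.service.d
$ sudo tee /etc/systemd/system/consul.service.d/limits.conf <<'EOF'
[Service]
LimitNOFILE=65536
EOF
$ sudo systemctl daemon-reload && sudo systemctl restart consul
```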
+ +#### Auto scaling groups + +Auto scaling groups (ASGs) are infrastructure associations in cloud providers used to ensure a specific number of replicas are available for a deployment. When using ASGs for Consul servers, there are specific requirements and processes for bootstrapping Raft and maintaining quorum. + +We recommend using the [`bootstrap-expect` command-line flag](/consul/commands/agent#_bootstrap_expect) during cluster creation. However, if you spawn new servers to add to a cluster or upgrade servers, do not configure them to automatically bootstrap. If `bootstrap-expect` is set on these replicas, it is possible for them to create a separate Raft system, which causes a _split brain_ and leads to errors and general cluster instability. + +#### NUMA architecture awareness + +Some cloud providers offer extremely large instance sizes with Non-Uniform Memory Access (NUMA) architectures. Because the Go runtime is not NUMA aware, Consul is not NUMA aware. Even though you can run Consul on NUMA architecture, it will not take advantage of the multiprocessing capabilities. + +### Consistency modes + +Consul offers different [consistency modes](/consul/api-docs/features/consistency#stale) for both its DNS and HTTP APIs. + +#### DNS + +We strongly recommend using [stale consistency mode for DNS lookups](/consul/api-docs/features/consistency#consul-dns-queries) to optimize for performance over consistency when operating at scale. It is enabled by default and configured with `dns_config.allow_stale`. + +We also recommend that you do not configure [`dns_config.max_stale` to limit the staleness of DNS responses](/consul/api-docs/features/consistency#limiting-staleness-advanced-usage), as it may result in a prolonged outage if your Consul servers become overloaded. If bounded result consistency is required by a service, consider modifying the service to use consistent service discovery HTTP API queries instead of DNS lookups. + +Avoid using [`dns_config.use_cache`](/consul/docs/reference/agent/configuration-file/dns#dns_use_cache) when operating Consul at scale. Because the Consul agent cache allocates memory for each requested route and each allocation can live up to 3 days, severe memory issues may occur. To implement DNS caching, we instead recommend that you [configure TTLs for services and nodes](/consul/tutorials/networking/dns-caching#ttl) to enable the DNS client to cache responses from Consul. + +#### HTTP API + +By default, all HTTP API read requests use the [`default` consistency mode](/consul/api-docs/features/consistency#default-1) unless overridden on a per-request basis. We do not recommend changing the default consistency mode for HTTP API requests. + +We also recommend that you do not configure [`http_config.discovery_max_stale`](/consul/api-docs/features/consistency#changing-the-default-consistency-mode-advanced-usage) to limit the staleness of HTTP responses. + +## Resource usage and metrics recommendations + +While operating Consul, monitor the CPU load on the Consul server agents and use metrics from agent telemetry to figure out the cause. Procedures for mitigating heavy resource usage depend on whether the load is caused by read operations, write operations, or Consul’s consensus protocol. + +### Read-heavy workload sources and solutions + +The highest CPU load usually belongs to the current leader. If the CPU load is high, request load is likely a major contributor. 
Check the following [server health metrics](/consul/docs/reference/agent/telemetry#server-health): + +- `consul.rpc.*` - Traditional RPC metrics. The most relevant metrics for understanding server CPU load in read-heavy workloads are `consul.rpc.query` and `consul.rpc.queries_blocking`. +- `consul.grpc.server.*` - Metrics for the number of streams being processed by the server. +- `consul.xds.server.*` - Metrics for the Envoy xDS resources being processed by the server. In Consul v1.14 and higher, these metrics have the potential to become a significant source of read load. Refer to [Consul dataplanes](/consul/docs/architecture/control-plane/dataplane) for more information. + +Depending on your needs, choose one of the following strategies to mitigate server CPU load: + +- The fastest mitigation strategy is to vertically scale servers. However, this strategy increases compute costs and does not scale indefinitely. +- The most effective long term mitigation strategy is to use [stale consistency mode](/consul/api-docs/features/consistency#stale) for as many read requests as possible. In Consul v1.12 and higher, operators can use the [`consul.rpc.server.call` metric](/consul/docs/reference/agent/telemetry#server-workload) to identify the most frequent type of read requests made to the Consul servers. Cross reference the results with each endpoint’s [HTTP API documentation](/consul/api-docs) and use stale consistency for endpoints that support it. +- If most read requests already use stale consistency mode and you still need to reduce your request load, add more non-voting servers to your deployment. You can use either [redundancy zones](/consul/docs/manage/scale/redundancy-zone) or [read replicas](/consul/docs/manage/scale/read-replica) to scale reads without impacting write latency. We recommend adding more servers to redundancy zones because they improve both fault tolerance and stale read scalability. +- In Consul v1.14 and higher, servers handle Envoy XDS streams for [Consul Dataplane deployments](/consul/docs/architecture/control-plane/dataplane) in stale consistency mode. As a result, server consistency mode is not configurable. Use the `consul.xds.server.*` metrics to identify issues related to XDS streams. + +### Write-heavy workload sources and solutions + +Consul is write-limited by disk I/O. For write-heavy workloads, we recommend using NVMe disks. + +As a starting point, you should make sure your hardware meets the requirements for [large size server clusters](/consul/tutorials/production-deploy/reference-architecture#hardware-sizing-for-consul-servers), which has 7500+ IOps and 250+ MB/s disk throughput. IOps should be around 5 to 10 times the expected write rate. Conduct further analysis around disk sizing and your expected write rates to understand your network’s specific needs. + +If you use network storage, such as AWS EBS, we recommend provisioned I/O volumes. While general purpose volumes function properly, their burstable IOps make it harder to capacity plan. A small peak in writes may not trigger alerts, but as usage grows you may reach a point where the burst limit runs out and workload performance worsens. + +For more information, refer to the [server performance read/write tuning](/consul/docs/deploy/server/vm/requirements#read-write-tuning). + +### Raft database performance sources and solutions + +Consul servers use the [Raft consensus protocol](/consul/docs/concept/consensus) to maintain a consistent and fault-tolerant state. 
Raft stores most Consul data in a MemDB database, which is an in-memory database with indexing. In order to tolerate restarts and power outages, Consul writes Raft logs to disk using BoltDB. Refer to [Agent telemetry](/consul/docs/reference/agent/telemetry) for more information on metrics for detecting write health. + +To monitor overall transaction performance, check for spikes in the [Transaction timing metrics](/consul/docs/reference/agent/telemetry#transaction-timing). You can also use the [Raft replication capacity issues metrics](/consul/docs/reference/agent/telemetry#raft-replication-capacity-issues) to monitor Raft log snapshots and restores, as spikes and longer durations can be symptoms of overall write and disk contention issues. + +In Consul v1.11 and higher, you can also monitor Raft performance with the [`consul.raft.boltdb.*` metrics](/consul/docs/reference/agent/telemetry#bolt-db-performance). We recommend monitoring `consul.raft.boltdb.storeLogs` for increased activity above normal operating patterns. + +Refer to [Consul agent telemetry](/consul/docs/reference/agent/telemetry#bolt-db-performance) for more information on agent metrics and how to use them. + +#### Raft database size + +Raft writes logs to BoltDB, which is designed as a single grow-only file. As a result, if you add 1GB of log entries and then take a snapshot, only a small number of recent log entries may remain in the file. However, the actual file on disk never shrinks below the 1GB it grew to. + +If you need to reclaim disk space, use the `bbolt compact` command to copy the data into a new, smaller database file and then point Consul at the new file. Be aware that the database must be offline while it is compacted, so you must stop the Consul server before repointing it to the new file. + +In many cases, including in large clusters, disk space is not a primary concern because Raft logs rarely grow larger than a small number of GiB. However, an inflated file with lots of free space significantly degrades write performance overall due to _freelist management_. + +After they are written to disk, Raft logs are eventually captured in a snapshot and log nodes are removed from BoltDB. BoltDB keeps track of the pages for the removed nodes in its freelist. BoltDB also writes this freelist to disk every time there is a Raft write. When the Raft log grows large quickly and then gets truncated, the size of the freelist can become very large. In the worst case reported to us, the freelist was over 10MB. When this large freelist is written to disk on every Raft commit, the result is a large write amplification for what should be a small Raft commit. + +To figure out if a Consul server’s disk performance issues are the result of BoltDB’s freelist, try the following strategies: + +- Compare network bandwidth inbound to the server against disk write bandwidth. If _disk write bandwidth_ is greater than or equal to 5 times the _inbound network bandwidth_, the disks are likely experiencing freelist management performance issues. While the BoltDB freelist may cause problems at ratios lower than 5 to 1, high write bandwidth to inbound bandwidth ratios are a reliable indicator that the BoltDB freelist is causing a problem. +- Use the [`consul.raft.leader.dispatchLog` metric](/consul/docs/reference/agent/telemetry#server-health) to get information about how long it takes to write a batch of logs to disk.
+- In Consul v1.13 and higher, you can use [Raft thread saturation metrics](/consul/docs/reference/agent/telemetry#raft-thread-saturation) to figure out if Raft is experiencing back pressure and is unable to accept new work due to disk limitations. + +In Consul v1.11 and higher, you can prevent BoltDB from writing the freelist to disk by setting [`raftboltdb.NoFreelistSync`](/consul/docs/reference/agent/configuration-file/raft#nofreelistsync) to `true`. This setting causes BoltDB to retain the freelist in memory instead. However, be aware that when BoltDB restarts, it must scan the database file to rebuild the freelist, which can cause small delays at startup. On a fast disk, we measured these delays on the order of tens of seconds for a raft.db file that was 5GiB in size with only 250MiB of used pages. + +In general, set [`raftboltdb.NoFreelistSync`](/consul/docs/reference/agent/configuration-file/raft#nofreelistsync) to `true` to produce the following effects: + +- Reduce the amount of data written to disk +- Increase the amount of time it takes to load the raft.db file on startup + +We recommend operators weigh this tradeoff according to their individual concerns. For example, if your server runs into disk performance issues but Consul servers do not restart often, setting [`raftboltdb.NoFreelistSync`](/consul/docs/reference/agent/configuration-file/raft#nofreelistsync) to `true` may solve your problems. However, the same action causes issues for deployments with large database files and frequent server restarts. + +#### Raft snapshots + +Each state change produces a Raft log entry, and each Consul server receives the same sequence of log entries, which results in servers sharing the same state. The sequence of Raft logs is periodically compacted by the leader into a _snapshot_ of state history. These snapshots are internal to Raft and are not the same as the snapshots generated through Consul's API, although they contain the same data. Raft snapshots are stored in the server's data directory in the `raft/` folder, alongside the logs in `raft.db`. + +When you add a new Consul server, it must catch up to the current state. It receives the latest snapshot from the leader followed by the sequence of logs between that snapshot and the leader’s current state. Each Raft log has a sequence number and each snapshot contains the last sequence number included in the snapshot. A combination of write-heavy workloads, a large state, congested networks, or busy servers makes it possible for new servers to struggle to catch up to the current state before the next log they need from the leader has already been truncated. The result is a _snapshot install loop_. + +For example, if snapshot A on the leader has an index of 99 and the current index is 150, then when a new server comes online the leader streams snapshot A to the new server for it to restore. However, this snapshot only enables the new server to catch up to index 99. Not only does the new server still need to catch up to index 150, but the leader continued to commit Raft logs in the meantime. + +When the leader takes snapshot B at index 199, it truncates the logs that accumulated between snapshot A and snapshot B, which means it truncates Raft logs with indexes between 100 and 199. + +Because the new server restored snapshot A, the new server has a current index of 99. It requests logs 100 to 150 because index 150 was the current index when it started the replication restore process.
At this point, the leader recognizes that it only has logs 200 and higher, and does not have logs for indexes 100 to 150. The leader determines that the new server’s state is stale and starts the process over by sending the new server the latest snapshot, snapshot B. + +Consul keeps a configurable number of [Raft trailing logs](/consul/docs/reference/agent/configuration-file/raft#raft_trailing_logs) to prevent the snapshot install loop from repeating. The trailing logs are the last logs that went into the snapshot, and the new server can more easily catch up to the current state using these logs. The default Raft trailing logs configuration value is suitable for most deployments. + +In Consul v1.10 and higher, operators can try to prevent a snapshot install loop by monitoring and comparing Consul servers’ `consul.raft.rpc.installSnapshot` and `consul.raft.leader.oldestLogAge` timing metrics. Monitor these metrics for the following situations: + +- After truncation, the lowest number on `consul.raft.leader.oldestLogAge` should always be at least two times higher than the lowest number for `consul.raft.rpc.installSnapshot`. +- If these metrics are too close, increase the number of Raft trailing logs, which increases `consul.raft.leader.oldestLogAge`. Do not set the Raft trailing logs higher than necessary, as it can negatively affect write throughput and latency. + +For more information, refer to [Raft Replication Capacity Issues](/consul/docs/reference/agent/telemetry#raft-replication-capacity-issues). + +## Performance considerations for specific use cases + +This section provides configuration and monitoring recommendations for Consul deployments according to the features you prioritize and their use cases. + +### Service discovery + +To optimize performance for service discovery, we recommend deploying multiple small clusters with consistent numbers of service instances and watches. + +Several factors influence Consul performance at scale when used primarily for its service discovery and health check features. The factors you have control over include: + +- The overall number of registered service instances +- The use of [stale reads](/consul/api-docs/features/consistency#consul-dns-queries) for DNS queries +- The number of entities, such as Consul client agents or dataplane components, that are monitoring Consul for changes in a service's instances, including registration and health status. When any service change occurs, all of those entities incur a computational cost because they must process the state change and reconcile it with previously known data for the service. In addition, the Consul server agents also incur a computational cost when sending these updates. +- Number of [watches](/consul/docs/automate/watch) monitoring for changes to a service. +- Rate of catalog updates, which is affected by the following events: + - A service instance’s health check status changes + - A service instance’s node loses connectivity to Consul servers + - The contents of the [service definition file](/consul/docs/reference/service) changes + - Service instances are registered or deregistered + - Orchestrators such as Kubernetes or Nomad move a service to a new node + +These factors can occur in combination with one another. 
Overall, the amount of work the servers complete for service discovery is the product of these factors: + +- Data size, which changes as the number of services and service instances increases +- The catalog update rate +- The number of active watches + +Because it is typical for these factors to increase in number as clusters grow, the CPU and network resources the servers require to distribute updates may eventually exceed linear growth. + +In situations where you can’t run a Consul client agent alongside the service instance you want to register with Consul, such as instances hosted externally or on legacy infrastructure, we recommend using [Consul ESM](https://github.com/hashicorp/consul-esm). + +Consul ESM enables health checks and monitoring for external services. When using Consul ESM, we recommend running multiple instances to ensure redundancy. + +### Service mesh + +Because Consul’s service mesh uses service discovery subsystems, service mesh performance is also optimized by deploying multiple small clusters with consistent numbers of service instances and watches. Service mesh performance is influenced by the following additional factors: + +- The [transparent proxy](/consul/docs/connect/transparent-proxy) feature causes client agents to listen for service instance updates across all services instead of a subset. To prevent performance issues, we recommend that you do not use the permissive intention, `default: allow`, with the transparent proxy feature. When combined, every service instance update propagates to every proxy, which causes additional server load. +- When you use the [built-in service mesh CA provider](/consul/docs/secure-mesh/certificate/built-in#built-in-ca), Consul leaders are responsible for signing certificates used for mTLS across the service mesh. The impact on CPU utilization depends on the total number of service instances and configured certificate TTLs. You can use the [CA provider configuration options](/consul/docs/reference/agent/configuration-file/service-mesh#common-ca-config-options) to control the number of requests a server processes. We recommend adjusting [`csr_max_concurrent`](/consul/docs/reference/agent/configuration-file/service-mesh#ca_csr_max_concurrent) and [`csr_max_per_second`](/consul/docs/reference/agent/configuration-file/service-mesh#ca_csr_max_per_second) to suit your environment. + +### K/V store + +While the K/V store in Consul has some similarities to object stores we recommend that you do not use it as a primary application data store. + +When using Consul's K/V store for application configuration and metadata, we recommend the following to optimize performance: + +- Values must be below 512 KB and transactions should be below 64 operations. +- The keyspace must be well bound. While 10,000 keys may not affect performance, millions of keys are more likely to cause performance issues. +- Total data size must fit in memory, with additional room for indexes. We recommend that the in-memory size is 3 times the raw key value size. +- Total data size should remain below 1 GB. Larger snapshots are possible on suitably fast hardware, but they significantly increase recovery times and the operational complexity needed for replication. We recommend limiting data size to keep the cluster healthy and able to recover during maintenance and outages. +- The K/V store is optimized for reading. 
To know when you need to make changes to server resources and capacity, we recommend carefully monitoring update rates after they exceed more than a hundred updates per second across the cluster. +- We recommend that you do not use the K/V store as a general purpose database or object store. + +In addition, we recommend that you do not use the [blocking query mechanism](/consul/api-docs/features/blocking) to listen for updates when your K/V store’s update rate is high. When a K/V result is updated too fast, blocking query loops degrade into busy loops. These loops consume excessive client CPU and cause high server load until appropriately throttled. Watching large key prefixes is unlikely to solve the issue because returning the entire key prefix every time it updates can quickly consume a lot of bandwidth. + +### Backend for Vault + +At scale, using Consul as a backend for Vault results in increased memory and CPU utilization on Consul servers. It also produces unbounded growth in Consul’s data persistence layer that is proportional to both the amount of data being stored in Vault and the rate the data is updated. + +In situations where Consul handles large amounts of data and has high write throughput, we recommend adding monitoring for the [capacity and health of raft replication on servers](/consul/docs/reference/agent/telemetry#raft-replication-capacity-issues). If the server experiences heavy load when the size of its stored data is large enough, a follower may be unable to catch up on replication and become a voter after restarting. This situation occurs when the time it takes for a server to restore from disk takes longer than it takes for the leader to write a new snapshot and truncate its logs. Refer to [Raft snapshots](#raft-snapshots) for more information. + +Vault v1.4 and higher provides [integrated storage](/vault/docs/concepts/integrated-storage) as its recommended storage option. If you currently use Consul as a storage backend for Vault, we recommend switching to integrated storage. For a comparison between Vault's integrated storage and Consul as a backend for Vault, refer to [storage backends in the Vault documentation](/vault/docs/configuration/storage#integrated-storage-vs-consul-as-vault-storage). For detailed guidance on migrating the Vault backend from Consul to Vault's integrated storage, refer to the [storage migration tutorial](/vault/docs/configuration/storage#integrated-storage-vs-consul-as-vault-storage). Integrated storage improves resiliency by preventing a Consul outage from also affecting Vault functionality. diff --git a/website/content/docs/manage/scale/read-replica.mdx b/website/content/docs/manage/scale/read-replica.mdx new file mode 100644 index 000000000000..3cad055a2748 --- /dev/null +++ b/website/content/docs/manage/scale/read-replica.mdx @@ -0,0 +1,18 @@ +--- +layout: docs +page_title: Read Replicas (Enterprise) +description: >- + Learn how you can add non-voting servers to datacenters as read replicas to provide enhanced read scalability without impacting write latency. +--- + +# Enhanced Read Scalability with Read Replicas + + + +This feature requires HashiCorp Cloud Platform (HCP) or self-managed Consul Enterprise. Refer to the [enterprise feature matrix](/consul/docs/enterprise#consul-enterprise-feature-availability) for additional information. + + + +Consul Enterprise provides the ability to scale clustered Consul servers to include voting servers and read replicas. 
Read replicas still receive data from the cluster replication, however, they do not take part in quorum election operations. Expanding your Consul cluster in this way can scale reads without impacting write latency. + +For more details, review the [Consul server configuration](/consul/docs/fundamentals/agent) documentation and the [-read-replica](/consul/commands/agent#_read_replica) configuration flag. diff --git a/website/content/docs/manage/scale/redundancy-zone.mdx b/website/content/docs/manage/scale/redundancy-zone.mdx new file mode 100644 index 000000000000..faca7c660a80 --- /dev/null +++ b/website/content/docs/manage/scale/redundancy-zone.mdx @@ -0,0 +1,20 @@ +--- +layout: docs +page_title: Redundancy Zones (Enterprise) +description: >- + Redundancy zones are regions of a cluster containing "hot standby" servers, or non-voting servers that can replace voting servers in the event of a failure. Learn about redundancy zones and how they improve resiliency and increase fault tolerance without affecting latency. +--- + +# Redundancy Zones + + + +This feature requires self-managed Consul Enterprise. Refer to the [enterprise feature matrix](/consul/docs/enterprise#consul-enterprise-feature-availability) for additional information. + + + +Consul Enterprise redundancy zones provide both scaling and resiliency benefits by enabling the deployment of non-voting servers alongside voting servers on a per availability zone basis. + +When using redundancy zones, if an operator chooses to deploy Consul across 3 availability zones, they could have 2 (or more) servers (1 voting/1 non-voting) in each zone. In the event that a voting member in an availability zone fails, the redundancy zone configuration would automatically promote the non-voting member to a voting member. In the event that an entire availability zone was lost, a non-voting member in one of the existing availability zones would promote to a voting member, keeping server quorum. This capability functions as a "hot standby" for server nodes while also providing (and expanding) the capabilities of [enhanced read scalability](/consul/docs/manage/scale/read-replica) by also including recovery capabilities. + +For more information, complete the [Redundancy Zones](/consul/tutorials/datacenter-operations/autopilot-datacenter-operations#redundancy-zones) tutorial and reference the [Consul Autopilot](/consul/commands/operator/autopilot) documentation. diff --git a/website/content/docs/agent/monitor/alerts.mdx b/website/content/docs/monitor/alerts.mdx similarity index 82% rename from website/content/docs/agent/monitor/alerts.mdx rename to website/content/docs/monitor/alerts.mdx index d8dcc902478c..7c30abd4268c 100644 --- a/website/content/docs/agent/monitor/alerts.mdx +++ b/website/content/docs/monitor/alerts.mdx @@ -1,13 +1,13 @@ --- layout: docs -page_title: Consul monitoring and alerts recommendations +page_title: Consul monitoring and alerts description: >- - Apply best practices towards Consul monitoring and alerts. + You can use Consul to monitor host resources and create alerts when resource consumption reaches a defined level. Learn about best practices for Consul monitoring and alerts. --- -# Consul monitoring and alerts recommendations +# Consul monitoring and alerts -This document will guide you through which host resources to monitor and how monitoring tools can help you set up alerts to notify you when your Consul cluster exceeds its limits. 
By monitoring Consul and setting up alerts, you can ensure Consul works as expected for all your service discovery and service mesh needs. +This page describes the host resources you should monitor and how monitoring tools can help you set up alerts to notify you when your Consul cluster exceeds its limits. By monitoring Consul and setting up alerts, you can ensure Consul works as expected for all your service discovery and service mesh needs. ## Instance level monitoring @@ -15,15 +15,16 @@ While each host environment and Consul deployment is unique, these recommendatio A Consul datacenter is the smallest unit of Consul infrastructure that can perform basic Consul operations like service discovery or service mesh. A datacenter contains at least one Consul server agent, but a real-world deployment contains three or five server agents and several Consul client agents. -Consul server agents store all state information, including service and node IP addresses, health checks, and configuration. Consul clients report node and service health status to the Consul cluster. In a typical deployment, you must run client agents on every compute node in your datacenter. If you have Kubernetes workloads, you can also run Consul with an alternate service mesh configuration that deploys Envoy proxies but not client agents. Refer to [Simplified service mesh with Consul dataplanes](/consul/docs/connect/dataplane) for more information. +Consul server agents store all state information, including service and node IP addresses, health checks, and configuration. Consul clients report node and service health status to the Consul cluster. In a typical deployment, you must run client agents on every compute node in your datacenter. If you have Kubernetes workloads, you can also run Consul with an alternate service mesh configuration that deploys Envoy proxies but not client agents. Refer to [Simplified service mesh with Consul dataplanes](/consul/docs/architecture/control-plane/dataplane) for more information. We recommend monitoring the following parameters for Consul agents health: + - Disk space and file handles -- [RAM utilization](/consul/docs/agent/telemetry#memory-usage) +- [RAM utilization](/consul/docs/reference/agent/telemetry#memory-usage) - CPU utilization - Network activity and utilization -We recommend using an [application performance monitoring (APM) system](#monitoring-tools) to track these metrics. For a full list of key metrics, visit the [Key metrics](/consul/docs/agent/telemetry#key-metrics) section of Telemetry documentation. +We recommend using an [application performance monitoring (APM) system](#monitoring-tools) to track these metrics. For a full list of key metrics, visit the [Key metrics](/consul/docs/reference/agent/telemetry#key-metrics) section of Telemetry documentation. ## Recommendations for host-level alerts @@ -35,7 +36,7 @@ Once you have established a baseline for your metrics, use them and the followin ### Memory alert recommendations -Consul uses RAM as the primary storage for data on its leader node, while periodically flushing it to disk. Reference the [Memory usage](/consul/docs/agent/telemetry#memory-usage) section of the Telemetry documentation for more details. The recommended instance type depends on your hosting provider. 
Refer to the [Hardware sizing for Consul servers](/consul/tutorials/production-deploy/reference-architecture#hardware-sizing-for-consul-servers) for recommended instance types for most cloud providers along with other up-to-date hardware recommendations. +Consul uses RAM as the primary storage for data on its leader node, while periodically flushing it to disk. Reference the [Memory usage](/consul/docs/reference/agent/telemetry#memory-usage) section of the Telemetry documentation for more details. The recommended instance type depends on your hosting provider. Refer to the [Hardware sizing for Consul servers](/consul/tutorials/production-deploy/reference-architecture#hardware-sizing-for-consul-servers) for recommended instance types for most cloud providers along with other up-to-date hardware recommendations. When determining how much RAM you should allocate, we recommend enough RAM for your server agents to contain between 2 to 4 times the working set size. You can determine the working set size by noting the value of `consul.runtime.alloc_bytes` in the telemetry data. diff --git a/website/content/docs/k8s/deployment-configurations/datadog.mdx b/website/content/docs/monitor/datadog.mdx similarity index 99% rename from website/content/docs/k8s/deployment-configurations/datadog.mdx rename to website/content/docs/monitor/datadog.mdx index c3df799f4df4..794607a7b826 100644 --- a/website/content/docs/k8s/deployment-configurations/datadog.mdx +++ b/website/content/docs/monitor/datadog.mdx @@ -280,7 +280,7 @@ flow is outbound (toward the Datadog Agent) as opposed to inbound (toward the `/ #### Metrics Data Collected - - Full list of metrics sent via DogstatsD consists of those listed in the [Agent Telemetry](https://developer.hashicorp.com/consul/docs/agent/telemetry) documentation. + - Full list of metrics sent via DogstatsD consists of those listed in the [Agent Telemetry](https://developer.hashicorp.com/consul/docs/reference/agent/telemetry) documentation. ## Datadog Checks: Official Consul Integration @@ -465,4 +465,4 @@ Use of this method maps to Datadog as described in [Mapping Prometheus Metrics t The integration, by default, uses a wildcard (`".*"`) to collect **_all_** metrics emitted from the `/v1/agent/metrics` endpoint. -Please refer to the [Agent Telemetry](https://developer.hashicorp.com/consul/docs/agent/telemetry) documentation for a full list and description of the metrics data collected. +Please refer to the [Agent Telemetry](https://developer.hashicorp.com/consul/docs/reference/agent/telemetry) documentation for a full list and description of the metrics data collected. \ No newline at end of file diff --git a/website/content/docs/monitor/index.mdx b/website/content/docs/monitor/index.mdx new file mode 100644 index 000000000000..954142624b03 --- /dev/null +++ b/website/content/docs/monitor/index.mdx @@ -0,0 +1,121 @@ +--- +layout: docs +page_title: Monitor Consul overview +description: >- + Learn about the Consul components you can monitor and how to monitor these components. +--- + +# Monitor Consul overview + +This provides an overview of monitoring your Consul control and data plane. By keeping track of these components and setting up alerts, you can better maintain the overall health and resilience of your service mesh. + +## Background + +A Consul datacenter is the smallest unit of Consul infrastructure that can perform basic Consul operations like service discovery or service mesh. 
A datacenter contains at least one Consul server agent, but a real-world deployment contains three or five server agents and several Consul client agents. + +The Consul control plane consists of server agents that store all state information, including service and node IP addresses, health checks, and configuration. The control plane is also responsible for securing the mesh, facilitating service discovery, health checking, policy enforcement, and other similar operational concerns. In addition, the control plane contains client agents that report node and service health status to the Consul cluster. In a typical deployment, you must run client agents on every compute node in your datacenter. + +The Consul data plane consists of proxies deployed locally alongside each service instance. These proxies, called sidecar proxies, receive mesh configuration data from the control plane, and control network communication between their local service instance and other services in the network. The sidecar proxy handles inbound and outbound service connections, and ensures TLS connections between services are both verified and encrypted. + +If you have Kubernetes workloads, you can also run Consul with an alternate service mesh configuration that deploys Envoy proxies but not client agents. Refer to [Simplified service mesh with Consul dataplanes](/consul/docs/architecture/control-plane/dataplane) for more information. + +## Consul control plane monitoring + +The Consul control plane consists of the following components: + +- RPC communication between Consul servers and clients +- Data plane routing instructions for the Envoy Layer 7 proxy +- Serf traffic over LAN and WAN +- Consul cluster peering and server federation + +It is important to monitor these control plane components and establish baselines and alert thresholds so that you can detect abnormal behavior. Note that these alerts can also be triggered by planned events such as Consul cluster upgrades, configuration changes, or leadership changes. + +To help monitor your Consul control plane, we recommend establishing a baseline and standard deviation for the following: + +- [Server health](/consul/docs/reference/agent/telemetry#server-health) +- [Leadership changes](/consul/docs/reference/agent/telemetry#leadership-changes) +- [Key metrics](/consul/docs/reference/agent/telemetry#key-metrics) +- [Autopilot](/consul/docs/reference/agent/telemetry#autopilot) +- [Network activity](/consul/docs/reference/agent/telemetry#network-activity-rpc-count) +- [Certificate authority expiration](/consul/docs/reference/agent/telemetry#certificate-authority-expiration) + +It is important to have a highly performant network with low network latency. Ensure that gossip network latency in all datacenters is within the 8ms latency budget for all Consul agents. Refer to the [Production server requirements](/consul/docs/install/performance#production-server-requirements) for more information. + +### Raft recommendations + +Consul uses the [Raft consensus protocol](/consul/docs/concept/consensus). High saturation of the Raft goroutines can lead to elevated latency in the rest of the system and may cause the Consul cluster to be unstable. As a result, it is important to monitor Raft to track your control plane health. We recommend the following actions to keep the control plane healthy: + +- Create an alert that notifies you when [Raft thread saturation](/consul/docs/reference/agent/telemetry#raft-thread-saturation) exceeds 50%.
+- Monitor [Raft replication capacity](/consul/docs/reference/agent/telemetry#raft-replication-capacity-issues) when Consul is handling large amounts of data and high write throughput. +- Lower [`raft_multiplier`](/consul/docs/install/performance#production) to keep your Consul cluster stable. The value of `raft_multiplier` defines the scaling factor for Consul. Default value for raft_multiplier is 5. + + A short multiplier minimizes failure detection and election time but may trigger frequently in high latency situations. This can cause constant leadership churn and associated unavailability. A high multiplier reduces the chances that spurious failures will cause leadership churn but it does this at the expense of taking longer to detect real failures and thus takes longer to restore Consul cluster availability. + + Wide networks with higher latency will perform better with larger `raft_multiplier` values. + +Raft uses BoltDB for storing data and maintaining its own state. Refer to the [Bolt DB performance metrics](/consul/docs/reference/agent/telemetry#bolt-db-performance) when you are troubleshooting Raft performance issues. + +## Consul data plane monitoring + +The data plane of Consul consists of Consul clients or [Connect proxies](/consul/docs/connect/proxy) interacting with each other through service-to-service communication. Service-to-service traffic always stays within the data plane, while the control plane only enforces traffic rules. Monitoring service-to-service communication is important but may become extremely complex in an enterprise setup with multiple services communicating to each other across federated Consul clusters through mesh, ingress and terminating gateways. + +### Service monitoring + +You can extract the following service-related information: + +- Use the [`catalog`](/consul/commands/catalog) command or the Consul UI to query all registered services in a Consul datacenter. +- Use the [`/agent/service/:service_id`](/consul/api-docs/agent/service#get-service-configuration) API endpoint to query individual services. Connect proxies use this endpoint to discover embedded configuration. + +### Proxy monitoring + +Envoy is the supported Connect proxy for Consul service mesh. For virtual machines (VMs), Envoy starts as a sidecar service process. For Kubernetes, Envoy starts as a sidecar container in a Kubernetes service pod. +Refer to the [Supported Envoy versions](/consul/docs/connect/proxies/envoy#supported-versions) documentation to find the compatible Envoy versions for your version of Consul. + +For troubleshooting service mesh issues, set Consul logs to `trace` or `debug`. The following example annotation sets Envoy logging to `debug`. + +```yaml +annotations: + consul.hashicorp.com/envoy-extra-args: '--log-level debug --disable-hot-restart' +``` + +Refer to the [Enable logging on Envoy sidecar pods](/consul/docs/k8s/annotations-and-labels#consul-hashicorp-com-envoy-extra-args) documentation for more information. + +#### Envoy Admin Interface + +To troubleshoot service-to-service communication issues, monitor Envoy host statistics. Envoy exposes a local administration interface that can be used to query and modify different aspects of the server on port `19000` by default. Envoy also exposes a public listener port to receive mTLS connections from other proxies in the mesh on port `20000` by default. + +All endpoints exposed by Envoy are available at the node running Envoy on port `19000`. 
The node can be either a Kubernetes pod or a VM running Consul service mesh. For example, if you forward the Envoy port to your local machine, you can access the Envoy admin interface at `http://localhost:19000/`. + +The following Envoy admin interface endpoints are particularly useful: + +- The `listeners` endpoint lists all listeners running on `localhost`. This allows you to confirm whether the upstream services are binding correctly to Envoy. + +```shell-session +$ curl http://localhost:19000/listeners +public_listener:192.168.19.168:20000::192.168.19.168:20000 +Outbound_listener:127.0.0.1:15001::127.0.0.1:15001 +``` + +- The `/clusters` endpoint displays information about the xDS clusters, such as service requests and mTLS-related data. The following example shows a truncated output. + +```shell-session +$ curl http://localhost:19000/clusters +local_app::observability_name::local_app +local_app::default_priority::max_connections::1024 +local_app::default_priority::max_pending_requests::1024 +local_app::default_priority::max_requests::1024 +local_app::default_priority::max_retries::3 +local_app::high_priority::max_connections::1024 +local_app::high_priority::max_pending_requests::1024 +local_app::high_priority::max_requests::1024 +local_app::high_priority::max_retries::3 +local_app::added_via_api::true +## ... +``` + +Visit the main admin interface (`http://localhost:19000`) to find the full list of available Envoy admin endpoints. Refer to the [Envoy docs](https://www.envoyproxy.io/docs/envoy/latest/operations/admin) for more information. + +## Next steps + +This page provided recommendations for monitoring your Consul control plane and data plane. + +To learn about monitoring the Consul host and instance resources, visit our [Monitoring best practices](/well-architected-framework/reliability/reliability-monitoring-service-to-service-communication-with-envoy) documentation. \ No newline at end of file diff --git a/website/content/docs/monitor/log/agent.mdx b/website/content/docs/monitor/log/agent.mdx new file mode 100644 index 000000000000..455446436cae --- /dev/null +++ b/website/content/docs/monitor/log/agent.mdx @@ -0,0 +1,24 @@ +--- +layout: docs +page_title: Log agent behavior +description: >- + You can output a Consul agent's log to a file and start the agent in the background so that Consul does not lock the active terminal window when you start a Consul agent. +--- + +# Log agent behavior + +This page describes how to output a log of the Consul agent's behavior. + +## Overview + +First, make sure your user has write permissions in the Consul data directory. + +```shell-session +$ sudo chmod g+w /opt/consul/ +``` + +Then start the Consul process. By default, the Consul agent outputs logs to the open terminal. To start the agent without locking the terminal and write the output to a log file instead, run the following command. + +```shell-session +$ consul agent -config-dir=/etc/consul.d/ > /tmp/consul-server.log 2>&1 & +``` diff --git a/website/content/docs/monitor/log/audit.mdx b/website/content/docs/monitor/log/audit.mdx new file mode 100644 index 000000000000..4be1d5664565 --- /dev/null +++ b/website/content/docs/monitor/log/audit.mdx @@ -0,0 +1,211 @@ +--- +layout: docs +page_title: Audit Logging (Enterprise) +description: >- + Audit logging secures Consul by capturing a record of HTTP API access and usage. Learn how to format agent configuration files to enable audit logs and specify the path to save logs to.
+--- + +# Audit Logging + + + +This feature requires HashiCorp Cloud Platform (HCP) or self-managed Consul Enterprise. Refer to the [enterprise feature matrix](/consul/docs/enterprise#consul-enterprise-feature-availability) for additional information. + + + +With Consul Enterprise v1.8.0+, audit logging can be used to capture a clear and actionable log of authenticated events (both attempted and committed) that Consul processes via its HTTP API. These events are then compiled into a JSON format for easy export and contain a timestamp, the operation performed, and the user who initiated the action. + +Audit logging enables security and compliance teams within an organization to get greater insight into Consul access and usage patterns. + +Complete the [Capture Consul Events with Audit Logging](/consul/tutorials/datacenter-operations/audit-logging) tutorial to learn more about Consul's audit logging functionality, + +For detailed configuration information on configuring the Consul Enterprise's audit logging, review the Consul [Audit Log](/consul/docs/reference/agent/configuration-file/general#audit) documentation. + +## Example Configuration + +Audit logging must be enabled on every agent in order to accurately capture all operations performed through the HTTP API. To enable logging, add the [`audit`](/consul/docs/reference/agent/configuration-file/general#audit) stanza to the agent's configuration. + +-> **Note**: Consul only logs operations which are initiated via the HTTP API. The audit log does not record operations that take place over the internal RPC communication channel used for agent communication. + + + + +The following example configures a destination called "My Sink". Since rotation is enabled, audit events will be stored at files named: `/tmp/audit-.json`. The log file will be rotated either every 24 hours, or when the log file size is greater than 25165824 bytes (24 megabytes). + + + +```hcl +audit { + enabled = true + sink "My sink" { + type = "file" + format = "json" + path = "/tmp/audit.json" + delivery_guarantee = "best-effort" + rotate_duration = "24h" + rotate_max_files = 15 + rotate_bytes = 25165824 + } +} +``` + +```json +{ + "audit": { + "enabled": true, + "sink": { + "My sink": { + "type": "file", + "format": "json", + "path": "/tmp/audit.json", + "delivery_guarantee": "best-effort", + "rotate_duration": "24h", + "rotate_max_files": 15, + "rotate_bytes": 25165824 + } + } + } +} +``` + +```yaml +server: + auditLogs: + enabled: true + sinks: + - name: My Sink + type: file + format: json + path: /tmp/audit.json + delivery_guarantee: best-effort + rotate_duration: 24h + rotate_max_files: 15 + rotate_bytes: 25165824 +``` + + + + + + +The following example configures a destination called "My Sink" which emits audit logs to standard out. + + + +```hcl +audit { + enabled = true + sink "My sink" { + type = "file" + format = "json" + path = "/dev/stdout" + delivery_guarantee = "best-effort" + } +} +``` + +```json +{ + "audit": { + "enabled": true, + "sink": { + "My sink": { + "type": "file", + "format": "json", + "path": "/dev/stdout", + "delivery_guarantee": "best-effort" + } + } + } +} +``` + +```yaml +server: + auditLogs: + enabled: true + sinks: + - name: My Sink + type: file + format: json + path: /dev/stdout + delivery_guarantee: best-effort +``` + + + + + + +## Example Audit Log + +In this example a client has issued an HTTP GET request to look up the `ssh` service in the `/v1/catalog/service/` endpoint. + +Details from the HTTP request are recorded in the audit log. 
The `stage` field is set to `OperationStart` which indicates the agent has begun processing the request. + +The value of the `payload.auth.accessor_id` field is the accessor ID of the [ACL token](/consul/docs/secure/acl#tokens) which issued the request. + + + +```json +{ + "created_at": "2020-12-08T12:30:29.196365-05:00", + "event_type": "audit", + "payload": { + "id": "e4a20aec-d250-72c4-2aea-454fe8ae8051", + "version": "1", + "type": "HTTPEvent", + "timestamp": "2020-12-08T12:30:29.196206-05:00", + "auth": { + "accessor_id": "08f05787-3609-8001-65b4-922e5d52e84c", + "description": "Bootstrap Token (Global Management)", + "create_time": "2020-12-01T11:01:51.652566-05:00" + }, + "request": { + "operation": "GET", + "endpoint": "/v1/catalog/service/ssh", + "remote_addr": "127.0.0.1:64015", + "user_agent": "curl/7.54.0", + "host": "127.0.0.1:8500" + }, + "stage": "OperationStart" + } +} +``` + + + +After the request is processed, a corresponding log entry is written for the HTTP response. The `stage` field is set to `OperationComplete` which indicates the agent has completed processing the request. + + + +```json +{ + "created_at": "2020-12-08T12:30:29.202935-05:00", + "event_type": "audit", + "payload": { + "id": "1f85053f-badb-4567-d239-abc0ecee1570", + "version": "1", + "type": "HTTPEvent", + "timestamp": "2020-12-08T12:30:29.202863-05:00", + "auth": { + "accessor_id": "08f05787-3609-8001-65b4-922e5d52e84c", + "description": "Bootstrap Token (Global Management)", + "create_time": "2020-12-01T11:01:51.652566-05:00" + }, + "request": { + "operation": "GET", + "endpoint": "/v1/catalog/service/ssh", + "remote_addr": "127.0.0.1:64015", + "user_agent": "curl/7.54.0", + "host": "127.0.0.1:8500" + }, + "response": { + "status": "200" + }, + "stage": "OperationComplete" + } +} +``` + + diff --git a/website/content/docs/monitor/telemetry/agent.mdx b/website/content/docs/monitor/telemetry/agent.mdx new file mode 100644 index 000000000000..12ea59ed2352 --- /dev/null +++ b/website/content/docs/monitor/telemetry/agent.mdx @@ -0,0 +1,371 @@ +--- +layout: docs +page_title: Agents - Enable Telemetry Metrics +description: >- + Configure agent telemetry to collect operations metrics you can use to debug and observe Consul behavior and performance. Learn about configuration options, the metrics you can collect, and why they're important. +--- + +# Agent Telemetry + +The Consul agent collects various runtime metrics about the performance of +different libraries and subsystems. These metrics are aggregated on a ten +second (10s) interval and are retained for one minute. An _interval_ is the period of time between instances of data being collected and aggregated. + +When telemetry is being streamed to an external metrics store, the interval is defined to be that store's flush interval. + +|External Store|Interval (seconds)| +|:--------|:--------| +|[dogstatsd](https://docs.datadoghq.com/developers/dogstatsd/?tab=hostagent#how-it-works)|10s| +|[Prometheus](https://vector.dev/docs/reference/configuration/sinks/prometheus_exporter/#flush_period_secs)| 60s| +|[statsd](https://github.com/statsd/statsd/blob/master/docs/metric_types.md#timing)|10s| + +To view this data, you must send a signal to the Consul process: on Unix, +this is `USR1` while on Windows it is `BREAK`. Once Consul receives the signal, +it will dump the current telemetry information to the agent's `stderr`. + +This telemetry information can be used for debugging or otherwise +getting a better view of what Consul is doing. 
Review the [Monitoring and +Metrics tutorial](/consul/tutorials/day-2-operations/monitor-datacenter-health?utm_source=docs) to learn how collect and interpret Consul data. + +By default, all metric names of gauge type are prefixed with the hostname of the consul agent, e.g., +`consul.hostname.server.isLeader`. To disable prefixing the hostname, set +`telemetry.disable_hostname=true` in the [agent configuration](/consul/docs/reference/agent/configuration-file/telemetry). + +Additionally, if the [`telemetry` configuration options](/consul/docs/reference/agent/configuration-file/telemetry) +are provided, the telemetry information will be streamed to a +[statsite](http://github.com/armon/statsite) or [statsd](http://github.com/etsy/statsd) server where +it can be aggregated and flushed to Graphite or any other metrics store. +For a configuration example for Telegraf, review the [Monitoring with Telegraf tutorial](/consul/tutorials/day-2-operations/monitor-health-telegraf?utm_source=docs). + +This +information can also be viewed with the [metrics endpoint](/consul/api-docs/agent#view-metrics) in JSON +format or using [Prometheus](https://prometheus.io/) format. + + + +```log +[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.num_goroutines': 19.000 +[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.alloc_bytes': 755960.000 +[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.malloc_count': 7550.000 +[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.free_count': 4387.000 +[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.heap_objects': 3163.000 +[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.total_gc_pause_ns': 1151002.000 +[2014-01-29 10:56:50 -0800 PST][G] 'consul-agent.runtime.total_gc_runs': 4.000 +[2014-01-29 10:56:50 -0800 PST][C] 'consul-agent.agent.ipc.accept': Count: 5 Sum: 5.000 +[2014-01-29 10:56:50 -0800 PST][C] 'consul-agent.agent.ipc.command': Count: 10 Sum: 10.000 +[2014-01-29 10:56:50 -0800 PST][C] 'consul-agent.serf.events': Count: 5 Sum: 5.000 +[2014-01-29 10:56:50 -0800 PST][C] 'consul-agent.serf.events.foo': Count: 4 Sum: 4.000 +[2014-01-29 10:56:50 -0800 PST][C] 'consul-agent.serf.events.baz': Count: 1 Sum: 1.000 +[2014-01-29 10:56:50 -0800 PST][S] 'consul-agent.memberlist.gossip': Count: 50 Min: 0.007 Mean: 0.020 Max: 0.041 Stddev: 0.007 Sum: 0.989 +[2014-01-29 10:56:50 -0800 PST][S] 'consul-agent.serf.queue.Intent': Count: 10 Sum: 0.000 +[2014-01-29 10:56:50 -0800 PST][S] 'consul-agent.serf.queue.Event': Count: 10 Min: 0.000 Mean: 2.500 Max: 5.000 Stddev: 2.121 Sum: 25.000 +``` + + + +# Key Metrics + +These are some metrics emitted that can help you understand the health of your cluster at a glance. A [Grafana dashboard](https://grafana.com/grafana/dashboards/13396) is also available, which is maintained by the Consul team and displays these metrics for easy visualization. For a full list of metrics emitted by Consul, see [Metrics Reference](#metrics-reference) + +### Transaction timing + +| Metric Name | Description | Unit | Type | +| :----------------------- | :----------------------------------------------------------------------------------- | :--------------------------- | :------ | +| `consul.kvs.apply` | Measures the time it takes to complete an update to the KV store. | ms | timer | +| `consul.txn.apply` | Measures the time spent applying a transaction operation. | ms | timer | +| `consul.raft.apply` | Counts the number of Raft transactions applied during the interval. This metric is only reported on the leader. 
| raft transactions / interval | counter | +| `consul.raft.commitTime` | Measures the time it takes to commit a new entry to the Raft log on the leader. | ms | timer | + +**Why they're important:** Taken together, these metrics indicate how long it takes to complete write operations in various parts of the Consul cluster. Generally these should all be fairly consistent and no more than a few milliseconds. Sudden changes in any of the timing values could be due to unexpected load on the Consul servers, or due to problems on the servers themselves. + +**What to look for:** Deviations (in any of these metrics) of more than 50% from baseline over the previous hour. + +### Leadership changes + +| Metric Name | Description | Unit | Type | +| :------------------------------- | :------------------------------------------------------------------------------------------------------------- | :-------- | :------ | +| `consul.raft.leader.lastContact` | Measures the time since the leader was last able to contact the follower nodes when checking its leader lease. | ms | timer | +| `consul.raft.state.candidate` | Increments whenever a Consul server starts an election. | elections | counter | +| `consul.raft.state.leader` | Increments whenever a Consul server becomes a leader. | leaders | counter | +| `consul.server.isLeader` | Track if a server is a leader(1) or not(0). | 1 or 0 | gauge | + +**Why they're important:** Normally, your Consul cluster should have a stable leader. If there are frequent elections or leadership changes, it would likely indicate network issues between the Consul servers, or that the Consul servers themselves are unable to keep up with the load. + +**What to look for:** For a healthy cluster, you're looking for a `lastContact` lower than 200ms, `leader` > 0 and `candidate` == 0. Deviations from this might indicate flapping leadership. + +### Certificate Authority Expiration + +| Metric Name | Description | Unit | Type | +| :------------------------- | :---------------------------------------------------------------------------------- | :------ | :---- | +| `consul.mesh.active-root-ca.expiry` | The number of seconds until the root CA expires, updated every hour. | seconds | gauge | +| `consul.mesh.active-signing-ca.expiry` | The number of seconds until the signing CA expires, updated every hour. | seconds | gauge | +| `consul.agent.tls.cert.expiry` | The number of seconds until the server agent's TLS certificate expires, updated every hour. | seconds | gauge | + +** Why they're important:** Consul Mesh requires a CA to sign all certificates +used to connect the mesh and the mesh network ceases to work if they expire and +become invalid. The Root is particularly important to monitor as Consul does +not automatically rotate it. The TLS certificate metric monitors the certificate +that the server's agent uses to connect with the other agents in the cluster. + +** What to look for:** The Root CA should be monitored for an approaching +expiration, to indicate it is time for you to rotate the "root" CA either +manually or with external automation. Consul should rotate the signing (intermediate) certificate +automatically, but we recommend monitoring the rotation. When the certificate does not rotate, check the server agent logs for +messages related to the CA system. The agent TLS certificate's rotation handling +varies based on the configuration. 
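+ +To spot-check these expiration gauges on a server, you can query the agent's [metrics endpoint](/consul/api-docs/agent#view-metrics) in Prometheus format and filter for the expiry metrics. The following command is a minimal sketch: it assumes the HTTP API is reachable on the default local address and port and that Prometheus metrics are enabled in the agent's telemetry configuration. + +```shell-session +$ curl -s 'http://localhost:8500/v1/agent/metrics?format=prometheus' | grep expiry +```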
+ +### Autopilot + +| Metric Name | Description | Unit | Type | +| :------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :----------- | :---- | +| `consul.autopilot.healthy` | Tracks the overall health of the local server cluster. If all servers are considered healthy by Autopilot, this will be set to 1. If any are unhealthy, this will be 0. | health state | gauge | + +**Why it's important:** Autopilot can expose the overall health of your cluster with a simple boolean. + +**What to look for:** Alert if `healthy` is 0. Some other indicators of an unhealthy cluster would be: +- `consul.raft.commitTime` - This can help reflect the speed of state store +changes being performed by the agent. If this number is rising, the server may +be experiencing an issue due to degraded resources on the host. +- [Leadership change metrics](#leadership-changes) - Check for deviation from +the recommended values. This can indicate failed leadership elections or +flapping nodes. + +### Memory usage + +| Metric Name | Description | Unit | Type | +| :--------------------------- | :----------------------------------------------------------------- | :---- | :---- | +| `consul.runtime.alloc_bytes` | Measures the number of bytes allocated by the Consul process. | bytes | gauge | +| `consul.runtime.sys_bytes` | Measures the total number of bytes of memory obtained from the OS. | bytes | gauge | + +**Why they're important:** Consul keeps all of its data in memory. If Consul consumes all available memory, it will crash. + +**What to look for:** If `consul.runtime.sys_bytes` exceeds 90% of total available system memory. + +**NOTE:** This metric is calculated using Go's runtime package +[MemStats](https://golang.org/pkg/runtime/#MemStats). This will have a +different output than using information gathered from `top`. For more +information, see [GH-4734](https://github.com/hashicorp/consul/issues/4734). + +### Garbage collection + +| Metric Name | Description | Unit | Type | +| :--------------------------------- | :---------------------------------------------------------------------------------------------------- | :--- | :---- | +| `consul.runtime.total_gc_pause_ns` | Number of nanoseconds consumed by stop-the-world garbage collection (GC) pauses since Consul started. | ns | gauge | + +**Why it's important:** GC pause is a "stop-the-world" event, meaning that all runtime threads are blocked until GC completes. Normally these pauses last only a few nanoseconds. But if memory usage is high, the Go runtime may GC so frequently that it starts to slow down Consul. + +**What to look for:** Warning if `total_gc_pause_ns` exceeds 2 seconds/minute, critical if it exceeds 5 seconds/minute. + +**NOTE:** `total_gc_pause_ns` is a cumulative counter, so in order to calculate rates (such as GC/minute), +you will need to apply a function such as InfluxDB's [`non_negative_difference()`](https://docs.influxdata.com/influxdb/v1.5/query_language/functions/#non-negative-difference). 
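+ +If you ship these metrics to Prometheus, the thresholds above can be expressed as an alerting rule along the lines of the following sketch. The metric name is shown as it typically appears when scraped in Prometheus format; the rule name, labels, and window are assumptions to adapt to your own pipeline. + +```yaml +groups: + - name: consul-runtime + rules: + - alert: ConsulHighGcPause + # rate() yields nanoseconds of GC pause per second; multiplying by 60 + # approximates pause time per minute. 2e9 ns = 2 seconds per minute. + expr: rate(consul_runtime_total_gc_pause_ns[5m]) * 60 > 2e9 + for: 5m + labels: + severity: warning +```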
+ +### Network activity - RPC Count + +| Metric Name | Description | Unit | Type | +| :--------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------ | :------- | :------ | +| `consul.client.rpc` | Increments whenever a Consul agent makes an RPC request to a Consul server. | requests | counter | +| `consul.client.rpc.exceeded` | Increments whenever a Consul agent's RPC request to a Consul server gets rate limited by that agent's [`limits`](/consul/docs/reference/agent/configuration-file/general#limits) configuration. | requests | counter | +| `consul.client.rpc.failed` | Increments whenever a Consul agent makes an RPC request to a Consul server and fails. | requests | counter | + +**Why they're important:** These measurements indicate the current load created by a Consul agent, including when the load becomes high enough to be rate limited. A high RPC count, especially from `consul.client.rpc.exceeded`, which means that requests are being rate limited, could imply a misconfigured Consul agent. + +**What to look for:** + +- Sudden large changes to the `consul.client.rpc` metrics (greater than 50% deviation from baseline). +- A `consul.client.rpc.exceeded` or `consul.client.rpc.failed` count greater than 0, as it implies that an agent is being rate limited or failing to make RPC requests to a Consul server. + +### Raft Thread Saturation + +| Metric Name | Description | Unit | Type | +| :----------------------------------- | :----------------------------------------------------------------------------------------------------------------------- | :--------- | :----- | +| `consul.raft.thread.main.saturation` | An approximate measurement of the proportion of time the main Raft goroutine is busy and unavailable to accept new work. | percentage | sample | +| `consul.raft.thread.fsm.saturation` | An approximate measurement of the proportion of time the Raft FSM goroutine is busy and unavailable to accept new work. | percentage | sample | + +**Why they're important:** These measurements are a useful proxy for how much capacity a Consul server has to accept additional write load. High saturation of the Raft goroutines can lead to elevated latency in the rest of the system and cause cluster instability. + +**What to look for:** Generally, a server's steady-state saturation should be less than 50%. + +**NOTE:** These metrics are approximate and under extremely heavy load won't give a perfect fine-grained view of how much headroom a server has available. Instead, treat them as an early warning sign. + +**Requirements:** +* Consul 1.13.0+ + +### Raft Replication Capacity Issues + +| Metric Name | Description | Unit | Type | +| :--------------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------- | :------ | +| `consul.raft.fsm.lastRestoreDuration` | Measures the time taken to restore the FSM from a snapshot on an agent restart or from the leader calling installSnapshot. This is a gauge that holds its value since most servers only restore during restarts, which are typically infrequent. | ms | gauge | +| `consul.raft.leader.oldestLogAge` | The number of milliseconds since the _oldest_ log in the leader's log store was written.
This can be important for replication health where write rate is high and the snapshot is large as followers may be unable to recover from a restart if restoring takes longer than the minimum value for the current leader. Compare this with `consul.raft.fsm.lastRestoreDuration` and `consul.raft.rpc.installSnapshot` to monitor. In normal usage this gauge value will grow linearly over time until a snapshot completes on the leader and the log is truncated. | ms | gauge | +| `consul.raft.rpc.installSnapshot` | Measures the time taken to process the installSnapshot RPC call. This metric should only be seen on agents which are currently in the follower state. | ms | timer | + +**Why they're important:** These metrics allow operators to monitor the health +and capacity of raft replication on servers. **When Consul is handling large +amounts of data and high write throughput** it is possible for the cluster to +get into the following state: + * Write throughput is high (say 500 commits per second or more) and constant + * The leader is writing out a large snapshot every minute or so + * The snapshot is large enough that it takes considerable time to restore from + disk on a restart or from the leader if a follower gets behind + * Disk IO available allows the leader to write a snapshot faster than it can be + restored from disk on a follower + +Under these conditions, a follower after a restart may be unable to catch up on +replication and become a voter again since it takes longer to restore from disk +or the leader than the leader takes to write a new snapshot and truncate its +logs. Servers retain +[`raft_trailing_logs`](/consul/docs/reference/agent/configuration-file/raft#raft_trailing_logs) (default +`10240`) log entries even if their snapshot was more recent. On a leader +processing 500 commits/second, that is only about 20 seconds worth of logs. +Assuming the leader is able to write out a snapshot and truncate the logs in +less than 20 seconds, there will only be 20 seconds worth of "recent" logs +available on the leader right after the leader has taken a snapshot and never +more than about 80 seconds worth assuming it is taking a snapshot and truncating +logs every 60 seconds. + +In this state, followers must be able to restore a snapshot into memory and +resume replication in under 80 seconds otherwise they will never be able to +rejoin the cluster until write rates reduce. If they take more than 20 seconds +then there will be a chance that they are unlucky with timing when they restart +and have to download a snapshot again from the servers one or more times. If +they take 50 seconds or more then they will likely fail to catch up more often +than they succeed and will remain non-voters for some time until they happen to +complete the restore just before the leader truncates its logs. + +In the worst case, the follower will be left continually downloading snapshots +from the leader which are always too old to use by the time they are restored. +This can put additional strain on the leader transferring large snapshots +repeatedly as well as reduce the fault tolerance and serving capacity of the +cluster. + +Since Consul 1.5.3 +[`raft_trailing_logs`](/consul/docs/reference/agent/configuration-file/raft#raft_trailing_logs) has been +configurable. Increasing it allows the leader to retain more logs and give +followers more time to restore and catch up. 
The tradeoff is potentially
slower appends, which might eventually affect write throughput and latency
negatively, so setting it arbitrarily high is not recommended. Before Consul
1.10.0, however, changing this configuration on the leader required a rolling
restart, and since no follower could restart without losing health, this could
mean losing cluster availability and needing to recover the cluster from a loss
of quorum.

Since Consul 1.10.0,
[`raft_trailing_logs`](/consul/docs/reference/agent/configuration-file/raft#raft_trailing_logs) is
reloadable with `consul reload` or `SIGHUP`, allowing operators to increase it
without the leader restarting or losing leadership, so the cluster can be
recovered gracefully.

Monitoring these metrics can help avoid or diagnose this state.

**What to look for:**

`consul.raft.leader.oldestLogAge` should look like a saw-tooth wave, increasing
linearly with time until the leader takes a snapshot and then jumping down as
the oldest logs are truncated. The lowest point on that line should remain
comfortably higher (i.e. 2x or more) than the time it takes to restore a
snapshot.

There are two ways a snapshot can be restored on a follower: from disk on
startup or from the leader during an `installSnapshot` RPC. The leader only
sends an `installSnapshot` RPC if the follower is new and has no state, or if
its state is too old for it to catch up with the leader's logs.

`consul.raft.fsm.lastRestoreDuration` shows the time it took to restore from
either source the last time it happened. Most of the time this is when the
server was started. It's a gauge that will always show the last restore duration
(in Consul 1.10.0 and later) however long ago that was.

`consul.raft.rpc.installSnapshot` is the timing information from the leader's
perspective when it installs a new snapshot on a follower. It includes the time
spent transferring the data as well as the follower restoring it. Since these
events are typically infrequent, you may need to graph the last value observed,
for example using `max_over_time` with a large range in Prometheus. While the
restore part will also be reflected in `lastRestoreDuration`, it can be useful
to observe this too since the logs need to be able to cover this entire
operation, including the snapshot delivery, to ensure followers can always catch
up safely.

Graphing `consul.raft.leader.oldestLogAge` on the same axes as the other two
metrics here can help you see at a glance whether restore times are creeping
dangerously close to the limit of what the leader is retaining at the current
write rate.

Note that if servers don't restart often, the snapshot could have grown
significantly since the last restore happened, so the last restore time might
not reflect what would happen if an agent restarts now.

### License Expiration

| Metric Name                       | Description                                                       | Unit  | Type  |
| :-------------------------------- | :---------------------------------------------------------------- | :---- | :---- |
| `consul.system.licenseExpiration` | Number of hours until the Consul Enterprise license will expire.  | hours | gauge |

**Why it's important:**

This measurement indicates how many hours are left before the Consul Enterprise license expires. When the license expires, some
Consul Enterprise features will cease to work.
An example of this is that after expiration, it is no longer possible to create
or modify resources in non-default namespaces or to manage namespace definitions themselves, even though reads of namespaced
resources will still work.

**What to look for:**

Monitor this metric to ensure that the license doesn't expire and cause a degradation of functionality.


### Bolt DB Performance

| Metric Name | Description | Unit | Type |
| :-------------------------------- | :--------------------------------------------------------------- | :---- | :---- |
| `consul.raft.boltdb.freelistBytes` | Represents the number of bytes necessary to encode the freelist metadata. When [`raft_logstore.boltdb.no_freelist_sync`](/consul/docs/reference/agent/configuration-file/raft#raft_logstore_boltdb_no_freelist_sync) is set to `false`, these metadata bytes must also be written to disk for each committed log. | bytes | gauge |
| `consul.raft.boltdb.logsPerBatch` | Measures the number of logs being written per batch to the db. | logs | sample |
| `consul.raft.boltdb.storeLogs` | Measures the amount of time spent writing logs to the db. | ms | timer |
| `consul.raft.boltdb.writeCapacity` | Theoretical write capacity in terms of the number of logs that can be written per second. Each sample outputs what the capacity would be if future batched log write operations were similar to this one. This similarity encompasses 4 things: batch size, byte size, disk performance and boltdb performance. While none of these will be static and it's highly likely individual samples of this metric will vary, aggregating this metric over a larger time window should provide a decent picture into how this BoltDB store can perform. | logs/second | sample |


**Requirements:**
* Consul 1.11.0+

**Why they're important:**

The `consul.raft.boltdb.storeLogs` metric is a direct indicator of disk write performance of a Consul server. If there are issues with the disk or
performance degradations related to Bolt DB, these metrics will show the issue and potentially the cause as well.

**What to look for:**

The primary thing to look for is increases in the `consul.raft.boltdb.storeLogs` times. Its value directly governs an
upper limit to the throughput of write operations within Consul.

In Consul, each write operation will turn into a single Raft log to be committed. Raft will process these
logs and store them within Bolt DB in batches. Each call to store logs within Bolt DB is measured to record how long
it took as well as how many logs were contained in the batch. Writing logs in this fashion is serialized so that
a subsequent log storage operation can only be started after the previous one completed. The maximum number
of log storage operations that can be performed each second is represented with the `consul.raft.boltdb.writeCapacity`
metric. When log storage operations are becoming slower, you may not see an immediate decrease in write capacity
due to increased batch sizes of each operation. However, the max batch size allowed is 64 logs. Therefore, if
the `logsPerBatch` metric is near 64 and the `storeLogs` metric is seeing increased time to write each batch to disk,
then it is likely that increased write latencies and other errors may occur.

There can be a number of potential issues that can cause this. Often, it is the performance of the underlying
disks that is the issue. Other times it may be caused by Bolt DB behavior.
Bolt DB keeps track of free space within
the `raft.db` file. When it needs to allocate data, it will use existing free space first before further expanding the
file. By default, Bolt DB will write a data structure containing metadata about free pages within the DB to disk for
every log storage operation. Therefore, if the free space within the database grows excessively large, such as after
a large spike in writes beyond the normal steady state and a subsequent slowdown in the write rate, then Bolt DB
could end up writing a large amount of extra data to disk for each log storage operation. This has the potential
to drastically increase disk write throughput, potentially beyond what the underlying disks can keep up with. To
detect this situation you can look at the `consul.raft.boltdb.freelistBytes` metric. This metric is a count of
the extra bytes that are being written for each log storage operation beyond the log data itself. While not a clear
indicator of an actual issue, this metric can be used to diagnose why the `consul.raft.boltdb.storeLogs` metric
is high.

If Bolt DB log storage performance becomes an issue and is caused by free list management, then setting
[`raft_logstore.boltdb.no_freelist_sync`](/consul/docs/reference/agent/configuration-file/raft#raft_logstore_boltdb_no_freelist_sync) to `true` in the server's configuration
may help to reduce disk IO and log storage operation times. Disabling free list syncing will, however, increase
the startup time for a server as it must scan the `raft.db` file for free space instead of loading the already
populated free list structure.

Consul includes an experimental backend configuration that you can use instead of BoltDB. Refer to [Experimental WAL LogStore backend](/consul/docs/deploy/server/wal/) for more information.

diff --git a/website/content/docs/monitor/telemetry/appdynamics.mdx b/website/content/docs/monitor/telemetry/appdynamics.mdx
new file mode 100644
index 000000000000..692790415edc
--- /dev/null
+++ b/website/content/docs/monitor/telemetry/appdynamics.mdx
@@ -0,0 +1,350 @@
---
layout: docs
page_title: Monitor Consul with AppDynamics CNS
description: >-
  Use HashiCorp's Consul Monitoring Extension for AppDynamics CNS to collect Consul telemetry data and node information.
---

# Monitor Consul with AppDynamics CNS

This page describes the process to monitor Consul telemetry data with the Consul monitoring extension for AppDynamics CNS.

## Overview

Consul produces a range of metrics in various formats, which enables operators
to measure the health and stability of a datacenter, and diagnose or predict
potential issues.

This tutorial will show you how to use the
[HashiCorp Consul Monitoring Extension](https://github.com/hashicorp/consul-appd-extension)
for AppDynamics CNS to collect Consul telemetry data and node information.

The integration enables you to collect information about
[HashiCorp Cloud Platform (HCP) Consul](/consul/tutorials/get-started-hcp/hcp-gs-deploy)
environments through Consul client agents. AppDynamics CNS is also
[HCP Consul Verified](/hcp/docs/consul).
You will install the [AppDynamics Machine Agent](https://docs.appdynamics.com/display/PRO45/Infrastructure+Visibility)
on the Consul agents and use [statsite](https://github.com/statsite/statsite) as
a metrics aggregator to collect node telemetry metrics.

The extension provides you with two example dashboards that will be imported
into your AppDynamics controller.
+ +## Prerequisites and configuration + +In order to complete this tutorial the following prerequisites need to be satisfied: + +- A running Consul or [HCP Consul](/consul/tutorials/get-started-hcp/hcp-gs-deploy) datacenter to monitor with AppDynamics. +- An AppDynamics SaaS or On-Prem controller version 4.5 or greater. +- The Consul agents that are being monitored will also need to meet AppDynamics + Machine Agent [requirements and supported environments](https://docs.appdynamics.com/display/PRO45/Standalone+Machine+Agent+Requirements+and+Supported+Environments). + +## Setup monitoring + +You will perform the integration in four steps: + +- Install AppDynamics Machine Agent including the JRE on the Consul nodes. +- Install HashiCorp Consul Monitoring Extension for AppDynamics CNS. +- Build and install statsite (which is a metrics aggregator like statsd) to collect node metrics. +- Configure Consul to send telemetry data to the AppDynamics agent. + +### Install AppDynamics Machine Agent bundle + +To integrate your Consul datacenter with AppDynamics you will install and +configure the AppDynamics Machine Agent on all the nodes you want to monitor and +configure Consul to send telemetry data to it. + +1. Download the AppDynamics [Machine Agent Bundle](https://download.appdynamics.com/download/#version=&apm=machine&os=&platform_admin_os=&appdynamics_cluster_os=&events=&eum=&page=1). + +2. As `root` or super user, unzip and setup the service file. + + ```shell-session + $ sudo su + ``` + + ```shell-session + $ mkdir -p /opt/appdynamics/machine-agent + ``` + + ```shell-session + $ unzip ./machineagent-bundle-64bit-linux-4.5.15.2316.zip -d /opt/appdynamics/machine-agent + ``` + + ```shell-session + $ cp /opt/appdynamics/machine-agent/etc/systemd/system/appdynamics-machine-agent.service /etc/systemd/system/appdynamics-machine-agent.service + ``` + + ```shell-session + $ systemctl daemon-reload + ``` + + -> An official installation guide is available on the [AppDynamics documentation website](https://docs.appdynamics.com/display/PRO45/Linux+Install+Using+ZIP+with+Bundled+JRE) + +3. Configure the AppDynamics Machine Agent to communicate with the controller + when running in standalone mode. + + ```shell-session + $ cd /opt/appdynamics/machine-agent/conf + ``` + + ```shell-session + $ vi controller-info.xml + ``` + + Note, this requires editing the controller file `controller-info.xml`. You + will need to obtain your AppDynamics Controller access information and + configure it in `controller-info.xml` file before you begin the steps below. + Refer to [configure the Standalone Machine Agent](https://docs.appdynamics.com/display/PRO45/Configure+the+Standalone+Machine+Agent) for the configuration reference. + +4. It is highly recommended to increase the value of `maxMetrics` so that data + doesn't get truncated. Add Java options in AppDynamics agent service + definition to increase the value of `maxMetrics`. + + ```shell-session + $ sed -i 's/#Environment="JAVA_OPTS=-D= -D="/Environment="JAVA_OPTS=-Dappdynamics.agent.maxMetrics=10000"/g' /etc/systemd/system/appdynamics-machine-agent.service + ``` + +### Configure AppDynamics machine agent for Consul telemetry + +Now that the agent is configured you can install the monitoring extension for +AppDynamics CNS. 
+ +Clone the [consul-appd-extension repo](https://github.com/hashicorp/consul-appd-extension) and copy the contents of folder +`statsite` into `/opt/appdynamics/machine-agent/monitors/StatSite`: + +```git +$ git clone https://github.com/hashicorp/consul-appd-extension.git +Cloning into 'consul-appd-extension'... +... +``` + +```shell-session +$ ls -1 ./consul-appd-extension/statsite/ +monitor.xml +output.py +statsite.conf +statsite.sh +``` + +```shell-session +$ mkdir -p /opt/appdynamics/machine-agent/monitors/StatSite +``` + +```shell-session +$ cp ./consul-appd-extension/statsite/* /opt/appdynamics/machine-agent/monitors/StatSite +``` + +### Compile and install statsite + +The extension uses statsite to import statsd Consul metrics into AppDynamics. To +achieve this, it will need a statsite executable embedded in the agent folder. +This following sections demonstrate how to build statsite for your nodes. + +~> Due to the amount of dependencies necessary to successfully build statsite it +is recommended to perform the build activity on a machine different from the +Consul agents to avoid dependency issues or installing un-necessary software on +the nodes. Once the build is completed you can distribute the binary to the +Consul nodes to perform the integration. + +#### Install dependencies + +This tutorial provides you with commands to perform the build for different Linux +flavors but you should refer to the official +[statsite installation guide](https://github.com/statsite/statsite/blob/master/INSTALL.md) +to find the best set of steps for your OS. + + + + +```shell-session +$ apt-get update +``` + +```shell-session +$ apt-get -y install build-essential libtool autoconf automake scons python-setuptools lsof git texlive check +``` + + + + +```shell-session +$ yum update +``` + +```shell-session +$ yum groupinstall -y 'Development Tools' +``` + +```shell-session +$ yum install -y install libtool autoconf automake scons python-setuptools lsof git texlive check +``` + + + + +#### Compile + +```shell-session +$ cd ~ && wget https://github.com/statsite/statsite/archive/v0.8.0.zip +``` + +```shell-session +$ unzip v0.8.0.zip && cd statsite-0.8.0 +``` + +```shell-session +$ ./bootstrap.sh +``` + +```shell-session +$ ./configure +``` + +```shell-session +$ make +``` + +#### Install + +The installation will require you to copy the `statsite` executable into +`/opt/appdynamics/machine-agent/monitors/StatSite` for every node you want to +monitor. + +```shell-session +$ cp ./src/statsite /opt/appdynamics/machine-agent/monitors/StatSite +``` + +### Configure Consul to send metrics to statsite + +The last step in the integration is to configure your Consul agents to send +their telemetry data to AppDynamics CNS via statsite. + + + + + + +```hcl +telemetry { + dogstatsd_addr = "localhost:8125" +} +``` + + + +You can apply it to Consul by copying the file into your Consul configuration +folder and then by performing a rolling restart of all the nodes that will use +the integration. + +```shell-session +$ cp ./consul-statsite.hcl /etc/consul.d/ +``` + + + + +The extension repository provides you with a configuration file already set to +use the statsite instance you embedded in AppDynamics machine agent bundle. + + + +```json +{ + "telemetry": { + "statsite_address": "localhost:8125" + } +} +``` + + + +You can apply it to Consul by copying the file into your Consul configuration +folder and then by performing a rolling restart of all the nodes that will use +the integration. 
+ +```shell-session +$ cp ./consul-appd-extension/consul-statsite.json /etc/consul.d/ +``` + + + + +## Display telemetry Metrics + +For a complete list of Consul metrics supported by these integrations, as well +as details on what each metric means, consult the Consul +[telemetry documentation](/consul/docs/reference/agent/telemetry). + + + + The AppDynamics Integration with HCP Consul collects a subset of +Consul's default metrics that do not pertain to [server health](/consul/docs/reference/agent/telemetry#server-health). + + + +### Finding metrics + +All metrics reported by the extension will be available in the Metric Browser +at `Controller > Applications > Application > Metric Browser`. + +- `Application Infrastructure Performance > Consul > Custom Metrics > statsd > consul` + lists the available Consul metrics: + +![AppDynamics CNS Custom Metrics Consul](/img/cloud-integrations/appdynamics-ui-available-metrics-consul.png) + +- `Application Infrastructure Performance > Consul > Custom Metrics > statsd > envoy` + lists metrics available for the Connect service mesh. + +![AppDynamics CNS Custom Metrics Service Mesh ](/img/cloud-integrations/appdynamics-ui-available-metrics-service-mesh.png) + +### Custom dashboard + +The extension repository provides two custom dashboards to get you started on +monitoring Consul. They are located in the `dashboards` folder. + +```shell-session +$ ls -1 ./dashboards +``` + +```plaintext hideClipboard +consul_dashboard.json +consul_servicemesh_dashboard.json +``` + +To import the dashboards: + +1. Log into your AppDynamics controller. Select the `Dashboards & Reports tab > Dashboards > Import`. +1. Upload the `.json` dashboard file. + +Once uploaded the dashboards will be available in the `Dashboard & Reports tab`. + +![AppDynamics CNS Custom Dashboard Consul Metrics](/img/cloud-integrations/appdynamics-ui-dashboard-consul.png) + +![AppDynamics CNS Custom Dashboard Service Mesh Metrics](/img/cloud-integrations/appdynamics-ui-dashboard-service-mesh.png) + + + + You will need to change the value for the key `applicationName` +within the templates to match your application name. + + + +### Custom health rules + +AppDynamics CNS provides the ability to customize health rules, the policy +statements that define triggers. Health rules for Consul are created +against the applications that are using its service discovery and service mesh +so that the metrics for the application as well as Consul can be seen against +particular applications in AppDynamics. + +![AppDynamics CNS custom health rules screen](/img/cloud-integrations/appdynamics-ui-health-rules.png) + +## Next steps + +In this tutorial you learned how to integrate AppDynamics machine agent with +Consul and HCP Consul to collect metrics and import custom dashboards into +AppDynamics CNS to have a monitoring starting point and consider your options +for visualizing, aggregating, and alerting on those metrics. + +For more information on how to personalize your dashboards you can refer to +AppDynamics documentation for [creating custom dashboards](https://docs.appdynamics.com/display/PRO45/Custom+Dashboards). 
\ No newline at end of file diff --git a/website/content/docs/monitor/telemetry/dataplane.mdx b/website/content/docs/monitor/telemetry/dataplane.mdx new file mode 100644 index 000000000000..987568dcbebb --- /dev/null +++ b/website/content/docs/monitor/telemetry/dataplane.mdx @@ -0,0 +1,37 @@ +--- +layout: docs +page_title: Enable Dataplane telemetry metrics +description: >- + Configure telemetry to collect metrics you can use to debug and observe Consul Dataplane behavior and performance. +--- + +# Enable Dataplane telemetry metrics + +Consul Dataplane collects metrics about its own status and performance. The following external metrics stores are supported: + +- [DogstatsD](https://docs.datadoghq.com/developers/dogstatsd/) +- [Prometheus](https://prometheus.io/docs/prometheus/latest/) +- [StatsD](https://github.com/statsd/statsd) + +Consul Dataplane uses the same external metrics store that is configured for Envoy. To enable telemetry for Consul Dataplane, enable telemetry for Envoy by specifying an external metrics store in the proxy-defaults configuration entry or directly in the proxy.config field of the proxy service definition. Refer to the [Envoy bootstrap configuration](/consul/docs/connect/proxies/envoy#bootstrap-configuration) for details. + +## Merge Prometheus metrics + +When you use Prometheus metrics, Consul Dataplane configures Envoy to serve merged metrics through a single endpoint. The dataplane collects and merges metrics from the following sources: + +- Consul Dataplane +- The Envoy process managed by Consul Dataplane +- (optionally) Your service instance running alongside Consul Dataplane + +## Metrics Reference + +Consul Dataplane supports the following metrics: + +| Metric Name | Description | Unit | Type | +| :------------------------------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :--------------- | :------ | +| `consul_dataplane.connect_duration` | Measures the time `consul-dataplane` spends connecting to a Consul server, including the time to discover Consul server addresses and to complete other setup prior to Envoy opening the xDS stream. | ms | timer | +| `consul_dataplane.connected` | Indicates whether `consul-dataplane` is currently connected to a Consul server. | 1 or 0 | gauge | +| `consul_dataplane.connection_errors` | Measures the number of errors encountered on gRPC streams. This is labeled with the gRPC error status code. | number of errors | gauge | +| `consul_dataplane.discover_servers_duration` | Measures the time `consul-dataplane` spends discovering Consul server IP addresses. | ms | timer | +| `consul_dataplane.envoy_connected` | Indicates whether Envoy is currently connected to `consul-dataplane` and able to receive xDS updates. | 1 or 0 | gauge | +| `consul_dataplane.login_duration` | Measures the time `consul-dataplane` spends logging in to an ACL auth method. | ms | timer | \ No newline at end of file diff --git a/website/content/docs/monitor/telemetry/telegraf.mdx b/website/content/docs/monitor/telemetry/telegraf.mdx new file mode 100644 index 000000000000..1e7ca8a9ab9a --- /dev/null +++ b/website/content/docs/monitor/telemetry/telegraf.mdx @@ -0,0 +1,355 @@ +--- +layout: docs +page_title: Monitor Consul datacenter health with Telegraf +description: >- + Learn how to use Telegraf to visualize Consul agent and datacenter metrics. 
+--- + +# Monitor Consul datacenter health with Telegraf + +This page describes the process to set up Telegraf to monitor Consul datacenter telemetry. + +## Overview + +Consul makes a range of metrics in various formats available so operators can +measure the health and stability of a datacenter, and diagnose or predict +potential issues. + +There are number of monitoring tools and options available, but for the purposes +of this tutorial you are going to use the [telegraf_plugin][] in conjunction with +the StatsD protocol supported by Consul. + +You can read the full list of metrics available with Consul in the +[telemetry documentation](/consul/docs/reference/agent/telemetry). + +In this tutorial you will: + +- Configure Telegraf to collect StatsD and host level metrics +- Configure Consul to send metrics to Telegraf +- Review an example of metrics visualization +- Understand important metrics to aggregate and alert on + +## Install Telegraf + +The process for installing Telegraf depends on your operating system. We +recommend following the [official Telegraf installation documentation][telegraf-install]. + +## Configure Telegraf + +Telegraf acts as a StatsD agent and can collect additional metrics about the +hosts where Consul agents are running. Telegraf itself ships with a wide range +of [input plugins][telegraf-input-plugins] to collect data from lots of sources +for this purpose. + +You are going to enable some of the most common input plugins to monitor CPU, +memory, disk I/O, networking, and process status, since these are useful for +debugging Consul datacenter issues. + +The `telegraf.conf` file starts with global options: + + + +```toml +[agent] + interval = "10s" + flush_interval = "10s" + omit_hostname = false +``` + + + +You set the default collection interval to 10 seconds and ask Telegraf to +include a `host` tag in each metric. + +As mentioned above, Telegraf also allows you to set additional tags on the +metrics that pass through it. In this case, you are adding tags for the server +role and datacenter. You can then use these tags in Grafana to filter queries +(for example, to create a dashboard showing only servers with the +`consul-server` role, or only servers in the `us-east-1` datacenter). + + + +```toml +[global_tags] + role = "consul-server" + datacenter = "us-east-1" +``` + + + +Next, set up a StatsD listener on UDP port 8125, with instructions to calculate +percentile metrics and to parse DogStatsD-compatible tags, when they're sent: + + + +```toml +[[inputs.statsd]] + protocol = "udp" + service_address = ":8125" + delete_gauges = true + delete_counters = true + delete_sets = true + delete_timings = true + percentiles = [90] + metric_separator = "_" + parse_data_dog_tags = true + allowed_pending_messages = 10000 + percentile_limit = 1000 +``` + + + +The full reference to all the available StatsD-related options in Telegraf is +[here][telegraf-statsd-input]. + +Now, you can configure inputs for things like CPU, memory, network I/O, and disk +I/O. Most of them don't require any configuration, but make sure the `interfaces` +list in `inputs.net` matches the interface names you get from `ifconfig`. 
+ + + +```toml +[[inputs.cpu]] + percpu = true + totalcpu = true + collect_cpu_time = false + +[[inputs.disk]] + # mount_points = ["/"] + # ignore_fs = ["tmpfs", "devtmpfs"] + +[[inputs.diskio]] + # devices = ["sda", "sdb"] + # skip_serial_number = false + +[[inputs.kernel]] + # no configuration + +[[inputs.linux_sysctl_fs]] + # no configuration + +[[inputs.mem]] + # no configuration + +[[inputs.net]] + interfaces = ["enp0s*"] + +[[inputs.netstat]] + # no configuration + +[[inputs.processes]] + # no configuration + +[[inputs.swap]] + # no configuration + +[[inputs.system]] + # no configuration +``` + + + +Another useful plugin is the [procstat][telegraf-procstat-input] plugin, which +reports metrics for processes you select: + + + +```toml +[[inputs.procstat]] + pattern = "(consul)" +``` + + + +Telegraf even includes a [plugin][telegraf-consul-input] that monitors the +health checks associated with the Consul agent, using Consul API to query the +data. + +It's important to note: the plugin itself will not report the telemetry, Consul +will report those stats already using StatsD protocol. + + + +```toml +[[inputs.consul]] + address = "localhost:8500" + scheme = "http" +``` + + + +## Telegraf configuration for Consul + +Asking Consul to send telemetry to Telegraf is as simple as adding a `telemetry` +section to your agent configuration: + + + +```hcl +telemetry { + dogstatsd_addr = "localhost:8125" + disable_hostname = true +} +``` + +```json +{ + "telemetry": { + "dogstatsd_addr": "localhost:8125", + "disable_hostname": true + } +} +``` + + + +You only need to specify two options. The `dogstatsd_addr` +specifies the hostname and port of the StatsD daemon. + +Note that the configuration specifies DogStatsD format instead of plain StatsD, +which tells Consul to send [tags][tagging] with each metric. Tags can be used by +Grafana to filter data on your dashboards (for example, displaying only the data +for which `role=consul-server`). Telegraf is compatible with the DogStatsD +format and allows you to add your own tags too. + +The second option tells Consul not to insert the hostname in the names of the +metrics it sends to StatsD, since the hostnames will be sent as tags. Without +this option, the single metric `consul.raft.apply` would become multiple +metrics: + +```plaintext hideClipboard + consul.server1.raft.apply + consul.server2.raft.apply + consul.server3.raft.apply +``` + +If you are using a different agent (e.g. Circonus, Statsite, or plain StatsD), +you may want to change this configuration, and you can find the configuration +reference [here][consul-telemetry-config]. + +## Visualize Telegraf Consul metrics + +You can use a tool like [Grafana][] or [Chronograf][] to visualize metrics from +Telegraf. + +Here is an example Grafana dashboard: + +![Grafana Consul Datacenter](/img/consul-grafana-screenshot.png 'Grafana Dashboard') + +## Metric aggregates and alerting from Telegraf + +### Memory usage + +| Metric Name | Description | +| :------------------ | :------------------------------------------------------------- | +| `mem.total` | Total amount of physical memory (RAM) available on the server. | +| `mem.used_percent` | Percentage of physical memory in use. | +| `swap.used_percent` | Percentage of swap space in use. | + +**Why they're important:** Consul keeps all of its data in memory. If Consul +consumes all available memory, it will crash. 
You should also monitor total +available RAM to make sure some RAM is available for other processes, and swap +usage should remain at 0% for best performance. + +**What to look for:** If `mem.used_percent` is over 90%, or if +`swap.used_percent` is greater than 0. + +### File descriptors + +| Metric Name | Description | +| :------------------------- | :------------------------------------------------------------------ | +| `linux_sysctl_fs.file-nr` | Number of file handles being used across all processes on the host. | +| `linux_sysctl_fs.file-max` | Total number of available file handles. | + +**Why it's important:** Practically anything Consul does -- receiving a +connection from another host, sending data between servers, writing snapshots to +disk -- requires a file descriptor handle. If Consul runs out of handles, it +will stop accepting connections. Check [the Consul FAQ][consul_faq_fds] for more +details. + +By default, process and kernel limits are fairly conservative. You will want to +increase these beyond the defaults. + +**What to look for:** If `file-nr` exceeds 80% of `file-max`. + +### CPU usage + +| Metric Name | Description | +| :--------------- | :--------------------------------------------------------------- | +| `cpu.user_cpu` | Percentage of CPU being used by user processes (such as Consul). | +| `cpu.iowait_cpu` | Percentage of CPU time spent waiting for I/O tasks to complete. | + +**Why they're important:** Consul is not particularly demanding of CPU time, but +a spike in CPU usage might indicate too many operations taking place at once, +and `iowait_cpu` is critical -- it means Consul is waiting for data to be +written to disk, a sign that Raft might be writing snapshots to disk too often. + +**What to look for:** if `cpu.iowait_cpu` greater than 10%. + +### Network activity - bytes received + +| Metric Name | Description | +| :--------------- | :------------------------------------------- | +| `net.bytes_recv` | Bytes received on each network interface. | +| `net.bytes_sent` | Bytes transmitted on each network interface. | + +**Why they're important:** A sudden spike in network traffic to Consul might be +the result of a misconfigured application client causing too many requests to +Consul. This is the raw data from the system, rather than a specific Consul +metric. + +**What to look for:** Sudden large changes to the `net` metrics (greater than +50% deviation from baseline). + +**NOTE:** The `net` metrics are counters, so in order to calculate rates (such +as bytes/second), you will need to apply a function such as +[non_negative_difference][]. + +### Disk activity + +| Metric Name | Description | +| :------------------- | :---------------------------------- | +| `diskio.read_bytes` | Bytes read from each block device. | +| `diskio.write_bytes` | Bytes written to each block device. | + +**Why they're important:** If the Consul host is writing a lot of data to disk, +such as under high volume workloads, there may be frequent major I/O spikes +during leader elections. This is because under heavy load, Consul is +checkpointing Raft snapshots to disk frequently. + +It may also be caused by Consul having debug/trace logging enabled in +production, which can impact performance. + +Too much disk I/O can cause the rest of the system to slow down or become +unavailable, as the kernel spends all its time waiting for I/O to complete. 
+ +**What to look for:** Sudden large changes to the `diskio` metrics (greater than +50% deviation from baseline, or more than 3 standard deviations from baseline). + +**NOTE:** The `diskio` metrics are counters, so in order to calculate rates +(such as bytes/second), you will need to apply a function such as +[non_negative_difference][]. + +## Next steps + +In this tutorial, you learned how to set up Telegraf with Consul to collect +metrics, and considered your options for visualizing, aggregating, and alerting +on those metrics. To learn about other factors (in addition to monitoring) that +you should consider when running Consul in production, check the +[Production Checklist][prod-checklist]. + +[non_negative_difference]: https://docs.influxdata.com/influxdb/v1.5/query_language/functions/#non-negative-difference +[consul_faq_fds]: /consul/docs/troubleshoot/faq#q-does-consul-require-certain-user-process-resource-limits- +[telegraf_plugin]: https://github.com/influxdata/telegraf/tree/master/plugins/inputs/consul +[telegraf-install]: https://docs.influxdata.com/telegraf/v1.6/introduction/installation/ +[telegraf-consul-input]: https://github.com/influxdata/telegraf/tree/release-1.6/plugins/inputs/consul +[telegraf-statsd-input]: https://github.com/influxdata/telegraf/tree/release-1.6/plugins/inputs/statsd +[telegraf-procstat-input]: https://github.com/influxdata/telegraf/tree/release-1.6/plugins/inputs/procstat +[telegraf-input-plugins]: https://docs.influxdata.com/telegraf/v1.6/plugins/inputs/ +[tagging]: https://docs.datadoghq.com/getting_started/tagging/ +[consul-telemetry-config]: /consul/docs/reference/agent/configuration-file/telemetry +[consul-telemetry-ref]: /consul/docs/reference/agent/telemetry +[telegraf-input-plugins]: https://docs.influxdata.com/telegraf/v1.6/plugins/inputs/ +[grafana]: https://www.influxdata.com/partners/grafana/ +[chronograf]: https://www.influxdata.com/time-series-platform/chronograf/ +[prod-checklist]: /consul/tutorials/production-deploy/production-checklist diff --git a/website/content/docs/multi-tenant/admin-partition/index.mdx b/website/content/docs/multi-tenant/admin-partition/index.mdx new file mode 100644 index 000000000000..894ca7d8b41c --- /dev/null +++ b/website/content/docs/multi-tenant/admin-partition/index.mdx @@ -0,0 +1,73 @@ +--- +page_title: Create admin partitions +description: |- + Admin partitions define boundaries between services managed by separate teams, enabling a service mesh across k8s clusters controlled by a single Consul server. Learn about their requirements and how to deploy admin partitions on Kubernetes. +--- + +# Admin partitions overview + +This topic provides and overview of admin partitions, which are entities that define one or more administrative boundaries for single Consul deployments. + +## Introduction + +Admin partitions exist a level above namespaces in the identity hierarchy. They contain one or more namespaces and allow multiple independent tenants to share a Consul server cluster. As a result, admin partitions enable you to define administrative and communication boundaries between services managed by separate teams or belonging to separate stakeholders. They can also segment production and non-production services within the Consul deployment. + +As of Consul v1.11, every _datacenter_ contains a single administrative partition named `default` when created. With Consul Enterprise, operators have the option of creating multiple partitions within a single datacenter. 
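For example, an operator with sufficient privileges can create an additional partition from the command line. The following is a minimal sketch that assumes a Consul Enterprise cluster and a token with the necessary ACL permissions exported as `CONSUL_HTTP_TOKEN`; the partition name `team-1` is illustrative only.

```shell-session
$ consul partition create -name "team-1" -description "Partition for team 1 services"
```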
+ +-> **Preexisting nodes**: Admin partitions were introduced in Consul 1.11. Nodes existed in global scope prior to 1.11. After upgrading to Consul 1.11 or later, all nodes will be scoped to an admin partition, which will be the `default` partition when initially upgrading an existing deployment or for CE versions. + +There are tutorials available to help you get started with admin partitions. + +- [Multi-Tenancy with Administrative Partitions](/consul/tutorials/enterprise/consul-admin-partitions?utm_source=docs) +- [Multi Cluster Applications with Consul Enterprise Admin Partitions](/consul/tutorials/kubernetes/kubernetes-admin-partitions?utm_source=docs) + +### Default Admin Partition + +Each Consul cluster will have a default admin partition named `default`. The `default` partition must contain the Consul servers. The `default` admin partition is different from other partitions that may be created because the namespaces and resources in this partition are replicated between datacenters when they are federated. + +Any resource created without specifying an admin partition will inherit the partition of the ACL token used to create the resource. + +-> **Preexisting resources and the `default` partition**: Admin partitions were introduced in Consul 1.11. After upgrading to Consul 1.11 or later, the `default` partition will contain all resources created in previous versions. + +### Naming Admin Partitions + +Only characters that are valid in DNS names can be used to name admin partitions. +Names must also begin with a lowercase letter. + +### Namespaces + +When an admin partition is created, it will include the `default` namespace. You can create additional namespaces within the partition. Resources created within a namespace are not shared across partitions. + +### Cross-datacenter Replication + +Only resources in the `default` admin partition will be replicated to secondary datacenters (also see [Known Limitations](#known-limitations)). + +### DNS Queries + +When queried, the DNS interface returns results for a single admin partition. +The query may explicitly specify the admin partition to use in the lookup. +If you do not specify an admin partition in the query, +the lookup uses the admin partition of the Consul agent that received the query. +Server agents always exist within the `default` admin partition. +Client agents are configured to operate within a specific admin partition. + +By default, Consul on Kubernetes uses [Consul dataplanes](/consul/docs/architecture/control-plane/dataplane) instead of client agents to manage communication between service instances. But to use the Consul DNS for service discovery, you must start a Consul client in client admin partitions. + +### Service Mesh Configurations + +The partition in which [`proxy-defaults`](/consul/docs/reference/config-entry/proxy-defaults) and [`mesh`](/consul/docs/reference/config-entry/mesh) configurations are created define the scope of the configurations. Services registered in a partition will use the `proxy-defaults` and `mesh` configurations that have been created in the partition. + +### Cross-partition Networking + +You can configure services to be discoverable by downstream services in any partition within the datacenter. Specify the upstream services that you want to be available for discovery by configuring the `exported-services` configuration entry in the partition where the services are registered. 
Refer to the [`exported-services` documentation](/consul/docs/reference/config-entry/exported-services) for details. Additionally, the requests made by downstream applications must have the correct DNS name for the Virtual IP Service lookup to occur. Service Virtual IP lookups allow for communications across Admin Partitions when using Transparent Proxy. Refer to the [Service Virtual IP Lookups for Consul Enterprise](/consul/docs/services/discovery/dns-static-lookups#service-virtual-ip-lookups-for-consul-enterprise) for additional information. + +-> **Export mesh gateway **: When ACL is enabled in Consul-k8s and `meshgateway.mode` is set to `local`, the `mesh-gateway` service must be exported to their consumers for cross-partition traffic. + +### Cluster Peering + +You can use [cluster peering](/consul/docs/connect/cluster-peering/) between two admin partitions to connect clusters owned by different operators. Without Consul Enterprise, cluster peering is limited to the `default` partitions in each datacenter. Enterprise users can [establish cluster peering connections](/consul/docs/east-west/cluster-peering/establish/vm) between any two admin partitions as long as the partitions are in separate datacenters. It is not possible to establish cluster peering connections between two partitions in a single datacenter. + +## Known Limitations + +- Only the `default` admin partition is supported when federating multiple Consul datacenters in a WAN. +- Admin partitions have no theoretical limit. We intend to conduct a large-scale test to identify a recommended max in the future. \ No newline at end of file diff --git a/website/content/docs/multi-tenant/admin-partition/k8s.mdx b/website/content/docs/multi-tenant/admin-partition/k8s.mdx new file mode 100644 index 000000000000..86de872130f1 --- /dev/null +++ b/website/content/docs/multi-tenant/admin-partition/k8s.mdx @@ -0,0 +1,221 @@ +--- +page_title: Configure partitions on Kubernetes +description: |- + Admin partitions define boundaries between services managed by separate teams, enabling a service mesh across k8s clusters controlled by a single Consul server. Learn about their requirements and how to deploy admin partitions on Kubernetes. +--- + +# Configure partitions on Kubernetes + +This page describes how to use Consul admin partitions in Kubernetes deployments. + +## Requirements + +One of the primary use cases for admin partitions is for enabling a service mesh across multiple Kubernetes clusters. The following requirements must be met to create admin partitions on Kubernetes: + +- If you are deploying Consul servers on Kubernetes, then ensure that the Consul servers are deployed within the same Kubernetes cluster. Consul servers may be deployed external to Kubernetes and configured using the `externalServers` stanza. +- Workloads deployed on the same Kubernetes cluster as the Consul Servers must use the `default` partition. If the workloads are required to run on a non-default partition, then the clients must be deployed in a separate Kubernetes cluster. +- A Consul Enterprise license must be installed on each Kubernetes cluster. +- The helm chart for consul-k8s v0.39.0 or greater. +- Consul 1.11.1-ent or greater. +- A designated Kubernetes `LoadBalancer` service must be exposed on the Consul server cluster. 
This enable the following communication channels to the Consul servers: + - RPC on port 8300 + - Gossip on port 8301 + - HTTPS API requests on port 443 API requests +- Mesh gateways must be deployed as a Kubernetes `LoadBalancer` service on port 443 across all Kubernetes clusters. +- Cross-partition networking must be implemented as described in [Cross-Partition Networking](#cross-partition-networking). + +## Usage + +This section describes how to deploy Consul admin partitions to Kubernetes clusters. Refer to the [admin partition CLI documentation](/consul/commands/partition) for information about command line usage. + +### Deploying Consul with Admin Partitions on Kubernetes + +The expected use case is to create admin partitions on Kubernetes clusters. This is because many organizations prefer to use cloud-managed Kubernetes offerings to provision separate Kubernetes clusters for individual teams, business units, or environments. This is opposed to deploying a single, large Kubernetes cluster. Organizations encounter problems, however, when they attempt to use a service mesh to enable multi-cluster use cases, such as administration tasks and communication between nodes. + +The following procedure will result in an admin partition in each Kubernetes cluster. The Consul clients running in the cluster with servers will be in the `default` partition. Another partition called `clients` will also be created. + +#### Prepare to install Consul across multiple Kubernetes clusters + +Verify that your Consul deployment meets the [Kubernetes Requirements](#kubernetes-requirements) before proceeding. + +1. Verify that your VPC is configured to enable connectivity between the pods running workloads and Consul servers. Refer to your virtual cloud provider's documentation for instructions on configuring network connectivity. +1. Set environment variables to use with shell commands. + + ```shell-session + $ export HELM_RELEASE_SERVER=server + $ export HELM_RELEASE_CLIENT=client + $ export SERVER_CONTEXT= + $ export CLIENT_CONTEXT= + ``` + +1. Create the license secret in server cluster. + + ```shell-session + $ kubectl create --context ${SERVER_CONTEXT} namespace consul + $ kubectl create secret --context ${SERVER_CONTEXT} --namespace consul generic license --from-file=key=./path/to/license.hclic + ``` + +1. Create the license secret in the non-default partition cluster for your workloads. This step must be repeated for every additional non-default partition cluster. + + ```shell-session + $ kubectl create --context ${CLIENT_CONTEXT} namespace consul + $ kubectl create secret --context ${CLIENT_CONTEXT} --namespace consul generic license --from-file=key=./path/to/license.hclic + ``` + +#### Install the Consul server cluster + +1. Set your context to the server cluster. + + ```shell-session + $ kubectl config use-context ${SERVER_CONTEXT} + ``` + +1. Create a server configuration values file to override the default Consul Helm chart settings: + + + + + + ```yaml + global: + enableConsulNamespaces: true + tls: + enabled: true + image: hashicorp/consul-enterprise:1.16.3-ent + adminPartitions: + enabled: true + acls: + manageSystemACLs: true + enterpriseLicense: + secretName: license + secretKey: key + meshGateway: + enabled: true + ``` + + + + + Refer to the [Helm Chart Configuration reference](/consul/docs/reference/k8s/helm) for details about the parameters you can specify in the file. + +1. 
Install the Consul server(s) using the values file created in the previous step: + + ```shell-session + $ helm install ${HELM_RELEASE_SERVER} hashicorp/consul --version "1.0.0" --create-namespace --namespace consul --values server.yaml + ``` + +1. After the server starts, get the external IP address for partition service so that it can be added to the client configuration (`externalServers.hosts`). The IP address is used to bootstrap connectivity between servers and workload pods on the non-default partition cluster.
    + + ```shell-session + $ kubectl get services --selector="app=consul,component=server" --namespace consul --output jsonpath="{range .items[*]}{@.status.loadBalancer.ingress[*].ip}{end}" + 34.135.103.67 + ``` + +1. Get the Kubernetes authentication method URL for the non-default partition cluster running your workloads: + + ```shell-session + $ kubectl config view --output "jsonpath={.clusters[?(@.name=='${CLIENT_CONTEXT}')].cluster.server}" + ``` + + Use the IP address printed to the console to configure the `externalServers.k8sAuthMethodHost` parameter in the workload configuration file for your non-default partition cluster running your workloads. + +1. Copy the server certificate to the non-default partition cluster running your workloads. + + ```shell-session + $ kubectl get secret ${HELM_RELEASE_SERVER}-consul-ca-cert --context ${SERVER_CONTEXT} -n consul --output yaml | kubectl apply --namespace consul --context ${CLIENT_CONTEXT} --filename - + ``` + +1. Copy the server key to the non-default partition cluster running your workloads: + + ```shell-session + $ kubectl get secret ${HELM_RELEASE_SERVER}-consul-ca-key --context ${SERVER_CONTEXT} --namespace consul --output yaml | kubectl apply --namespace consul --context ${CLIENT_CONTEXT} --filename - + ``` + +1. If ACLs were enabled in the server configuration values file, copy the token to the non-default partition cluster running your workloads: + + ```shell-session + $ kubectl get secret ${HELM_RELEASE_SERVER}-consul-partitions-acl-token --context ${SERVER_CONTEXT} --namespace consul --output yaml | kubectl apply --namespace consul --context ${CLIENT_CONTEXT} --filename - + ``` + +#### Install on the non-default partition clusters running workloads + +1. Switch to the workload non-default partition clusters running your workloads: + + ```shell-session + $ kubectl config use-context ${CLIENT_CONTEXT} + ``` + +1. Create a configuration for each non-default admin partition. + + + + + + ```yaml + global: + name: consul + enabled: false + enableConsulNamespaces: true + image: hashicorp/consul-enterprise:1.16.3-ent + adminPartitions: + enabled: true + name: clients + tls: + enabled: true + caCert: + secretName: server-consul-ca-cert # See step 6 from `Install Consul server cluster` + secretKey: tls.crt + caKey: + secretName: server-consul-ca-key # See step 7 from `Install Consul server cluster` + secretKey: tls.key + acls: + manageSystemACLs: true + bootstrapToken: + secretName: server-consul-partitions-acl-token # See step 8 from `Install Consul server cluster` + secretKey: token + enterpriseLicense: + secretName: license + secretKey: key + externalServers: + enabled: true + hosts: [34.135.103.67] # See step 4 from `Install Consul server cluster` + tlsServerName: server.dc1.consul + k8sAuthMethodHost: https://104.154.156.146 # See step 5 from `Install Consul server cluster` + meshGateway: + enabled: true + ``` + + + + +1. Install the non-default partition clusters running your workloads: + + ```shell-session + $ helm install ${HELM_RELEASE_CLIENT} hashicorp/consul --version "1.0.0" --create-namespace --namespace consul --values client.yaml + ``` + +### Verifying the Deployment + +You can log into the Consul UI to verify that the partitions appear as expected. + +1. Set your context to the server cluster. + + ```shell-session + $ kubectl config use-context ${SERVER_CONTEXT} + ``` + +1. If ACLs are enabled, you will need the partitions ACL token, which can be read from the Kubernetes secret. 
The token is an encoded string that must be decoded in base64, e.g.: + + ```shell-session + $ kubectl get secret --namespace consul --context ${SERVER_CONTEXT} --template "{{ .data.token | base64decode }}" ${HELM_RELEASE_SERVER}-consul-bootstrap-acl-token + ``` + + The example command gets the secret from the default partition cluster, decodes the secret, and prints the token to the console. + +1. Open the Consul UI in a browser using the external IP address and port number described in a previous step (see [step 4](#get-external-ip-address)). + +1. Click **Log in** and enter the decoded token when prompted. + +You will see the `default` and `clients` partitions available in the **Admin Partition** drop-down menu. + +![Partitions will appear in the Admin Partitions drop-down menu within the Consul UI.](/img/admin-partitions/consul-admin-partitions-verify-in-ui.png) \ No newline at end of file diff --git a/website/content/docs/multi-tenant/index.mdx b/website/content/docs/multi-tenant/index.mdx new file mode 100644 index 000000000000..5e1b65bb087e --- /dev/null +++ b/website/content/docs/multi-tenant/index.mdx @@ -0,0 +1,44 @@ +--- +layout: docs +page_title: Administrate multi-tenant Consul datacenters +description: >- + This page provides an overview of Consul's multi-tenancy features, including admin partitions, namespaces, network segmwents, and sameness groups. +--- + +# Administrate multi-tenant Consul datacenters + +This page provides an overview of Consul's multi-tenancy features. A single Consul datacenter can support multiple teams and organizations by restricting resources, service traffic, and user access to a combination of admin partitions, namespaces, networking segments, and sameness groups. + +Consul Community Edition supports the `default` partition and `default` namespace, but does not support multi-tenancy. For more information, refer to [Consul Enterprise](/consul/docs/enterprise). + +## Introduction + +In large enterprise organizations, configuring, deploying, securing, and managing separate Consul datacenters for each team or project can be an impractical and resource-intensive solution. Consul Enterprise users can implement multi-tenant configurations of Consul server clusters so that teams can share a set of Consul servers. This arrangement can lower deployment costs while maintaining network security and preventing conflicts in resource names. + +The two main elements in Consul's multi-tenancy support are _admin partitions_ and _namespaces_. Consul namespaces are distinct from Kubernetes namespaces, but you can configure Consul to mirror existing Kubernetes namespaces. Consul also supports multi-tenancy configurations for networks that are segmented according to firewalls, and enables operators to manage a set of admin partitions and namespaces using sameness groups. + +## Admin partitions + +@include 'text/descriptions/admin-partition.mdx' + +## Namespaces + +@include 'text/descriptions/namespace.mdx' + +## Network segments + +@include 'text/descriptions/network-segment.mdx' + +## Sameness groups + +@include 'text/descriptions/sameness-group.mdx' + +## Guidance + +The following resources are available to help you learn about Consul multi-tenancy and its usage. 
+ +@include 'text/guidance/multi-tenant.mdx' + +## Constraints, limitations, and troubleshooting + +@include 'text/limitations/multi-tenant.mdx' diff --git a/website/content/docs/multi-tenant/namespace/index.mdx b/website/content/docs/multi-tenant/namespace/index.mdx new file mode 100644 index 000000000000..cbb7fb3d065d --- /dev/null +++ b/website/content/docs/multi-tenant/namespace/index.mdx @@ -0,0 +1,25 @@ +--- +page_title: Namespaces overview +description: |- + Namespaces reduce operational challenges in large deployments. Learn how to define a namespace so that multiple users or teams can access and use the same datacenter without impacting each other. +--- + +# Namespaces overview + + + +This feature requires +HashiCorp Cloud Platform (HCP) or self-managed Consul Enterprise. +Refer to the [enterprise feature matrix](/consul/docs/enterprise#consul-enterprise-feature-availability) for additional information. + + + +With Consul Enterprise 1.7.0+, data for different users or teams +can be isolated from each other with the use of namespaces. Namespaces help reduce operational challenges +by removing restrictions around uniqueness of resource names across distinct teams, and enable operators +to provide self-service through delegation of administrative privileges. + +For more information on how to use namespaces with Consul Enterprise please review the following tutorials: + +- [Register and Discover Services within Namespaces](/consul/tutorials/namespaces/namespaces-share-datacenter-access?utm_source=docs) - Register multiple services within different namespaces in Consul. +- [Setup Secure Namespaces](/consul/tutorials/namespaces/namespaces-secure-shared-access?utm_source=docs) - Secure resources within a namespace and delegate namespace ACL rights via ACL tokens. \ No newline at end of file diff --git a/website/content/docs/multi-tenant/namespace/k8s.mdx b/website/content/docs/multi-tenant/namespace/k8s.mdx new file mode 100644 index 000000000000..f2c5290e8d90 --- /dev/null +++ b/website/content/docs/multi-tenant/namespace/k8s.mdx @@ -0,0 +1,62 @@ +--- +page_title: Using Consul and Kubernetes namespaces +description: |- + Namespaces reduce operational challenges in large deployments. Learn how to define a namespace so that multiple users or teams can access and use the same datacenter without impacting each other. +--- + +# Using Consul and Kubernetes namespaces + +Consul Enterprise 1.7+ supports Consul namespaces. When Kubernetes pods are registered +into Consul, you can control which Consul namespace they are registered into. + +There are three options available: + +1. **Single Destination Namespace** – Register all Kubernetes pods, regardless of namespace, + into the same Consul namespace. + + This can be configured with: + + ```yaml + global: + enableConsulNamespaces: true + + connectInject: + enabled: true + consulNamespaces: + consulDestinationNamespace: 'my-consul-ns' + ``` + + -> **NOTE:** If the destination namespace does not exist we will create it. + +1. **Mirror Namespaces** - Register each Kubernetes pod into a Consul namespace with the same name as its Kubernetes namespace. + For example, pod `foo` in Kubernetes namespace `ns-1` will be synced to the Consul namespace `ns-1`. + If a mirrored namespace does not exist in Consul, it will be created. + + This can be configured with: + + ```yaml + global: + enableConsulNamespaces: true + + connectInject: + enabled: true + consulNamespaces: + mirroringK8S: true + ``` + +1. 
**Mirror Namespaces With Prefix** - Register each Kubernetes pod into a Consul namespace with the same name as its Kubernetes + namespace **with a prefix**. + For example, given a prefix `k8s-`, pod `foo` in Kubernetes namespace `ns-1` will be synced to the Consul namespace `k8s-ns-1`. + + This can be configured with: + + ```yaml + global: + enableConsulNamespaces: true + + connectInject: + enabled: true + consulNamespaces: + mirroringK8S: true + mirroringK8SPrefix: 'k8s-' + ``` \ No newline at end of file diff --git a/website/content/docs/multi-tenant/namespace/vm.mdx new file mode 100644 index 000000000000..e6101b7453e0 --- /dev/null +++ b/website/content/docs/multi-tenant/namespace/vm.mdx @@ -0,0 +1,407 @@ +--- +page_title: Setup Consul namespaces on VMs +description: |- + Delegate administrative privileges to namespace operators and remove resource uniqueness restrictions using Consul Enterprise namespaces and access control lists (ACLs). +--- + +# Setup Consul namespaces on VMs + +This page describes the process to set up secure namespaces in Consul when your cluster runs on VMs. + + + + The namespace functionality demonstrated here +requires HashiCorp Cloud Platform (HCP) or self-managed +[Consul Enterprise](https://www.hashicorp.com/products/consul/pricing/). +If you've purchased or wish to try out Consul Enterprise, refer to +[how to access Consul Enterprise](/consul/docs/enterprise#access-consul-enterprise). + + + +Namespaces provide separation for teams within a single organization, enabling +them to share access to one or more Consul datacenters without conflict. This +allows teams to deploy services without name conflicts and create more granular +access to the datacenter with namespaced ACLs. Additionally, namespaces with +ACLs allow you to delegate access control to specific resources within the +datacenter including services, Connect service mesh proxies, key/value pairs, and sessions. + +This tutorial has two main sections: configuring namespaces and creating ACL tokens +within a namespace. You must configure namespaces before creating namespace +tokens. + +## Prerequisites + +To complete this tutorial, you will need: + +- A Consul Enterprise cluster running version 1.13.1 or newer with + ACLs enabled. For more information, see the + [Secure Consul with Access Control Lists guide](/consul/tutorials/security/access-control-setup-production). + +- An ACL token with `operator=write` and `acl=write` privileges, or an ACL + token with the built-in global management policy. See the + [Consul ACL policies documentation](/consul/docs/security/acl/acl-policies#built-in-policies) + for more information. + + + + The content of this tutorial also applies to Consul clusters hosted on HashiCorp Cloud Platform (HCP). + + + +## Configure namespaces + +First, you will create two namespaces that allow you to separate the datacenter +for two teams. Each namespace will have an operator responsible for managing +data and access within their namespace. The namespace operator will only have +access to view and update data and access in their namespace. This allows for +complete isolation of data between teams. + +To configure and manage namespaces, you will need one super-operator who has +visibility into the entire datacenter. It will be their responsibility to set up +namespaces. To complete this tutorial, you should be the super-operator.
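+ +The namespace and ACL commands in the following sections need a token with the privileges listed in the prerequisites. One way to provide it is through the `CONSUL_HTTP_TOKEN` environment variable, which the Consul CLI reads automatically. The following is a minimal sketch that assumes you already have a super-operator token; the value shown is a placeholder. + +```shell-session +$ export CONSUL_HTTP_TOKEN=<your-super-operator-token> +```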
+ +### Create namespace definitions + +You will need to create two files to define the namespaces for the `app-team` and `db-team`. + + + + + + +```hcl +name = "app-team", +description = "Namespace for app-team managing the production dashboard application" +``` + + + + + +```hcl +name = "db-team", +description = "Namespace for db-team managing the production counting application" +``` + + + + + + + + +```json +{ + "name": "app-team", + "description": "Namespace for app-team managing the production dashboard application" +} +``` + + + + + +```json +{ + "name": "db-team", + "description": "Namespace for db-team managing the production counting application" +} +``` + + + + + + +These namespace definitions are for minimal configuration, with only the name +and description. Learn more about namespace options in the +[documentation](/consul/docs/multi-tenant/namespace#namespace-definition). + +### Initialize the namespaces + +Use the Consul CLI to create each namespace by providing Consul with the namespace definition files. You will need `operator=write` privileges. + + + + the example refers to the JSON files to load the configuration. If you decided to use HCL to configure your namespaces, change the file extension to `hcl`. + + + +```shell-session +$ consul namespace write app-team.json +``` + +```plaintext hideClipboard +Name: app-team +Description: + Namespace for app-team managing the production dashboard application +``` + +After successfully creating the app-team namespace, create the db-team namespace. + +```shell-session +$ consul namespace write db-team.json +``` + +```plaintext hideClipboard +Name: db-team +Description: + Namespace for db-team managing the production counting application +``` + +Finally, ensure both namespaces were created successfully by viewing all +namespaces. You will need `operator=read` privileges, which are included with +the `operator=write` privileges, a requirement from the prerequisites. + +```shell-session +$ consul namespace list +``` + +```plaintext hideClipboard +app-team: + Description: + Namespace for app-team managing the production dashboard application +db-team: + Description: + Namespace for db-team managing the production counting application +default: + Description: + Builtin Default Namespace +``` + +Alternatively, you can view each namespace with the `consul namespace read ` +command. After you create a namespace, you can update or delete it +using the Consul [CLI](/consul/commands/namespace). + +## Delegate token management with namespaces + +Next, you will delegate token management to multiple operators. One of the key +benefits of namespaces is the ability to delegate responsibilities of token +management to more operators. This allows you to provide unrestricted access to +portions of the datacenter, ideally to one or a few operators per namespace. + +The namespace operators are then responsible for managing access to services, +Consul KV, and other resources within their namespaces. Namespaces do not have +any impact on compute or other node resources. Additionally, the namespace +operator should further delegate service-access privileges to developers or +end-users. This is consistent with the current ACL management workflow. Before +namespaces, only one or a few operators managed tokens for an entire datacenter. + +Namespace operators will only be aware of data within their namespaces, unless +they are intentionally given access otherwise. Without global privileges, they +will not be able to locate other namespaces. 
+ +Note, nodes are not namespaced, so namespace operators will be able to locate all +the agents in the datacenter. + +## Create namespace management tokens + +First, the super-operator should use the [built-in +namespace-management policy](/consul/docs/security/acl/acl-policies#built-in-policies) +to create a token for each of the namespace operators. Note, the +namespace-management policy ultimately grants unrestricted privileges for their +namespace. You will need `acl=write` privileges to create namespace tokens. + +```shell-session +$ consul acl token create \ + -namespace app-team \ + -description "App Team Administrator" \ + -policy-name "namespace-management" +``` + +If the command is successful, Consul will return the token information. + +```plaintext hideClipboard +AccessorID: 3cbb3a83-6b11-00fb-5eae-7384746a9e7b +SecretID: 6877ad53-53ca-8061-00a7-55759955a870 +Namespace: app-team +Description: App Team Administrator +Local: false +Create Time: 2019-12-11 16:19:44.057622 -0600 CST +Policies: + da57f91d-efeb-bfe2-f0e9-685e0ce99bde - namespace-management +``` + +```shell-session +$ consul acl token create \ + -namespace db-team \ + -description "DB Team Administrator" \ + -policy-name "namespace-management" +``` + +Both tokens should be generated successfully before continuing. + +```plaintext hideClipboard +AccessorID: 838e5883-65f7-6ae9-20f6-8af3b895a6c4 +SecretID: 0c3aeb84-5497-67ea-9ad0-57db99d24d8a +Namespace: db-team +Description: DB Team Administrator +Local: false +Create Time: 2019-12-11 16:37:56.668698 -0600 CST +Policies: + 0d8ab5f1-b63c-7014-94b7-0b96e6b417f8 - namespace-management +``` + +### Default namespace policy privileges + +Most importantly, the namespace-management policy grants privileges to create tokens, which +enables the holders to grant themselves any additional privileges needed for any +operation within their namespace. The policy includes the following +privileges. + +```hcl +acl = "write" + +key_prefix "" { + policy = "write" +} + +node_prefix "" { + # node policy is restricted to read within a namespace + policy = "read" +} + +session_prefix "" { + policy = "write" +} + +service_prefix "" { + policy = "write" + intentions = "write" +} +``` + +### View namespace management tokens + +To view tokens within a namespace, you will need to use the `-namespace` command-line flag. + +```shell-session +$ consul acl token list -namespace app-team +``` + +The output will return all tokens in the namespace. In this tutorial example, there is only one. + +```plaintext hideClipboard +AccessorID: a5b8e5b2-96b0-86ee-10d3-3836e9c69380 +Namespace: app-team +Description: App Team Administrator +Local: false +Create Time: 2020-02-06 17:57:49.574545525 +0000 UTC +Legacy: false +Policies: + 7140ad92-ed31-4779-2835-1f5d2ed347cd - namespace-management +``` + +If no flag is provided, the command will return the tokens in the `default` namespace, if you +have the correct privileges. + +## Create a developer token + +Now that you have a management token for each namespace, you can create tokens +that restrict privileges for end-users, only providing the minimum necessary +privileges for their role. In this example, you will give the developers on the +db-team the ability to register their own services and allow or deny +communication between services in their team’s namespace with intentions. + + + + Depending on your company’s security model, you may want to delegate +intentions management to a different set of users than service registration.
+ + + +### Use the db-team operator token + +To ensure the db-team operator token, created previously in this tutorial, has the +correct privileges, set it as the `CONSUL_HTTP_TOKEN` environment variable. + +```shell-session +$ export CONSUL_HTTP_TOKEN= +``` + +If any of the following commands fail with a permission error, then the token +was not created correctly. + +### Create the policy + +Create an HCL file named `db-developer-policy.hcl` and paste in the following. + + + +```hcl +service_prefix "" { + policy = "write" + intentions = "write" +} +``` + + + +This policy allows writing services and intentions for those services. + +Use the Consul CLI to create the policy from the policy file. + +```shell-session +$ consul acl policy create \ + -name developer-policy \ + -description "Write services and intentions" \ + -namespace db-team \ + -rules @db-developer-policy.hcl +``` + +### Create the token + +Using the developer policy defined previously, create a token for the +developer in the db-team namespace. + +```shell-session +$ consul acl token create \ + -description "DB developer token" \ + -namespace db-team \ + -policy-name developer-policy +``` + +The output will provide token information, including the namespace +where the token is located. + +```plaintext hideClipboard +AccessorID: db7ad943-e08b-0fed-4282-41a8b189a740 +SecretID: aa63ba44-bd88-0b0c-5475-34141fe6874c +Namespace: db-team +Description: DB developer token +Local: false +Create Time: 2020-02-06 21:33:22.562854069 +0000 UTC +Policies: + acb579f8-51b7-cb90-cd5b-f26a54b15715 - developer-policy +``` + +## Next steps + +In this tutorial, you learned how to create namespaces and how to secure the +resources within a namespace. You created management tokens for two namespaces +and then a developer token for the db-team. + +Note, the super-operator can also create policies that can be shared by all +namespaces. Shared policies are universal and should be created in the `default` +namespace. + +Continue on to the [Register and Discover Services within Namespaces](/consul/tutorials/namespaces/namespaces-share-datacenter-access) tutorial to +learn how to register services within a namespace. + +### Namespace inheritance with tokens + +A token's namespace can be inherited during CLI and API requests related to services, +intentions, Consul KV, checks, and ACLs. This means that if you register a +service with a token in the app-team namespace, the service will be registered +in that namespace without having to specify it explicitly. In this tutorial, you did +not use namespace inheritance from the token, since you explicitly used the +namespace flag. + + + + Services registered with service definition configuration files must +have both the ACL token and namespace name defined in the [service definition +configuration file](/consul/docs/fundamentals/config-entry), +as these files are parsed prior to resolving the token. + + \ No newline at end of file diff --git a/website/content/docs/multi-tenant/network-segment/index.mdx new file mode 100644 index 000000000000..2ef8f7260f1d --- /dev/null +++ b/website/content/docs/multi-tenant/network-segment/index.mdx @@ -0,0 +1,61 @@ +--- +layout: docs +page_title: Network segments overview +description: >- + Network segments enable LAN gossip traffic within a datacenter when network rules or firewalls prevent specific sets of clients from communicating directly. Learn about segmented network concepts.
+--- + +# Network segments overview + +Network segmentation is the practice of dividing a network into multiple segments or subnets that act as independent networks. This topic provides an overview of concepts related to operating Consul in a segmented network. + + + +This feature requires Consul Enterprise version 0.9.3 or later. +Refer to the [enterprise feature matrix](/consul/docs/enterprise#consul-enterprise-feature-availability) for additional information. + + + +## Segmented networks + +Consul requires full connectivity between all agents in a datacenter within a LAN gossip pool. In some environments, however, business policies enforced through network rules or firewalls prevent full connectivity between all agents. These environments are called _segmented networks_. Network segments are isolated LAN gossip pools that only require full connectivity between agent members on the same segment. + +To use Consul in a segmented network, you must define the segments in your server agent configuration and direct client agents to join one of the segments. The Consul network segment configuration should match the LAN gossip pool boundaries. The following diagram shows how a network may be segmented: + +![Consul datacenter agent connectivity with network segments](/img/network-segments/consul-network-segments-multiple.png) + +## Default network segment + +By default, all Consul agents are part of a shared Serf LAN gossip pool, referred to as the `<default>` network segment. Because all agents are within the same segment, full mesh connectivity within the datacenter is required. The following diagram shows the `<default>` network segment: + +![Consul datacenter default agent connectivity: one network segment](/img/network-segments/consul-network-segments-single.png) + +## Segment membership + +Server agents are members of all segments. The datacenter includes the `<default>` segment, as well as additional segments defined in the `segments` server agent configuration option. Refer to the [`segments`](/consul/docs/reference/agent/configuration-file/general#segments) documentation for additional information. + +Each client agent can only be a member of one segment at a time. Client agents are members of the `<default>` segment unless they are configured to join a different segment. +For a client agent to join the Consul datacenter, it must connect to another agent (client or server) within its configured segment. + +Read the [Network Segments documentation](/consul/docs/multi-tenant/network-segment) to learn more about network segments. + +-> **Info:** Network segments enable you to operate a Consul datacenter without full +mesh (LAN) connectivity between agents. To federate multiple Consul datacenters +without full mesh (WAN) connectivity between all server agents in all datacenters, +use [Network Areas (Enterprise)](/consul/docs/east-west/network-area). + +## Consul networking models + +Network segments are a subset of other Consul networking models. Understanding the broader models will help you segment your network. Refer to [Architecture Overview](/consul/docs/architecture/control-plane) for additional information about the following concepts. + +### Clusters + +You can segment networks within a Consul _cluster_. A cluster is one or more Consul servers that form a Raft quorum and one or more Consul clients that are members of the same [datacenter](/consul/commands/agent#_datacenter). The cluster is sometimes called the _local cluster_.
Consul clients discover and make RPC requests to Consul servers in their local cluster through the gossip mechanism. Consul CE uses LAN gossip for intra-cluster communication between agents. + +### LAN gossip pool + +A set of fully-connected Consul agents is a _LAN gossip pool_. LAN gossip pools use the Serf protocol to maintain a shared view of the members of the pool for different purposes, such as finding a Consul server in a local cluster or finding servers in a remote cluster. A segmented LAN gossip pool limits a group of agents to only connect with the agents in its segment. + +## Network segments versus network areas + +Network segments enable you to operate a Consul datacenter without full mesh connectivity between agents using a LAN gossip pool. To federate multiple Consul datacenters without full mesh connectivity between all server agents in all datacenters, use [network areas](/consul/docs/east-west/network-area). Network areas are a Consul Enterprise capability. diff --git a/website/content/docs/multi-tenant/network-segment/vm.mdx b/website/content/docs/multi-tenant/network-segment/vm.mdx new file mode 100644 index 000000000000..8c2010572d7d --- /dev/null +++ b/website/content/docs/multi-tenant/network-segment/vm.mdx @@ -0,0 +1,192 @@ +--- +page_title: Create network segments +description: |- + Learn how to create Consul network segments to enable services in the LAN gossip pool to communicate across communication boundaries. +--- + +# Create network segments on virtual machines (VMs) + +This topic describes how to create Consul network segments so that services can connect to other services in the LAN gossip pool that have been placed into separate communication boundaries. Refer to [Network Segments Overview](/consul/docs/multi-tenant/network-segment) for additional information. + +## Requirements + +- Consul Enterprise 0.9.3+ + +## Define segments in the server configuration + +1. Add the `segments` block to your server configuration. Refer to the [`segments`](/consul/docs/reference/agent/configuration-file/general#segments) documentation for details about how to define the configuration. + + In the following example, an `alpha` segment is configured to listen for traffic on port `8303` and a `beta` segment is configured to listen to traffic on port `8304`: + + + + ```hcl + segments = [ + { + name = "alpha" + bind = "10.0.0.1" + advertise = "10.0.0.1" + port = 8303 + }, + { + name = "beta" + bind = "10.0.0.1" + advertise = "10.0.0.1" + port = 8304 + } + ] + ``` + + ```json + { + "segments": [ + { + "name": "alpha", + "bind": "10.0.0.1", + "advertise": "10.0.0.1", + "port": 8303 + }, + { + "name": "beta", + "bind": "10.0.0.1", + "advertise": "10.0.0.1", + "port": 8304 + } + ] + } + ``` + + + +1. Start the server using the `consul agent` command. Copy the address for each segment listener so that you can [direct clients to join the segment](#configure-clients-to-join-segments) when you start them: + + ```shell-session + $ consul agent -config-file server.hcl + [INFO] serf: EventMemberJoin: server1.dc1 10.20.10.11 + [INFO] serf: EventMemberJoin: server1 10.20.10.11 + [INFO] consul: Started listener for LAN segment "alpha" on 10.20.10.11:8303 + [INFO] serf: EventMemberJoin: server1 10.20.10.11 + [INFO] consul: Started listener for LAN segment "beta" on 10.20.10.11:8304 + [INFO] serf: EventMemberJoin: server1 10.20.10.11 + ``` +1. 
Verify that the server is a member of all segments: + + ```shell-session + $ consul members + Node Address Status Type Build Protocol DC Segment + server1 10.20.10.11:8301 alive server 1.14+ent 2 dc1 <all> + ``` + +## Configure clients to join segments + +Client agents can only be members of one segment at a time. You can direct clients to join a segment by specifying the address and name of the segment with the [`-join`](/consul/commands/agent#_join) and [`-segment`](/consul/commands/agent#_segment) command line flags when starting the agent. + +```shell-session +$ consul agent -config-file client.hcl -join 10.20.10.11:8303 -segment alpha +``` + +Alternatively, you can add the [`retry_join`](/consul/docs/reference/agent/configuration-file/join#retry_join) and [`segment`](/consul/docs/reference/agent/configuration-file/general#segment-1) parameters to your client agent configuration file: + +```hcl +node_name = "consul-client" +server = false +datacenter = "dc1" +data_dir = "consul/client-data" +log_level = "INFO" +retry_join = ["10.20.10.11:8303"] +segment = "alpha" +``` + +## Verify segments + +You can use the CLI, API, or UI to verify which segments your agents have joined. + + + + + +Run the `consul members` command to verify that the client agents are joined to the correct segments: + + + +```shell-session +$ consul members +Node Address Status Type Build Protocol DC Partition Segment +server 192.168.4.159:8301 alive server 1.14+ent 2 dc1 default <all> +client1 192.168.4.159:8447 alive client 1.14+ent 2 dc1 default alpha +``` + + + +You can also pass the name of a segment in the `-segment` flag to view agents in a specific segment. Note that server agents display their LAN listener port for the specified segment when the segment filter is applied. In the following example, the command returns port `8303` for the `alpha` segment, rather than port `8301` for the `<default>` segment: + + + +```shell-session +$ consul members -segment alpha +Node Address Status Type Build Protocol DC Segment +server1 10.20.10.11:8303 alive server 1.14+ent 2 dc1 alpha +client1 10.20.10.21:8303 alive client 1.14+ent 2 dc1 alpha +``` + + + +Refer to the [`members`](/consul/commands/members) documentation for additional information. + + + + + +Call the `/agent/members` API endpoint to view members that the agent sees in the cluster gossip pool. + + + +```shell-session +$ curl http://127.0.0.1:8500/v1/agent/members?segment=alpha + +{ + "Addr" : "192.168.4.163", + "DelegateCur" : 4, + "DelegateMax" : 5, + "DelegateMin" : 2, + "Name" : "consul-client", + "Port" : 8447, + "ProtocolCur" : 2, + "ProtocolMax" : 5, + "ProtocolMin" : 1, + "Status" : 1, + "Tags" : { + "build" : "1.13.1+ent:5bd604e6", + "dc" : "dc1", + "ft_admpart" : "1", + "ft_ns" : "1", + "id" : "aeaf70d7-57f7-7eaf-e246-6edfe8386e9c", + "role" : "node", + "segment" : "alpha", + "vsn" : "2", + "vsn_max" : "3", + "vsn_min" : "2" + } +} +``` + + + +Refer to the [`/agent/members` API endpoint documentation](/consul/api-docs/agent#list-members) for additional information. + + + + +If the UI is enabled in your agent configuration, the segment name appears in the node’s Metadata tab. + +1. Open the URL for the UI. By default, the UI is `localhost:8500`. +1. Click **Nodes** in the sidebar and click on the name of the client agent you want to check. +1. Click the **Metadata** tab. The network segment appears as a key-value pair. + + + + + +## Related resources + +You can also create and run a prepared query to return additional information about the services registered to client nodes.
Prepared queries are HTTP API endpoint features that enable you to run complex queries of Consul nodes. Refer [Prepared Query HTTP Endpoint](/consul/api-docs/query) for usage. diff --git a/website/content/docs/multi-tenant/sameness-group/k8s.mdx b/website/content/docs/multi-tenant/sameness-group/k8s.mdx new file mode 100644 index 000000000000..99304e9840a6 --- /dev/null +++ b/website/content/docs/multi-tenant/sameness-group/k8s.mdx @@ -0,0 +1,299 @@ +--- +page_title: Create sameness groups on Kubernetes +description: |- + Learn how to create sameness groups between partitions and cluster peers on Kubernetes so that Consul can identify instances of the same service across partitions and datacenters. +--- + +# Create sameness groups on Kubernetes + +This page describes how to create a sameness group, which designates a set of admin partitions as functionally identical in a Consul deployment running Kubernetes. Adding an admin partition to a sameness group enables Consul to recognize services registered to remote partitions with cluster peering connections as instances of the same service when they share a name and Consul namespace. + +For information about configuring a failover strategy using sameness groups, refer to [Failover with sameness groups](/consul/docs/manage-traffic/failover/sameness-group). + +## Workflow + +Sameness groups are a user-defined set of partitions with identical +configurations, including custom resource definitions (CRDs) for service and +proxy defaults. Partitions on separate clusters should have an established +cluster peering connection in order to recognize each other. + +To create and use sameness groups in your network, complete the following steps: + +- **Create sameness group custom resource definitions (CRDs) for each member of the group**. For each partition that you want to include in the sameness group, you must write and apply a sameness group CRD that defines the group's members from that partition's perspective. Refer to the [sameness group configuration entry reference](/consul/docs/reference/config-entry/sameness-group) for details on configuration hierarchy, default values, and specifications. +- **Export services to members of the sameness group**. You must write and apply an exported services CRD that makes the partition's services available to other members of the group. Refer to [exported services configuration entry reference](/consul/docs/reference/config-entry/exported-services) for additional specification information. +- **Create service intentions for each member of the sameness group**. For each partition that you want to include in the sameness group, you must write and apply service intentions CRDs to authorize traffic to your services from all members of the group. Refer to the [service intentions configuration entry reference](/consul/docs/reference/config-entry/service-intentions) for additional specification information. + +## Requirements + +- All datacenters where you want to create sameness groups must run Consul v1.16 or later. Refer to [upgrade instructions](/consul/docs/upgrade/k8s) for more information about how to upgrade your deployment. +- A [Consul Enterprise license](/consul/docs/enterprise/license) is required. + +### Before you begin + +Before creating a sameness group, take the following actions to prepare your +network. + +#### Check Consul namespaces and service naming conventions + +Sameness groups are defined at the partition level. 
Consul assumes all partitions in the group have identical configurations, including identical service names and identical Consul namespaces. This behavior occurs even when two partitions in the group contain functionally different services that share a common name and namespace. For example, if distinct services both named `api` were registered to different members of a sameness group, it could lead to errors because requests may be sent to the incorrect service. + +To prevent errors, check the names of the services deployed to your network and the namespaces they are deployed in. Pay particular attention to the default namespace to confirm that services have unique names. If different services share a name, you should either change one of the service’s names or deploy one of the services to a different namespace. + +#### Deploy mesh gateways for each partition + +Mesh gateways are required for cluster peering connections and recommended to secure cross-partition traffic in a single datacenter. Therefore, we recommend securing your network, and especially your production environment, by deploying mesh gateways to each datacenter. Refer to [mesh gateways specifications](/consul/docs/k8s/connect/cluster-peering/tech-specs#mesh-gateway-specifications) for more information about configuring mesh gateways. + +#### Establish cluster peering relationships between remote partitions + +You must establish connections with cluster peers before you can create a sameness group that includes them. A cluster peering connection exists between two admin partitions in different datacenters, and each connection between two partitions must be established separately with each peer. Refer to [establish cluster peering connections](/consul/docs/east-west/cluster-peering/establish/k8s) for step-by-step instructions. + +To establish cluster peering connections and define a group as part of the same workflow, follow instructions up to [Export services between clusters](/consul/docs/k8s/connect/cluster-peering/usage/establish-peering#export-services-between-clusters). You can use the same exported services and service intention configuration entries to establish the cluster peering connection and create the sameness group. + +## Create a sameness group + +To create a sameness group, you must write and apply a set of three CRDs for each partition that is a member of the group: + +- Sameness group CRDs: Define the sameness group from each partition’s perspective. +- Exported services CRDs: Make services available to other partitions in the group. +- Service intentions CRDs: Authorize traffic between services across partitions. + +### Define the sameness group from each partition's perspective + +To define a sameness group for a partition, create a [sameness group CRD](/consul/docs/reference/config-entry/sameness-group) that describes the partitions and cluster peers that are part of the group. Typically, this order follows this pattern: + +1. The local partition +1. Other partitions in the same datacenter +1. Partitions with established cluster peering relationships + +If you want all services to failover to other instances in the sameness group by default, set `spec.defaultForFailover=true` and list the group members in the order you want to use in a failover scenario. Refer to [failover with sameness groups](/consul/docs/manage-traffic/failover/sameness-group) for more information. + +Be aware that the sameness group CRDs are different for each partition. 
The following example demonstrates how to format three different CRDs for three partitions that are part of the sameness group `product-group` when Partition 1 and Partition 2 are in DC1, and the third partition is Partition 1 in DC2. + + + + + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: SamenessGroup +metadata: + name: product-group +spec: + defaultForFailover: true + members: + - partition: partition-1 + - partition: partition-2 + - peer: dc2-partition-1 +``` + + + + + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: SamenessGroup +metadata: + name: product-group +spec: + defaultForFailover: true + members: + - partition: partition-2 + - partition: partition-1 + - peer: dc2-partition-1 +``` + + + + + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: SamenessGroup +metadata: + name: product-group +spec: + defaultForFailover: true + members: + - partition: partition-1 + - peer: dc1-partition-1 + - peer: dc1-partition-2 +``` + + + + + +After you create the CRD, apply it to the Consul server with the `kubectl apply` +command. + +```shell-session +$ kubectl apply -f product-group.yaml +``` + +Then, repeat the process to create and apply a CRD for every partition that is a member of the sameness group. + +### Export services to other partitions in the sameness group + +To make services available to other members of the sameness group, you must write and apply an [exported services CRD](/consul/docs/reference/config-entry/exported-services) for each partition in the group. This CRD exports the local partition's services to the rest of the group members. In each CRD, set the sameness group as the `consumer` for the exported services. You can export multiple services in a single exported services configuration entry. + +Because you are configuring the consumer to reference the sameness group instead of listing out each partition and cluster peer, you do not need to edit this configuration again when you add a partition or peer to the group. + +The following example demonstrates how to format three different `ExportedServices` CRDs to make a service named `api` deployed to the `store` namespace of each partition available to all other group members. + + + + + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: ExportedServices +metadata: + name: partition-1 +spec: + services: + - name: api + namespace: store + consumers: + - samenessGroup: product-group +``` + + + + + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: ExportedServices +metadata: + name: partition-2 +spec: + services: + - name: api + namespace: store + consumers: + - samenessGroup: product-group +``` + + + + + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: ExportedServices +metadata: + name: partition-1 +spec: + services: + - name: api + namespace: store + consumers: + - samenessGroup: product-group +``` + + + + + +For more information about exporting services, including examples of CRDs that export multiple services at the same time, refer to the [exported services configuration entry reference](/consul/docs/reference/config-entry/exported-services). + +After you create the CRD, apply it to the Consul server with the `kubectl apply` +command.
+ +```shell-session +$ kubectl apply -f product-group-export.yaml +``` + +#### Export services for cluster peers and sameness groups as part of the same workflow + +Creating a cluster peering connection between two partitions and then adding the partitions to a sameness group requires that you write and apply two separate exported services CRDs. One CRD exports services to the peer, and a second CRD exports services to other members of the group. + +If your goal for peering clusters is to create a sameness group, you can write and apply a single exported services configuration entry by configuring the `services[].consumers` block with the `samenessGroup` field instead of the `peer` field. Be aware that this scenario requires you to write the `SamenessGroup` CRD to Kubernetes before you apply the `ExportedServices` CRD that references the sameness group. + +### Create service intentions to authorize traffic between group members + +Exporting the service to other members of the sameness group makes the services visible to remote partitions, but you must also create service intentions so that local services are authorized to send and receive traffic from a member of the sameness group. + +For each partition that is a member of the group, write and apply a [service intentions CRD](/consul/docs/reference/config-entry/service-intentions) that defines intentions for the services that are part of the group. In the `sources` block of the configuration entry, include the service name, its namespace, the sameness group and grant `allow` permissions. + +Because you are using the sameness group in the `sources` block rather than listing out each partition and cluster peer, you do not have to make further edits to the service intentions configuration entries when members are added to or removed from the group. + +The following example demonstrates how to format three different `ServiceIntentions` CRDs to make a service named `api` available to all instances of `payments` deployed in all members of the sameness group including the local partition. In this example, `api` is deployed to the `store` namespace in all three partitions. + + + + + + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: ServiceIntentions +metadata: + name: api-intentions +spec: + sources: + - name: api + action: allow + namespace: store + samenessGroup: product-group +``` + + + + + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: ServiceIntentions +metadata: + name: api-intentions +spec: + sources: + - name: api + action: allow + namespace: store + samenessGroup: product-group +``` + + + + + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: ServiceIntentions +metadata: + name: api-intentions +spec: + sources: + - name: api + action: allow + namespace: store + samenessGroup: product-group +``` + + + + + +Refer to [create and manage intentions](/consul/docs/secure-mesh/intention/create) for more information about how to create and apply service intentions in Consul. + +After you create the CRD, apply it to the Consul server with the `kubectl apply` +command. + +```shell-session +$ kubectl apply -f api-intentions.yaml +``` + +#### Create service intentions for cluster peers and sameness groups as part of the same workflow + +Creating a cluster peering connection between two partitions and then adding the partitions to a sameness group requires that you write and apply two separate service intentions CRDs. 
One CRD authorizes services for the peer, and a second CRD authorizes services for other members of the group. + +If your goal for peering clusters is to create a sameness group, you can write and apply a single service intentions CRD by configuring the `sources` block with the `samenessGroup` field instead of the `peer` field. Be aware that this scenario requires you to write the `SamenessGroup` CRD to Kubernetes before you apply the `ServiceIntentions` CRD that references the sameness group. + +## Next steps + +When `defaultForFailover=true` in a sameness group CRD, additional upstream configuration is not required. + +After creating a sameness group, you can also set up failover between services in a sameness group. Refer to [Failover with sameness groups](/consul/docs/manage-traffic/failover/sameness-group) for more information. diff --git a/website/content/docs/multi-tenant/sameness-group/vm.mdx b/website/content/docs/multi-tenant/sameness-group/vm.mdx new file mode 100644 index 000000000000..56d15098157b --- /dev/null +++ b/website/content/docs/multi-tenant/sameness-group/vm.mdx @@ -0,0 +1,309 @@ +--- +page_title: Create sameness groups +description: |- + Learn how to create sameness groups between partitions and cluster peers so that Consul can identify instances of the same service across partitions and datacenters. +--- + +# Create sameness groups + +This page describes how to create a sameness group, which designates a set of admin partitions as functionally identical in your network. Adding an admin partition to a sameness group enables Consul to recognize services registered to remote partitions with cluster peering connections as instances of the same service when they share a name and namespace. + +For information about configuring a failover strategy using sameness groups, refer to [Failover with sameness groups](/consul/docs/manage-traffic/failover/sameness-group). + +## Workflow + +Sameness groups are a user-defined set of partitions with identical configurations, including configuration entries for service and proxy defaults. Partitions on separate clusters should have an established cluster peering connection in order to recognize each other. + +To create and use sameness groups in your network, complete the following steps: + +- **Create sameness group configuration entries for each member of the group**. For each partition that you want to include in the sameness group, you must write and apply a sameness group configuration entry that defines the group’s members from that partition’s perspective. Refer to the [sameness group configuration entry reference](/consul/docs/reference/config-entry/sameness-group) for details on configuration hierarchy, default values, and specifications. +- **Export services to members of the sameness group**. You must write and apply an exported services configuration entry that makes the partition’s services available to other members of the group. Refer to [exported services configuration entry reference](/consul/docs/reference/config-entry/exported-services) for additional specification information. +- **Create service intentions to authorize other members of the sameness group**. For each partition that you want to include in the sameness group, you must write and apply service intentions configuration entries to authorize traffic to your services from all members of the group. Refer to the [service intentions configuration entry reference](/consul/docs/reference/config-entry/service-intentions) for additional specification information. 
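+ +After you apply each configuration entry, you can read it back to confirm that Consul accepted it. The following is a minimal sketch that assumes a sameness group named `product-group` written to a partition named `partition-1`; both are placeholder names that match the examples later on this page. + +```shell-session +$ consul config read -kind sameness-group -name product-group -partition partition-1 +```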
+ +## Requirements + +- All datacenters where you want to create sameness groups must run Consul v1.16 or later. Refer to [upgrade instructions](/consul/docs/upgrade/instructions) for more information about how to upgrade your deployment. +- A [Consul Enterprise license](/consul/docs/enterprise/license) is required. + +### Before you begin + +Before creating a sameness group, take the following actions to prepare your network. + +#### Check namespace and service naming conventions + +Sameness groups are defined at the partition level. Consul assumes all partitions in the group have identical configurations, including identical service names and identical namespaces. This behavior occurs even when partitions in the group contain functionally different services that share a common name and namespace. For example, if distinct services named `api` were registered to different members of a sameness group, it could lead to errors because requests may be sent to the incorrect service. + +To prevent errors, check the names of the services deployed to your network and the namespaces they are deployed in. Pay particular attention to the default namespace to confirm that services have unique names. If different services share a name, you should either change one of the service's names or deploy one of the services to a different namespace. + +#### Deploy mesh gateways for each partition + +Mesh gateways are required for cluster peering connections and recommended to secure cross-partition traffic in a single datacenter. Therefore, we recommend securing your network, and especially your production environment, by deploying mesh gateways to each datacenter. Refer to [mesh gateways specifications](/consul/docs/east-west/cluster-peering/tech-specs#mesh-gateway-specifications) for more information about configuring mesh gateways. + +#### Establish cluster peering relationships between remote partitions + +You must establish connections with cluster peers before you can create a sameness group that includes them. A cluster peering connection exists between two admin partitions in different datacenters, and each connection between two partitions must be established separately with each peer. Refer to [establish cluster peering connections](/consul/docs/east-west/cluster-peering/establish/vm) for step-by-step instructions. + +To establish cluster peering connections and define a group as part of the same workflow, follow instructions up to [Export services between clusters](/consul/docs/east-west/cluster-peering/establish/vm#export-services-between-clusters). You can use the same exported services and service intention configuration entries to establish the cluster peering connection and create the sameness group. + +## Create a sameness group + +To create a sameness group, you must write and apply a set of three configuration entries for each partition that is a member of the group. + +- Sameness group configuration entries: Defines the sameness group from each partition's perspective. +- Exported services configuration entries: Makes services available to other partitions in the group. +- Service intentions configuration entries: Authorizes traffic between services across partitions. + +### Define the sameness group from each partition's perspective + +To define a sameness group for a partition, create a [sameness group configuration entry](/consul/docs/reference/config-entry/sameness-group) that describes the partitions and cluster peers that are part of the group. 
Typically, the order follows this pattern: + +1. The local partition +1. Other partitions in the same datacenter +1. Partitions with established cluster peering relationships + +If you want all services to failover to other instances in the sameness group by default, set `DefaultForFailover=true` and list the group members in the order you want to use in a failover scenario. Refer to [failover with sameness groups](/consul/docs/manage-traffic/failover/sameness-group) for more information. + +Be aware that the sameness group configuration entries are different for each partition. The following example demonstrates how to format three different configuration entries for three partitions that are part of the sameness group `product-group` when Partition 1 and Partition 2 are in DC1, and the third partition is Partition 1 in DC2. + + + + + +```hcl +Kind = "sameness-group" +Name = "product-group" +Partition = "partition-1" +Members = [ + {Partition = "partition-1"}, + {Partition = "partition-2"}, + {Peer = "dc2-partition-1"} + ] +``` + + + + + +```hcl +Kind = "sameness-group" +Name = "product-group" +Partition = "partition-2" +Members = [ + {Partition = "partition-2"}, + {Partition = "partition-1"}, + {Peer = "dc2-partition-1"} + ] +``` + + + + + +```hcl +Kind = "sameness-group" +Name = "product-group" +Partition = "partition-1" +Members = [ + {Partition = "partition-1"}, + {Peer = "dc1-partition-1"}, + {Peer = "dc1-partition-2"} + ] +``` + + + + + +After you create the configuration entry, apply it to the Consul server with the +`consul config write` command. + +```shell-session +$ consul config write product-group.hcl +``` + +Then, repeat the process to create and apply a configuration entry for every partition that is a member of the sameness group. + +### Export services to other partitions in the sameness group + +To make services available to other members of the sameness group, you must write and apply an [exported services configuration entry](/consul/docs/reference/config-entry/exported-services) to each partition in the group. This configuration entry exports the local partition's services to the rest of the group members. In each configuration entry, set the sameness group as the `Consumer` for the exported services. You can export multiple services in a single exported services configuration entry. + +Because you are configuring the consumer to reference the sameness group instead of listing out each partition and cluster peer, you do not need to edit this configuration again when you add a partition or peer to the group. 
+ +The following example demonstrates how to format three different `exported-service` configuration entries to make a service named `api` deployed to the `store` namespace of each partition available to all other group members: + + + + + +```hcl +Kind = "exported-services" +Name = "product-sg-export" +Partition = "partition-1" +Services = [ + { + Name = "api" + Namespace = "store" + Consumers = [ + {SamenessGroup="product-group"} + ] + } + ] +``` + + + + + +```hcl +Kind = "exported-services" +Name = "product-sg-export" +Partition = "partition-2" +Services = [ + { + Name = "api" + Namespace = "store" + Consumers = [ + {SamenessGroup="product-group"} + ] + } + ] +``` + + + + + +```hcl +Kind = "exported-services" +Name = "product-sg-export" +Partition = "partition-1" +Services = [ + { + Name = "api" + Namespace = "store" + Consumers = [ + {SamenessGroup="product-group"} + ] + } + ] +``` + + + + + +For more information about exporting services, including examples of configuration entries that export multiple services at the same time, refer to the [exported services configuration entry reference](/consul/docs/reference/config-entry/exported-services). + +After you create the configuration entry, apply it to the Consul server with the +`consul config write` command. + +```shell-session +$ consul config write product-sg-export.hcl +``` + +#### Export services for cluster peers and sameness groups as part of the same workflow + +Creating a cluster peering connection between two partitions and then adding the partitions to a sameness group requires that you write and apply two separate exported services configuration entries. One configuration entry exports services to the peer, and a second entry exports services to other members of the group. + +If your goal for peering clusters is to create a sameness group, you can write and apply a single exported services configuration entry by configuring the `Services[].Consumers` block with the `SamenessGroup` field instead of the `Peer` field. + +Be aware that this scenario requires you to write the `sameness-group` configuration entry to Consul before you apply the `exported-services` configuration entry that references the sameness group. + +### Create service intentions to authorize traffic between group members + +Exporting the service to other members of the sameness group makes the services visible to remote partitions, but you must also create service intentions so that local services are authorized to send and receive traffic from a member of the sameness group. + +For each partition that is member of the group, write and apply a [service intentions configuration entry](/consul/docs/reference/config-entry/service-intentions) that defines intentions for the services that are part of the group. In the `Sources` block of the configuration entry, include the service name, its namespace, the sameness group, and grant `allow` permissions. + +Because you are using the sameness group in the `Sources` block rather than listing out each partition and cluster peer, you do not have to make further edits to the service intentions configuration entries when members are added to or removed from the group. + +The following example demonstrates how to format three different `service-intentions` configuration entries to make a service named `api` available to all instances of `payments` deployed in all members of the sameness group including the local partition. In this example, `api` is deployed to the `store` namespace in all three partitions. 
+ + + + + + +```hcl +Kind = "service-intentions" +Name = "api-intentions" +Namespace = "store" +Partition = "partition-1" +Sources = [ + { + Name = "api" + Action = "allow" + Namespace = "store" + SamenessGroup = "product-group" + } +] +``` + + + + + +```hcl +Kind = "service-intentions" +Name = "api-intentions" +Namespace = "store" +Partition = "partition-2" +Sources = [ + { + Name = "api" + Action = "allow" + Namespace = "store" + SamenessGroup = "product-group" + } +] +``` + + + + + +```hcl +Kind = "service-intentions" +Name = "api-intentions" +Namespace = "store" +Partition = "partition-1" +Sources = [ + { + Name = "api" + Action = "allow" + Namespace = "store" + SamenessGroup = "product-group" + } +] +``` + + + + + +Refer to [create and manage intentions](/consul/docs/secure-mesh/intention/create) for more information about how to create and apply service intentions in Consul. + +After you create the configuration entry, apply it to the Consul server with the +`consul config write` command. + +```shell-session +$ consul config write api-intentions.hcl +``` + +#### Create service intentions for cluster peers and sameness groups as part of the same workflow + +Creating a cluster peering connection between two remote partitions and then adding the partitions to a sameness group requires that you write and apply two separate service intention configuration entries. One configuration entry authorizes services to the peer, and a second entry authorizes services to other members of the group. + +If you are peering clusters with the goal of creating a sameness group, it is possible to combine these workflows by using a single service intentions configuration entry. + +Configure the `Sources` block with the `SamenessGroup` field instead of the `Peer` field. Be aware that this scenario requires you to write the `sameness-group` configuration entry to Consul before you apply the `service-intentions` configuration entry that references the sameness group. + +## Next steps + +When `DefaultForFailover=true` in a sameness group configuration entry, additional upstream configuration is not required. + +After creating a sameness group, you can use them with static Consul DNS lookups and dynamic DNS lookups (prepared queries) for service discovery use cases. You can also set up failover between services in a sameness group. Refer to the following topics for more details: + +- [Static Consul DNS lookups](/consul/docs/discover/service/static) +- [Dynamic Consul DNS lookups](/consul/docs/discover/service/dynamic) +- [Failover overview](/consul/docs/manage-traffic/failover) diff --git a/website/content/docs/nia/api/health.mdx b/website/content/docs/nia/api/health.mdx deleted file mode 100644 index 6a04aba5eae2..000000000000 --- a/website/content/docs/nia/api/health.mdx +++ /dev/null @@ -1,35 +0,0 @@ ---- -layout: docs -page_title: Consul-Terraform-Sync Health API -description: >- - Consul-Terraform-Sync Health API ---- - -# Health - -The `/health` endpoint returns a successful response when Consul-Terraform-Sync (CTS) is available and running. Requests to this endpoint are not logged, which makes it suitable for health checks that constantly poll CTS. 
- -| Method | Path | Produces | -| ------ | ------------------- | ------------------ | -| `GET` | `/health` | `application/json` | - -### Response Statuses - -| Status | Reason | -| ------ | ---------------------------------------------------- | -| 200 | CTS is healthy | - -### Example - -The following request makes a `GET` call to the `health` endpoint: - -```shell-session -$ curl --request GET \ - localhost:8558/v1/health -``` - -Response: - -```json -{} -``` diff --git a/website/content/docs/nia/api/index.mdx b/website/content/docs/nia/api/index.mdx deleted file mode 100644 index 2e0cf3d84054..000000000000 --- a/website/content/docs/nia/api/index.mdx +++ /dev/null @@ -1,30 +0,0 @@ ---- -layout: docs -page_title: Consul-Terraform-Sync API -description: >- - Consul-Terraform-Sync provides an API interface that lets you query CTS instance health, check CTS instance status, and run CTS tasks. ---- - -# Consul-Terraform-Sync API Overview - -Consul-Terraform-Sync (CTS) provides an HTTP API interface for querying CTS instances and running and managing tasks. - -## Port - -The API is served at the default port `8558` or a different port if set with [`port` configuration](/consul/docs/nia/configuration#port) - -## Version prefix - -All API routes are prefixed with `/v1/`. This documentation is for v1 of the API, which is the only version currently. - -Example: `localhost:8558/v1/status` - -## Request ID - -Each call to a CTS API endpoint returns a `request_id` field. The field is a string generated by the API. Example: - -`"request_id": "e1bc2236-3d0e-5f5e-dc51-164a1cf6da88"` - -## Error messages - -The API sends a response code in the 200 range if the call is successful. If the call is unsuccessful, the API sends an error message that includes additional information when possible. Refer to [Error Messages](/consul/docs/nia/usage/errors-ref) for additional information. \ No newline at end of file diff --git a/website/content/docs/nia/api/tasks.mdx b/website/content/docs/nia/api/tasks.mdx deleted file mode 100644 index f45253c34310..000000000000 --- a/website/content/docs/nia/api/tasks.mdx +++ /dev/null @@ -1,404 +0,0 @@ ---- -layout: docs -page_title: Consul-Terraform-Sync Tasks API -description: >- - Consul-Terraform-Sync Tasks API ---- - -# Tasks - -The `/tasks` endpoints interact with the tasks that Consul-Terraform-Sync (CTS) is responsible for running. - -If you [run CTS with high availability enabled](/consul/docs/nia/usage/run-ha), you can send requests to the `/tasks` endpoint on CTS leader or follower instances. Requests to a follower instance, however, return a 400 Bad Request and error message. The error message depends on what information the follower instance is able to obtain about the leader. Refer to [Error Messages](/consul/docs/nia/usage/errors-ref) for more information. - -## Get Tasks - -This endpoint returns information about all existing tasks. - -| Method | Path | Produces | -| ------ | ------------------- | ------------------ | -| `GET` | `/tasks` | `application/json` | - -### Response Statuses - -| Status | Reason | -| ------ | ---------------------------------------------------- | -| 200 | Successfully retrieved and returned a list of tasks | - -### Response Fields - -| Name | Type | Description | -| ------------ | ------------- | ---------------------------------------------------------------------------------- | -| `request_id` | string | The ID of the request. Used for auditing and debugging purposes. 
| -| `tasks` | list[[Task](/consul/docs/nia/api/tasks#task-object)] | A list of Tasks | - -### Example: Retrieve all tasks - -In this example, CTS contains a single task - -Request: - -```shell-session -$ curl --request GET \ - localhost:8558/v1/tasks -``` - -Response: - -```json -{ - "request_id": "0e0290f9-94df-3a4a-fd17-72467a16083c", - "tasks": [ - { - "buffer_period": { - "enabled": true, - "max": "25s", - "min": "5s" - }, - "condition": { - "services": { - "cts_user_defined_meta": {}, - "datacenter": "", - "filter": "", - "names": ["api", "web"], - "namespace": "", - "use_as_module_input": true - } - }, - "description": "Writes the service name, id, and IP address to a file", - "enabled": true, - "module": "path/to/module", - "module_input": {}, - "name": "task_a", - "providers": ["my-provider"], - "variables": {}, - "version": "" - } - ] -} -``` - -## Get Task - -This endpoint returns information about an existing task. - -| Method | Path | Produces | -| ------ | ------------------- | ------------------ | -| `GET` | `/tasks/:task_name` | `application/json` | - -### Request Parameters - -| Name | Required | Type | Description | Default | -| ------------ | -------- | ------ | ------------------------------------------ | ------- | -| `:task_name` | Required | string | Specifies the name of the task to retrieve | none | - -### Response Statuses - -| Status | Reason | -| ------ | ---------------------------------------------------- | -| 200 | Successfully retrieved and returned task information | -| 404 | Task with the given name not found | - -### Response Fields - -| Name | Type | Description | -| ------------ | ------ | ---------------------------------------------------------------------------------- | -| `request_id` | string | The ID of the request. Used for auditing and debugging purposes. | -| `task` | object | The task's configuration information. See [task object](#task-object) for details. | - -### Example: Retrieve a task - -Request: - -```shell-session -$ curl --request GET \ - localhost:8558/v1/tasks/task_a -``` - -Response: - -```json -{ - "request_id": "b7559ab0-5111-381b-367a-0dfb7e216d41", - "task": { - "buffer_period": { - "enabled": true, - "max": "20s", - "min": "5s" - }, - "condition": { - "consul_kv": { - "datacenter": "", - "namespace": "", - "path": "my_key", - "recurse": false, - "use_as_module_input": true - } - }, - "description": "task triggering on consul-kv", - "enabled": true, - "module": "path/to/module", - "module_input": { - "services": { - "cts_user_defined_meta": {}, - "datacenter": "", - "filter": "", - "names": ["api"], - "namespace": "" - } - }, - "name": "task_a", - "providers": [], - "variables": {}, - "version": "" - } -} -``` - -# Create Task - -This endpoint allows for creation of a task. It only supports creation of a new task, and does not support updating a task. - -| Method | Path | Produces | -| ------ | -------- | ------------------ | -| `POST` | `/tasks` | `application/json` | - -### Request Parameters - -| Name | Required | Type | Description | Default | -| ----- | -------- | ------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- | -| `run` | Optional | string | Values can only be `"inspect"` and `"now"`.
    • `"inspect"`: Does not create the task but returns information on if/how resources would be changed for the proposed task creation.
    • `"now"`: Creates the task accordingly and immediately runs the task, rather than allowing the task to run at the natural daemon cadence
    | none | - -### Request Body - -| Name | Required | Type | Description | -| ------ | -------- | ------ | ---------------------------------------------------------------------------------- | -| `task` | Required | object | The task's configuration information. See [task object](#task-object) for details. | - -### Response Statuses - -| Status | Reason | -| ------ | ----------------------------- | -| 201 | Successfully created the task | - -### Response Fields - -| Name | Type | Description | -| ------------ | ------ | ------------------------------------------------------------------------------------------------- | -| `request_id` | string | The ID of the request. Used for auditing and debugging purposes. | -| `task` | object | The task's configuration information after creation. See [task object](#task-object) for details. | - -### Example: Create a task - -Payload: - -```json -{ - "task": { - "description": "Writes the service name, id, and IP address to a file", - "enabled": true, - "name": "task_a", - "providers": ["my-provider"], - "condition": { - "services": { - "names": ["web", "api"] - } - }, - "module": "path/to/module" - } -} -``` - -Request: - -```shell-session -$ curl --header "Content-Type: application/json" \ - --request POST \ - --data @payload.json \ - localhost:8558/v1/tasks -``` - -Response: - -```json -{ - "request_id": "0428ed18-1359-874c-8b27-742e43ebd7e7", - "task": { - "buffer_period": { - "enabled": true, - "max": "25s", - "min": "5s" - }, - "condition": { - "services": { - "cts_user_defined_meta": {}, - "datacenter": "", - "filter": "", - "names": ["api", "web"], - "namespace": "", - "use_as_module_input": true - } - }, - "description": "Writes the service name, id, and IP address to a file", - "enabled": true, - "module": "path/to/module", - "module_input": {}, - "name": "task_a", - "providers": ["my-provider"], - "variables": {}, - "version": "" - } -} -``` - -## Update Task - -This endpoint allows patch updates to specifically supported fields for existing tasks. Currently only supports updating a task's [`enabled` value](/consul/docs/nia/configuration#enabled-7). - -| Method | Path | Produces | -| ------- | ------------------- | ------------------ | -| `PATCH` | `/tasks/:task_name` | `application/json` | - -### Request Parameters - -| Name | Required | Type | Description | Default | -| ------------ | -------- | ------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- | -| `:task_name` | Required | string | Specifies the name of the task to update | none | -| `run` | Optional | string | Values can only be `"inspect"` and `"now"`.
    • `"inspect"`: Does not update the task but returns information on if/how resources would be changed for the proposed task update.
    • `"now"`: Updates the task accordingly and immediately runs the task, rather than allowing the task to run at the natural daemon cadence
    | none | - -### Request Body - -| Name | Required | Type | Description | -| --------- | -------- | ------- | -------------------------------------------------------- | -| `enabled` | Required | boolean | Whether the task is enabled to run and manage resources. | - -### Response Statuses - -| Status | Reason | -| ------ | ---------------------------------- | -| 200 | Successfully updated the task | -| 404 | Task with the given name not found | - -### Response Fields - -| Name | Type | Description | -| ------------------------- | ------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| `inspect` | object | Information on how resources would be changed given a proposed task update, similar to [inspect-mode](/consul/docs/nia/cli/#inspect-mode). This is only returned when passed the `run=inspect` request parameter. | -| `inspect.changes_present` | boolean | Whether or not changes were detected for running the proposed task update. | -| `inspect.plan` | string | The Terraform plan generated for the proposed task update. Note: a non-empty string does not necessarily indicate changes were detected. | - -### Example: Disable a task - -Request: - -```shell-session -$ curl --header "Content-Type: application/json" \ - --request PATCH \ - --data '{"enabled":false}' \ - localhost:8558/v1/tasks/task_a -``` - -Response: - -```json -{} -``` - -### Example: Inspect enabling a task - -Request: - -```shell-session -$ curl --header "Content-Type: application/json" \ - --request PATCH \ - --data '{"enabled":true}' \ - localhost:8558/v1/tasks/task_a?run=inspect -``` - -Response if no changes present: - -```json -{ - "inspect": { - "changes_present": false, - "plan": "No changes. Infrastructure is up-to-date. " - } -} -``` - -Response if changes present: - -```json -{ - "inspect": { - "changes_present": true, - "plan": " Plan: 3 to add, 0 to change, 0 to destroy." - } -} -``` - -## Delete Task - -This endpoint allows for deletion of existing tasks. It marks a task for deletion based on the name provided. If the task is not running, it will be deleted immediately. Otherwise, it will be deleted once the task is completed. - --> **Note:** Deleting a task will not destroy the infrastructure managed by the task. - -| Method | Path | Produces | -| -------- | ------------------- | ------------------ | -| `DELETE` | `/tasks/:task_name` | `application/json` | - -### Request Parameters - -| Name | Required | Type | Description | Default | -| ------------ | -------- | ------ | ------------------------------------------ | ------- | -| `:task_name` | Required | string | Specifies the name of the task to retrieve | none | - -### Response Statuses - -| Status | Reason | -| ------ | ----------------------------------------- | -| 202 | Successfully marked the task for deletion | -| 404 | Task with the given name not found | - -### Response Fields - -| Name | Type | Description | -| ------------ | ------ | ---------------------------------------------------------------- | -| `request_id` | string | The ID of the request. Used for auditing and debugging purposes. 
 |
-
-### Sample Request
-
-```shell-session
-$ curl --request DELETE \
-    localhost:8558/v1/tasks/task_a
-```
-
-### Sample Response
-
-```json
-{
-  "request_id": "9b23eea7-a435-2797-c71e-10c15766cd73"
-}
-```
-
-## Task Object
-
-The task object is used by the Task APIs as part of a request or response. It represents the task's [configuration information](/consul/docs/nia/configuration#task).
-
-| Name | Type | Required | Description | Default |
-| ----------------------- | ------------ | -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------ |
-| `buffer_period` | object | Optional | The buffer period for triggering task execution. | The global buffer period configured for CTS. |
-| `buffer_period.enabled` | bool | Optional | Whether the buffer period is enabled or disabled. | The global buffer period's enabled value configured for CTS. |
-| `buffer_period.min` | string | Optional | The minimum period of time to wait after changes are detected before triggering the task. | The global buffer period's min value configured for CTS. |
-| `buffer_period.max` | string | Optional | The maximum period of time to wait after changes are detected before triggering the task. | The global buffer period's max value configured for CTS. |
-| `condition` | object | Required | The [condition](/consul/docs/nia/configuration#task-condition) on which to trigger the task to execute.

    If the task has the deprecated `services` field configured as a module input, it is represented here as `condition.services`. | none | -| `description` | string | Optional | The human readable text to describe the task. | none | -| `enabled` | bool | Optional | Whether the task is enabled or disabled from executing. | `true` | -| `module` | string | Required | The location of the Terraform module. | none | -| `module_input` | object | Optional | The additional [module input(s)](/consul/docs/nia/configuration#task-source-input) that the tasks provides to the Terraform module on execution.

    If the task has the deprecated `services` field configured as a module input, it is represented here as `module_input.services`. | none | -| `name` | string | Required | The unique name of the task. | none | -| `providers` | list[string] | Optional | The list of provider names that the task's module uses. | none | -| `variables` | map[string] | Optional | The map of variables that are provided to the task's module. | none | -| `version` | string | Optional | The version of the configured module that the task uses. | The latest version. | -| `terraform_version` | string | Optional | **Deprecated in CTS 0.6.0 and will be removed in 0.8.0. Review `terraform_version` in `terraform_cloud_workspace` instead.** The version of Terraform to use for the HCP Terraform workspace associated with the task. This is only available when used with the [HCP Terraform driver](/consul/docs/nia/configuration#hcp-terraform-driver). | The latest compatible version supported by the organization. | -| `terraform_cloud_workspace` | object | Optional | The [configurable attributes of the HCP Terraform workspace](/consul/docs/nia/configuration#terraform_cloud_workspace) associated with the task. This option is only available when used with the [HCP Terraform driver](/consul/docs/nia/configuration#hcp-terraform-driver).| none | diff --git a/website/content/docs/nia/architecture.mdx b/website/content/docs/nia/architecture.mdx deleted file mode 100644 index 14ff0da3c0c0..000000000000 --- a/website/content/docs/nia/architecture.mdx +++ /dev/null @@ -1,79 +0,0 @@ ---- -layout: docs -page_title: Architecture -description: >- - Learn about the Consul-Terraform-Sync architecture and high-level CTS components, such as the Terraform driver and tasks. ---- - -# Consul-Terraform-Sync Architecture - -Consul-Terraform-Sync (CTS) is a service-oriented tool for managing network infrastructure near real-time. CTS runs as a daemon and integrates the network topology maintained by your Consul cluster with your network infrastructure to dynamically secure and connect services. - -## CTS workflow - -The following diagram shows the CTS workflow as it monitors the Consul service catalog for updates. - -[![Consul-Terraform-Sync Architecture](/img/nia-highlevel-diagram.svg)](/img/nia-highlevel-diagram.svg) - -1. CTS monitors the state of Consul’s service catalog and its KV store. This process is described in [Watcher and views](#watcher-and-views). -1. CTS detects a change. -1. CTS prompts Terraform to update the state of the infrastructure. - - -## Watcher and views - -CTS uses Consul’s [blocking queries](/consul/api-docs/features/blocking) functionality to monitor Consul for updates. If an endpoint does not support blocking queries, CTS uses polling to watch for changes. These mechanisms are referred to in CTS as *watchers*. - -The watcher maintains a separate thread for each value monitored and runs any tasks that depend on the watched value whenever it is updated. These threads are referred to as _views_. For example, a thread may run a task to update a proxy when the watcher detects that an instance has become unhealthy . - -## Tasks - -A task is the action triggered by the updated data monitored in Consul. It -takes that dynamic service data and translates it into a call to the -infrastructure application to configure it with the updates. It uses a driver -to push out these updates, the initial driver being a local Terraform run. 
An -example of a task is to automate a firewall security policy rule with -discovered IP addresses for a set of Consul services. - -## Drivers - -A driver encapsulates the resources required to communicate the updates to the -network infrastructure. The following [drivers](/consul/docs/nia/network-drivers#terraform) are supported: - -- Terraform driver -- HCP Terraform driver - -Each driver includes a set of providers that [enables support](/consul/docs/nia/terraform-modules) for a wide variety of infrastructure applications. - -## State storage and persistence - -The following types of state information are associated with CTS. - -### Terraform state information - -By default, CTS stores [Terraform state data](/terraform/language/state) in the Consul KV, but you can specify where this information is stored by configuring the `backend` setting in the [Terraform driver configuration](/consul/docs/nia/configuration#backend). The data persists if CTS stops and the backend is configured to a remote location. - -### CTS task and event data - -By default, CTS stores task and event data in memory. This data is transient and does not persist. If you configure [CTS to run with high availability enabled](/consul/docs/nia/usage/run-ha), CTS stores the data in the Consul KV. High availability is an enterprise feature that promotes CTS resiliency. When high availability is enabled, CTS stores and persists task changes and events that occur when an instance stops. - -The data stored when operating in high availability mode includes task changes made using the task API or CLI. Examples of task changes include creating a new task, deleting a task, and enabling or disabling a task. You can empty the leader’s stored state information by starting CTS with the [`-reset-storage` flag](/consul/docs/nia/cli/start#options). - -## Instance compatibility checks (high availability) - -If you [run CTS with high availability enabled](/consul/docs/nia/usage/run-ha), CTS performs instance compatibility checks to ensure that all instances in the cluster behave consistently. Consistent instance behavior enables CTS to properly perform automations configured in the state storage. - -The CTS instance compatibility check reports an error if the task [module](/consul/docs/nia/configuration#module) is configured with a local module, but the module does not exist on the CTS instance. Refer to the [Terraform documentation](/terraform/language/modules/sources#module-sources) for additional information about module sources. Example log: - -```shell-session -[ERROR] ha.compat: error="compatibility check failure: stat ./example-module: no such file or directory" -``` -Refer to [Error Messages](/consul/docs/nia/usage/errors-ref) for additional information. - -CTS instances perform a compatibility check on start-up based on the stored state and every five minutes after starting. If the check detects an incompatible CTS instance, it generates a log so that an operator can address it. - -CTS logs the error message and continues to run when it finds an incompatibility. CTS can still elect an incompatible instance to be the leader, but tasks affected by the incompatibility do not run successfully. This can happen when all active CTS instances enter [`once-mode`](/consul/docs/nia/cli/start#modes) and run the tasks once when initially elected. 
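-
-To illustrate, the following minimal task configuration is a sketch that references a local module; the task name and module path are hypothetical. Every CTS instance in the cluster must have the `./example-module` directory available locally, otherwise the compatibility check logs an error like the one shown above.
-
-```hcl
-# Hypothetical task that depends on a local Terraform module. The module
-# directory must exist on every CTS instance in the high availability cluster.
-task {
-  name   = "firewall-policy"
-  module = "./example-module"
-
-  condition "services" {
-    names = ["web", "api"]
-  }
-}
-```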
- -## Security guidelines - -We recommend following the network security guidelines described in the [Secure Consul-Terraform-Sync for Production](/consul/tutorials/network-infrastructure-automation/consul-terraform-sync-secure?utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS) tutorial. The tutorial contains a checklist of best practices to secure your CTS installation for a production environment. diff --git a/website/content/docs/nia/cli/index.mdx b/website/content/docs/nia/cli/index.mdx deleted file mode 100644 index 9e338819da57..000000000000 --- a/website/content/docs/nia/cli/index.mdx +++ /dev/null @@ -1,93 +0,0 @@ ---- -layout: docs -page_title: Consul-Terraform-Sync CLI -description: >- - How to use the Consul-Terraform-Sync CLI ---- - -# Consul-Terraform-Sync Command (CLI) - -Consul-Terraform-Sync (CTS) is controlled via an easy to use command-line interface (CLI). CTS is only a single command-line application: `consul-terraform-sync`. CTS can be run as a daemon and execute CLI commands. When CTS is run as a daemon, it acts as a server to the CLI commands. Users can use the commands to interact and modify the daemon as it is running. The complete list of commands is in the navigation to the left. Both the daemon and commands return a non-zero exit status on error. - -## Daemon - -~> Running CTS as a daemon without using a command is deprecated in CTS 0.6.0 and will be removed at a much later date in a major release. For information on the preferred way to run CTS as a daemon review the [`start` command docs](/consul/docs/nia/cli/start) - -When CTS runs as a daemon, there is no default configuration to start CTS. You must set a configuration flag -config-file or -config-dir. For example: - -```shell-session -$ consul-terraform-sync start -config-file=config.hcl -``` - -To review a list of available flags, use the `-help` or `-h` flag. - -## Commands - -In addition to running the daemon, CTS has a set of commands that act as a client to the daemon server. The commands provide a user-friendly experience interacting with the daemon. The commands use the CTS APIs but does not correspond one-to-one with it. Please review the individual commands in the left navigation for more details. - -To get help for a command, run: `consul-terraform-sync -h` - -### CLI Structure - -CTS commands follow the below structure - -```shell-session -consul-terraform-sync [options] [args] -``` - -- `options`: Flags to specify additional settings. There are general options that can be used across all commands and command-specific options. -- `args`: Required arguments specific to a commands - -Example: - -```shell-session -consul-terraform-sync task disable -http-addr=http://localhost:2000 task_a -``` - -### Autocompletion - -The `consul-terraform-sync` command features opt-in autocompletion for flags, subcommands, and -arguments (where supported). - -To enable autocompletion, run: -```shell-session -$ consul-terraform-sync -autocomplete-install -``` - -~> After you install autocomplete, you must restart your shell for the change to take effect. - -When you start typing a CTS command, press the `` key to show a -list of available completions. To show available flag completes, type `-`. - -Autocompletion will query the running CTS server to return helpful argument suggestions. For example, for the `task disable` command, autocompletion will return the names of all enabled tasks that can be disabled. 
- -When autocompletion makes the query to the running CTS server, it will also use any `CTS_*` environment variables (for example `CTS_ADDRESS`) set on the CTS server. - -#### Example: Use autocomplete to discover how to disable a task - -Assume a tab is typed at the end of each prompt line: - -```shell-session -$ consul-terraform-sync -start task - -$ consul-terraform-sync task -create delete disable enable - -$ consul task disable -task_name_a task_name_b task_name_c -``` - -### General Options - -Below are options that can be used across all commands: - -| Option | Required | Type | Description | Default | -| --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ----------------------- | -| `-port` | Optional | integer | **Deprecated in Consul-Terraform-Sync 0.5.0 and will be removed in a later version.** Use `-http-addr` option instead to specify the address and port of the Consul-Terraform-Sync API.

    Port from which the CTS daemon serves its API.
    The value is prepended with `http://localhost:`, but you can specify a different scheme or address with the `-http-addr` if necessary. | `8558` | -| `-http-addr` | Optional | string | Address and port of the CTS API. You can specify an IP or DNS address.

    Alternatively, you can specify a value using the `CTS_ADDRESS` environment variable. | `http://localhost:8558` | -| `-ssl-verify` | Optional | boolean | Enables verification for TLS/SSL connections to the API if set to true. This does not affect insecure HTTP connections.

    Alternatively, you can specify the value using the `CTS_SSL_VERIFY` environment variable. | `true` | -| `-ca-cert` | Optional | string | Path to a PEM-encoded certificate authority file that is used to verify TLS/SSL connections. Takes precedence over `-ca-path` if both are provided.

    Alternatively, you can specify the value using the `CTS_CACERT` environment variable. | none | -| `-ca-path` | Optional | string | Path to a directory containing a PEM-encoded certificate authority file that is used to verify TLS/SSL connections.

    Alternatively, you can specify the value using the `CTS_CAPATH` environment variable. | none | -| `-client-cert`                                                   | Optional | string | Path to a PEM-encoded client certificate that the CTS API requires when [`verify_incoming`](/consul/docs/nia/configuration#verify_incoming) is set to `true` on the API.

    Alternatively, you can specify the value using the `CTS_CLIENT_CERT` environment variable. | none | -| `-client-key` | Optional | string | Path to a PEM-encoded client key for the certificate configured with the `-client-cert` option. This is required if `-client-cert` is set and if [`verify_incoming`](/consul/docs/nia/configuration#verify_incoming) is set to `true` on the CTS API.

    Alternatively, you can specify the value using the `CTS_CLIENT_KEY` environment variable. | none | diff --git a/website/content/docs/nia/cli/task.mdx b/website/content/docs/nia/cli/task.mdx deleted file mode 100644 index d5ead540cc65..000000000000 --- a/website/content/docs/nia/cli/task.mdx +++ /dev/null @@ -1,212 +0,0 @@ ---- -layout: docs -page_title: Task Command -description: >- - Consul-Terraform-Sync supports task commands for users to modify tasks while the daemon is running ---- - -# task - -## task create - -`task create` command creates a new task so that it will run and update task resources. The command generates and outputs a Terraform plan, similar to [inspect-mode](/consul/docs/nia/cli/start#modes), of how resources will be modified if the task is created. The command will then ask for user approval before creating the task. - -It is not to be used for updating a task and will not create a task if the task name already exists. - -### Usage - -`consul-terraform-sync task create [options] -task-file=` - -**Options:** - -In addition to [general options](/consul/docs/nia/cli#general-options) this command also supports the following: - -| Name | Required | Type | Description | -| ------------ | -------- | ------ | ------------------------------------------------------------------------------------------------------------------- | -| `-task-file` | Required | string | The path to the task configuration file. See [configuration information](/consul/docs/nia/configuration#task) for details. | -| `-auto-approve`   | Optional | boolean | Whether to skip the interactive approval of the plan before creating. | - -### Example - -task_example.hcl: - -```hcl -task { - name = "task_a" - description = "" - enabled = true - providers = [] - module = "org/example/module" - version = "1.0.0" - variable_files = [] - condition "services" { - names = ["web", "api"] - } -} -``` - -Shell Session: - -```shell-session -$ consul-terraform-sync task create -task-file=task_example.hcl -==> Inspecting changes to resource if creating task 'task_a'... - - Generating plan that Consul-Terraform-Sync will use Terraform to execute - - Request ID: 1da3e8e0-87c3-069b-51a6-46903e794a76 - Request Payload: - - // - - Plan: - -Terraform used the selected providers to generate the following execution -plan. Resource actions are indicated with the following symbols: - + create - -Terraform will perform the following actions: - -// - -Plan: 1 to add, 0 to change, 0 to destroy. - -==> Creating the task will perform the actions described above. - Do you want to perform these actions for 'task_a'? - - This action cannot be undone. - - Consul-Terraform-Sync cannot guarantee Terraform will perform - these exact actions if monitored services have changed. - - Only 'yes' will be accepted to approve, enter 'no' or leave blank to reject. - -Enter a value: yes - -==> Creating and running task 'api-task'... - The task creation request has been sent to the CTS server. - Please be patient as it may take some time to see a confirmation that this task has completed. - Warning: Terminating this process will not stop task creation. - -==> Task 'task_a' created - Request ID: '78eddd74-0f08-83d6-72b2-6aaac1424fba' -``` - -## task enable - -`task enable` command enables an existing task so that it will run and update task resources. If the task is already enabled, it will return a success. 
The command generates and outputs a Terraform plan, similar to [inspect-mode](/consul/docs/nia/cli#inspect-mode), of how resources will be modified if task becomes enabled. If resources changes are detected, the command will ask for user approval before enabling the task. If no resources are detected, the command will go ahead and enable the task. - -### Usage - -`consul-terraform-sync task enable [options] ` - -**Arguments:** - -| Name | Required | Type | Description | -| ----------- | -------- | ------ | ------------------------------- | -| `task_name` | Required | string | The name of the task to enable. | - -**Options:** - -In addition to [general options](/consul/docs/nia/cli/#general-options) this command also supports the following: - -| Name | Required | Type | Description | -| --------------- | -------- | ------- | ------------------------------- | -| `-auto-approve` | Optional | boolean | Whether to skip the interactive approval of the plan before enabling. | - -### Example - -```shell-session -$ consul-terraform-sync task enable task_a -==> Inspecting changes to resource if enabling 'task_a'... - - Generating plan that Consul-Terraform-Sync will use Terraform to execute - -An execution plan has been generated and is shown below. -Resource actions are indicated with the following symbols: - + create - -Terraform will perform the following actions: - -// - -Plan: 3 to add, 0 to change, 0 to destroy. - -==> Enabling the task will perform the actions described above. - Do you want to perform these actions for 'task_a'? - - This action cannot be undone. - - Consul-Terraform-Sync cannot guarantee Terraform will perform - these exact actions if monitored services have changed. - - Only 'yes' will be accepted to approve. - -Enter a value: yes - -==> Enabling and running 'task_a'... - -==> 'task_a' enable complete! -``` - -## task disable - -`task disable` command disables an existing task so that it will no longer run and update task resources. If the task is already disabled, it will return a success. - -### Usage - -`consul-terraform-sync task disable [options] ` - -**Arguments:** - -| Name | Required | Type | Description | -| ----------- | -------- | ------ | -------------------------------- | -| `task_name` | Required | string | The name of the task to disable. | - -**Options:** Currently only supporting [general options](/consul/docs/nia/cli#general-options) - -### Example - -```shell-session -$ consul-terraform-sync task disable task_b -==> Waiting to disable 'task_b'... - -==> 'task_b' disable complete! -``` - -## task delete - -`task delete` command deletes an existing task. The command will ask the user for approval before deleting the task. The task will be marked for deletion and will be deleted immediately if it is not running. Otherwise, the task will be deleted once it has completed. - --> **Note:** Deleting a task will not destroy the infrastructure managed by the task. - -### Usage - -`consul-terraform-sync task delete [options] ` - -**Arguments:** - -| Name | Required | Type | Description | -| ----------- | -------- | ------ | ------------------------------- | -| `task_name` | Required | string | The name of the task to delete. 
| - -**Options:** - -In addition to [general options](/consul/docs/nia/cli#general-options) this command also supports the following: - -| Name | Required | Type | Description | -| --------------- | -------- | ------- | ------------------------------- | -| `-auto-approve` | Optional | boolean | Whether to skip the interactive approval of the task deletion. | - -### Example - -```shell-session -$ consul-terraform-sync task delete task_a -==> Do you want to delete 'task_a'? - - This action cannot be undone. - - Deleting a task will not destroy the infrastructure managed by the task. - - If the task is not running, it will be deleted immediately. - - If the task is running, it will be deleted once it has completed. - Only 'yes' will be accepted to approve, enter 'no' or leave blank to reject. - -Enter a value: yes - -==> Marking task 'task_a' for deletion... - -==> Task 'task_a' has been marked for deletion and will be deleted when not running. -``` diff --git a/website/content/docs/nia/configuration.mdx b/website/content/docs/nia/configuration.mdx deleted file mode 100644 index ec07f3474d3f..000000000000 --- a/website/content/docs/nia/configuration.mdx +++ /dev/null @@ -1,898 +0,0 @@ ---- -layout: docs -page_title: Consul-Terraform-Sync Configuration -description: >- - Consul-Terraform-Sync requires a Terraform Provider, a Terraform Module and a running Consul Cluster outside of the consul-terraform-sync daemon. ---- - -# Consul-Terraform-Sync Configuration - -This topic contains configuration reference information for Consul-Terraform-Sync (CTS). Pass configuration settings in an HCL or JSON configuration file to configure the CTS daemon. Refer to the [HashiCorp Configuration Language](https://github.com/hashicorp/hcl) to learn the HCL syntax. - -## Global configurations - -Top level options are reserved for configuring CTS. - -```hcl -log_level = "INFO" -working_dir = "sync-tasks" -port = 8558 -id = "cts-01" - -syslog { - facility = "local2" -} - -buffer_period { - enabled = true - min = "5s" - max = "20s" -} - -tls { - enabled = true - cert = "/path/to/cert.pem" - key = "/path/to/key.pem" - verify_incoming = true - ca_cert = "/path/to/ca.pem" -} -``` - -- `buffer_period` - Configures the default buffer period for all dynamic [tasks](#task) to dampen the effects of flapping services to downstream network devices. It defines the minimum and maximum amount of time to wait for the cluster to reach a consistent state and accumulate changes before triggering task executions. The default is enabled to reduce the number of times downstream infrastructure is updated within a short period of time. This is useful to enable in systems that have a lot of flapping. Buffer periods do not apply to scheduled tasks. - - `enabled` - (bool: true) Enable or disable buffer periods globally. Specifying `min` will also enable it. - - `min` - (string: "5s") The minimum period of time to wait after changes are detected before triggering related tasks. - - `max` - (string: "20s") The maximum period of time to wait after changes are detected before triggering related tasks. If `min` is set, the default period for `max` is 4 times the value of `min`. -- `log_level` - (string: "INFO") The log level to use for CTS logging. This defaults to "INFO". The available log levels are "TRACE", "DEBUG", "INFO", "WARN", and "ERR". -- `port` - (int: 8558) The port for CTS to use to serve API requests. -- `id` (string: Generated ID with the format `cts-`) The ID of the CTS instance. 
CTS uses the ID as the service ID for CTS if service registration is enabled. CTS also uses the ID to identify the instance in a high availability cluster. -- `syslog` - Specifies the syslog server for logging. - - `enabled` - (bool) Enable syslog logging. Specifying other option also enables syslog logging. - - `facility` - (string: "local0") Name of the syslog facility to log to. - - `name` - (string: "consul-terraform-sync") Name to use for the daemon process when logging to syslog. -- `working_dir` - (string: "sync-tasks") Specifies the base working directory for managing the artifacts generated by CTS for each task, including Terraform configuration files. CTS will also generate a subdirectory for each task, e.g., `./sync-tasks/task-name`. The subdirectory is the working directory for the task. Use the [`task.working_dir`](#working_dir-1) option to override task-level working directories. -- `tls` - Configure TLS on the CTS API. - - `enabled` - (bool: false) Enable TLS. Providing a value for any of the TLS options will enable this parameter implicitly. - - `cert` - (string) The path to a PEM-encoded certificate file used for TLS connections to the CTS API. - - `key` - (string) The path to the PEM-encoded private key file used with the certificate configured by `cert`. - - `verify_incoming` - (bool: false) Enable mutual TLS. Requires all incoming connections to the CTS API to use a TLS connection and provide a certificate signed by a Certificate Authority specified by the `ca_cert` or `ca_path`. - - `ca_cert` - (string) The path to a PEM-encoded certificate authority file used to verify the authenticity of the incoming client connections to the CTS API when `verify_incoming` is set to true. Takes precedence over `ca_path` if both are configured. - - `ca_path` - (string) The path to a directory of PEM-encoded certificate authority files used to verify the authenticity of the incoming client connections to the CTS API when `verify_incoming` is set to true. -- `license_path` - (string) **Deprecated in CTS 0.6.0 and will be removed in a future release. Use [license block](/consul/docs/nia/configuration#license) instead.** Configures the path to the file that contains the license. You must specify a license in order to use enterprise features. You can also set the license by defining the `CONSUL_LICENSE` and `CONSUL_LICENSE_PATH` environment variables. For more information, refer to [Setting the License](/consul/docs/nia/enterprise/license#setting-the-license). - -## License - -The `license` block configures how CTS loads its license with options to: -- Configure CTS to automatically retrieve a license from Consul. -- Provide a path to a license file. - -When a license block is not configured, CTS uses automatic license retrieval. - -```hcl -license { - path = "path/to/license.lic" - auto_retrieval { - enabled = true - } -} -``` - -| Parameter | Required | Type | Description | Default | -| --------- | -------- | ---- | ----------- | ------- | -| `path` | Optional | string | Configures the path to the file containing a license. If a path to a license is configured, this license is used until you enable automatic license retrieval. You can also set the license by defining the `CONSUL_LICENSE` and `CONSUL_LICENSE_PATH` environment variables. To learn more, review [Setting the License](/consul/docs/nia/enterprise/license#setting-the-license). | none | -| `auto_retrieval` | Optional | object | Configures the license auto-retrieval used by CTS. 
To learn more, review [Auto-Retrieval](/consul/docs/nia/configuration#auto-retrieval) for details | Review [Auto-Retrieval](/consul/docs/nia/configuration#auto-retrieval) for defaults. | - - -### Auto-retrieval - -You can use the `auto_retrieval` block to configure the automatic license retrieval in CTS. When enabled, CTS attempts to retrieve a new license from its configured Consul Enterprise backend once a day. If CTS cannot retrieve a license and the current license is reaching its expiration date, CTS attempts to retrieve a license with increased frequency, as defined by the [License Expiration Date Handling](/consul/docs/nia/enterprise/license#license-expiration-handling). - -~> Enabling `auto_retrieval` is recommended when using HCP Consul Dedicated, as HCP Consul Dedicated licenses expire more frequently than Consul Enterprise licenses. Without auto-retrieval enabled, you have to restart CTS every time you load a new license. - -| Parameter | Required | Type | Description | Default | -| --------- | -------- | ---- | ----------- | ------- | -| `enabled` | Optional | string | If set to true, enables license auto-retrieval | true | - -## Consul connection - -The `consul` block configures the CTS connection with a Consul agent so that CTS can perform queries for task execution. It also configures the automatic registration of CTS as a service with Consul. - --> **Note:** Use HTTP/2 to improve Consul-Terraform-Sync performance when communicating with the local Consul process. [TLS/HTTPS](/consul/docs/agent/config/config-files) must be configured for the local Consul with the [cert_file](/consul/docs/agent/config/config-files#cert_file) and [key_file](/consul/docs/agent/config/config-files#key_file) parameters set. For the Consul-Terraform-Sync configuration, set `tls.enabled = true` and set the `address` parameter to the HTTPS URL, e.g., `address = example.consul.com:8501`. If using self-signed certificates for Consul, you will also need to set `tls.verify = false` or add the certificate to `ca_cert` or `ca_path`. - -To read more on suggestions for configuring the Consul agent, see [run an agent](/consul/docs/nia/usage/requirements#run-an-agent). - -```hcl -consul { - address = "localhost:8500" - auth {} - tls {} - token = null - transport {} - service_registration { - service_name = "cts" - address = "172.22.0.2" - default_check { - address = "http://172.22.0.2:8558" - } - } -} -``` - - -| Parameter | Required | Type | Description | Default | -| --------- | -------- | ---- | ----------- | ------- | -| `address` | Optional | string | The address of the Consul agent. It may be an IP or FQDN. | `localhost:8500` | -| `token` | Optional | string | The ACL token to use for client communication with the local Consul agent. See [ACL Requirements](/consul/docs/nia/configuration#acl-requirements) for required privileges.

    The token can also be provided through the `CONSUL_TOKEN` or `CONSUL_HTTP_TOKEN` environment variables. | none | -| `auth` | Optional | [auth](/consul/docs/nia/configuration#auth) | HTTP basic authentication for communicating with Consul || -| `tls` | Optional | [tls](/consul/docs/nia/configuration#tls-1) | Secure client connection with Consul || -| `transport` | Optional | [transport](/consul/docs/nia/configuration#transport) | Low-level network connection details || -| `service_registration` | Optional| [service_registration](/consul/docs/nia/configuration#service-registration) | Options for how CTS will register itself as a service with a health check to Consul. || - -### ACL requirements -The following table describes the ACL policies required by CTS. For more information, refer to the [Secure Consul-Terraform-Sync for Production](/consul/tutorials/network-infrastructure-automation/consul-terraform-sync-secure?utm_source=docs#configure-acl-privileges-for-consul-terraform-sync) tutorial. - -| Policy | Resources | -| ------ | --------- | -| `service:read` | Any services monitored by tasks | -| `node:read` | Any nodes hosting services monitored by tasks | -| `keys:read` | Any Consul KV pairs monitored by tasks | -| `namespace:read` | Any namespaces for resources monitored by tasks | -| `service:write` | The CTS service when [service registration](/consul/docs/nia/configuration#service-registration) is enabled | -| `keys:write` | `consul-terraform-sync/` Only required when using Consul as the [Terraform backend](/consul/docs/nia/configuration#backend). | - - -### Auth -Configures HTTP basic authentication for communicating with Consul. - -| Parameter | Required | Type | Description | Default | -| --------- | -------- | ---- | ----------- | ------- | -| `enabled` | Optional | boolean | Enables using HTTP basic authentication | `false` | -| `username` | Optional | string | Username for authentication| none | -| `password` | Optional | string | Password for authentication| none | - -### TLS - -Configure TLS to use a secure client connection with Consul. Using HTTP/2 can solve issues related to hitting Consul's maximum connection limits, as well as improve efficiency when processing many blocking queries. This option is required for Consul-Terraform-Sync when connecting to a [Consul agent with TLS verification enabled for HTTPS connections](/consul/docs/agent/config/config-files#verify_incoming). - -If Consul is using a self-signed certificate that you have not added to the global CA chain, you can set this certificate with `ca_cert` or `ca_path`. Alternatively, you can disable SSL verification by setting `verify` to false. However, disabling verification is a potential security vulnerability. - -| Parameter | Required | Type | Description | Default | -| --------- | -------- | ---- | ----------- | ------- | -| `enabled` | Optional | boolean | Enable TLS. Providing a value for any of the TLS options enables this parameter implicitly. | `false` | -| `verify` | Optional | boolean | Enables TLS peer verification, which checks the global certificate authority (CA) chain to make sure the certificates returned by Consul are valid. | `true` | -| `ca_cert` | Optional | string | The path to a PEM-encoded certificate authority file used to verify the authenticity of the connection to Consul over TLS.

    Can also be provided through the `CONSUL_CACERT` environment variable. | none | -| `ca_path` | Optional | string | The path to a directory of PEM-encoded certificate authority files used to verify the authenticity of the connection to Consul over TLS.

    Can also be provided through the `CONSUL_CAPATH` environment variable. | none | -| `cert` | Optional | string | The path to a PEM-encoded client certificate file provided to Consul over TLS in order for Consul to verify the authenticity of the connection from CTS. Required if Consul has `verify_incoming` set to true.

    Can also be provided through the `CONSUL_CLIENT_CERT` environment variable. | none | -| `key` | Optional | string | The path to the PEM-encoded private key file used with the client certificate configured by `cert`. Required if Consul has `verify_incoming` set to true.

    Can also be provided through the `CONSUL_CLIENT_KEY` environment variable. | none | -| `server_name` | Optional | string | The server name to use as the Server Name Indication (SNI) for Consul when connecting via TLS.

    Can also be provided through the `CONSUL_TLS_SERVER_NAME` environment variable. | none | - -### Transport -Configures the low-level network connection details to Consul. - -To achieve the shortest latency between a Consul service update to a task execution, configure `max_idle_conns_per_host` equal to or greater than the number of services in automation across all tasks. This value should be lower than the configured [`http_max_conns_per_client`](/consul/docs/agent/config/config-files#http_max_conns_per_client) for the Consul agent. - -If `max_idle_conns_per_host` and the number of services in automation is greater than the Consul agent limit, CTS may error due to connection limits (status code 429). You may increase the agent limit with caution. _Note: requests to the Consul agent made by Terraform subprocesses or any other process on the same host as CTS will contribute to the Consul agent connection limit._ - -| Parameter | Required | Type | Description | Default | -| --------- | -------- | ---- | ----------- | ------- | -| `dial_keep_alive` | Optional | string | The amount of time for keep-alives. | `30s` | -| `dial_timeout` | Optional | string | The amount of time to wait to establish a connection. | `30s` -| `disable_keep_alives` | Optional | boolean | Determines if keep-alives should be used. Disabling this significantly decreases performance. | `false`| -| `idle_conn_timeout` | Optional | string | The timeout for idle connections. | `5s` | -| `max_idle_conns` | Optional | integer | The maximum number of total idle connections across all hosts. The limit is disabled by default. | `0` | -| `max_idle_conns_per_host` | Optional | integer | The maximum number of idle connections per remote host. The majority of connections are established with one host, the Consul agent. | `100`| -| `tls_handshake_timeout` | Optional | string | The amount of time to wait to complete the TLS handshake. | `10s`| - - -### Service registration -CTS automatically registers itself with Consul as a service with a health check, using the [`id`](/consul/docs/nia/configuration#id) configuration as the service ID. CTS deregisters itself with Consul when CTS stops gracefully. If CTS is unable to register with Consul, then it will log the error and continue without exiting. - -Service registration requires that the [Consul token](/consul/docs/nia/configuration#consul) has an ACL policy of `service:write` for the CTS service. - -| Parameter | Required | Type | Description | Default | -| --------- | -------- | ---- | ----------- | ------- | -| `enabled` | Optional | boolean | Enables CTS to register itself as a service with Consul. When service registration is enabled for a [CTS instance configured for high availability](/consul/docs/nia/usage/run-ha), the instance also registers itself with a new tag using the `cts-cluster:` format. | `true` | -| `service_name` | Optional | string | The service name for CTS. We recommended specifying the same name used for [`high_availability.cluster.name`](#high-availability-cluster) value if [CTS is configured for high availability](/consul/docs/nia/usage/run-ha). | `consul-terraform-sync` | -| `address` | Optional | string | The IP address or hostname for CTS. | IP address of the Consul agent node | -| `namespace` | Optional | string | The namespace to register CTS in. | In order of precedence:
    1. Inferred from the CTS ACL token
    2. The `default` namespace. | -| `default_check.enabled` | Optional | boolean | Enables CTS to create the default health check. | `true` | -| `default_check.address` | Optional | string | The address to use for the default HTTP health check. Needs to include the scheme (`http`/`https`) and the port, if applicable. | `http://localhost:` or `https://localhost:`. Determined from the [port configuration](/consul/docs/nia/configuration#port) and whether [TLS is enabled](/consul/docs/nia/configuration#enabled-2) on the CTS API. | - -The default health check is an [HTTP check](/consul/docs/services/usage/checks#http-checks) that calls the [Health API](/consul/docs/nia/api/health). The following table describes the values CTS sets for this default check, corresponding to the [Consul register check API](/consul/api-docs/agent/check#register-check). If an option is not listed in this table, then CTS is using the default value. - -| Parameter | Value | -| --------- | ----- | -| Name | `CTS Health Status`| -| ID | `-health` | -| Namespace | `service_registration.namespace` | -| Notes | `Check created by Consul-Terraform-Sync`| -| DeregisterCriticalServiceAfter | `30m` | -| ServiceID | [`id`](/consul/docs/nia/configuration#id) | -| Status | `critical` | -| HTTP | `/v1/health` | -| Method | `GET` | -| Interval | `10s` | -| Timeout | `2s` | -| TLSSkipVerify | `false` | - -## High availability - -Add a `high_availability` block to your configuration to enable CTS to run in high availability mode. Refer to [Run Consul-Terraform-Sync with High Availability](/consul/docs/nia/usage/run-ha) for additional information. The `high_availability` block contains the following configuration items. - -### High availability cluster - -The `cluster` parameter contains configurations for the cluster you want to operate with high availability enabled. You can configure the following options: - -| Parameter | Description| Required | Type | -| --------- | ---------- | -------- | ------| -| `name` | Specifies the name of the cluster operating with high availability enabled. | Required | String | -| `storage` | Configures how CTS stores state information. Refer to [State storage and persistence](/consul/docs/nia/architecture#state-storage-and-persistence) for additional information. You can define storage for the `"consul"` resource. Refer to [High availability cluster storage](#high-availability-cluster-storage) for additional information. | Optional | Object | - -#### High availability cluster storage - -The `high_availability.cluster.storage` object contains the following configurations. - -| Parameter | Description| Required | Type | -| --------- | ---------- | -------- | ------| -| `parent_path` | Defines a parent path in the Consul KV for CTS to store state information. Default is `consul-terraform-sync/`. CTS automatically appends the cluster name to the parent path, so the effective default directory for state information is `consul-terraform-sync/`. | Optional | String | -| `namespace` | Specifies the namespace to use when storing state in the Consul KV. Default is inferred from the CTS ACL token. The fallback default is `default`. | Optional | String | -| `session_ttl` | Specifies the session time-to-live for leader elections. You must specify a value greater than the `session_ttl_min` configured for Consul. A longer `session_ttl` results in a longer leader election after a failover. Default is `15s`. 
| Optional | String | - -### High availability instance - -The `instance` parameter is an object that contains configurations unique to the CTS instance. You specify the following configurations: -- `address`: (Optional) String value that specifies the IP address of the CTS instance to advertise to other instances. This parameter does not have a default value. - - -## Service - -~> **Note:** Deprecated in CTS 0.5.0 and will be removed in a future major release. `service` blocks are used to define the `task` block's `services` fields, which were also deprecated and replaced with [Services Condition](/consul/docs/nia/configuration#services-condition) and [Services Module Input](/consul/docs/nia/configuration#services-module-input). `service` block configuration can be replaced by configuring the equivalent fields of the corresponding Services Condition and Services Module Input. Refer to [0.5.0 release notes](/consul/docs/release-notes/consul-terraform-sync/v0_5_x#deprecate-service-block) for examples. - -A `service` block is an optional block to explicitly define the services configured in the `task` block's `services` field (deprecated). `service` blocks do not define services configured in the `task` block's `condition "services"` or `module_input "services` blocks. - -A `service` block is only necessary for services that have non-default values e.g. custom datacenter. Services that do not have a `service` block configured will assume default values. To configure multiple services, specify multiple `service` blocks. If a `service` block is configured, the service can be referred in `task.services` by service name or ID. If a `service` block is not configured, it can only be referred to by service name. - -```hcl -service { - name = "web" - datacenter = "dc1" - description = "all instances of the service web in datacenter dc1" -} -``` - -| Parameter | Required | Type | Description | Default | -| --------- | -------- | ---- | ----------- | ------- | -| `name` | Required | string | Consul logical name of the service. | none | -| `id` | Optional | string | Service ID for CTS. This is used to explicitly identify the service config for a task to use. If no ID is provided, the service is identified by the service name within a [task definition](#task). | none | -| `description` | Optional | string | Human-readable text to describe the service | none | -| `datacenter` | Optional | string | Name of a datacenter to query for the task. | Datacenter of the agent that CTS queries. | -| `namespace` | Optional | string |
    Namespace of the services to query for the task. | In order of precedence:
    1. Inferred from the CTS ACL token
    2. The `default` namespace. | -| `filter` | Optional | string | Expression used to additionally filter the services to monitor.

Refer to the [services filtering documentation](/consul/api-docs/health#filtering-2) and the section about [how to write filter expressions](/consul/api-docs/features/filtering#creating-expressions) for additional information. | none | -| `cts_user_defined_meta` | Optional | map[string] | User-defined metadata that is appended to the [service input variable](/consul/docs/nia/terraform-modules#services-module-input) for compatible Terraform modules.

    Some modules do not use the configured metadata. Refer to the module configured for the task for information about metadata usage and expected keys and format.

    If multiple tasks depend on the same service but require different metadata, you can declare different sets of metadata for the same service. Define multiple service blocks for the service with unique IDs (and identical names) for those blocks. The metadata can then be separated per task based on the service IDs. | none| - -## Task - -A `task` block configures which task to execute in automation. Use the `condition` block to specify when the task executes. You can specify the `task` block multiple times to configure multiple tasks, or you can omit it entirely. If task blocks are not specified in your initial configuration, you can add them to a running CTS instance by using the [`/tasks` API endpoint](/consul/docs/nia/api/tasks#tasks) or the [CLI's `task` command](/consul/docs/nia/cli/task#task). - -```hcl -task { - name = "taskA" - description = "" - enabled = true - providers = [] - module = "org/example/module" - version = "1.0.0" - variable_files = [] - condition "services" { - names = ["web", "api"] - } -} -``` - -- `description` - (string) The human readable text to describe the task. -- `name` - (string: required) Name is the unique name of the task (required). A task name must start with a letter or underscore and may contain only letters, digits, underscores, and dashes. -- `enabled` - (bool: true) Enable or disable a task from running and managing resources. -- `providers` - (list[string]) Providers is the list of provider names the task is dependent on. This is used to map [Terraform provider configuration](#terraform-provider) to the task. -- `services` - (list[string]) **Deprecated in CTS 0.5.0 and will be removed in a future major release. Use [Services Condition](/consul/docs/nia/configuration#services-condition) or [Services Module Input](/consul/docs/nia/configuration#services-module-input) instead. See [0.5.0 release notes](/consul/docs/release-notes/consul-terraform-sync/v0_5_x#deprecate-services-field) for examples.** Specifies an optional list of logical service names or service IDs that the task monitors for changes in the Consul catalog. The `services` can act in different ways depending on the configuration of the task's `condition` block: - - no `condition` block configured: `services` will act as the task's condition and provide the services information as module input - - the `condition` block configured for type `services`: `services` is incompatible with this type of `condition` because both configure the services module input. CTS will return an error. - - the `condition` block configured for all other types: `services` will act only to provide services module input. - - Service values that are not explicitly defined by a `service` block that have a matching ID are assumed to be logical service names in the `default` namespace. -- `source` - (string: required) **Deprecated in CTS 0.5.0 and will be removed in a future major release. See the `module` field instead.** -- `module` - (string: required) Module is the location the driver uses to discover the Terraform module used for automation. The module's source can be local or remote on the [Terraform Registry](https://registry.terraform.io/) or private module registry. Read more on [Terraform module source and other supported types here](/terraform/language/modules/sources). 
- - - To use a private module with the [`terraform` driver](#terraform-driver), run the command [`terraform login [hostname]`](/terraform/tutorials/cloud/cloud-login?utm_source=docs) to authenticate the local Terraform CLI prior to starting CTS. - - To use a private module with the [`terraform_cloud` driver](#hcp-terraform-driver), no extra steps are needed. - - ```hcl - // local module example: "./terraform-cts-hello" - module = "" - - // public module example: "mkam/hello/cts" - module = "//" - - // private module example: "my.tfe.hostname.io/my-org/hello/cts" - module = "///" - ``` - -- `variable_files` - (list[string]) Specifies list of paths to [Terraform variable definition files (`.tfvars`)](/terraform/language/values/variables#variable-definitions-tfvars-files). The content of these files should consist of only variable name assignments. The variable assignments must match the corresponding variable declarations made available by the Terraform module for the task. - - Variables are loaded in the order they appear in the files. Duplicate variables are overwritten with the later value. _Unless specified by the module, configure arguments for Terraform providers using [`terraform_provider` blocks](#terraform-provider)._ - - - - ```hcl - address_group = "consul-services" - tags = [ - "consul-terraform-sync", - "terraform" - ] - ``` - - - -- `version` - (string) The version of the provided module the task will use. The latest version will be used as the default if omitted. -- `working_dir` - (string) The working directory to manage generated artifacts by CTS for this task, including Terraform configuration files. By default, a working directory is created for each task as a subdirectory in the base [`working_dir`](#working_dir), e.g. `sync-tasks/task-name`. -- `buffer_period` - Configures the buffer period for a dynamic task to dampen the effects of flapping services to downstream network devices. It defines the minimum and maximum amount of time to wait for the cluster to reach a consistent state and accumulate changes before triggering task execution. The default is inherited from the top level [`buffer_period` block](#global-config-options). If configured, these values will take precedence over the global buffer period. This is useful to enable for a task that is dependent on services that have a lot of flapping. Buffer periods do not apply to scheduled tasks. - - `enabled` - (bool) Enable or disable buffer periods for this task. Specifying `min` will also enable it. - - `min` - (string: "5s") The minimum period of time to wait after changes are detected before triggering related tasks. - - `max` - (string: "20s") The maximum period of time to wait after changes are detected before triggering related tasks. If `min` is set, the default period for `max` is 4 times the value of `min`. -- `condition` - (obj: required) The requirement that, when met, triggers CTS to execute the task. Only one `condition` may be configured per task. CTS supports different types of conditions, which each have their own configuration options. See [Task Condition](#task-condition) configuration for full details on configuration options for each condition type. -- `source_input` - (obj) **Deprecated in CTS 0.5.0 and will be removed in 0.8.0. See the `module_input` block instead.** -- `module_input` - (obj) Specifies a Consul object containing values or metadata to be provided to the Terraform Module. The `module_input` block defines any extra module inputs needed for task execution. 
This is in addition to any module input provided by the `condition` block or `services` field (deprecated). Multiple `module_input` blocks can be configured per task. Refer to the [Task Module Input](#task-module-input) configuration for full details on usage and restrictions. -- `terraform_version` - (string) **Deprecated in CTS 0.6.0 and will be removed in 0.8.0. See `terraform_cloud_workspace.terraform_version` instead.** The version of Terraform to use for the HCP Terraform workspace associated with the task. Defaults to the latest compatible version supported by the organization. This option is only available when used with the [HCP Terraform driver](#hcp-terraform-driver); otherwise, set the version within the [Terraform driver](#terraform-driver). -- `terraform_cloud_workspace` - (obj) Configures attributes of the HCP Terraform workspace associated with the task. This option is only available when used with the [HCP Terraform driver](#hcp-terraform-driver). For global configuration of all workspaces, review [`driver.workspaces`](#workspaces). - - `execution_mode` - (string: "remote") The execution mode that determines whether to use HCP Terraform as the Terraform execution platform. Supported values are "remote" and "agent". - - `agent_pool_id` - (string) Only supported if `execution_mode` is set to "agent". The ID of the agent pool that should run the Terraform workloads. Either `agent_pool_id` or `agent_pool_name` is required if `execution_mode` is set to "agent". `agent_pool_id` takes precedence over `agent_pool_name` if both are provided. - - `agent_pool_name` - (string) Only supported if `execution_mode` is set to "agent". The name of the agent pool that should run the Terraform workloads. Either `agent_pool_id` or `agent_pool_name` is required. `agent_pool_id` takes precedence over `agent_pool_name` if both are provided. - - `terraform_version` - (string) The version of Terraform to use for the HCP Terraform workspace associated with the task. Defaults to the latest compatible version supported by the organization. - -### Task Condition - -A `task` block is configured with a `condition` block to set the conditions that must be met for CTS to execute that particular task. The following are the different types of conditions that CTS supports. - -#### Services Condition - -This condition triggers the task on services that match the regular expression configured in `regexp` or services listed by name in `names`. Either `regexp` or `names` must be configured, but not both. - -When a `condition "services"` block is configured for a task, the following restrictions apply: -- the task cannot be configured with the `services` field (deprecated) -- the task cannot be configured with a `module_input "services"` or `source_input "services"` (deprecated) block - -These restrictions exist because the monitored services information for a task can only be set through one configuration option. Any services module input that the task needs should be configured solely through the `condition` block. - -See [Task Execution: Services Condition](/consul/docs/nia/tasks#services-condition) for more details on how tasks are triggered with a services condition.
- -```hcl -task { - name = "services_condition_regexp_task" - description = "execute on changes to services with names starting with web" - providers = ["my-provider"] - module = "path/to/services-condition-module" - - condition "services" { - regexp = "^web.*" - datacenter = "dc1" - namespace = "default" - filter = "Service.Tags not contains \"prod\"" - cts_user_defined_meta { - key = "value" - } - } -} -``` -```hcl -task { - name = "services_condition_names_task" - description = "execute on changes to services with names api or web" - module = "path/to/services-condition-module" - - condition "services" { - names = ["api", "web"] - datacenter = "dc1" - namespace = "default" - filter = "Service.Tags not contains \"prod\"" - cts_user_defined_meta { - key = "value" - } - } -} - -``` - -| Parameter | Required | Type | Description | Default | -| --------- | -------- | ---- | ----------- | ------- | -| `regexp` | Required if `names` is not configured | string | Regular expression used to match the names of Consul services to monitor. Only services that have a name matching the regular expression are used by the task.

    If both a list and a regex are needed, consider including the list as part of the regex or creating separate tasks. | none | -| `names` | Required if `regexp` is not configured | list[string] | Names of Consul services to monitor. Only services that have their name listed in `names` are used by the task. | none | -| `datacenter` | Optional | string | Name of a datacenter to query for the task. | Datacenter of the agent that CTS queries. | -| `namespace` | Optional | string |
    Namespace of the services to query for the task. | In order of precedence:
    1. Inferred from the CTS ACL token
    2. The `default` namespace. | -| `filter` | Optional | string | Expression used to additionally filter the services to monitor.

Refer to the [services filtering documentation](/consul/api-docs/health#filtering-2) and the section about [how to write filter expressions](/consul/api-docs/features/filtering#creating-expressions) for additional information. | none | -| `cts_user_defined_meta` | Optional | map[string] | User-defined metadata that is appended to the [service input variable](/consul/docs/nia/terraform-modules#services-module-input) for compatible Terraform modules.

Some modules do not use the configured metadata. Refer to the module configured for the task for information about metadata usage and expected keys and format. | none | -| `source_includes_var` | Optional | boolean | **Deprecated in CTS 0.5.0 and will be removed in 0.8.0. See the `use_as_module_input` field instead.** | true | -| `use_as_module_input` | Optional | boolean | Whether or not the values of the condition object should also be used as input for the [`services` variable](/consul/docs/nia/terraform-modules#services-variable) for the Terraform module

    Please refer to the selected module's documentation for guidance on how to configure this field. If configured inconsistently with the module, CTS will error and exit. | true | - - -#### Catalog-Services Condition - -A catalog-services condition block configures a task to only execute on service registration and deregistration, more specifically on first service instance registration and last service instance deregistration respectively. The catalog-services condition has additional configuration options to specify the services that can trigger the task on registration and deregistration. - -See [Task Execution: Catalog Services Condition](/consul/docs/nia/tasks#catalog-services-condition) for more information on how tasks are triggered with a catalog-services condition. - -```hcl -task { - name = "catalog_service_condition_task" - description = "execute on service de/registrations with name matching 'web.*'" - module = "path/to/catalog-services-module" - providers = ["my-provider"] - - condition "catalog-services" { - datacenter = "dc1" - namespace = "default" - regexp = "web.*" - use_as_module_input = true - node_meta { - key = "value" - } - } -} -``` - -| Parameter | Required | Type | Description | Default | -| --------- | -------- | ---- | ----------- | ------- | -| `regexp` | Required | string | Regular expression used to match the names of Consul service to monitor for registration and deregistration. Only services that have a name matching the regular expression are used by the task.

    Refer to [regular expression syntax documentation](https://github.com/google/re2/wiki/Syntax) and [try out regular expression string matching](https://golang.org/pkg/regexp/#Regexp.MatchString) for additional information. | none | -| `datacenter` | Optional | string | Name of a datacenter to query for the task. | Datacenter of the agent that CTS queries. | -| `namespace` | Optional | string |
    Namespace of the services to query for the task. | In order of precedence:
    1. Inferred from the CTS ACL token
    2. The `default` namespace. | -| `node_meta` | Optional | map[string] | Node metadata key/value pairs to use to filter services. Only services registered at a node with the specified key/value pairs are used by the task. | none | -| `source_includes_var` | Optional | boolean | **Deprecated in CTS 0.5.0 and will be removed in 0.8.0. See the `use_as_module_input` field instead.** | true | -| `use_as_module_input` | Optional | boolean | Whether or not the values of the condition object should also be used as input for the [`catalog_services` variable](/consul/docs/nia/terraform-modules#catalog-services-variable) for the Terraform module

    Please refer to the selected module's documentation for guidance on how to configure this field. If configured inconsistently with the module, CTS will error and exit. | true | - -#### Consul KV Condition - -A `condition "consul-kv"` block configures a task to only execute on changes to a Consul KV entry. The condition can be configured for a single Consul KV entry or for any Consul KV entries that are prefixed with a given path. - -When a `condition "consul-kv"` block is configured for a task, the task cannot be configured with a `module_input "consul-kv"` or `source_input "consul-kv"` (deprecated) block. The monitored consul-kv information for a task can only be set through one configuration option. Any consul-kv module input that the task needs should be configured solely through the `condition` block. - - -See [Task Execution: Consul KV Condition](/consul/docs/nia/tasks#consul-kv-condition) for more information on how tasks are triggered with a consul-kv condition. - -```hcl -task { - name = "consul_kv_condition_task" - description = "execute on changes to Consul KV entry" - module = "path/to/consul-kv-module" - providers = ["my-provider"] - - condition "consul-kv" { - path = "my-key" - recurse = false - datacenter = "dc1" - namespace = "default" - use_as_module_input = true - } -} -``` - -| Parameter | Required | Type | Description | Default | -| --------- | -------- | ---- | ----------- | ------- | -| `path` | Required | string | Path of the key used by the task. The path can point to a single Consul KV entry or several entries within the path. | none | -| `recurse` | Optional | boolean | Enables CTS to treat the path as a prefix. If set to `false`, the path will be treated as a literal match. | `false` | -| `datacenter` | Optional | string | Name of a datacenter to query for the task. | Datacenter of the agent that CTS queries. | -| `namespace` | Optional | string |
Namespace of the Consul KV entries to query for the task. | In order of precedence:
    1. Inferred from the CTS ACL token
    2. The `default` namespace. | -| `source_includes_var` | Optional | boolean | **Deprecated in CTS 0.5.0 and will be removed in 0.8.0. See the `use_as_module_input` field instead.** | true | -| `use_as_module_input` | Optional | boolean | Whether or not the values of the condition object should also be used as input for the [`consul_kv` variable](/consul/docs/nia/terraform-modules#consul-kv-variable) for the Terraform module

    Please refer to the selected module's documentation for guidance on how to configure this field. If configured inconsistently with the module, CTS will error and exit. | true | - - -#### Schedule Condition - -A scheduled task has a schedule condition block, which defines the schedule for executing the task. Unlike a dynamic task, a scheduled task does not dynamically trigger on changes in Consul. - -Schedule tasks also rely on additional task configuration, separate from the condition block to determine the module input information to provide to the task module. See [`module_input`](#module_input) block configuration for details on how to configure module input. - -See [Task Execution: Schedule Condition](/consul/docs/nia/tasks#schedule-condition) for more information on how tasks are triggered with schedule conditions. - -See [Terraform Module: Module Input](/consul/docs/nia/terraform-modules#module-input) for more information on module input options for a scheduled task. - -```hcl -task { - name = "scheduled_task" - description = "execute every Monday using service information from web and db" - module = "path/to/module" - - condition "schedule" { - cron = "* * * * Mon" - } - - module_input "services" { - names = ["web", "db"] - } -} -``` - -| Parameter | Required | Type | Description | Default | -| --------- | -------- | ---- | ----------- | ------- | -| `cron` | Required | string | The CRON expression that dictates the schedule to trigger the task. For more information on CRON expressions, see the [cronexpr parsing library](https://github.com/hashicorp/cronexpr). | none | - -### Task Module Input ((#task-source-input)) - -~> `module_input` was renamed from `source_input` in CTS 0.5.0. Documentation for the `module_input` block also applies to the `source_input` block. - -You can optionally add one or more `module_input` blocks to the `task` block. A `module_input` block specifies a Consul object containing values or metadata to be provided to the Terraform Module. Both scheduled and dynamic tasks can be configured with `module_input` blocks. - -The example below shows an outline of `module_input` within a task configuration: - -```hcl -task { - name = "task_a" - module = "path/to/module" - services = ["api"] // (deprecated) - - condition "" { - // ... - } - - module_input "" { - // ... - } -} -``` - -~> The type of the `module_input` block that can be configured depends on the `condition` block type and the `services` field (deprecated). See [Task Module Input Restrictions](/consul/docs/nia/configuration#task-module-input-restrictions) for more details. - -The following sections describe the module input types that CTS supports. - -#### Services Module Input ((#services-source-input)) - -This `services` module input object defines services registered to Consul whose metadata will be used as [services module input to the Terraform Module](/consul/docs/nia/terraform-modules/#services-module-input). The following parameters are supported: - -| Parameter | Required | Type | Description | Default | -| --------- | -------- | ---- | ----------- | ------- | -| `regexp` | Required if `names` is not configured | string | Regular expression used to match the names of Consul services to monitor. Only services that have a name matching the regular expression are used by the task.

    If both a list and a regex are needed, consider including the list as part of the regex or creating separate tasks. | none | -| `names` | Required if `regexp` is not configured | list[string] | Names of Consul services to monitor. Only services that have their name listed in `names` are used by the task. | none | -| `datacenter` | Optional | string | Name of a datacenter to query for the task. | Datacenter of the agent that CTS queries. | -| `namespace` | Optional | string |
Namespace of the services to query for the task. | In order of precedence:
    1. Inferred from the CTS ACL token
    2. The `default` namespace. | -| `filter` | Optional | string | Expression used to additionally filter the services to monitor.

Refer to the [services filtering documentation](/consul/api-docs/health#filtering-2) and the section about [how to write filter expressions](/consul/api-docs/features/filtering#creating-expressions) for additional information. | none | -| `cts_user_defined_meta` | Optional | map[string] | User-defined metadata that is appended to the [service input variable](/consul/docs/nia/terraform-modules#services-module-input) for compatible Terraform modules.

    Some modules do not use the configured metadata. Refer to the module configured for the task for information about metadata usage and expected keys and format. | none| - -In the following example, the scheduled task queries all Consul services with `web` as the suffix. The metadata of matching services are provided to the Terraform module. - -```hcl -task { - name = "schedule_condition_task" - description = "execute every Monday using information from service names starting with web" - module = "path/to/module" - - condition "schedule" { - cron = "* * * * Mon" - } - - module_input "services" { - regexp = "^web.*" - datacenter = "dc1" - namespace = "default" - filter = "Service.Tags not contains \"prod\"" - cts_user_defined_meta { - key = "value" - } - } -} -``` - -#### Consul KV Module Input ((#consul-kv-source-input)) - -A Consul KV module input block defines changes to Consul KV that will be monitored. These changes will then be provided as [Consul KV module input to the Terraform Module](/consul/docs/nia/terraform-modules/#consul-kv-module-input). The module input can be configured for a single Consul KV entry or for any Consul KV entries that are prefixed with a given path. The following parameters are supported: - -| Parameter | Required | Type | Description | Default | -| --------- | -------- | ---- | ----------- | ------- | -| `path` | Required | string | Path of the key used by the task. The path can point to a single Consul KV entry or several entries within the path. | none | -| `recurse` | Optional | boolean | Enables CTS to treat the path as a prefix. If set to `false`, the path will be treated as a literal match. | `false` | -| `datacenter` | Optional | string | Name of a datacenter to query for the task. | Datacenter of the agent that CTS queries. | -| `namespace` | Optional | string |
Namespace of the Consul KV entries to query for the task. | In order of precedence:
    1. Inferred from the CTS ACL token
    2. The `default` namespace. | - -In the following example, the scheduled task queries datacenter `dc1` in the `default` namespace for changes to the value held by the key `my-key`. - -```hcl -task { - name = "schedule_condition_task_kv" - description = "execute every Monday using information from Consul KV entry my-key" - module = "path/to/module" - - condition "schedule" { - cron = "* * * * Mon" - } - - module_input "consul-kv" { - path = "my-key" - recurse = false - datacenter = "dc1" - namespace = "default" - } -} -``` - -#### Task Module Input Restrictions - -There are some limitations to the type of `module_input` blocks that can be configured for a task given the task's `condition` block and `services` field (deprecated). This is because a task cannot have multiple configurations defining the same type of monitored variable: -- A task cannot be configured with a `condition` and `module_input` block of the same type. For example, configuring `condition "consul-kv"` and `module_input "consul-kv"` will error because both configure the `consul_kv` variable. -- A task cannot be configured with two or more `module_input` blocks of the same type. For example, configuring two `module_input "catalog-services"` within a task will return an error because they define multiple configurations for the `catalog_services` variable. -- A task that monitors services can only contain one of the following configurations: - - `condition "services"` block - - `module_input "services"` block - - Block was previously named `source_input "services"` (deprecated) - - `services` field (deprecated) - - All of the listed configurations define the `services` variable and including more than one configuration will return an error. - -## Network Drivers - -A driver is required for CTS to propagate network infrastructure change. The `driver` block configures the subprocess that CTS runs in automation. The default driver is the [Terraform driver](#terraform-driver) which automates Terraform as a local installation of the Terraform CLI. - -Only one network driver can be configured per deployment of CTS. - -## Terraform Driver - -The Terraform driver block is used to configure CTS for installing and automating Terraform locally. The driver block supports Terraform configuration to specify the `backend` used for state management and `required_providers` configuration used for provider discovery. - -```hcl -driver "terraform" { - log = false - persist_log = false - path = "" - - backend "consul" { - gzip = true - } - - required_providers { - myprovider = { - source = "namespace/myprovider" - version = "1.3.0" - } - } -} -``` - -- `backend` - (obj) The backend stores [Terraform state files](/terraform/language/state) for each task. This option is similar to the [Terraform backend configuration](/terraform/language/settings/backends/configuration). CTS supports Terraform backends used as a state store. - - Supported backend options: [azurerm](/terraform/language/settings/backends/azurerm), [consul](/terraform/language/settings/backends/consul), [cos](/terraform/language/settings/backends/cos), [gcs](/terraform/language/settings/backends/gcs), [kubernetes](/terraform/language/settings/backends/kubernetes), [local](/terraform/language/settings/backends/local), [manta](/terraform/language/v1.2.x/settings/backends/manta), [pg](/terraform/language/settings/backends/pg) (Terraform v0.14+), [s3](/terraform/language/settings/backends/s3). Visit the Terraform documentation links for details on backend configuration options. 
- - If omitted, CTS will generate default values and use configurations from the [`consul` block](#consul) to configure [Consul as the backend](/terraform/language/settings/backends/consul), which stores Terraform statefiles in the Consul KV. The [ACL token provided for Consul authentication](#consul) is used to read and write to the KV store and requires [Consul KV privileges](/consul/tutorials/network-infrastructure-automation/consul-terraform-sync-secure?utm_source=docs#configure-acl-privileges-for-consul-terraform-sync). The Consul KV path is the base path to store state files for tasks. The full path of each state file will have the task identifier appended to the end of the path, e.g. `consul-terraform-sync/terraform-env:task-name`. - - The remote enhanced backend is not supported with the Terraform driver to run operations in HCP Terraform. Use the [HCP Terraform driver](#hcp-terraform-driver) to integrate CTS with HCP Terraform for remote workspaces and remote operations. - - The `local` backend type is not supported with CTS instances configured for high availability. If high availability is configured and the Terraform backend type is `local`, CTS logs an error and exits. -- `log` - (bool) Enable all Terraform output (stderr and stdout) to be included in the CTS log. This is useful for debugging and development purposes. It may be difficult to work with log aggregators that expect uniform log format. -- `path` - (string) The file path to install Terraform or discover an existing Terraform binary. If omitted, Terraform will be installed in the same directory as the CTS daemon. To resolve an incompatible Terraform version or to change versions will require removing the existing binary or change to a different path. -- `persist_log` - (bool) Enable trace logging for each Terraform client to disk per task. This is equivalent to setting `TF_LOG_PATH=/terraform.log`. Trace log level results in verbose logging and may be useful for debugging and development purposes. We do not recommend enabling this for production. There is no log rotation and may quickly result in large files. -- `required_providers` - (obj: required) Declare each Terraform provider used across all tasks. This can be configured the same as how you would configure [Terraform `terraform.required_providers`](/terraform/language/providers/requirements#requiring-providers) field to specify the source and version for each provider. CTS will process these requirements when preparing each task that uses the provider. -- `version` - (string) The Terraform version to install and run in automation for task execution. If omitted, the driver will install the latest [compatible release of Terraform](/consul/docs/nia/compatibility#terraform). To change versions, remove the existing binary or change the path to install the desired version. Verify that the desired Terraform version is compatible across all Terraform modules used for CTS automation. - -## HCP Terraform Driver - - - This feature requires{' '} -
    - Consul-Terraform-Sync Enterprise - {' '} - which is available with Consul Enterprise. - - -The HCP Terraform driver enables CTS Enterprise to integrate with HCP Terraform, including both the [self-hosted distribution](https://www.hashicorp.com/products/terraform/editions/enterprise) and the [managed service](https://www.hashicorp.com/products/terraform/editions/cloud). With this driver, CTS automates Terraform runs and remote operations for workspaces. - -An overview of features enabled with HCP Terraform can be viewed within the [Network Drivers](/consul/docs/nia/network-drivers) documentation. - -Only one network driver can be configured per deployment of CTS. - -```hcl -driver "terraform-cloud" { - hostname = "https://app.terraform.io" - organization = "my-org" - token = "" - // Optionally set the token to be securely queried from Vault instead of - // written directly to the configuration file. - // token = "{{ with secret \"secret/my/path\" }}{{ .Data.data.foo }}{{ end }}" - - workspaces { - tags = ["source:cts"] - tags_allowlist = [] - tags_denylist = [] - } - - required_providers { - myprovider = { - source = "namespace/myprovider" - version = "1.3.0" - } - } -} -``` - -- `hostname` - (string) The HCP Terraform hostname to connect to. Can be overridden with the `TFC_HOSTNAME` environment variable. -- `organization` - (string) The HCP Terraform organization that hosts the managed workspaces by CTS. Can be overridden with the `TFC_ORGANIZATION` environment variable. -- `token` - (string) Required [Team API token](/terraform/cloud-docs/users-teams-organizations/api-tokens#team-api-tokens) used for authentication with HCP Terraform and workspace management. Only workspace permissions are needed for CTS. The token can also be provided using the `TFC_TOKEN` environment variable. - - We recommend creating a dedicated team and team API token to isolate automation by CTS from other HCP Terraform operations. -- `workspace_prefix` - (string) **Deprecated in CTS 0.5.0**, use the [`workspaces.prefix`](#prefix) option instead. Specifies a prefix to prepend to the automatically-generated workspace names used for automation. This prefix will be used by all tasks that use this driver. By default, when no prefix is configured, the workspace name will be the task name. When a prefix is configured, the workspace name will be `-`, with the character '-' between the workspace prefix and task name. For example, if you configure the prefix as "cts", then a task with the name "task-firewall" will have the workspace name "cts-task-firewall". -- `workspaces` - Configure CTS management of HCP Terraform workspaces. - - `prefix` - (string) Specifies a prefix to prepend to the workspace names used for CTS task automation. This prefix will be used by all tasks that use this driver. By default, when no prefix is configured, the workspace name will be the task name. When a prefix is configured, the workspace name will be ``. For example, if you configure the prefix as "cts_", then a task with the name "task_firewall" will have the workspace name "cts_task_firewall". - - `tags` - (list[string]) Tags for CTS to add to all automated workspaces when the workspace is first created or discovered. Tags are added to discovered workspaces only if the workspace meets [automation requirements](/consul/docs/nia/network-drivers/hcp-terraform#remote-workspaces) and satisfies the allowlist and denylist tag options. This option will not affect existing tags. 
Tags that were manually removed during runtime will be re-tagged when CTS restarts. Compatible with HCP Terraform and Terraform Enterprise v202108-1+ - - `tags_allowlist` - (list[string]) Tag requirement to use as a provision check for CTS automation of workspaces. When configured, HCP Terraform workspaces must have at least one tag from the allow list for CTS to automate the workspace and runs. Compatible with HCP Terraform and Terraform Enterprise v202108-1+. - - `tags_denylist` - (list[string]) Tag restriction to use as a provision check for CTS automation of workspaces. When configured, HCP Terraform workspaces must not have any tag from the deny list for CTS to automate the workspace and runs. Denied tags have higher priority than tags set in the `tags_allowlist` option. Compatible with HCP Terraform and Terraform Enterprise v202108-1+. -- `required_providers` - (obj: required) Declare each Terraform provider used across all tasks. This can be configured the same as how you would configure [Terraform `terraform.required_providers`](/terraform/language/providers/requirements#requiring-providers) field to specify the source and version for each provider. CTS will process these requirements when preparing each task that uses the provider. -- `tls` - Configure TLS to allow HTTPS connections to [Terraform Enterprise](/terraform/enterprise/install/interactive/installer#tls-key-amp-cert). - - `enabled` - (bool) Enable TLS. Providing a value for any of the TLS options will enable this parameter implicitly. - - `ca_cert` - (string) The path to a PEM-encoded certificate authority file used to verify the authenticity of the connection to Terraform Enterprise over TLS. - - `ca_path` - (string) The path to a directory of PEM-encoded certificate authority files used to verify the authenticity of the connection to Terraform Enterprise over TLS. - - `cert` - (string) The path to a PEM-encoded client certificate file provided to Terraform Enterprise over TLS in order for Terraform Enterprise to verify the authenticity of the connection from CTS. - - `key` - (string) The path to the PEM-encoded private key file used with the client certificate configured by `cert` for communicating with Terraform Enterprise over TLS. - - `server_name` - (string) The server name to use as the Server Name Indication (SNI) for Terraform Enterprise when connecting via TLS. - - `verify` - (bool: true) Enables TLS peer verification. The default is enabled, which will check the global certificate authority (CA) chain to make sure the certificates returned by Terraform Enterprise are valid. - - If Terraform Enterprise is using a self-signed certificate that you have not added to the global CA chain, you can set this certificate with `ca_cert` or `ca_path`. Alternatively, you can disable SSL verification by setting `verify` to false. However, disabling verification is a potential security vulnerability. - ```hcl - tls { - verify = false - } - ``` - -CTS generates local artifacts to prepare configuration versions used for workspace runs. The location of the files created can be set with the [`working_dir`](/consul/docs/nia/configuration#working_dir) option or configured per task. When a task is configured with a local module and is run with the HCP Terraform driver, the local module is copied and uploaded as a part of the configuration version. - -The version of Terraform to use for each workspace can also be set within the [task](#task) configuration. 
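For illustration, a minimal sketch of setting these per-task workspace attributes might look like the following; the task name, module path, agent pool name, and Terraform version are placeholders rather than values taken from this documentation.

```hcl
task {
  name   = "firewall_task"    // placeholder task name
  module = "path/to/module"   // placeholder module path

  condition "services" {
    names = ["web"]
  }

  // Per-task workspace attributes, only available with the HCP Terraform driver.
  terraform_cloud_workspace {
    execution_mode    = "agent"
    agent_pool_name   = "cts-agent-pool"   // placeholder agent pool name
    terraform_version = "1.4.0"            // placeholder Terraform version
  }
}
```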
- -## Terraform Provider - -A `terraform_provider` block configures the options to interface with network infrastructure. Define a block for each provider required by the set of Terraform modules across all tasks. This block resembles [provider blocks for Terraform configuration](/terraform/language/providers/configuration). To find details on how to configure a provider, refer to the corresponding documentation for the Terraform provider. The main directory of publicly available providers are hosted on the [Terraform Registry](https://registry.terraform.io/browse/providers). - -The below configuration captures the general design of defining a provider using the [AWS Terraform provider](https://registry.terraform.io/providers/hashicorp/aws/latest/docs) as an example. - -```hcl -driver "terraform" { - required_providers { - aws = { - source = "hashicorp/aws" - version = "3.33.0" - } - } -} - -terraform_provider "aws" { - // Configuration options - region = "us-east-1" -} - -task { - module = "path/to/module" - providers = ["aws"] - condition "services" { - names = ["web", "api"] - } -} -``` - -~> **Note**: Provider arguments configured in CTS configuration files are written in plain text to the generated [`terraform.tfvars`](/consul/docs/nia/network-drivers#terraform-tfvars) file for each Terraform workspace that references the provider. To exclude arguments or dynamic values from rendering to local files in plain text, use [`task_env` in addition to using dynamic configuration](#securely-configure-terraform-providers). - -### Securely Configure Terraform Providers - -The `terraform_provider` block supports dynamically loading arguments and the local environment from other sources. This can be used to securely configure your Terraform provider from the shell environment, Consul KV, or Vault. Using the `task_env` meta-argument and template syntax below, you can avoid exposing sensitive values or credentials in plain text within configuration files for CTS. - -`task_env` and the template syntax for dynamic values are only supported within the `terraform_provider` block. - -#### Provider Environment Variables - -Terraform providers may support shell environment variables as values for some of their arguments. When available, we recommend using environment variables as a way to keep credentials out of plain-text configuration files. Refer to the official provider docs hosted on the [Terraform Registry](https://registry.terraform.io/browse/providers) to find supported environment variables for a provider. By default, CTS enables all Terraform workspaces to inherit from its environment. - -The `task_env` block is a meta-argument available for the `terraform_provider` block that can be used to rename or scope the available environment to a selected set of variables. Passing sensitive values as environment variables will scope the values to only the tasks that require the provider. - -```hcl -terraform_provider "foo" { - // Direct assignment of provider arguments are rendered in plain-text within - // the CTS configuration and the generated terraform.tfvars - // file for the corresponding Terraform workspaces. - // token = "" - - // Instead of configuring the token argument directly for the provider, - // use the provider's supported environment variable for the token argument. - // For example, - // $ export FOO_TOKEN = "" - - // Dynamically assign the task's environment from the shell env, Consul KV, - // Vault. 
- task_env { - "FOO_TOKEN" = "{{ env \"CTS_FOO_TOKEN\" }}" - } -} -``` - -!> **Security note**: CTS does not prevent sensitive values from being written to Terraform state files. We recommend securing state files in addition to securely configuring Terraform providers. Options for securing state files can be set within [`driver.backend`](#backend) based on the backend used. For example, Consul KV is the default backend and can be secured with ACLs for KV path. For other backends, we recommend enabling encryption, if applicable. - -#### Load Dynamic Values - -Load dynamic values for Terraform providers with integrated template syntax. - -##### Env - -`env` reads the given environment variable accessible to CTS. - -```hcl -terraform_provider "example" { - address = "{{ env \"EXAMPLE_HOSTNAME\" }}" -} -``` - -#### Consul - -`key` queries the key's value in the KV store of the Consul server configured in the required [`consul` block](#consul). - -```hcl -terraform_provider "example" { - value = "{{ key \"path/example/key\" }}" -} -``` - -#### Vault - -`with secret` queries the [Vault KV secrets engine](/vault/api-docs/secret/kv). Vault is an optional source that require operators to configure the Vault client with a [`vault` block](#vault-configuration). Access the secret using template dot notation `Data.data.`. - -```hcl -vault { - address = "vault.example.com" -} - -terraform_provider "example" { - token = "{{ with secret \"secret/my/path\" }}{{ .Data.data.foo }}{{ end }}" -} -``` - -##### Vault Configuration - -- `address` - (string) The URI of the Vault server. This can also be set via the `VAULT_ADDR` environment variable. -- `enabled` - (bool) Enabled controls whether the Vault integration is active. -- `namespace` - (string) Namespace is the Vault namespace to use for reading secrets. This can also be set via the `VAULT_NAMESPACE` environment variable. -- `renew_token` - (bool) Renews the Vault token. This can also be set via the `VAULT_RENEW_TOKEN` environment variable. -- `tls` - [(tls block)](#tls-1) TLS indicates the client should use a secure connection while talking to Vault. Supports the environment variables: - - `VAULT_CACERT` - - `VAULT_CAPATH` - - `VAULT_CLIENT_CERT` - - `VAULT_CLIENT_KEY` - - `VAULT_SKIP_VERIFY` - - `VAULT_TLS_SERVER_NAME` -- `token` - (string) Token is the Vault token to communicate with for requests. It may be a wrapped token or a real token. This can also be set via the `VAULT_TOKEN` environment variable, or via the `VaultAgentTokenFile`. -- `vault_agent_token_file` - (string) The path of the file that contains a Vault Agent token. If this is specified, CTS will not try to renew the Vault token. -- `transport` - [(transport block)](#transport) Transport configures the low-level network connection details. -- `unwrap_token` - (bool) Unwraps the provided Vault token as a wrapped token. - --> Note: Vault credentials are not accessible by tasks and the associated Terraform configurations, including automated Terraform modules. If the task requires Vault, you will need to separately configure the Vault provider and explicitly include it in the `task.providers` list. - -### Multiple Provider Configurations - -CTS supports the [Terraform feature to define multiple configurations](/terraform/language/providers/configuration#alias-multiple-provider-configurations) for the same provider by utilizing the `alias` meta-argument. Define multiple provider blocks with the same provider name and set the `alias` to a unique value across a given provider. 
Select which provider configuration to use for a task by specifying the configuration with the provider name and alias (`.`) within the list of providers in the [`task.provider`](#task) parameter. A task can use multiple providers, but only one provider instance of a provider is allowed per task. - -The example CTS configuration below defines two similar tasks executing the same module with different instances of the AWS provider. - -```hcl -terraform_provider "aws" { - alias = "a" - profile = "team-a" - task_env { - "AWS_ACCESS_KEY_ID" = "{{ env \"CTS_AWS_ACCESS_KEY_ID_A\" }}" - } -} - -terraform_provider "aws" { - alias = "b" - profile = "team-b" - task_env { - "AWS_ACCESS_KEY_ID" = "{{ env \"CTS_AWS_ACCESS_KEY_ID_B\" }}" - } -} - -terraform_provider "dns" { - // ... -} - -task { - name = "task-a" - module = "org/module" - providers = ["aws.a", "dns"] - // ... -} - -task { - name = "task-b" - module = "org/module" - providers = ["aws.b", "dns"] - // ... -} -``` diff --git a/website/content/docs/nia/enterprise/index.mdx b/website/content/docs/nia/enterprise/index.mdx deleted file mode 100644 index 94b42f6c902a..000000000000 --- a/website/content/docs/nia/enterprise/index.mdx +++ /dev/null @@ -1,30 +0,0 @@ ---- -layout: docs -page_title: Consul-Terraform-Sync Enterprise -description: >- - Consul-Terraform-Sync Enterprise ---- - -# Consul-Terraform-Sync Enterprise - -Consul-Terraform-Sync (CTS) Enterprise is available with [Consul Enterprise](https://www.hashicorp.com/products/consul) and requires a Consul [license](/consul/docs/nia/enterprise/license) to be applied. - -Enterprise features of CTS address organization complexities of collaboration, operations, scale, and governance. CTS Enterprise supports an official integration with [HCP Terraform](https://cloud.hashicorp.com/products/terraform) and [Terraform Enterprise](/terraform/enterprise), the self-hosted distribution, to extend insight into dynamic updates of your network infrastructure. 
- -| Features | Community Edition | Enterprise | -|----------|-------------|------------| -| Consul Namespace | Default namespace only | Filter task triggers by any namespace | -| Automation Driver | Terraform Community Edition | Terraform Community Edition, HCP Terraform, or Terraform Enterprise | -| Terraform Workspaces | Local | Local workspaces with the Terraform driver or [remote workspaces](/terraform/cloud-docs/workspaces) with the HCP Terraform driver | -| Terraform Backend Options | [azurerm](/terraform/language/settings/backends/azurerm), [consul](/terraform/language/settings/backends/consul), [cos](/terraform/language/settings/backends/cos), [gcs](/terraform/language/settings/backends/gcs), [kubernetes](/terraform/language/settings/backends/kubernetes), [local](/terraform/language/settings/backends/local), [manta](/terraform/language/v1.2.x/settings/backends/manta), [pg](/terraform/language/settings/backends/pg), and [s3](/terraform/language/settings/backends/s3) with the Terraform driver | The supported backends for CTS with the Terraform driver or HCP Terraform with the HCP Terraform driver | -| Terraform Version | One Terraform version for all tasks | Optional Terraform version per task when using the HCP Terraform driver | -| Terraform Run Output | CTS logs | CTS logs or Terraform output organized by HCP Terraform remote workspaces | -| Credentials and secrets | On disk as `.tfvars` files or in shell environment | Secured variables stored in remote workspace | -| Audit | | Terraform audit logs ([HCP Terraform](/terraform/cloud-docs/api-docs/audit-trails) or [Terraform Enterprise](/terraform/enterprise/admin/infrastructure/logging)) | -| Collaboration | | Run [history](/terraform/cloud-docs/run/manage), [triggers](/terraform/cloud-docs/workspaces/settings/run-triggers), and [notifications](/terraform/cloud-docs/workspaces/settings/notifications) supported on HCP Terraform | -| Governance | | [Sentinel](/terraform/cloud-docs/policy-enforcement) to enforce governance policies as code | - -The [HCP Terraform driver](/consul/docs/nia/configuration#terraform-cloud-driver) enables CTS Enterprise to integrate with HCP Terraform or Terraform Enterprise. The [HCP Terraform driver](/consul/docs/nia/network-drivers/terraform-cloud) page provides an overview of how the integration works within CTS. - -## Consul Admin Partition Support -CTS subscribes to a Consul agent. Depending on the admin partition the Consul agent is a part of and the services within the admin partition, CTS will be able to subscribe to those services and support the automation workflow. As such, admin partitions are not relevant to the CTS workflow. We recommend deploying a single CTS instance that subscribes to services/KV within a single partition and using a different CTS instance (or instances) to subscribe to services/KV in another partition. diff --git a/website/content/docs/nia/enterprise/license.mdx b/website/content/docs/nia/enterprise/license.mdx deleted file mode 100644 index 338e68134406..000000000000 --- a/website/content/docs/nia/enterprise/license.mdx +++ /dev/null @@ -1,83 +0,0 @@ ---- -layout: docs -page_title: Consul-Terraform-Sync Enterprise License -description: >- - Consul-Terraform-Sync Enterprise License ---- - -# Consul-Terraform-Sync Enterprise License - - - Licenses are only required for Consul-Terraform-Sync (CTS) Enterprise - - -CTS Enterprise binaries require a [Consul Enterprise license](/consul/docs/enterprise/license/overview) to run. There is no CTS Enterprise specific license. 
As a result, CTS Enterprise's licensing is very similar to Consul Enterprise. - -All CTS Enterprise features are available with a valid Consul Enterprise license, regardless of your Consul Enterprise packaging or pricing model. - -To get a trial license for CTS, you can sign-up for the [trial license for Consul Enterprise](/consul/docs/enterprise/license/faq#q-where-can-users-get-a-trial-license-for-consul-enterprise). - -## Automatic License Retrieval -CTS automatically retrieves a license from Consul on startup and then attempts to retrieve a new license once a day. If the current license is reaching its expiration date, CTS attempts to retrieve a license with increased frequency, as defined by the [License Expiration Date Handling](/consul/docs/nia/enterprise/license#license-expiration-handling). - -~> Enabling automatic license retrieval is recommended when using HCP Consul Dedicated, as HCP Consul Dedicated licenses expire more frequently than Consul Enterprise licenses. Without auto-retrieval enabled, you have to restart CTS every time you load a new license. - -## Setting the License Manually - -If a license needs to be manually set, choose one of the following methods (in order of precedence) to set the license: - -1. Set the `CONSUL_LICENSE` environment variable to the license string. - - ```shell-session - export CONSUL_LICENSE= - ``` - -1. Set the `CONSUL_LICENSE_PATH` environment variable to the path of the file containing the license. - - ```shell-session - export CONSUL_LICENSE_PATH=// - ``` - -1. To point to the file containing the license, in the configuration file, configure the [`license`](/consul/docs/nia/configuration#license) path option. - - ```hcl - license { - path = "//" - } - ``` - -1. To point to the file containing the license, in the configuration file, configure the [`license_path`](/consul/docs/nia/configuration#license_path) option i. **Deprecated in CTS 0.6.0 and will be removed in a future release. Use [license block](/consul/docs/nia/configuration#license) instead.** - - ```hcl - license_path = "//" - ``` - -~> **Note**: the [options to set the license and the order of precedence](/consul/docs/enterprise/license/overview#binaries-without-built-in-licenses) are the same as Consul Enterprise server agents. -Visit the [Enterprise License Tutorial](/nomad/tutorials/enterprise/hashicorp-enterprise-license?utm_source=docs) for detailed steps on how to install the license key. - -### Updating the License Manually -To update the license when it expires or is near the expiration date and automatic license retrieval is disabled: - -1. Update the license environment variable or configuration with the new license value or path to the new license file -1. Stop and restart CTS Enterprise - -Once CTS Enterprise starts again, it will pick up the new license and run the tasks with any changes that may have occurred between the stop and restart period. - -## License Expiration Handling - -Licenses have an expiration date and a termination date. The termination date is a time at or after the license expires. CTS Enterprise will cease to function once the termination date has passed. - -The time between the expiration and termination dates is a grace period. Grace periods are generally 24-hours, but you should refer to your license agreement for complete terms of your grace period. - -When approaching expiration and termination, by default, CTS Enterprise will attempt to retrieve a new license. 
If auto-retrieval is disabled, CTS Enterprise will provide notifications in the system logs: - -| Time period | Behavior - auto-retrieval enabled (default) |Behavior - auto-retrieval disabled | -| ------------------------------------------- |-------------------------------------------- |---------------------------------- | -| 30 days before expiration | License retrieval attempt every 24-hours | Warning-level log every 24-hours | -| 7 days before expiration | License retrieval attempt every 1 hour | Warning-level log every 1 hour | -| 1 day before expiration | License retrieval attempt every 5 minutes | Warning-level log every 5 minutes | -| 1 hour before expiration | License retrieval attempt every 1 minute | Warning-level log every 1 minute | -| At or after expiration (before termination) | License retrieval attempt every 1 minute | Error-level log every 1 minute | -| At or after termination | Error-level log and exit | Error-level log and exit | - -~> **Note**: Notification frequency and [grace period](/consul/docs/enterprise/license/faq#q-is-there-a-grace-period-when-licenses-expire) behavior is the same as Consul Enterprise. diff --git a/website/content/docs/nia/index.mdx b/website/content/docs/nia/index.mdx deleted file mode 100644 index 453a2beb83e7..000000000000 --- a/website/content/docs/nia/index.mdx +++ /dev/null @@ -1,76 +0,0 @@ ---- -layout: docs -page_title: Network Infrastructure Automation -description: >- - Network Infrastructure Automation (NIA) is the concept of dynamically updating infrastructure devices triggered by service changes. Consul-Terraform-Sync is a tool that performs NIA and utilizes Consul as a data source that contains networking information about services and monitors those services. Terraform is used as the underlying automation tool and leverages the Terraform provider ecosystem to drive relevant changes to the network infrastructure. ---- - -# Network Infrastructure Automation - -Network Infrastructure Automation (NIA) enables dynamic updates to network infrastructure devices triggered by service changes. Consul-Terraform-Sync (CTS) utilizes Consul as a data source that contains networking information about services and monitors those services. Terraform is used as the underlying automation tool and leverages the Terraform provider ecosystem to drive relevant changes to the network infrastructure. - -CTS executes one or more automation tasks with the most recent service variable values from the Consul service catalog. Each task consists of a runbook automation written as a CTS compatible Terraform module using resources and data sources for the underlying network infrastructure. The `consul-terraform-sync` daemon runs on the same node as a Consul agent. - -CTS is available as an open source and enterprise distribution. Follow the [Automate your network configuration with Consul-Terraform-Sync tutorial](/consul/tutorials/network-infrastructure-automation/consul-terraform-sync-intro?utm_source=docs) to get started with CTS OSS or read more about [CTS Enterprise](/consul/docs/nia/enterprise). - -## Use Cases - -**Application teams must wait for manual changes in the network to release, scale up/down and re-deploy their applications.** This creates a bottleneck, especially in frequent workflows related to scaling up/down the application, breaking the DevOps goal of self-service enablement. 
CTS automates this process, thus decreasing the possibility of human error in manually editing configuration files, as well as decreasing the overall time taken to push out configuration changes. - -**Networking and security teams cannot scale processes to the speed and changes needed.** Manual approaches don't scale well, causing backlogs in network and security teams. Even in organizations that have some amount of automation (such as scripting), there is a need for an accurate, real-time source of data to trigger and drive their network automation workflows. CTS runs in near real-time to keep up with the rate of change. - -## Glossary - -- `Condition` - A task-level defined environmental requirement. When a task's condition is met, CTS executes that task to update network infrastructure. Depending on the condition type, the condition definition may also define and enable the module input that the task provides to the configured Terraform Module. - -- `Consul objects` - Consul objects are the response objects returned from the Consul API that CTS monitors for changes. Examples of Consul objects include service instance information, Consul key-value pairs, and service registration. The Consul objects are used to inform a task's condition and/or module input. - -- `Consul-Terraform-Sync (CTS)` - [GitHub repo](https://github.com/hashicorp/consul-terraform-sync) and binary/CLI name for the project that is used to perform Network Infrastructure Automation. - -- `Dynamic Tasks` - A dynamic task is a type of task that is dynamically triggered on a change to any relevant Consul catalog values, e.g. service instances, Consul KV, catalog-services. See scheduled tasks for a type of non-dynamic task. - - -> **Note:** The terminology "tasks" used throughout the documentation refers to all types of tasks except when specifically stated otherwise. - -- `Network Drivers` - CTS uses [network drivers](/consul/docs/nia/network-drivers) to execute and update network infrastructure. Drivers transform Consul service-level information into downstream changes by processing and abstracting API and resource details tied to specific network infrastructure. - -- `Network Infrastructure Automation (NIA)` - Enables dynamic updates to network infrastructure devices triggered when specific conditions, such as service changes and registration, are met. - -- `Scheduled Tasks` - A scheduled task is a type of task that is triggered only on a schedule. It is configured with a [schedule condition](/consul/docs/nia/configuration#schedule-condition). - -- `Services` - A service in CTS represents a service that is registered with Consul for service discovery. Services are grouped by their service names. There may be more than one instance of a particular service, each with its own unique ID. CTS monitors services based on service names and can provide service instance details to a Terraform module for network automation. - -- `Module Input` - A module input defines objects that provide values or metadata to the Terraform module. See [module input](/consul/docs/nia/terraform-modules#module-input) for the supported metadata and values. For example, a user can configure a Consul KV module input to provide KV pairs as variables to their respective Terraform Module. 
- - The module input can be configured in a couple of ways: - - Setting the `condition` block's `use_as_module_input` field to true - - Field was previously named `source_includes_var` (deprecated) - - Configuring `module_input` block(s) - - Block was previously named `source_input` (deprecated) - - ~> "Module input" was renamed from "source input" in CTS 0.5.0 due to updates to the configuration names seen above. - - -> **Note:** The terminology "tasks" used throughout the documentation refers to all types of tasks except when specifically stated otherwise. - -- `Tasks` - A task is the translation of dynamic service information from the Consul Catalog into network infrastructure changes downstream. - -- `HCP Terraform` - Per the [Terraform documentation](/terraform/cloud-docs), "HCP Terraform" describes both HCP Terraform and Terraform Enterprise, which are different distributions of the same application. Documentation will apply to both distributions unless specifically stated otherwise. - -- `Terraform Module` - A [Terraform module](/terraform/language/modules) is a container for multiple Terraform resources that are used together. - -- `Terraform Provider` - A [Terraform provider](/terraform/language/providers) is responsible for understanding API interactions and exposing resources for an infrastructure type. - -## Getting Started With Network Infrastructure Automation - -The [Network Infrastructure Automation (NIA)](/consul/tutorials/network-infrastructure-automation?utm_source=docs) -collection contains examples of how to configure CTS to -perform Network Infrastructure Automation. The collection also contains a -tutorial to secure your CTS configuration for a production -environment and one to help you build your own CTS-compatible -module. - -## Community - -- [Contribute](https://github.com/hashicorp/consul-terraform-sync) to the open source project -- [Report](https://github.com/hashicorp/consul-terraform-sync/issues) bugs or request enhancements -- [Discuss](https://discuss.hashicorp.com/tags/c/consul/29/consul-terraform-sync) with the community or ask questions -- [Build integrations](/consul/docs/nia/terraform-modules) for CTS diff --git a/website/content/docs/nia/installation/configure.mdx b/website/content/docs/nia/installation/configure.mdx deleted file mode 100644 index 1f9d5265fc43..000000000000 --- a/website/content/docs/nia/installation/configure.mdx +++ /dev/null @@ -1,107 +0,0 @@ ---- -layout: docs -page_title: Configure Consul-Terraform-Sync -description: >- - A high-level guide to configuring Consul-Terraform-Sync. ---- - -# Configure Consul-Terraform-Sync - -This page covers the main components for configuring Network Infrastructure Automation with Consul at a high level. For the full list of configuration options, visit the [Consul-Terraform-Sync (CTS) configuration page](/consul/docs/nia/configuration). - -## Tasks - -A task captures a network automation process by defining which network resources to update on a given condition. Configure CTS with one or more tasks that contain a list of Consul services, a Terraform module, and various Terraform providers. - -Within the [`task` block](/consul/docs/nia/configuration#task), the list of services for a task represents the service layer that drives network automation. The `module` is the discovery location of the Terraform module that defines the network automation process for the task. 
The `condition`, not shown below, defaults to the services condition when unconfigured such that network resources are updated on changes to the list of services over time. - -Review the Terraform module to be used for network automation and identify the Terraform providers required by the module. If the module depends on a set of providers, include the list of provider names in the `providers` field to associate the corresponding provider configuration with the task. These providers will need to be configured later in a separate block. - -```hcl -task { - name = "website-x" - description = "automate services for website-x" - module = "namespace/example/module" - version = "1.0.0" - providers = ["myprovider"] - condition "services" { - names = ["web", "api"] - } -} -``` - -## Terraform Providers - -Configuring Terraform providers within CTS requires 2 config components. The first component is required within the [`driver.terraform` block](/consul/docs/nia/configuration#terraform-driver). All providers configured for CTS must be listed within the `required_providers` stanza to satisfy a [Terraform v0.13+ requirement](/terraform/language/providers/requirements#requiring-providers) for Terraform to discover and install them. The providers listed are later organized by CTS to be included in the appropriate Terraform configuration files for each task. - -```hcl -driver "terraform" { - required_providers { - myprovider = { - source = "namespace/myprovider" - version = "1.3.0" - } - } -} -``` - -The second component for configuring a provider is the [`terraform_provider` block](/consul/docs/nia/configuration#terraform-provider). This block resembles [provider blocks for Terraform configuration](/terraform/language/providers/configuration) and has the same responsibility for understanding API interactions and exposing resources for a specific infrastructure platform. - -Terraform modules configured for task automation may require configuring the referenced providers. For example, configuring the host address and authentication to interface with your network infrastructure. Refer to the Terraform provider documentation hosted on the [Terraform Registry](https://registry.terraform.io/browse/providers) to find available options. The `terraform_provider` block is loaded by CTS during runtime and processed to be included in [autogenerated Terraform configuration files](/consul/docs/nia/network-drivers#provider) used for task automation. Omitting the `terraform_provider` block for a provider will defer to the Terraform behavior assuming an empty default configuration. - -```hcl -terraform_provider "myprovider" { - address = "myprovider.example.com" -} -``` - -## Summary - -Piecing it all together, the configuration file for CTS will have several HCL blocks in addition to other options for configuring the CTS daemon: `task`, `driver.terraform`, and `terraform_provider` blocks. - -An example HCL configuration file is shown below to automate one task to execute a Terraform module on the condition when there are changes to two services. 
- - - -```hcl -log_level = "info" - -syslog { - enabled = true -} - -consul { - address = "consul.example.com" -} - -task { - name = "website-x" - description = "automate services for website-x" - module = "namespace/example/module" - version = "1.0.0" - providers = ["myprovider"] - condition "services" { - names = ["web", "api"] - } - buffer_period { - min = "10s" - } -} - -driver "terraform" { - log = true - - required_providers { - myprovider = { - source = "namespace/myprovider" - version = "1.3.0" - } - } -} - -terraform_provider "myprovider" { - address = "myprovider.example.com" -} -``` - - diff --git a/website/content/docs/nia/installation/install.mdx b/website/content/docs/nia/installation/install.mdx deleted file mode 100644 index a8027b443683..000000000000 --- a/website/content/docs/nia/installation/install.mdx +++ /dev/null @@ -1,124 +0,0 @@ ---- -layout: docs -page_title: Install Consul and Consul-Terraform-Sync -description: >- - Consul-Terraform-Sync is a daemon that runs alongside Consul. Consul-Terraform-Sync is not included with the Consul binary and will need to be installed separately. ---- - -# Install Consul-Terraform-Sync - -Refer to the [introduction](/consul/tutorials/network-infrastructure-automation/consul-terraform-sync-intro?utm_source=docs) tutorial for details about installing, configuring, and running Consul-Terraform-Sync (CTS) on your local machine with the Terraform driver. - -## Install Consul-Terraform-Sync - - - - -To install CTS, find the [appropriate package](https://releases.hashicorp.com/consul-terraform-sync/) for your system and download it as a zip archive. For the CTS Enterprise binary, download a zip archive with the `+ent` metadata. [CTS Enterprise requires a Consul Enterprise license](/consul/docs/nia/enterprise/license) to run. - -Unzip the package to extract the binary named `consul-terraform-sync`. Move the `consul-terraform-sync` binary to a location available on your `PATH`. - -Example: - -```shell-session -$ echo $PATH -/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin -$ mv ./consul-terraform-sync /usr/local/bin/consul-terraform-sync -``` - -Once installed, verify the installation works by prompting the `-version` or `-help` option. The version outputted for the CTS Enterprise binary includes the `+ent` metadata. - -```shell-session -$ consul-terraform-sync -version -``` - - - - -Install and run CTS as a [Docker container](https://hub.docker.com/r/hashicorp/consul-terraform-sync). - -For the CTS Enterprise, use the Docker image [`hashicorp/consul-terraform-sync-enterprise`](https://hub.docker.com/r/hashicorp/consul-terraform-sync-enterprise). - -```shell-session -$ docker pull hashicorp/consul-terraform-sync -``` - -Once installed, verify the installation works by prompting the `-version` or `-help` option. The version outputted for the CTS Enterprise image includes the `+ent` metadata. - -```shell-session -$ docker run --rm hashicorp/consul-terraform-sync -version -``` - - - - -The CTS OSS binary is available in the HashiCorp tap, which is a repository of all our Homebrew packages. - -```shell-session -$ brew tap hashicorp/tap -$ brew install hashicorp/tap/consul-terraform-sync -``` - -Run the following command to update to the latest version: - -```shell-session -$ brew upgrade hashicorp/tap/consul-terraform-sync -``` - -Once installed, verify the installation works by prompting the `-version` or `-help` option. 
- -```shell-session -$ consul-terraform-sync -version -``` - - - - -Clone the repository from GitHub [`hashicorp/consul-terraform-sync`](https://github.com/hashicorp/consul-terraform-sync) to build and install the CTS OSS binary in your path `$GOPATH/bin`. Building from source requires `git` and [Golang](https://go.dev/). - -```shell-session -$ git clone https://github.com/hashicorp/consul-terraform-sync.git -$ cd consul-terraform-sync -$ git checkout tags/ -$ go install -``` - -Once installed, verify the installation works by prompting the `-version` or `-help` option. - -```shell-session -$ consul-terraform-sync -version -``` - - - - -## Connect your Consul Cluster - -CTS connects with your Consul cluster in order to monitor the Consul catalog for service changes. These service changes lead to downstream updates to your network devices. You can configure your Consul cluster in CTS with the [Consul block](/consul/docs/nia/configuration#consul). Below is an example: - -```hcl -consul { - address = "localhost:8500" - token = "my-consul-acl-token" -} -``` - -## Connect your Network Device - -CTS interacts with your network device through a network driver. For the Terraform network driver, CTS uses Terraform providers to make changes to your network infrastructure resources. You can reference existing provider docs on the Terraform Registry to configure each provider or create a new Terraform provider. - -Once you have identified a Terraform provider for all of your network devices, you can configure them in CTS with a [`terraform_provider` block](/consul/docs/nia/configuration#terraform-provider) for each network device. Below is an example: - -```hcl -terraform_provider "fake-firewall" { - address = "10.10.10.10" - username = "admin" - password = "password123" -} -``` - -This provider is then used by task(s) to execute a Terraform module that will update the related network device. - -### Multiple Instances per Provider - -You might have multiple instances of the same type of network device; for example, multiple instances of a firewall or load balancer. You can configure each instance with its own provider block and distinguish it by the `alias` meta-argument. See [multiple provider configurations](/consul/docs/nia/configuration#multiple-provider-configurations) for more details and an example of the configuration. diff --git a/website/content/docs/nia/network-drivers/index.mdx b/website/content/docs/nia/network-drivers/index.mdx deleted file mode 100644 index dd89f448d9bd..000000000000 --- a/website/content/docs/nia/network-drivers/index.mdx +++ /dev/null @@ -1,33 +0,0 @@ ---- -layout: docs -page_title: Network Drivers -description: >- - Consul-Terraform-Sync Network Drivers with Terraform and HCP Terraform ---- - -# Network Drivers - -Consul-Terraform-Sync (CTS) uses network drivers to execute and update network infrastructure. Drivers transform Consul service-level information into downstream changes by processing and abstracting API and resource details tied to specific network infrastructure. - -CTS is a HashiCorp solution to Network Infrastructure Automation. It bridges Consul's networking features and Terraform infrastructure management capabilities. The solution seamlessly embeds Terraform as network drivers to manage automation of Terraform modules. This expands the Consul ecosystem and taps into the rich features and community of Terraform and Terraform providers. 
- -The following table highlights some of the additional features Terraform and HCP Terraform offer when used as a network driver for CTS. Visit the [Terraform product page](https://www.hashicorp.com/products/terraform) or [contact our sales team](https://www.hashicorp.com/contact-sales) for a comprehensive list of features. - -| Network Driver | Description | Features | -| -------------- | ----------- | -------- | -| [Terraform driver](/consul/docs/nia/network-drivers/terraform) | CTS automates a local installation of the [Terraform CLI](https://www.terraform.io/) | - Local Terraform execution
    - Local workspace directories
    - [Backend options](/consul/docs/nia/configuration#backend) available for state storage
    | -| [HCP Terraform driver](/consul/docs/nia/network-drivers/terraform-cloud) | CTS Enterprise automates remote workspaces on [HCP Terraform](/terraform/cloud-docs) | - [Remote Terraform execution](/terraform/cloud-docs/run/remote-operations)
    - Concurrent runs
    - [Secured variables](/terraform/cloud-docs/workspaces/variables)
    - [State versions](/terraform/cloud-docs/workspaces/state)
    - [Sentinel](/terraform/cloud-docs/policy-enforcement) to enforce governance policies as code
    - Audit [logs](/terraform/enterprise/admin/infrastructure/logging) and [trails](/terraform/cloud-docs/api-docs/audit-trails)
    - Run [history](/terraform/cloud-docs/run/manage), [triggers](/terraform/cloud-docs/workspaces/settings/run-triggers), and [notifications](/terraform/cloud-docs/workspaces/settings/notifications)
    - [Terraform Cloud Agents](/terraform/cloud-docs/agents) | - -## Understanding Terraform Automation - -CTS automates Terraform execution using a templated configuration to carry out infrastructure changes. The auto-generated configuration leverages input variables sourced from Consul and builds on top of reusable Terraform modules published and maintained by HashiCorp partners and the community. CTS can also run your custom built modules that suit your team's specific network automation needs. - -The network driver for CTS determines how the Terraform automation operates. Visit the driver pages to read more about the [Terraform driver](/consul/docs/nia/network-drivers/terraform) and the [HCP Terraform driver](/consul/docs/nia/network-drivers/terraform-cloud). - -### Upgrading Terraform - -Upgrading the Terraform version used by CTS may introduce breaking changes that can impact the Terraform modules. Refer to the Terraform [upgrade guides](/terraform/language/upgrade-guides) for details before upgrading. - -The following versions were identified as containing changes that may impact Terraform modules. - -- [Terraform v0.15](/terraform/language/v1.1.x/upgrade-guides/0-15) diff --git a/website/content/docs/nia/network-drivers/terraform.mdx b/website/content/docs/nia/network-drivers/terraform.mdx deleted file mode 100644 index 1796c1298fa0..000000000000 --- a/website/content/docs/nia/network-drivers/terraform.mdx +++ /dev/null @@ -1,61 +0,0 @@ ---- -layout: docs -page_title: Terraform Driver -description: >- - Consul-Terraform-Sync Network Drivers with Terraform ---- - -# Terraform Driver - -Consul-Terraform-Sync (CTS) extends the Consul ecosystem to include Terraform as an officially supported tooling project. With the Terraform driver, CTS installs the [Terraform CLI](/terraform/downloads) locally and runs Terraform commands based on monitored Consul changes. This page details how the Terraform driver operates using local workspaces and templated files. - -## Terraform CLI Automation - -On startup, CTS: -1. Downloads and installs Terraform -2. Prepares local workspace directories. Terraform configuration and execution for each task is organized as separate [Terraform workspaces](/terraform/language/state/workspaces). The state files for tasks are independent of each other. -3. Generates Terraform configuration files that make up the root module for each task. - -Once all workspaces are set up, CTS monitors the Consul catalog for service changes. When relevant changes are detected, the Terraform driver dynamically updates input variables for that task using a template to render them to a file named [`terraform.tfvars`](/consul/docs/nia/network-drivers#terraform-tfvars). This file is passed as a parameter to the Terraform CLI when executing `terraform plan` and `terraform apply` to update your network infrastructure with the latest Consul service details. - -### Local Workspaces - -Within the CTS configuration for a task, practitioners can select the desired module to run for the task as well as set the condition to execute the task. Each task executed by the Terraform driver corresponds to an automated root module that calls the selected module in an isolated Terraform environment. CTS will manage concurrent execution of these tasks. - -Autogenerated root modules for tasks are maintained in local subdirectories of the CTS working directory. Each subdirectory represents the local workspace for a task. By default, the working directory `sync-tasks` is created in the current directory. 
To configure where Terraform configuration files are stored, set [`working_dir`](/consul/docs/nia/configuration#working_dir) to the desired path or configure the [`task.working_dir`](/consul/docs/nia/configuration#working_dir-1) individually. - -~> **Note:** Although Terraform state files for task workspaces are independent, this does not guarantee the infrastructure changes from concurrent task executions are independent. Ensure that modules across all tasks are not modifying the same resource objects or have overlapping changes that may result in race conditions during automation. - -### Root Module - -The root module proxies Consul information, configuration, and other variables to the Terraform module for the task. The content of the files that make up the root module are sourced from CTS configuration, information for task's module to use as the automation playbook, and information from Consul such as service information. - -A working directory with one task named "cts-example" would have the folder structure below when running with the Terraform driver. - -```shell-session -$ tree sync-tasks/ - -sync-tasks/ -└── cts-example/ - ├── main.tf - ├── variables.tf - ├── terraform.tfvars - └── terraform.tfvars.tmpl -``` - -The following files of the root module are generated for each task. An [example of a root module created by CTS](https://github.com/hashicorp/consul-terraform-sync/tree/master/examples) can be found in the project repository. - -- `main.tf` - The main file contains the terraform block, provider blocks, and a module block calling the module configured for the task. - - `terraform` block - The corresponding provider source and versions for the task from the configuration files are placed into this block for the root module. The Terraform backend from the configuration is also templated here. - - `provider` blocks - The provider blocks generated in the root module resemble the `terraform_provider` blocks from the configuration for CTS. They have identical arguments present and are set from the intermediate variable created per provider. - - `module` block - The module block is where the task's module is called as a [child module](/terraform/language/modules). The child module contains the core logic for automation. Required and optional input variables are passed as arguments to the module. -- `variables.tf` - This file contains three types of variable declarations. - - `services` input variable (required) determines module compatibility with Consul-Terraform Sync (read more on [compatible Terraform modules](/consul/docs/nia/terraform-modules) for more details). - - Any additional [optional input variables](/consul/docs/nia/terraform-modules#optional-input-variables) provided by CTS that the module may use. - - Various intermediate variables used to configure providers. Intermediate provider variables are interpolated from the provider blocks and arguments configured in the CTS configuration. -- `variables.module.tf` - This file is created if there are [variables configured for the task](/consul/docs/nia/configuration#variable_files) and contains the interpolated variable declarations that match the variables from configuration. These are then used to proxy the configured variables to the module through explicit assignment in the module block. -- `providers.tfvars` - This file is created if there are [providers configured for the task](/consul/docs/nia/configuration#providers) and defined [`terraform_provider` blocks](/consul/docs/nia/configuration#terraform-provider). 
This file may contain sensitive information. To omit sensitive information from this file, you can [securely configure Terraform providers for CTS](/consul/docs/nia/configuration#securely-configure-terraform-providers) using environment variables or templating. -- `terraform.tfvars` - The variable definitions file is where the services input variable and any optional CTS input variables are assigned values from Consul. It is periodically updated, typically when the task condition is met, to reflect the current state of Consul. -- `terraform.tfvars.tmpl` - The template file is used by CTS to template information from Consul by using the HashiCorp configuration and templating library ([hashicorp/hcat](https://github.com/hashicorp/hcat)). - -~> **Note:** Generated template and Terraform configuration files are crucial for the automation of tasks. Any manual changes to the files may not be preserved and could be overwritten by a subsequent update. Unexpected manual changes to the format of the files may cause automation to error. diff --git a/website/content/docs/nia/tasks.mdx b/website/content/docs/nia/tasks.mdx deleted file mode 100644 index f3a0ab11f35d..000000000000 --- a/website/content/docs/nia/tasks.mdx +++ /dev/null @@ -1,272 +0,0 @@ ---- -layout: docs -page_title: Tasks -description: >- - Consul-Terraform-Sync Tasks ---- - -# Tasks - -A task is the translation of dynamic service information from the Consul Catalog into network infrastructure changes downstream. Consul-Terraform-Sync (CTS) carries out automation for executing tasks using network drivers. For a Terraform driver, the scope of a task is a Terraform module. - -Below is an example task configuration: - -```hcl -task { - name = "frontend-firewall-policies" - description = "Add firewall policy rules for frontend services" - providers = ["fake-firewall", "null"] - module = "example/firewall-policy/module" - version = "1.0.0" - condition "services" { - names = ["web", "image"] - } -} -``` - -In the example task above, the "fake-firewall" and "null" providers, listed in the `providers` field, are used. These providers themselves should be configured in their own separate [`terraform_provider` blocks](/consul/docs/nia/configuration#terraform-provider). These providers are used in the Terraform module "example/firewall-policy/module", configured in the `module` field, to create, update, and destroy resources. This module may do something like use the providers to create and destroy firewall policy objects based on IP addresses. The IP addresses come from the "web" and "image" service instances configured in the `condition "services"` block. This service-level information is retrieved by CTS which watches Consul catalog for changes. - -See [task configuration](/consul/docs/nia/configuration#task) for more details on how to configure a task. - -A task can be either enabled or disabled using the [task cli](/consul/docs/nia/cli/task). When enabled, tasks are executed and automated as described in sections below. However, disabled tasks do not execute when changes are detected from Consul catalog. Since disabled tasks do not execute, they also do not store [events](/consul/docs/nia/tasks#event) until re-enabled. 
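- -For reference, a minimal sketch of toggling a task with the task CLI, assuming a hypothetical task named `task_a` and a CTS instance reachable at its default API address: - -```shell-session -$ consul-terraform-sync task disable task_a -$ consul-terraform-sync task enable task_a -``` 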
- -## Task Execution - -An enabled task can be configured to monitor and execute on different types of conditions, such as changes to services ([services condition](/consul/docs/nia/tasks#services-condition)) or service registration and deregistration ([catalog-services condition](/consul/docs/nia/tasks#catalog-services-condition)). - -A task can also monitor, but not execute on, other variables that provide additional information to the task's module. For example, a task with a catalog-services condition may execute on registration changes and additionally monitor service instances for IP information. - -All configured monitored information, regardless if it's used for execution or not, can be passed to the task's module as module input. Below are details on the types of execution conditions that CTS supports and their module inputs. - -### Services Condition - -Tasks with the services condition monitor and execute on either changes to a list of configured services or changes to any services that match a given regex. - -There are two ways to configure a task with a services condition. Only one of the two options below can be configured for a single task: -1. Configure a task's [`services`](/consul/docs/nia/configuration#services) field (deprecated) to specify the list of services to trigger the task -1. Configure a task's `condition` block with the [services condition](/consul/docs/nia/configuration#services-condition) type to specify services to trigger the task. - -The services condition operates by monitoring the [Health List Nodes For Service API](/consul/api-docs/health#list-nodes-for-service) and executing the task on any change of information for services configured. These changes include one or more changes to service values, like IP address, added or removed service instance, or tags. A complete list of values that would cause a task to run are expanded below: - -| Attribute | Description | -| ----------------------- | ------------------------------------------------------------------------------------------------- | -| `id` | A unique Consul ID for this service. This is unique per Consul agent. | -| `name` | The logical name of the service. Many service instances may share the same logical service name. | -| `address` | IP address of the service host -- if empty, node address should be used. | -| `port` | Port number of the service | -| `meta` | List of user-defined metadata key/value pairs for the service | -| `tags` | List of tags for the service | -| `namespace` | Consul Enterprise namespace of the service instance | -| `status` | Representative status for the service instance based on an aggregate of the list of health checks | -| `node` | Name of the Consul node on which the service is registered | -| `node_id` | ID of the node on which the service is registered. | -| `node_address` | The IP address of the Consul node on which the service is registered. | -| `node_datacenter` | Data center of the Consul node on which the service is registered. | -| `node_tagged_addresses` | List of explicit LAN and WAN IP addresses for the agent | -| `node_meta` | List of user-defined metadata key/value pairs for the node | - -Below is an example configuration for a task that will execute when a service with a name that matches the regular expression has a change. 
- -```hcl -task { - name = "services_condition_task" - description = "execute on changes to services whose name starts with web" - providers = ["my-provider"] - module = "path/to/services-condition-module" - - condition "services" { - regexp = "^web.*" - use_as_module_input = false - } -} -``` - -The services condition can provide input for the [`services` input variable](/consul/docs/nia/terraform-modules#services-variable) that is required for each CTS module. This can be provided depending on how the services condition is configured: -- task's `services` field (deprecated): services object is automatically passed as module input -- task's `condition "services"` block: users can configure the `use_as_module_input` field to optionally use the condition's services object as module input - - Field was previously named `source_includes_var` (deprecated) - -### Catalog-Services Condition - -Tasks with a catalog-services condition monitor and execute on service registration changes for services that satisfy the condition configuration. 'Service registration changes' specifically refers to service registration and deregistration where service registration occurs on the first service instance registration, and service deregistration occurs on the last service instance registration. Tasks with a catalog-services condition may, depending on the module, additionally monitor but not execute on service instance information. - -The catalog-services condition operates by monitoring the [Catalog List Services API](/consul/api-docs/catalog#list-services) and executing the task when services are added or removed in the list of registered services. Note, the task does not execute on changes to the tags of the list of services. This is similar to how changes to service instance information, mentioned above, also does not execute a task. - -Below is an example configuration for a task that will execute when a service with a name that matches the "web.*" regular expression in datacenter "dc1" has a registration change. It additionally monitors but does not execute on service instance changes to "web-api" in datacenter "dc2". - -```hcl -task { - name = "catalog_service_condition_task" - module = "path/to/catalog-services-module" - providers = ["my-provider"] - - condition "catalog-services" { - datacenter = "dc1" - regexp = "web.*" - use_as_module_input = false - } - - module_input "services" { - names = ["web-api"] - datacenter = "dc2" - } -} -``` - -Using the condition block's `use_as_module_input` field, users can configure CTS to use the condition's object as module input for the [`catalog_services` input variable](/consul/docs/nia/terraform-modules#catalog-services-variable). Users can refer to the configured module's documentation on how to set `use_as_module_input`. - -See the [Catalog-Services Condition](/consul/docs/nia/configuration#catalog-services-condition) configuration section for further details and additional configuration options. - -### Consul KV Condition - -Tasks with a consul-kv condition monitor and execute on Consul KV changes for KV pairs that satisfy the condition configuration. The consul-kv condition operates by monitoring the [Consul KV API](/consul/api-docs/kv#read-key) and executing the task when a configured KV entry is created, deleted, or updated. - -Based on the `recurse` option, the condition either monitors a single Consul KV pair for a given path or monitors all pairs that are prefixed by that path. 
In the example below, because `recurse` is set to true, the `path` option is treated as a prefix. Changes to an entry with the key `my-key` and an entry with the key `my-key/another-key` would both trigger the task. If `recurse` were set to false, then only changes to `my-key` would trigger the task. - -```hcl -task { - name = "consul_kv_condition_task" - description = "execute on changes to Consul KV entry" - module = "path/to/consul-kv-module" - providers = ["my-provider"] - - condition "consul-kv" { - path = "my-key" - recurse = true - datacenter = "dc1" - namespace = "default" - use_as_module_input = true - } -} -``` - -Using the condition block's `use_as_module_input` field, users can configure CTS to use the condition's object as module input for the [`consul_kv` input variable](/consul/docs/nia/terraform-modules#consul-kv-variable). Users can refer to the configured module's documentation on how to set `use_as_module_input`. - -See the [Consul-KV Condition](/consul/docs/nia/configuration#consul-kv-condition) configuration section for more details and additional configuration options. - -### Schedule Condition - -All scheduled tasks must be configured with a schedule condition. The schedule condition sets the cadence to trigger a task with a [`cron`](/consul/docs/nia/configuration#cron) configuration. The schedule condition block does not support parameters to configure module input. As a result, inputs must be configured separately. You can configure [`module_input` blocks](/consul/docs/nia/configuration#module_input) to define the module inputs. - -Below is an example configuration for a task that will execute every Monday, which is set by the schedule condition's [`cron`](/consul/docs/nia/configuration#cron) configuration. The module input is defined by the `module_input` block. When the task is triggered on Monday, it will retrieve the latest information on "web" and "db" from Consul and provide this to the module's input variables. - -```hcl -task { - name = "scheduled_task" - description = "execute every Monday using service information from web and db" - module = "path/to/module" - - condition "schedule" { - cron = "* * * * Mon" - } - module_input "services" { - names = ["web", "db"] - } -} -``` - -Below are the available options for module input types and how to configure them: - -- [Services module input](/consul/docs/nia/terraform-modules/#services-module-input): - - [`task.services`](/consul/docs/nia/configuration#services) field (deprecated) - - [`module_input "services"`](/consul/docs/nia/configuration#services-configure-input) block - - Block was previously named `source_input "services"` (deprecated) -- [Consul KV module input](/consul/docs/nia/terraform-modules/#consul-kv-module-input): - - [`module_input "consul-kv"`](/consul/docs/nia/configuration#consul-kv-module-input) - - Block was previously named `source_input "consul-kv"` (deprecated) - -#### Running Behavior - -Scheduled tasks generally run on schedule, but they can be triggered on demand when running CTS in the following ways: - -- [Long-running mode](/consul/docs/nia/cli#long-running-mode): At the beginning of the long-running mode, CTS first passes through a once-mode phase in which all tasks are executed once. Scheduled tasks will trigger once during this once-mode phase. This behavior also applies to tasks that are not scheduled. After once-mode has completed, scheduled tasks subsequently trigger on schedule. 
- -- [Inspect mode](/consul/docs/nia/cli#inspect-mode): When running in inspect mode, the terminal will output a plan of proposed updates that would be made if the tasks were to trigger at that moment and then exit. No changes are applied in this mode. The outputted plan for a scheduled task is also the proposed updates that would be made if the task was triggered at that moment, even if off-schedule. - -- [Once mode](/consul/docs/nia/cli#once-mode): During the once mode, all tasks are only triggered one time. Scheduled tasks will execute during once mode even if not on the schedule. - -- [Enable CLI](/consul/docs/nia/cli/task#task-enable): When a task is enabled through the CLI, any type of task, including scheduled tasks, will be triggered at that time. - -#### Buffer Period - -Because scheduled tasks trigger on a configured cadence, buffer periods are disabled for scheduled tasks. Any configured `buffer_period` at the global level or task level will only apply to dynamic tasks and not scheduled ones. - -#### Events - -[Events](#event) are stored each time a task executes. For scheduled tasks, an event will be stored each time the task triggers on schedule regardless of if there was a change in Consul catalog. - -## Task Automation - -CTS will attempt to execute each enabled task once upon startup to synchronize infrastructure with the current state of Consul. The daemon will stop and exit if any error occurs while preparing the automation environment or executing a task for the first time. This helps ensure tasks have proper configuration and are executable before the daemon transitions into running tasks in full automation as service changes are discovered over time. As a result, it is not recommended to configure a task as disabled from the start. After all tasks have successfully executed once, task failures during automation will be logged and retried or attempted again after a subsequent change. - -Tasks are executed near-real time when service changes are detected. For services or environments that are prone to flapping, it may be useful to configure a [buffer period](/consul/docs/nia/configuration#buffer_period-1) for a task to accumulate changes before it is executed. The buffer period would reduce the number of consecutive network calls to infrastructure by batching changes for a task over a short duration of time. - -## Status Information - -Status-related information is collected and offered via [status API](/consul/docs/nia/api#status) to provide visibility into what and how the tasks are running. Information is offered in three-levels (lowest to highest): - -- Event data -- Task status -- Overall status - -These three levels form a hierarchy where each level of data informs the one higher. The lowest-level, event data, is collected each time a task runs to update network infrastructure. This event data is then aggregated to inform individual task statuses. The count distribution of all the task statuses inform the overall status's task summary. - -### Event - -When a task is triggered, CTS takes a series of steps in order to update the network infrastructure. These steps consist of fetching the latest data from Consul for the task's module inputs and then updating the network infrastructure accordingly. An event captures information across this process. It stores information to help understand if the update to network infrastructure was successful or not and any errors that may have occurred. - -A dynamic task will store an event when it is triggered by a change in Consul. 
A scheduled task will store an event when it is triggered on schedule, regardless of whether there is a change in Consul. A disabled task does not update network infrastructure, so it will not store events until re-enabled. - -Sample event: - -```json -{ - "id": "ef202675-502f-431f-b133-ed64d15b0e0e", - "success": false, - "start_time": "2020-11-24T12:05:18.651231-05:00", - "end_time": "2020-11-24T12:05:20.900115-05:00", - "task_name": "task_b", - "error": { - "message": "example error: error while doing terraform-apply" - }, - ... -} -``` - -For complete information on the event structure, see [events in our API documentation](/consul/docs/nia/api#event). Event information can be retrieved by using the [`include=events` parameter](/consul/docs/nia/api#include) with the [task status API](/consul/docs/nia/api#task-status). - -### Task Status - -Each time a task runs to update network infrastructure, event data is stored for that run. The five most recent events are stored for each task, and these stored events are used to determine task status. For example, if the most recent stored event is not successful but the others are, then the task's health status is "errored". - -Sample task status: - -```json -{ - "task_name": "task_b", - "status": "errored", - "providers": ["null"], - "services": ["web"], - "events_url": "/v1/status/tasks/task_b?include=events" -} -``` - -Task status information can be retrieved with the [task status API](/consul/docs/nia/api#task-status). The API documentation includes details on which health statuses are available and how they are calculated based on events' success/failure information. - -### Overall Status - -Overall status returns a summary of the health statuses across all tasks. The summary is the count of tasks in each health status category. - -Sample overall status: - -```json -{ - "task_summary": { - "successful": 28, - "errored": 5, - "critical": 1 - } -} -``` - -Overall status information can be retrieved with the [overall status API](/consul/docs/nia/api#overall-status). The API documentation includes details on which health statuses are available and how they are calculated based on task statuses' health status information. diff --git a/website/content/docs/nia/terraform-modules.mdx b/website/content/docs/nia/terraform-modules.mdx deleted file mode 100644 index 8e041de79b73..000000000000 --- a/website/content/docs/nia/terraform-modules.mdx +++ /dev/null @@ -1,384 +0,0 @@ ---- -layout: docs -page_title: Compatible Terraform Modules for NIA -description: >- - Consul-Terraform-Sync automates execution of Terraform modules for network infrastructure automation. ---- - -# Compatible Terraform Modules for Network Infrastructure Automation - -Consul-Terraform-Sync (CTS) automates execution of Terraform modules through tasks. A task is a construct in CTS that defines the automation of Terraform and the module. - -## Module Specifications - -Compatible modules for CTS follow the [standard module](/terraform/language/modules/develop#module-structure) structure. Modules can use syntax supported by Terraform version 0.13 and newer. - -### Compatibility Requirements - -Below are the two required elements for module compatibility with CTS: - -1. **Root module** - Terraform requires files in the root directory of the repository to function as the primary entrypoint for the module. It should encapsulate the core logic to be used by CTS for task automation. `main.tf` is the recommended filename for the main file where resources are created. -2. 
[**`services` input variable**](#services-variable) - CTS requires all modules to have the following input variable declared within the root module. The declaration of the `services` variable can be included at the top of the suggested `variables.tf` file where other input variables are commonly declared. This variable functions as the response object from the Consul catalog API and surfaces network information to be consumed by the module. It is structured as a map of objects. - -### Optional Input Variables - -In addition to the required `services` input variable, CTS provides additional, optional input variables to be used within your module. Support for an optional input variable requires two changes: - -1. Updating the Terraform Module to declare the input variable in the suggested `variables.tf` -1. Adding configuration to the CTS task block to define the module input values that should be provided to the input variables - -See below sections for more information on [defining module input](#module-input) and [declaring optional input variables](#how-to-create-a-compatible-terraform-module) in your Terraform module. - -### Module Input ((#source-input)) - -A task monitors [Consul objects](/consul/docs/nia#consul-objects) that are defined by the task's configuration. The Consul objects can be used for the module input that satisfies the requirements defined by the task's Terraform module's [input variables](/terraform/language/values/variables). - -A task's module input is slightly different from the task's condition, even though both monitor defined objects. The task's condition monitors defined objects with a configured criteria. When this criteria is satisfied, the task will trigger. - -The module input, however, monitors defined objects with the intent of providing values or metadata about these objects to the Terraform module. The monitored module input and condition objects can be the same object, such as a task configured with a `condition "services"` block and `use_as_module_input` set to `true`. The module input and condition can also be different objects and configured separately, such as a task configured with a `condition "catalog-services` and `module_input "consul-kv"` block. As a result, the monitored module input is decoupled from the provided condition in order to satisfy the Terraform module. - -Each type of object that CTS monitors can only be defined through one configuration within a task definition. For example, if a task monitors services, the task cannot have both `condition "services"` and `module_input "services"` configured. See [Task Module Input configuration](/consul/docs/nia/configuration#task-module-input) for more details. - -There are a few ways that a module input can be defined: - -- [**`services` list**](/consul/docs/nia/configuration#services) (deprecated) - The list of services to use as module input. -- **`condition` block's `use_as_module_input` field** - When set to true, the condition's objects are used as module input. - - Field was previously named `source_includes_var` (deprecated) -- [**`module_input` blocks**](/consul/docs/nia/configuration#module-input) - This block can be configured multiple times to define objects to use as module input. - - Block was previously named `source_input` (deprecated) - -Multiple ways of defining a module input adds configuration flexibility, and allows for optional additional input variables to be supported by CTS alongside the `services` input variable. 
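- -To illustrate this decoupling, below is a sketch of a task that pairs a catalog-services condition with a separate Consul KV module input; the task name, module path, provider name, and key path are placeholders: - -```hcl -task { - name = "catalog_services_with_kv_input" - description = "trigger on registration changes and also provide Consul KV pairs to the module" - providers = ["my-provider"] - module = "path/to/module" - - # Trigger on service registration and deregistration - condition "catalog-services" { - regexp = "web.*" - use_as_module_input = true - } - - # Additionally provide Consul KV pairs to the module's input variables - module_input "consul-kv" { - path = "my-key" - } -} -``` 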
- -Additional optional input variable types: - -- [**`catalog_services` variable**](#catalog-services-variable) -- [**`consul_kv` variable**](#consul-kv-variable) - -#### Services Module Input ((#services-source-input)) - -Tasks configured with a services module input monitor for changes to services. Monitoring is either performed on a configured list of services or on any services matching a provided regex. - -Sample rendered services input: - - - -```hcl -services = { - "web.test-server.dc1" = { - id = "web" - name = "web" - kind = "" - address = "127.0.0.1" - port = 80 - meta = {} - tags = ["example"] - namespace = "" - status = "passing" - node = "pm8902" - node_id = "307625d3-a1cf-9e85-ff81-12017ca4d848" - node_address = "127.0.0.1" - node_datacenter = "dc1" - node_tagged_addresses = { - lan = "127.0.0.1" - lan_ipv4 = "127.0.0.1" - wan = "127.0.0.1" - wan_ipv4 = "127.0.0.1" - } - node_meta = { - consul-network-segment = "" - } - }, -} -``` - - - -In order to configure a task with the services module input, the list of services that will be used for the input must be configured in one of the following ways: - -- the task's [`services`](/consul/docs/nia/configuration#services) (deprecated) -- a [`condition "services"` block](/consul/docs/nia/configuration#services-condition) configured with `use_as_module_input` field set to true - - Field was previously named `source_includes_var` (deprecated) -- a [`module_input "services"` block](/consul/docs/nia/configuration#services-module-input) - - Block was previously named `source_input "services"` (deprecated) - -The services module input operates by monitoring the [Health List Nodes For Service API](/consul/api-docs/health#list-nodes-for-service) and provides the latest service information to the input variables. A complete list of service information that would be provided to the module is expanded below: - -| Attribute | Description | -| ----------------------- | ------------------------------------------------------------------------------------------------- | -| `id` | A unique Consul ID for this service. The service id is unique per Consul agent. | -| `name` | The logical name of the service. Many service instances may share the same logical service name. | -| `address` | IP address of the service host -- if empty, node address should be used. | -| `port` | Port number of the service | -| `meta` | List of user-defined metadata key/value pairs for the service | -| `tags` | List of tags for the service | -| `namespace` | Consul Enterprise namespace of the service instance | -| `status` | Representative status for the service instance based on an aggregate of the list of health checks | -| `node` | Name of the Consul node on which the service is registered | -| `node_id` | ID of the node on which the service is registered. | -| `node_address` | The IP address of the Consul node on which the service is registered. | -| `node_datacenter` | Data center of the Consul node on which the service is registered. | -| `node_tagged_addresses` | List of explicit LAN and WAN IP addresses for the agent | -| `node_meta` | List of user-defined metadata key/value pairs for the node | - -Below is an example configuration for a task that will execute on a schedule and provide information about the services matching the `regexp` parameter to the task's module. 
- -```hcl -task { - name = "services_condition_task" - description = "execute on changes to services whose name starts with web" - providers = ["my-provider"] - module = "path/to/services-condition-module" - condition "schedule" { - cron = "* * * * Mon" - } - module_input "services" { - regexp = "^web.*" - } -} -``` - -#### Consul KV Module Input ((#consul-kv-source-input)) - -Tasks configured with a Consul KV module input monitor Consul KV for changes to KV pairs that satisfy the provided configuration. The Consul KV module input operates by monitoring the [Consul KV API](/consul/api-docs/kv#read-key) and provides these key values to the task's module. - -Sample rendered consul KV input: - - - -```hcl -consul_kv = { - "my-key" = "some value" -} -``` - - - -To configure a task with the Consul KV module input, the KVs which will be used for the input must be configured in one of the following ways: - -- a [`condition "consul-kv"` block](/consul/docs/nia/configuration#consul-kv-condition) configured with the `use_as_module_input` field set to true. - - Field was previously named `source_includes_var` (deprecated) -- a [`module_input "consul-kv"` block](/consul/docs/nia/configuration#consul-kv-module-input). - - Block was previously named `source_input "consul-kv"` (deprecated) - -Below is a similar example to the one provided in the [Consul KV Condition](/consul/docs/nia/tasks#consul-kv-condition) section. However, the difference in this example is that instead of triggering based on a change to Consul KV, this task will instead execute on a schedule. Once execution is triggered, Consul KV information is then provided to the task's module. - -```hcl -task { - name = "consul_kv_schedule_task" - description = "executes on Monday monitoring Consul KV" - module = "path/to/consul-kv-module" - - condition "schedule" { - cron = "* * * * Mon" - } - - module_input "consul-kv" { - path = "my-key" - recurse = true - datacenter = "dc1" - namespace = "default" - } -} -``` - -#### Catalog Services Module Input ((#catalog-services-source-input)) - -Tasks configured with a Catalog Services module input monitors for service and tag information provided by the [Catalog List Services API](/consul/api-docs/catalog#list-services). The module input is a map of service names to a list of tags. - -Sample rendered catalog-services input: - - - -```hcl -catalog_services = { - "api" = ["prod", "staging"] - "consul" = [] - "web" = ["blue", "green"] -} -``` - - - -To configure a task with the Catalog Services module input, the catalog services which will be used for the input must be configured in one of the following ways: - -- a [`condition "catalog-services"` block](/consul/docs/nia/configuration#consul-kv-condition) configured with `use_as_module_input` field. - - Field was previously named `source_includes_var` (deprecated) - --> **Note:** Currently there is no support for a `module_input "catalog-services"` block. 
- -Example of a catalog-services condition which supports module input through `use_as_module_input`: - -```hcl -task { - name = "catalog_services_condition_task" - description = "execute on registration/deregistration of services" - providers = ["my-provider"] - module = "path/to/catalog-services-module" - condition "catalog-services" { - datacenter = "dc1" - namespace = "default" - regexp = "web.*" - use_as_module_input = true - node_meta { - key = "value" - } - } -} -``` - -## How to Create a Compatible Terraform Module - -You can read more on how to [create a module](/terraform/language/modules/develop) or work through a [tutorial to build a module](/terraform/tutorials/modules/module-create?utm_source=docs). CTS is designed to integrate with any module that satisfies the specifications in the following section. - -The repository [hashicorp/consul-terraform-sync-template-module](https://github.com/hashicorp/consul-terraform-sync-template-module) can be cloned and used as a starting point for structuring a compatible Terraform module. The template repository has the files described in the next steps prepared. - -First, create a directory to organize Terraform configuration files that make up the module. You can start off with creating two files `main.tf` and `variables.tf` and expand from there based on your module and network infrastructure automation needs. - -The `main.tf` is the entry point of the module and this is where you can begin authoring your module. It can contain multiple Terraform resources related to an automation task that uses Consul service discovery information, particularly the required [`services` input variable](#services-variable). The code example below shows a resource using the `services` variable. When this example is used in automation with CTS, the content of the local file would dynamically update as Consul service discovery information changes. - - - -```hcl -# Create a file with service names and their node addresses -resource "local_file" "consul_services" { - content = join("\n", [ - for _, service in var.services : "${service.name} ${service.id} ${service.node_address}" - ]) - filename = "consul_services.txt" -} -``` - - - -Something important to consider before authoring your module is deciding the [condition under which it will execute](/consul/docs/nia/tasks#task-execution). This will allow you to potentially use other types of CTS provided input variables in your module. It will also help inform your documentation and how users should configure their task for your module. - -### Services Variable - -To satisfy the specification requirements for a compatible module, copy the `services` variable declaration to the `variables.tf` file. Your module can optionally have other [variable declarations](#module-input-variables) and [CTS provided input variables](/consul/docs/nia/terraform-modules#optional-input-variables) in addition to `var.services`. - - - -```hcl -variable "services" { - description = "Consul services monitored by Consul-Terraform-Sync" - type = map( - object({ - id = string - name = string - kind = string - address = string - port = number - meta = map(string) - tags = list(string) - namespace = string - status = string - - node = string - node_id = string - node_address = string - node_datacenter = string - node_tagged_addresses = map(string) - node_meta = map(string) - - cts_user_defined_meta = map(string) - }) - ) -} -``` - - - -Keys of the `services` map are unique identifiers of the service across Consul agents and data centers. 
Keys follow the format `service-id.node.datacenter` (or `service-id.node.namespace.datacenter` for Consul Enterprise). A complete list of attributes available for the `services` variable is included in the [documentation for CTS tasks](/consul/docs/nia/tasks#services-condition). - -Terraform variables when passed as module arguments can be [lossy for object types](/terraform/language/expressions/type-constraints#conversion-of-complex-types). This allows CTS to declare the full variable with every object attribute in the generated root module, and pass the variable to a child module that contains a subset of these attributes for its variable declaration. Modules compatible with CTS may simplify the `var.services` declaration within the module by omitting unused attributes. For example, the following services variable has 4 attributes with the rest omitted. - - - -```hcl -variable "services" { - description = "Consul services monitored by Consul-Terraform-Sync" - type = map( - object({ - id = string - name = string - node_address = string - port = number - status = string - }) - ) -} -``` - - - -### Catalog Services Variable - -If you are creating a module for a [catalog-services condition](/consul/docs/nia/tasks#catalog-services-condition), then you have the option to add the `catalog_services` variable, which contains service registration and tag information. If your module would benefit from consuming this information, you can copy the `catalog_services` variable declaration to your `variables.tf` file in addition to the other variables. - - - -```hcl -variable "catalog_services" { - description = "Consul catalog service names and tags monitored by Consul-Terraform-Sync" - type = map(list(string)) -} -``` - - - -The keys of the `catalog_services` map are the names of the services that are registered with Consul at the given datacenter. The value for each service name is a list of all known tags for that service. - -We recommend that if you make a module with with a catalog-services condition, that you document this in the README. This way, users that want to configure a task with your module will know to configure a catalog-services [condition](/consul/docs/nia/configuration#condition) block. - -Similarly, if you use the `catalog_services` variable in your module, we recommend that you also document this usage in the README. Users of your module will then know to set the catalog-services condition [`use_as_module_input`](/consul/docs/nia/configuration#catalog-services-condition) configuration to be true. When this field is set to true, CTS will declare the `catalog_services` variable in the generated root module, and pass the variable to a child module. Therefore, if this field is configured inconsistently, CTS will error and exit. - -### Consul KV Variable - -If you are creating a module for a [consul-kv condition](/consul/docs/nia/tasks#consul-kv-condition), then you have the option to add the `consul_kv` variable, which contains a map of the keys and values for the Consul KV pairs. If your module would benefit from consuming this information, you can copy the `consul_kv` variable declaration to your `variables.tf` file in addition to the other variables. 
- - - -```hcl -variable "consul_kv" { - description = "Keys and values of the Consul KV pairs monitored by Consul-Terraform-Sync" - type = map(string) -} -``` - - - -If your module contains the `consul_kv` variable, we recommend documenting the usage in the README file so that users know to set the [`use_as_module_input`](/consul/docs/nia/configuration#consul-kv-condition) configuration to `true` in the `consul-kv` condition. Setting the field to `true` instructs CTS to declare the `consul_kv` variable in the generated root module and pass the variable to a child module. Therefore, if this field is configured inconsistently, CTS will error and exit. - -### Module Input Variables - -Network infrastructure differs vastly across teams and organizations, and the automation needs of practitioners are unique based on their existing setup. [Input variables](/terraform/language/values/variables) can be used to serve as customization parameters to the module for practitioners. - -1. Identify areas in the module where practitioners could tailor the automation to fit their infrastructure. -2. Declare input variables and insert the use of variables throughout module resources to expose these options to practitioners. -3. Include descriptions to capture what the variables are and how they are used, and specify [custom validation rules for variables](/terraform/language/values/variables#custom-validation-rules) to provide context to users the expected format and conditions for the variables. -4. Set reasonable default values for variables that are optional, or omit default values for variables that are required module arguments. -5. Set the [sensitive argument](/terraform/language/values/variables) for variables that contain secret or sensitive values. When set, Terraform will redact the value from output when Terraform commands are run. - -Terraform is an explicit configuration language and requires variables to be declared, typed, and passed explicitly through as module arguments. CTS abstracts this by creating intermediate variables at the root level from the module input. These values are configured by practitioners within the [`task` block](/consul/docs/nia/configuration#variable_files). Value assignments are parsed to interpolate the corresponding variable declaration and are written to the appropriate Terraform files. A few assumptions are made for the intermediate variables: the variables users provide CTS are declared and supported by the module, matching name and type. - -### Module Guidelines - -This section covers guidelines for authoring compatible CTS modules. - -#### Scope - -We recommend scoping the module to a few related resources for a provider. Small modules are easier and more flexible for end users to adopt for CTS. It allows them to iteratively combine different modules and use them as building blocks to meet their unique network infrastructure needs. - -#### Complexity - -Consider authoring modules with low complexity to reduce the run time for Terraform execution. Complex modules that have a large number of dependencies may result in longer runs, which adds delay to the near real time network updates. - -#### Providers - -The Terraform module must declare which providers it requires within the [`terraform.required_providers` block](/terraform/language/providers/requirements#requiring-providers). We suggest to also include a version constraint for the provider to specify which versions the module is compatible with. 
- -Aside from the `required_providers` block, provider configurations should not be included within the sharable module for network integrations. End users will configure the providers through CTS, and CTS will then translate provider configuration to the generated root module appropriately. - -#### Documentation - -Modules for CTS are Terraform modules and can effectively run independently from the `consul-terraform-sync` daemon and Consul environment. They should be written and designed with Terraform best practices and should be clear to a Terraform user what the module does and how to use it. Module documentation should be named `README` or `README.md`. The description should capture what the module should be used for and the implications of running it in automation with CTS. diff --git a/website/content/docs/nia/usage/errors-ref.mdx b/website/content/docs/nia/usage/errors-ref.mdx deleted file mode 100644 index 4cbb525a3a1c..000000000000 --- a/website/content/docs/nia/usage/errors-ref.mdx +++ /dev/null @@ -1,143 +0,0 @@ ---- -layout: docs -page_title: Error Messages -description: >- - Look up Consul-Terraform-Sync error message to learn how to resolve potential issues using CTS. ---- - -# Error Messages - -This topic explains error messages you may encounter when using Consul-Terraform-Sync (CTS). - -## Example error log messages - -If you configured the CTS cluster to run in [high availability mode](/consul/docs/nia/usage/run-ha) and the local module is missing, then the following message appears in the log: - -```shell-session -[ERROR] ha.compat: error="compatibility check failure: stat ./example-module: no such file or directory" -``` - -The resolution is to add the missing local module on the incompatible CTS instance. Refer to the [`module` documentation](/consul/docs/nia/configuration#module) in the CTS configuration reference for additional information. - -## Example API and CLI error messages - -**Error**: - -```json -{ - "error": { - "message": "redirect requests to leader 'cts-01' at cts-01.example.com:8558" - } -} -``` - -**Conditions**: - -- CTS can determine the leader. -- `high_availability.instance.address` is configured for the leader. -- The CTS instance you sent the request to is not the leader. - - - -**Resolution**: - -Redirect the request to the leader instance, for example: - -```shell-session -$ curl --request GET cts-01.example.com:8558/v1/tasks -``` ---- - - -**Error**: - -```json -{ - "error": { - "message": "redirect requests to leader 'cts-01'" - } -} -``` - -**Conditions**: - -* CTS can determine the leader. -* The CTS instance you sent the request to is not the leader. -* `high_availability.instance.address` is not configured. - -**Resolution**: - -Identify the leader instance address and redirect the request to the leader. You can identify the leader by calling the [`status/cluster` API endpoint](/consul/docs/nia/api/status#cluster-status) or by checking the logs for the following entry: - -```shell-session -[INFO] ha: acquired leadership lock: id=. - -We recommend deploying a cluster that has three instances. - ---- - -**Error**: - -```json -{ - "error": { - "message": "redirect requests to leader" - } -} -``` - -**Conditions**: - -* The CTS instance you sent the request to is not the leader. -* The CTS is unable to determine the leader. -* Note that these conditions are rare. - - -**Resolution**: - -Identify and send the request to the leader CTS instance. 
You can identify the leader by calling the [`status/cluster` API endpoint](/consul/docs/nia/api/status#cluster-status) or by checking the logs for the following entry: - -```shell-session -[INFO] ha: acquired leadership lock: id= -``` - ---- - -**Error**: - -```json -{ - "error": { - "message": "this endpoint is only available with high availability configured" - } -} -``` - -**Conditions**: - -- You called the [`status/cluster` API endpoint](/consul/docs/nia/api/status#cluster-status) without configuring CTS for [high availability](/consul/docs/nia/usage/run-ha). - -**Resolution**: - -Configure CTS to run in [high availability mode](/consul/docs/nia/usage/run-ha). - ---- - -**Error**: - -```json -{ - "error": { - "message": "example error message: unsupported status parameter value" - } -} -``` - -**Conditions**: - -- You sent a request to the `status` API endpoint. -- The request included an unsupported parameter value. - -**Resolution**: - -Send a new request and verify that all of the parameter values are correct. \ No newline at end of file diff --git a/website/content/docs/nia/usage/requirements.mdx b/website/content/docs/nia/usage/requirements.mdx deleted file mode 100644 index e7c5fe8c8798..000000000000 --- a/website/content/docs/nia/usage/requirements.mdx +++ /dev/null @@ -1,134 +0,0 @@ ---- -layout: docs -page_title: Requirements -description: >- - Consul-Terraform-Sync requires a Terraform Provider, a Terraform Module, and a running Consul cluster outside of the `consul-terraform-sync` daemon. ---- - -# Requirements - -The following components are required to run Consul-Terraform-Sync (CTS): - -- A Terraform provider -- A Terraform module -- A Consul cluster running outside of the `consul-terraform-sync` daemon - -You can add support for your network infrastructure through Terraform providers so that you can apply Terraform modules to implement network integrations. - -The following guidance is for running CTS using the Terraform driver. The HCP Terraform driver has [additional prerequisites](/consul/docs/nia/network-drivers/terraform-cloud#setting-up-terraform-cloud-driver). - -## Run a Consul cluster - -Below are several steps towards a minimum Consul setup required for running CTS. - -### Install Consul - -CTS is a daemon that runs alongside Consul, similar to other Consul ecosystem tools like Consul Template. CTS is not included with the Consul binary and needs to be installed separately. - -To install a local Consul agent, refer to the [Getting Started: Install Consul Tutorial](/consul/tutorials/get-started-vms?utm_source=docs). - -For information on compatible Consul versions, refer to the [Consul compatibility matrix](/consul/docs/nia/compatibility#consul). - -### Run an agent - -The Consul agent must be running in order to dynamically update network devices. Refer to the [Consul agent documentation](/consul/docs/agent) for information about configuring and starting a Consul agent. - -When running a Consul agent with CTS in production, consider that CTS uses [blocking queries](/consul/api-docs/features/blocking) to monitor task dependencies, such as changes to registered services. This results in multiple long-running TCP connections between CTS and the agent to poll changes for each dependency. Consul may quickly reach the agent connection limits if CTS is monitoring a high number of services. - -To avoid reaching the limit prematurely, we recommend using HTTP/2 (requires HTTPS) to communicate between CTS and the Consul agent. 
When using HTTP/2, CTS establishes a single connection and reuses it for all communication. Refer to the [Consul Configuration section](/consul/docs/nia/configuration#consul) for details. - -Alternatively, you can configure the [`limits.http_max_conns_per_client`](/consul/docs/agent/config/config-files#http_max_conns_per_client) option to set a maximum number of connections to meet your needs. - -### Register services - -CTS monitors the Consul catalog for service changes that lead to downstream changes to your network devices. Without services, your CTS daemon is operational but idle. You can register services with your Consul agent by either loading a service definition or by sending an HTTP API request. - -The following HTTP API request example registers a service named `web` with your Consul agent: - -```shell-session -$ echo '{ - "ID": "web", - "Name": "web", - "Address": "10.10.10.10", - "Port": 8000 -}' > payload.json - -$ curl --request PUT --data @payload.json http://localhost:8500/v1/agent/service/register -``` - -The example represents a non-existent web service running at `10.10.10.10:8000` that is now available for CTS to consume. - -You can configure CTS to monitor the web service, execute a task, and update network device(s) by configuring `web` in the [`condition "services"`](/consul/docs/nia/configuration#services-condition) task block. If the web service has any non-default values, it can also be configured in `condition "services"`. - -For more details on registering a service using the HTTP API endpoint, refer to the [register service API docs](/consul/api-docs/agent/service#register-service). - -For hands-on instructions on registering a service by loading a service definition, refer to the [Getting Started: Register a Service with Consul Service Discovery Tutorial](/consul/tutorials/get-started-vms/virtual-machine-gs-service-discovery). - -### Run a cluster - -For production environments, we recommend operating a Consul cluster rather than a single agent. Refer to [Getting Started: Deploy a Consul Datacenter Tutorial](/consul/tutorials/get-started-vms/virtual-machine-gs-deploy) for instructions on starting multiple Consul agents and joining them into a cluster. - -## Network infrastructure using a Terraform provider - -CTS integrations for the Terraform driver use Terraform providers as plugins to interface with specific network infrastructure platforms. The Terraform driver for CTS inherits the expansive collection of Terraform providers to integrate with. You can also specify a provider `source` in the [`required_providers` configuration](/terraform/language/providers/requirements#requiring-providers) to use providers written by the community (requires Terraform 0.13 or later). - -### Finding Terraform providers - -To find providers for the infrastructure platforms you use, browse the providers section of the [Terraform Registry](https://registry.terraform.io/browse/providers). - -### How to create a provider - -If a Terraform provider does not exist for your environment, you can create a new Terraform provider and publish it to the registry so that you can use it within a network integration task or create a compatible Terraform module. Refer to the following Terraform tutorial and documentation for additional information on creating and publishing providers: - -- [Setup and Implement Read](/terraform/tutorials/providers/provider-setup) -- [Publishing Providers](/terraform/registry/providers/publishing). 
- -## Network integration using a Terraform module - -The Terraform module for a task in CTS is the core component of the integration. It declares which resources to use and how your infrastructure is dynamically updated. The module, along with how it is configured within a task, determines the conditions under which your infrastructure is updated. - -Working with a Terraform provider, you can write an integration task for CTS by [creating a Terraform module](/consul/docs/nia/terraform-modules) that is compatible with the Terraform driver. You can also use a [module built by partners](#partner-terraform-modules). - -Refer to [Configuration](/consul/docs/nia/configuration) for information about configuring CTS and how to use Terraform providers and modules for tasks. - -### Partner Terraform Modules - -The modules listed below are available to use and are compatible with CTS. - -#### A10 Networks - -- Dynamic Load Balancing with Group Member Updates: [Terraform Registry](https://registry.terraform.io/modules/a10networks/service-group-sync-nia/thunder/latest) / [GitHub](https://github.com/a10networks/terraform-thunder-service-group-sync-nia) - -#### Avi Networks - -- Scale Up and Scale Down Pool and Pool Members (Servers): [GitHub](https://github.com/vmware/terraform-provider-avi/tree/20.1.5/modules/nia/pool) - -#### AWS Application Load Balancer (ALB) - -- Create Listener Rule and Target Group for an AWS ALB, Forward Traffic to Consul Ingress Gateway: [Terraform Registry](https://registry.terraform.io/modules/aws-quickstart/cts-alb_listener-nia/hashicorp/latest) / [GitHub](https://github.com/aws-quickstart/terraform-hashicorp-cts-alb_listener-nia) - -#### Checkpoint - -- Dynamic Firewalling with Address Object Updates: [Terraform Registry](https://registry.terraform.io/modules/CheckPointSW/dynobj-nia/checkpoint/latest) / [GitHub](https://github.com/CheckPointSW/terraform-checkpoint-dynobj-nia) - -#### Cisco ACI - -- Policy Based Redirection: [Terraform Registry](https://registry.terraform.io/modules/CiscoDevNet/autoscaling-nia/aci/latest) / [GitHub](https://github.com/CiscoDevNet/terraform-aci-autoscaling-nia) -- Create and Update Cisco ACI Endpoint Security Groups: [Terraform Registry](https://registry.terraform.io/modules/CiscoDevNet/esg-nia/aci/latest) / [GitHub](https://github.com/CiscoDevNet/terraform-aci-esg-nia) - -#### Citrix ADC - -- Create, Update, and Delete Service Groups in Citrix ADC: [Terraform Registry](https://registry.terraform.io/modules/citrix/servicegroup-consul-sync-nia/citrixadc/latest) / [GitHub](https://github.com/citrix/terraform-citrixadc-servicegroup-consul-sync-nia) - -#### F5 - -- Dynamic Load Balancing with Pool Member Updates: [Terraform Registry](https://registry.terraform.io/modules/f5devcentral/app-consul-sync-nia/bigip/latest) / [GitHub](https://github.com/f5devcentral/terraform-bigip-app-consul-sync-nia) - -#### NS1 - -- Create, Delete, and Update DNS Records and Zones: [Terraform Registry](https://registry.terraform.io/modules/ns1-terraform/record-sync-nia/ns1/latest) / [GitHub](https://github.com/ns1-terraform/terraform-ns1-record-sync-nia) - -#### Palo Alto Networks - -- Dynamic Address Group (DAG) Tags: [Terraform Registry](https://registry.terraform.io/modules/PaloAltoNetworks/dag-nia/panos/latest) / [GitHub](https://github.com/PaloAltoNetworks/terraform-panos-dag-nia) -- Address Group and Dynamic Address Group (DAG) Tags: [Terraform Registry](https://registry.terraform.io/modules/PaloAltoNetworks/ag-dag-nia/panos/latest) / 
[GitHub](https://github.com/PaloAltoNetworks/terraform-panos-ag-dag-nia) diff --git a/website/content/docs/nia/usage/run-ha.mdx b/website/content/docs/nia/usage/run-ha.mdx deleted file mode 100644 index 7ef90e29d082..000000000000 --- a/website/content/docs/nia/usage/run-ha.mdx +++ /dev/null @@ -1,183 +0,0 @@ ---- -layout: docs -page_title: Run Consul-Terraform-Sync with high availability -description: >- - Improve network automation resiliency by enabling high availability for Consul-Terraform-Sync. HA enables persistent task and event data so that CTS functions as expected during a failover event. ---- - -# Run Consul-Terraform-Sync with high availability - - - An enterprise license is only required for enterprise distributions of Consul-Terraform-Sync (CTS). - - -This topic describes how to run Consul-Terraform-Sync (CTS) configured for high availability. High availability is an enterprise capability that ensures that all changes to Consul that occur during a failover transition are processed and that CTS continues to operate as expected. - -## Introduction - -A network always has exactly one instance of the CTS cluster that is the designated leader. The leader is responsible for monitoring and running tasks. If the leader fails, CTS triggers the following process when it is configured for high availability: - -1. The CTS cluster promotes a new leader from the pool of followers in the network. -1. The new leader begins running all existing tasks in `once-mode` in order to process changes that occurred during the failover transition period. In this mode, CTS runs all existing tasks one time. -1. The new leader logs any errors that occur during `once-mode` operation and the new leader continues to monitor Consul for changes. - -In a standard configuration, CTS exits if errors occur when the CTS instance runs tasks in `once-mode`. In a high availability configuration, CTS logs the errors and continues to operate without interruption. - -The following diagram shows operating state when high availability is enabled. CTS Instance A is the current leader and is responsible for monitoring and running tasks: - -![Consul-Terraform-Sync architecture configured for high availability before a shutdown event](/img/nia/cts-ha-before.svg) - -The following diagram shows the CTS cluster state after the leader stops. CTS Instance B becomes the leader responsible for monitoring and running tasks. - -![Consul-Terraform-Sync architecture configured for high availability before a shutdown event](/img/nia/cts-ha-after.svg) - -### Failover details - -- The time it takes for a new leader to be elected is determined by the `high_availability.cluster.storage.session_ttl` configuration. The minimum failover time is equal to the `session_ttl` value. The maximum failover time is double the `session_ttl` value. -- If failover occurs during task execution, a new leader is elected. The new leader will attempt to run all tasks once before continuing to monitor for changes. -- If using the [HCP Terraform driver](/consul/docs/nia/network-drivers/terraform-cloud), the task finishes and CTS starts a new leader that attempts to queue a run for each task in HCP Terraform in once-mode. -- If using [Terraform driver](/consul/docs/nia/network-drivers/terraform), the task may complete depending on the cause of the failover. The new leader starts and attempts to run each task in [once-mode](/consul/docs/nia/cli/start#modes). 
Depending on the module and provider, the task may require manual intervention to fix any inconsistencies between the infrastructure and Terraform state. -- If failover occurs when no task is executing, CTS elects a new leader that attempts to run all tasks in once-mode. - -Note that driver behavior is consistent whether or not CTS is running in high availability mode. - -## Requirements - -Verify that you have met the [basic requirements](/consul/docs/nia/usage/requirements) for running CTS. - -* CTS Enterprise 0.7 or later -* Terraform CLI 0.13 or later -* All instances in a cluster must be in the same datacenter. - -You must configure appropriate ACL permissions for your cluster. Refer to [ACL permissions](#) for details. - -We recommend specifying the [HCP Terraform driver](/consul/docs/nia/network-drivers/terraform-cloud) in your CTS configuration if you want to run in high availability mode. - -## Configuration - -Add the `high_availability` block in your CTS configuration and configure the required settings to enable high availability. Refer to the [Configuration reference](/consul/docs/nia/configuration#high-availability) for details about the configuration fields for the `high_availability` block. - -The following example configures high availability functionality for a cluster named `cts-cluster`: - - - -```hcl -high_availability { - cluster { - name = "cts-cluster" - storage "consul" { - parent_path = "cts" - namespace = "ns" - session_ttl = "30s" - } - } - - instance { - address = "cts-01.example.com" - } -} -``` - - -### ACL permissions - -The `session` and `keys` resources in your Consul environment must have `write` permissions. Refer to the [ACL documentation](/consul/docs/security/acl) for details on how to define ACL policies. - -If the `high_availability.cluster.storage.namespace` field is configured, then your ACL policy must also enable `write` permissions for the `namespace` resource. - -## Start a new CTS cluster - -We recommend deploying a cluster that includes three CTS instances. This is so that the cluster has one leader and two followers. - -1. Create an HCL configuration file that includes the settings you want to include, including the `high_availability` block. Refer to [Configuration Options for Consul-Terraform-Sync](/consul/docs/nia/configuration) for all configuration options. -1. Issue the startup command and pass the configuration file. Refer to the [`start` command reference](/consul/docs/nia/cli/start#modes) for additional information about CTS startup modes. - ```shell-session - $ consul-terraform-sync start -config-file ha-config.hcl - ``` -1. You can call the `/status` API endpoint to verify the status of tasks CTS is configured to monitor. Only the leader of the cluster will return a successful response. Refer to the [`/status` API reference documentation](/consul/docs/nia/api/status) for information about usage and responses. - - ```shell-session - $ curl localhost:/status/tasks - ``` - -Repeat the procedure to start the remaining instances for your cluster. We recommend using near-identical configurations for all instances in your cluster. You may not be able to use exact configurations in all cases, but starting instances with the same configuration improves consistency and reduces confusion if you need to troubleshoot errors. - -## Modify an instance configuration - -You can implement a rolling update to update a non-task configuration for a CTS instance, such as the Consul connection settings. 
If you need to update a task in the instance configuration, refer to [Modify tasks](#modify-tasks). - -1. Identify the leader CTS instance by either making a call to the [`status/cluster` API endpoint](/consul/docs/nia/api/status#cluster-status) or by checking the logs for the following entry: - ```shell-session - [INFO] ha: acquired leadership lock: id= - ``` -1. Stop one of the follower CTS instances and apply the new configuration. -1. Restart the follower instance. -1. Repeat steps 2 and 3 for other follower instances in your cluster. -1. Stop the leader instance. One of the follower instances becomes the leader. -1. Apply the new configuration to the former leader instance and restart it. - -## Modify tasks - -When high availability is enabled, CTS persists task and event data. Refer to [State storage and persistence](/consul/docs/nia/architecture#state-storage-and-persistence) for additional information. - -You can use the following methods for modifying tasks when high availability is enabled. We recommend choosing a single method to make all task configuration changes because inconsistencies between the state and the configuration can occur when mixing methods. - -### Delete and recreate the task - -We recommend deleting and recreating a task if you need to make a modification. Use the CTS API to identify the CTS leader instance and replace a task. - -1. Identify the leader CTS instance by either making a call to the [`status/cluster` API endpoint](/consul/docs/nia/api/status#cluster-status) or by checking the logs for the following entry: - - ```shell-session - [INFO] ha: acquired leadership lock: id= - ``` -1. Send a `DELETE` call to the [`/task/` endpoint](/consul/docs/nia/api/tasks#delete-task) to delete the task. In the following example, the leader instance is at `localhost:8558`: - - ```shell-session - $ curl --request DELETE localhost:8558/v1/tasks/task_a - ``` - - You can also use the [`task delete` command](/consul/docs/nia/cli/task#task-delete) to complete this step. - -1. Send a `POST` call to the `/task/` endpoint and include the updated task in your payload. - ```shell-session - $curl --header "Content-Type: application/json" \ - --request POST \ - --data @payload.json \ - localhost:8558/v1/tasks - ``` - - You can also use the [`task-create` command](/consul/docs/nia/cli/task#task-create) to complete this step. - -### Discard data with the `-reset-storage` flag - -You can restart the CTS cluster using the [`-reset-storage` flag](/consul/docs/nia/cli/start#options) to discard persisted data if you need to update a task. - -1. Stop a follower instance. -1. Update the instance’s task configuration. -1. Restart the instance and include the `-reset-storage` flag. -1. Stop all other instances so that the updated instance becomes the leader. -1. Start all other instances again. -1. Restart the instance you restarted in step 3 without the `-reset-storage` flag so that it starts up with the current state. If you continue to run an instance with the `-reset-storage` flag enabled, then CTS will reset the state data whenever the instance becomes the leader. - -## Troubleshooting - -Use the following troubleshooting procedure if a previous leader had been running a task successfully but the new leader logs an error after a failover: - -1. Check the logs printed to the console for errors. Refer to the [`syslog` configuration](/consul/docs/nia/configuration#syslog) for information on how to locate the logs. 
In the following example output, CTS reported a `401: Bad credentials` error: - ```shell-session - 2022-08-23T09:25:09.501-0700 [ERROR] tasksmanager: error applying task: task_name=config-task - error= - | error tf-apply for 'config-task': exit status 1 - | - | Error: GET https://api.github.com/user: 401 Bad credentials [] - | - | with module.config-task.provider["registry.terraform.io/integrations/github"], - | on .terraform/modules/config-task/main.tf line 11, in provider "github": - | 11: provider "github" { - | - ``` -1. Check for differences between the previous leader and new leader, such as differences in configurations, environment variables, and local resources. -1. Start a new instance with the fix that resolves the issue. -1. Tear down the leader instance that has the issue and any other instances that may have the same issue. -1. Restart the affected instances to implement the fix. diff --git a/website/content/docs/nia/usage/run.mdx b/website/content/docs/nia/usage/run.mdx deleted file mode 100644 index 43e35f09c392..000000000000 --- a/website/content/docs/nia/usage/run.mdx +++ /dev/null @@ -1,41 +0,0 @@ ---- -layout: docs -page_title: Run Consul-Terraform-Sync -description: >- - Consul-Terraform-Sync requires a Terraform Provider, a Terraform Module and a running Consul Cluster outside of the `consul-terraform-sync` daemon. ---- - -# Run Consul-Terraform-Sync - -This topic describes the basic procedure for running Consul-Terraform-Sync (CTS). Verify that you have met the [basic requirements](/consul/docs/nia/usage/requirements) before attempting to run CTS. - -1. Move the `consul-terraform-sync` binary to a location available on your `PATH`. - - ```shell-session - $ mv ~/Downloads/consul-terraform-sync /usr/local/bin/consul-terraform-sync - ``` - -2. Create the config.hcl file and configure the options for your use case. Refer to the [configuration reference](/consul/docs/nia/configuration) for details about all CTS configurations. - -3. Run Consul-Terraform-Sync (CTS). - - ```shell-session - $ consul-terraform-sync start -config-file - ``` - -4. Check status of tasks. Replace port number if configured in Step 2. Refer to [Consul-Terraform-Sync API](/consul/docs/nia/api) for additional information. - - ```shell-session - $ curl localhost:8558/status/tasks - ``` - -## Other Run modes - -You can [configure CTS for high availability](/consul/docs/nia/usage/run-ha), which is an enterprise capability that ensures that all changes to Consul that occur during a failover transition are processed and that CTS continues to operate as expected. - -You can start CTS in [inspect mode](/consul/docs/nia/cli/start#modes) to review and test your configuration before applying any changes. Inspect mode allows you to verify that the changes work as expected before running them in an unsupervised daemon mode. - -For hands-on instructions on using inspect mode, refer to the [Consul-Terraform-Sync Run Modes and Status Inspection](/consul/tutorials/network-infrastructure-automation/consul-terraform-sync-run-and-inspect?utm_source=WEBSITE&utm_medium=WEB_IO&utm_offer=ARTICLE_PAGE&utm_content=DOCS) tutorial. - - - diff --git a/website/content/docs/nomad.mdx b/website/content/docs/nomad.mdx new file mode 100644 index 000000000000..9ceb03752763 --- /dev/null +++ b/website/content/docs/nomad.mdx @@ -0,0 +1,58 @@ +--- +layout: docs +page_title: Consul on Nomad +description: >- + Consul documentation provides reference material for all features and options available in Consul. 
+--- + +# Consul on Nomad + +This topic provides an overview of the documentation available for users running Consul with Nomad. + +For more information about using Nomad to schedule clusters and applications, refer to the [Nomad documentation](/nomad/docs). + +## Introduction + +Nomad is HashiCorp’s workload orchestrator. It enables you to run containers, legacy, and batch applications together on the same infrastructure. Just like a Consul client runs as a system daemon on a worker node, Nomad is deployed in a similar fashion. In order to use Consul with Nomad, you must configure Nomad to access the Consul cluster. We recommend running a Consul client alongside each Nomad client. + +Consul and Nomad operate independently in their deployment and functions, but can integrate to simplify the other’s operations. For example, Nomad can register tasks as services in the Consul catalog and configure sidecar proxies. Furthermore, Nomad servers can also register themselves as a service in Consul, which helps with bootstrapping a Nomad cluster. + +Deployments in Nomad can take advantage of Consul service discovery, service mesh, or both. You can leverage Consul DNS and run the Nomad service workloads, or start a sidecar process for each service to abstract the networking layer within the service mesh environment. + +Nomad supports the following Consul features: + +- Automatic clustering +- Concurrent connections to multiple Consul clusters +- Service discovery +- Service mesh +- Key/value store for dynamic application configuration +- Consul access controls +- Namespaces +- Admin partitions + +## Tutorials + +If you are familiar with virtualized workloads in a Linux environment, we recommend you attempt the [Nomad getting started tutorials](/nomad/tutorials/get-started) to get acquainted with the basics of running workloads in Nomad. Then complete the [Consul getting started on VMs tutorials](/consul/tutorials/get-started-vms) to learn how to configure and deploy Consul. Then review the dedicated Consul and Nomad tutorials to understand how both systems interact together. + +To learn more about using Consul with Nomad, refer to the following tutorials in the Nomad documentation: + +- [Convert from Nomad service discovery to Consul service discovery](/nomad/tutorials/service-discovery/service-discovery-consul-conversion) +- [Secure Nomad jobs with Consul service mesh](/nomad/tutorials/integrate-consul/consul-service-mesh) +- [Consul ACL with Nomad Workload Identities](/nomad/tutorials/integrate-consul/consul-acl) +- [Use Consul to automatically cluster Nomad nodes](/nomad/tutorials/manage-clusters/clustering#use-consul-to-automatically-cluster-nodes) + +## Nomad documentation + +[Nomad Consul Service Integration](/nomad/docs/integrations/consul/service-mesh) enables Nomad users to automatically register services to Consul and deploy sidecar proxies alongside their services. 
The following resources are available in the Nomad documentation to help you define Consul jobs: + +- [consul Block specification](/nomad/docs/job-specification/consul) +- [connect Block specification](/nomad/docs/job-specification/connect) +- [check Block specification](/nomad/docs/job-specification/check) +- [check_restart Block specification](/nomad/docs/job-specification/check_restart) +- [gateway Block specification](/nomad/docs/job-specification/gateway) +- [identity Block specification](/nomad/docs/job-specification/identity) +- [proxy Block specification](/nomad/docs/job-specification/proxy) +- [service Block specification](/nomad/docs/job-specification/service) +- [sidecar_service Block specification](/nomad/docs/job-specification/sidecar_service) +- [sidecar_task Block specification](/nomad/docs/job-specification/sidecar_task) +- [upstreams Block specification](/nomad/docs/job-specification/upstreams) \ No newline at end of file diff --git a/website/content/docs/nomad/index.mdx b/website/content/docs/nomad/index.mdx deleted file mode 100644 index a9ba57fb6a09..000000000000 --- a/website/content/docs/nomad/index.mdx +++ /dev/null @@ -1,58 +0,0 @@ ---- -layout: docs -page_title: Consul on Nomad -description: >- - Consul is a service networking solution that you can run with Nomad. Learn more about Consul on Nomad and find documentation specific to the Nomad runtime. ---- - -# Consul on Nomad - -This topic provides an overview of the documentation available for users running Consul with Nomad. - -For more information about using Nomad to schedule clusters and applications, refer to the [Nomad documentation](/nomad/docs). - -## Introduction - -Nomad is HashiCorp’s workload orchestrator. It enables you to run containers, legacy, and batch applications together on the same infrastructure. Just like a Consul client runs as a system daemon on a worker node, Nomad is deployed in a similar fashion. In order to use Consul with Nomad, you must configure Nomad to access the Consul cluster. We recommend running a Consul client alongside each Nomad client. - -Consul and Nomad operate independently in their deployment and functions, but can integrate to simplify the other’s operations. For example, Nomad can register tasks as services in the Consul catalog and configure sidecar proxies. Furthermore, Nomad servers can also register themselves as a service in Consul, which helps with bootstrapping a Nomad cluster. - -Deployments in Nomad can take advantage of Consul service discovery, service mesh, or both. You can leverage Consul DNS and run the Nomad service workloads, or start a sidecar process for each service to abstract the networking layer within the service mesh environment. - -Nomad supports the following Consul features: - -- Automatic clustering -- Concurrent connections to multiple Consul clusters -- Service discovery -- Service mesh -- Key/value store for dynamic application configuration -- Consul access controls -- Namespaces -- Admin partitions - -## Tutorials - -If you are familiar with virtualized workloads in a Linux environment, we recommend you attempt the [Nomad getting started tutorials](/nomad/tutorials/get-started) to get acquainted with the basics of running workloads in Nomad. Then complete the [Consul getting started on VMs tutorials](/consul/tutorials/get-started-vms) to learn how to configure and deploy Consul. Then review the dedicated Consul and Nomad tutorials to understand how both systems interact together. 
- -To learn more about using Consul with Nomad, refer to the following tutorials in the Nomad documentation: - -- [Convert from Nomad service discovery to Consul service discovery](/nomad/tutorials/service-discovery/service-discovery-consul-conversion) -- [Secure Nomad jobs with Consul service mesh](/nomad/tutorials/integrate-consul/consul-service-mesh) -- [Consul ACL with Nomad Workload Identities](/nomad/tutorials/integrate-consul/consul-acl) -- [Use Consul to automatically cluster Nomad nodes](/nomad/tutorials/manage-clusters/clustering#use-consul-to-automatically-cluster-nodes) - -## Nomad documentation - -[Nomad Consul Service Integration](/nomad/docs/integrations/consul/service-mesh) enables Nomad users to automatically register services to Consul and deploy sidecar proxies alongside their services. The following resources are available in the Nomad documentation to help you define Consul jobs: - -- [consul Block specification](/nomad/docs/job-specification/consul) -- [connect Block specification](/nomad/docs/job-specification/connect) -- [check Block specification](/nomad/docs/job-specification/check) -- [check_restart Block specification](/nomad/docs/job-specification/check_restart) -- [gateway Block specification](/nomad/docs/job-specification/gateway) -- [identity Block specification](/nomad/docs/job-specification/identity) -- [proxy Block specification](/nomad/docs/job-specification/proxy) -- [service Block specification](/nomad/docs/job-specification/service) -- [sidecar_service Block specification](/nomad/docs/job-specification/sidecar_service) -- [sidecar_task Block specification](/nomad/docs/job-specification/sidecar_task) -- [upstreams Block specification](/nomad/docs/job-specification/upstreams) \ No newline at end of file diff --git a/website/content/docs/north-south/api-gateway/index.mdx b/website/content/docs/north-south/api-gateway/index.mdx new file mode 100644 index 000000000000..2f2608c81ebb --- /dev/null +++ b/website/content/docs/north-south/api-gateway/index.mdx @@ -0,0 +1,93 @@ +--- +layout: docs +page_title: API gateways overview +description: API gateways provide an ingress point for service mesh traffic. Learn how API gateways add listeners for external traffic and route HTTP requests to services in the mesh. +--- + +# API gateways overview + +This topic provides overview information about API gateways in Consul. + +## Introduction + +API gateways enable external network clients to access applications and services running in a Consul datacenter. Consul API gateways can also forward requests from clients to specific destinations based on path or request protocol. Systems that access services in the mesh may be internal or external to your organizational network. _North-south traffic_ is a common term to describe this type of network traffic. + +## API gateway use cases + +API gateways solve the following primary use cases: + +- **Control access at the point of entry**: Set the protocols of external connection requests and secure inbound connections with TLS certificates from trusted providers, such as Verisign and Let's Encrypt. +- **Simplify traffic management**: Load balance requests across services and route traffic to the appropriate service by matching one or more criteria, such as hostname, path, header presence or value, and HTTP method. 
+ +## Workflows + +You can deploy API gateways to networks that implement a variety of computing environments: + +- Services hosted on VMs +- Kubernetes-orchestrated service containers +- Kubernetes-orchestrated service containers in OpenShift + +The following steps describe the general workflow for deploying a Consul API gateways: + +1. For Kubernetes-orchestrated services, install Consul on your cluster. For Kubernetes-orchestrated services on OpenShift, you must also enable the `openShift.enabled` parameter. Refer to [Install Consul on Kubernetes](/consul/docs/deploy/server/k8s/consul-k8s) for additional information. +1. Define and deploy the API gateway configurations to create the API gateway artifacts. For VM-hosted services, create configuration entries for the gateway service, listeners configurations, and references to TLS certificates. For Kubernetes-orchestrated, configurations also include `GatewayClassConfig`s and `parametersRef`s. +1. Define and deploy routes between the gateway listeners and services in the mesh. + +Gateway configurations are modular, so you can define and attach routes and inline certificates to multiple gateways. + +### Configurations for virtual machines + +Apply the following configuration items if your network runs on virtual machines nodes: + +| Configuration | Description | Usage | +| --- | --- | --- | +| [`api-gateway`](/consul/docs/reference/config-entry/api-gateway) | Defines the main infrastructure resource for declaring an API gateway and listeners on the gateway. | [Deploy API gateway listeners on virtual machines](/consul/docs/north-south/api-gateway/vm/listener) | +| [`http-route`](/consul/docs/reference/config-entry/http-route) | Enables HTTP traffic to reach services in the mesh from a listener on the gateway.| [Define routes on virtual machines](/consul/docs/north-south/api-gateway/vm/route) | +| [`tcp-route`](/consul/docs/reference/config-entry/tcp-route) | Enables TCP traffic to reach services in the mesh from a listener on the gateway.| [Define routes on virtual machines](/consul/docs/north-south/api-gateway/vm/route) | +| [`file-system-certificate`](/consul/docs/reference/config-entry/file-system-certificate) | Provides gateway with a CA certificate so that requests between the user and the gateway endpoint are encrypted. | [Encrypt API gateway traffic on virtual machines](/consul/docs/north-south/api-gateway/secure-traffic/encrypt) | +| [`inline-certificate`](/consul/docs/reference/config-entry/inline-certificate) | Provides gateway with a CA certificate so that requests between the user and the gateway endpoint are encrypted. | [Encrypt API gateway traffic on virtual machines](/consul/docs/north-south/api-gateway/secure-traffic/encrypt) | +| [`service-intentions`](/consul/docs/reference/config-entry/service-intentions) | Specifies traffic communication rules between services in the mesh. Intentions also enforce rules for service-to-service traffic routed through a Consul API gateway. | General configuration for securing a service mesh | + +### Configurations for Kubernetes + +Apply the following configuration items if your network runs on Kubernetes: + +| Configuration | Description | Usage | +| --- | --- | --- | +| [`Gateway`](/consul/docs/reference/k8s/api-gateway/gateway) | Defines the main infrastructure resource for declaring an API gateway and listeners on the gateway. It also specifies the name of the `GatewayClass`. 
| [Deploy listeners on Kubernetes](/consul/docs/north-south/api-gateway/k8s/listener) | +| [`GatewayClass`](/consul/docs/reference/k8s/api-gateway/gatewayclass) | Defines a class of gateway resources used as a template for creating gateways. The default gateway class is `consul` and is suitable for most API gateway implementations. | [Deploy listeners on Kubernetes](/consul/docs/north-south/api-gateway/k8s/listener) | +| [`GatewayClassConfig`](/consul/docs/reference/k8s/api-gateway/gatewayclassconfig) | Describes additional gateway-related configuration parameters for the `GatewayClass` resource. | [Deploy listeners on Kubernetes](/consul/docs/north-south/api-gateway/k8s/listener) | +| [`Routes`](/consul/docs/reference/k8s/api-gateway/routes) | Specifies paths from the gateway listener to backend services. | [Define routes on Kubernetes](/consul/docs/north-south/api-gateway/k8s/route)

    [Reroute traffic in Kubernetes](/consul/docs/north-south/api-gateway/k8s/reroute)

    [Route traffic to peered services in Kubernetes](/consul/docs/north-south/api-gateway/k8s/peer)

    | +| [`MeshServices`](/consul/docs/north-south/api-gateway/k8s/peer) | Enables routes to reference services in Consul. | [Route traffic to peered services in Kubernetes](/consul/docs/north-south/api-gateway/k8s/peer) | +| [`ServiceIntentions`](/consul/docs/reference/config-entry/service-intentions) | Specifies traffic communication rules between services in the mesh. Intentions also enforce rules for service-to-service traffic routed through a Consul API gateway. | General configuration for securing a service mesh | + +## Technical specifications + +Refer to [Technical specifications for API gateways on Kubernetes](/consul/docs/north-south/api-gateway/k8s/tech-specs) for additional details and considerations about using API gateways in Kubernetes-orchestrated networks. + +## Guidance + +Refer to the following resources for help setting up and using API gateways: + +### Tutorials + +- [Control access into the service mesh with Consul API gateway](/consul/tutorials/developer-mesh/kubernetes-api-gateway) + +### Usage documentation + +- [Deploy API gateway listeners to VMs](/consul/docs/north-south/api-gateway/vm/listener) +- [Deploy API gateway listeners to Kubernetes](/consul/docs/north-south/api-gateway/k8s/listener) +- [Deploy API gateway routes to VMs](/consul/docs/north-south/api-gateway/vm/route) +- [Deploy API gateway routes to Kubernetes](/consul/docs/north-south/api-gateway/k8s/route) +- [Reroute HTTP requests in Kubernetes](/consul/docs/north-south/api-gateway/k8s/reroute) +- [Route traffic to peered services in Kubernetes](/consul/docs/north-south/api-gateway/k8s/peer) +- [Encrypt API gateway traffic on VMs](/consul/docs/north-south/api-gateway/secure-traffic/encrypt) +- [Use JWTs to verify requests to API gateways on VMs](/consul/docs/north-south/api-gateway/secure-traffic/jwt/vm) +- [Use JWTs to verify requests to API gateways on Kubernetes](/consul/docs/north-south/api-gateway/secure-traffic/jwt/k8s) + +### Reference + +- [API gateway configuration entry reference](/consul/docs/reference/config-entry/api-gateway) +- [HTTP route configuration entry reference](/consul/docs/reference/config-entry/http-route) +- [TCP route configuration entry reference](/consul/docs/reference/config-entry/tcp-route) +- [Error messages](/consul/docs/error-messages/api-gateway) diff --git a/website/content/docs/north-south/api-gateway/k8s/enable.mdx b/website/content/docs/north-south/api-gateway/k8s/enable.mdx new file mode 100644 index 000000000000..bdc490c3303b --- /dev/null +++ b/website/content/docs/north-south/api-gateway/k8s/enable.mdx @@ -0,0 +1,133 @@ +--- +layout: docs +page_title: Enable API Gateway for Kubernetes +description: >- + Learn how to install custom resource definitions (CRDs) and configure the Helm chart so that you can run Consul API Gateway on your Kubernetes deployment. +--- + +# Enable API gateway for Kubernetes + +The Consul API gateway ships with Consul and is automatically installed when you install Consul on Kubernetes. Before you begin the installation process, verify that the environment you are deploying Consul and the API gateway in meets the requirements listed in the [Technical Specifications](/consul/docs/north-south/api-gateway/k8s/tech-specs). Refer to the [Release Notes](/consul/docs/release-notes) for any additional information about the version you are deploying. + +1. The Consul Helm chart deploys the API gateway using the configuration specified in the `values.yaml` file. 
Refer to [Helm Chart Configuration - `connectInject.apiGateway`](/consul/docs/reference/k8s/helm#apigateway) for information about the Helm chart configuration options. Create a `values.yaml` file for configuring your Consul API gateway deployment and include the following settings: + + + + + + + ```yaml + global: + name: consul + connectInject: + enabled: true + apiGateway: + manageExternalCRDs: true + ``` + + + + + + + If you are installing Consul on an OpenShift Kubernetes cluster, you must include the `global.openShift.enabled` parameter and set it to `true`. Refer to [OpenShift requirements](/consul/docs/connect/gateways/api-gateway/tech-specs#openshift-requirements) for additional information. + + + + ```yaml + global: + openshift: + enabled: true + connectInject: + enabled: true + apiGateway: + manageExternalCRDs: true + cni: + enabled: true + logLevel: info + multus: true + cniBinDir: "/var/lib/cni/bin" + cniNetDir: "/etc/kubernetes/cni/net.d" + ``` + + + + + + By default, GKE Autopilot installs [Gateway API resources](https://gateway-api.sigs.k8s.io), so we recommend customizing the `connectInject.apiGateway` stanza to accommodate the pre-installed Gateway API CRDs. + + The following working example enables both Consul Service Mesh and Consul API Gateway on GKE Autopilot. Refer to [`connectInject.agiGateway` in the Helm chart reference](https://developer.hashicorp.com/consul/docs/reference/k8s/helm#v-connectinject-apigateway) for additional information. + + + + ```yaml + global: + name: consul + connectInject: + enabled: true + apiGateway: + manageExternalCRDs: false + manageNonStandardCRDs: true + cni: + enabled: true + logLevel: debug + cniBinDir: "/home/kubernetes/bin" + cniNetDir: "/etc/cni/net.d" + server: + resources: + requests: + memory: "500Mi" + cpu: "500m" + limits: + memory: "500Mi" + cpu: "500m" + ``` + + + + + +1. Install Consul API Gateway using the standard Consul Helm chart or Consul K8s CLI specify the custom values file. Refer to the [Consul Helm chart](https://github.com/hashicorp/consul-k8s/releases) in GitHub releases for the available versions. + + + + + Refer to the official [Consul K8S CLI documentation](/consul/docs/reference/cli/consul-k8s) to find additional settings. + + ```shell-session + $ brew tap hashicorp/tap + ``` + + ```shell-session + $ brew install hashicorp/tap/consul-k8s + ``` + + ```shell-session + $ consul-k8s install -config-file=values.yaml -set global.image=hashicorp/consul:1.17.0 + ``` + + + + + Add the HashiCorp Helm repository. + + ```shell-session + $ helm repo add hashicorp https://helm.releases.hashicorp.com + ``` + + Install Consul with API Gateway on your Kubernetes cluster by specifying the `values.yaml` file. + + ```shell-session + $ helm install consul hashicorp/consul --version 1.3.0 --values values.yaml --create-namespace --namespace consul + ``` + + + + + + +[tech-specs]: /consul/docs/north-south/api-gateway/k8s/tech-specs +[rel-notes]: /consul/docs/release-notes \ No newline at end of file diff --git a/website/content/docs/north-south/api-gateway/k8s/listener.mdx b/website/content/docs/north-south/api-gateway/k8s/listener.mdx new file mode 100644 index 000000000000..17593d41742e --- /dev/null +++ b/website/content/docs/north-south/api-gateway/k8s/listener.mdx @@ -0,0 +1,73 @@ +--- +layout: docs +page_title: Deploy API gateway listeners in Kubernetes +description: Learn how to create API gateway configurations in Kubernetes that enable you to instantiate gateway instances. 
+--- + +# Deploy API gateway listeners in Kubernetes + +This topic describes how to deploy Consul API gateway listeners to Kubernetes-orchestrated environments. If you want to implement API gateway listeners on VMs, refer to [Deploy API gateway listeners to virtual machines](/consul/docs/north-south/api-gateway/vm/listener). + +## Overview + +API gateways have one or more listeners that serve as ingress points for requests to services in a Consul service mesh. Create an [API gateway configuration](/consul/docs/reference/k8s/api-gateway/gateway) and define listeners that expose ports on the endpoint for ingress. Apply the configuration to direct Kubernetes to start API gateway services. + +### Routes + +After deploying the gateway, attach HTTP or TCP [routes](/consul/docs/reference/k8s/api-gateway/routes) to listeners defined in the gateway to control how requests route to services in the network. + +### Intentions + +Configure Consul intentions to allow or prevent traffic between gateway listeners and services in the mesh. Refer to [Service intentions](/consul/docs/secure-mesh/intention) for additional information. + + +## Requirements + +1. Verify that your environment meets the requirements specified in [Technical specifications for Kubernetes](/consul/docs/north-south/api-gateway/k8s/tech-specs). +1. Verify that the Consul API Gateway CRDs were applied. Refer to [Installation](/consul/docs/north-south/api-gateway/k8s/enable) for details. +1. If your Kubernetes-orchestrated network runs on OpenShift, verify that OpenShift is enabled for your Consul installation. Refer to [OpenShift requirements](/consul/docs/connect/gateways/api-gateway/tech-specs#openshift-requirements) for additional information. + +## Define the gateway and listeners + +Create an API gateway values file that defines the gateway and listeners. + +1. Specify the following fields: + - `apiVersion`: Specifies the Kubernetes gateway API version. Must be `gateway.networking.k8s.io/v1beta1`. + - `kind`: Specifies the type of configuration entry to implement. This must be `Gateway`. + - `metadata.name`: Specify a name for the gateway configuration. The name is metadata that you can use to reference the configuration when performing Consul operations. + - `spec.gatewayClassName`: Specify the name of a `gatewayClass` configuration. Gateway classes are template-like resources in Kubernetes for instantiating gateway services. Specify `consul` to use the default gateway class shipped with Consul. Refer to the [GatewayClass configuration reference](/consul/docs/reference/k8s/api-gateway/gatewayclass) for additional information. + - `spec.listeners`: Specify a list of listener configurations. Each listener is map containing the following fields: + - `port`: Specifies the port that the listener receives traffic on. + - `name`: Specifies a unique name for the listener. + - `protocol`: You can set either `tcp` or `http` + - `allowedRoutes.namespaces`: Contains configurations for determining which namespaces are allowed to attach a route to the listener. +1. Configure any additional fields necessary for your use case, such as the namespace or admin partition. Refer to the [API gateway configuration entry reference](/consul/docs/reference/k8s/api-gateway/gateway) for additional information. +1. Save the configuration. 
+ +In the following example, the API gateway specifies an HTTP listener on port `80`: + +```yaml +apiVersion: gateway.networking.k8s.io/v1beta1 +kind: Gateway +metadata: + name: my-gateway + namespace: consul +spec: + gatewayClassName: consul + listeners: + - protocol: HTTP + port: 80 + name: http + allowedRoutes: + namespaces: + from: "All" +``` + + +## Deploy the API gateway and listeners + +Apply the configuration to your cluster using the `kubectl` command. The following command applies the configuration to the `consul` namespace: + +```shell-session +$ kubectl apply -f my-gateway.yaml -n consul +``` \ No newline at end of file diff --git a/website/content/docs/north-south/api-gateway/k8s/peer.mdx b/website/content/docs/north-south/api-gateway/k8s/peer.mdx new file mode 100644 index 000000000000..4ac3e9719cf2 --- /dev/null +++ b/website/content/docs/north-south/api-gateway/k8s/peer.mdx @@ -0,0 +1,76 @@ +--- +page_title: Route traffic to peered services +description: Learn how to configure Consul API gateway to route traffic to services connected to the mesh through a peering connection. +--- + +# Route traffic to peered services + +This topic describes how to configure Consul API Gateway to route traffic to services connected to the mesh through a cluster peering connection. + +## Requirements + +- Consul v1.14 or later +- Verify that the [requirements](/consul/docs/north-south/api-gateway/k8s/tech-specs) have been met. +- Verify that the Consul API Gateway CRDs and controller have been installed and applied. Refer to [Installation](/consul/docs/north-south/api-gateway/k8s/enable) for details. +- A peering connection must already be established between Consul clusters. Refer to [Cluster Peering on Kubernetes](/consul/docs/east-west/cluster-peering/tech-specs/k8s) for instructions. +- The Consul service that you want to route traffic to must be exported to the cluster containing your `Gateway`. Refer to [Cluster Peering on Kubernetes](/consul/docs/east-west/cluster-peering/tech-specs/k8s) for instructions. +- A `ServiceResolver` for the Consul service you want to route traffic to must be created in the cluster that contains your `Gateway`. Refer to [Service Resolver Configuration Entry](/consul/docs/reference/config-entry/service-resolver) for instructions. + +## Configuration + +Specify the following fields in your `MeshService` configuration to use this feature. Refer to the [MeshService configuration reference](/consul/docs/reference/k8s/api-gateway/meshservice) for details about the parameters. + +- [`name`](/consul/docs/connect/gateways/api-gateway/configuration/meshservice#name) +- [`peer`](/consul/docs/connect/gateways/api-gateway/configuration/meshservice#peer) + +## Example + +In the following example, routes that use `example-mesh-service` as a backend are configured to send requests to the `echo` service exported by the peered Consul cluster `cluster-02`. + + + +```yaml hideClipboard +apiVersion: consul.hashicorp.com/v1alpha1 +kind: ServiceResolver +metadata: + name: echo +spec: + redirect: + peer: cluster-02 + service: echo +``` + + + + +```yaml hideClipboard +apiVersion: api-gateway.consul.hashicorp.com/v1alpha1 +kind: MeshService +metadata: + name: example-mesh-service +spec: + name: echo + peer: cluster-02 +``` + + +After applying the `meshservice.yaml` configuration, an `HTTPRoute` may then reference `example-mesh-service` as its `backendRef`. 
+ + + +```yaml hideClipboard +apiVersion: gateway.networking.k8s.io/v1beta1 +kind: HTTPRoute +metadata: + name: example-route +spec: + ... + rules: + - backendRefs: + - group: consul.hashicorp.com + kind: MeshService + name: example-mesh-service + port: 3000 + ... +``` + \ No newline at end of file diff --git a/website/content/docs/north-south/api-gateway/k8s/reroute.mdx b/website/content/docs/north-south/api-gateway/k8s/reroute.mdx new file mode 100644 index 000000000000..bd9d2735dd04 --- /dev/null +++ b/website/content/docs/north-south/api-gateway/k8s/reroute.mdx @@ -0,0 +1,58 @@ +--- +layout: docs +page_title: Reroute HTTP requests +description: >- + Learn how to reroute HTTP requests through an API gateway to a specific path in Kubernetes. +--- + +# Reroute HTTP requests + +This topic describes how to configure Consul API Gateway to reroute HTTP requests. + +## Requirements + +1. Verify that the [requirements](/consul/docs/north-south/api-gateway/k8s/tech-specs) have been met. +1. Verify that the Consul API Gateway CRDs and controller have been installed and applied. Refer to [Installation](/consul/docs/north-south/api-gateway/k8s/enable) for details. + +## Configuration + +Specify the following fields in your `Route` configuration. Refer to the [Route configuration reference](/consul/docs/reference/k8s/api-gateway/routes) for details about the parameters. + +- [`rules.filters.type`](/consul/docs/connect/gateways/api-gateway/configuration/routes#rules-filters-type): Set this parameter to `URLRewrite` to instruct Consul API Gateway to rewrite the URL when specific conditions are met. +- [`rules.filters.urlRewrite`](/consul/docs/connect/gateways/api-gateway/configuration/routes#rules-filters-urlrewrite): Specify the `path` configuration. +- [`rules.filters.urlRewrite.path`](/consul/docs/connect/gateways/api-gateway/configuration/routes#rules-filters-urlrewrite-path): Contains the paths that incoming requests should be rewritten to based on the match conditions. + +To configure the route to accept paths with or without a trailing slash, you must make two separate routes to handle each case. + +### Example + +In the following example, requests to` /incoming-request-prefix/` are forwarded to the `backendRef` as `/prefix-backend-receives/`. As a result, requests to `/incoming-request-prefix/request-path` are received by `backendRef` as `/prefix-backend-receives/request-path`. + + + +```yaml hideClipboard +apiVersion: gateway.networking.k8s.io/v1beta1 +kind: HTTPRoute +metadata: + name: example-route + ##... +spec: + parentRefs: + - group: gateway.networking.k8s.io + kind: Gateway + name: api-gateway + rules: + - backendRefs: + . . . + filters: + - type: URLRewrite + urlRewrite: + path: + replacePrefixMatch: /prefix-backend-receives/ + type: ReplacePrefixMatch + matches: + - path: + type: PathPrefix + value: /incoming–request-prefix/ +``` + \ No newline at end of file diff --git a/website/content/docs/north-south/api-gateway/k8s/route.mdx b/website/content/docs/north-south/api-gateway/k8s/route.mdx new file mode 100644 index 000000000000..3015041766e7 --- /dev/null +++ b/website/content/docs/north-south/api-gateway/k8s/route.mdx @@ -0,0 +1,68 @@ +--- +layout: docs +page_title: Define API gateway routes on Kubernetes +description: Learn how to define and attach HTTP and TCP routes to Consul API gateway listeners in Kubernetes-orchestrated networks. 
+---
+
+# Define API gateway routes on Kubernetes
+
+This topic describes how to configure HTTP and TCP routes and attach them to Consul API gateway listeners in Kubernetes-orchestrated networks. Routes are rule-based configurations that allow external clients to send requests to services in the mesh. For information about configuring routes on virtual machines, refer to [Define API gateway routes on virtual machines](/consul/docs/north-south/api-gateway/vm/route).
+
+## Overview
+
+The following steps describe the general workflow for defining and deploying routes:
+
+1. Define a route configuration that specifies the protocol type, the name of the gateway to attach to, and rules for routing requests.
+1. Deploy the configuration to create the routes and attach them to the gateway.
+
+Routes and the gateways they are attached to are eventually-consistent objects. They provide feedback about their current state through a series of status conditions. As a result, you must manually check the route status to determine if the route successfully bound to the gateway.
+
+## Requirements
+
+Verify that your environment meets the requirements specified in [Technical specifications for Kubernetes](/consul/docs/north-south/api-gateway/k8s/tech-specs).
+
+### OpenShift
+
+If your Kubernetes-orchestrated network runs on OpenShift, verify that OpenShift is enabled for your Consul installation. Refer to [OpenShift requirements](/consul/docs/connect/gateways/api-gateway/tech-specs#openshift-requirements) for additional information.
+
+## Define routes
+
+Define route configurations and bind them to listeners configured on the gateway so that Consul can route incoming requests to services in the mesh.
+
+1. Create a configuration file and specify the following fields:
+
+   - `apiVersion`: Specifies the Kubernetes Gateway API version. This must be set to `gateway.networking.k8s.io/v1beta1`.
+   - `kind`: Set to `HTTPRoute` or `TCPRoute`.
+   - `metadata.name`: Specify a name for the route. The name is metadata that you can use to reference the configuration when performing Consul operations.
+   - `spec.parentRefs.name`: Specifies a list of API gateways that the route binds to.
+   - `spec.rules`: Specifies a list of routing rules for constructing a routing table that maps listeners to services.
+
+   Refer to the [`Routes` configuration reference](/consul/docs/reference/k8s/api-gateway/routes) for details about configuring route rules.
+
+1. Configure any additional fields necessary for your use case, such as the namespace or admin partition.
+1. Save the configuration.
+
+The following example creates a route named `example-route` associated with a listener defined in `example-gateway`.
+
+```yaml
+apiVersion: gateway.networking.k8s.io/v1beta1
+kind: HTTPRoute
+metadata:
+  name: example-route
+spec:
+  parentRefs:
+    - name: example-gateway
+  rules:
+    - backendRefs:
+        - kind: Service
+          name: echo
+          port: 8080
+```
+
+## Deploy the route configuration
+
+Apply the configuration to your cluster using the `kubectl` command.
The following command applies the configuration to the `consul` namespace: + +```shell-session +$ kubectl apply -f my-route.yaml -n consul +``` \ No newline at end of file diff --git a/website/content/docs/north-south/api-gateway/k8s/tech-specs.mdx b/website/content/docs/north-south/api-gateway/k8s/tech-specs.mdx new file mode 100644 index 000000000000..24eccd996902 --- /dev/null +++ b/website/content/docs/north-south/api-gateway/k8s/tech-specs.mdx @@ -0,0 +1,155 @@ +--- +layout: docs +page_title: API gateway for Kubernetes technical specifications +description: Learn about the requirements for installing and using the Consul API gateway for Kubernetes, including required ports, component version minimums, Consul Enterprise limitations, and compatible k8s cloud environments. +--- + +# API gateway for Kubernetes technical specifications + +This topic describes the requirements and technical specifications associated with using Consul API gateway. + +## Datacenter requirements + +Your datacenter must meet the following requirements prior to configuring the Consul API gateway: + +- HashiCorp Consul Helm chart v1.2.0 and later + +## TCP port requirements + +The following table describes the TCP port requirements for each component of the API gateway. + +| Port | Description | Component | +| ---- | ----------- | --------- | +| 20000 | Kubernetes readiness probe | Gateway instance pod | +| Configurable | Port for scraping Prometheus metrics. Disabled by default. | Gateway controller pod | + +## OpenShift requirements + +You can deploy API gateways to Kubernetes clusters managed by Red Hat OpenShift, which is a security-conscious, opinionated wrapper for Kubernetes. To enable OpenShift support, add the following parameters to your Consul values file and apply the configuration: + +```yaml + openshift: + enabled: true + ``` + +Refer to the following topics for additional information: + +- [Install Consul on OpenShift clusters with Helm](/consul/docs/k8s/installation/install#install-consul-on-openshift-clusters) +- [Install Consul on OpenShift clusters with the `consul-k8s` CLI](/consul/docs/k8s/installation/install-cli#install-consul-on-openshift-clusters) + +### Security context constraints + +OpenShift requires a security context constraint (SCC) configuration, which restricts pods to specific groups. You can create a custom SCC or use one of the default constraints. Refer to the [OpenShift documentation](https://docs.openshift.com/container-platform/4.13/authentication/managing-security-context-constraints.html) for additional information. + +By default, the SCC is set to `restricted-v2` for the `managedGatewayClass` that Consul automatically creates. The `restricted-v2` SCC is one of OpenShifts default SCCs, but you can specify a different SCC in the `openshiftSCCName` parameter: + +```yaml +connectInject: + apiGateway: + managedGatewayClass: + openshiftSCCName: "restricted-v2" +``` + +### Privileged container ports + +Containers cannot use privileged ports when OpenShift is enabled. Privileged ports are 1 through 1024, and serving applications from that range is a security risk. + +To allow gateway listeners to use privileged port numbers, specify an integer value in the `mapPrivilegedContainerPorts` field of your Consul values configuration. Consul adds the value to listener port numbers that are set to a number in the privileged container range. 
Consul maps the configured port number to the total port number so that traffic sent to the configured port number is correctly forwarded to the service. + +For example, if a gateway listener is configured to port `80` and the `mapPrivilegedContainerPorts` field is configured to `2000`, then the actual port number on the underlying container is `2080`. + +You can set the `mapPrivilegedContainerPorts` parameter in the following map in your Consul values file: + +```yaml +connectInject: + apiGateway: + managedGatewayClass: + mapPrivilegedContainerPorts: +``` + +## Supported versions of the Kubernetes gateway API specification + +Refer to the [release notes](/consul/docs/release-notes) for your version of Consul. + +## Supported Kubernetes gateway specification features + +Consul API gateways for Kubernetes support a subset of the Kubernetes Gateway API specification. For a complete list of features, including the list of gateway and route statuses and an explanation on how they +are used, refer to the [documentation in our GitHub repo](https://github.com/hashicorp/consul-api-gateway/blob/main/dev/docs/supported-features.md): + +### `GatewayClass` + +The `GatewayClass` resource describes a class of gateway configurations to use a template for creating `Gateway` resources. You can also specify custom API gateway configurations in a `GatewayClassConfig` CRD and attach them to resource to the `GatewayClass` using the `parametersRef` field. + +You must specify the `"hashicorp.com/consul-api-gateway-controller"` controller so that Consul can manage gateways generated by the `GatewayClass`. Refer to the [Kubernetes `GatewayClass` documentation](https://gateway-api.sigs.k8s.io/v1alpha2/references/spec/#gateway.networking.k8s.io/v1alpha2.GatewayClass) for additional information. + +### `Gateway` + +The `Gateway` resource is the core API gateway component. Gateways have one or more listeners that can route `HTTP`, `HTTPS`, or `TCP` traffic. You can define header-based hostname matching for listeners, but SNI is not supported. + +You can apply filters to add, remove, and set header values on incoming requests. Gateways support the `terminate` TLS mode and `core/v1/Secret` TLS certificates. Extended option support includes TLS version and cipher constraints. Refer to [Kubernetes `Gateway` resource configuration reference](/consul/docs/reference/k8s/api-gateway/gateway) for more information. + +### `HTTPRoute` + +`HTTPRoute` configurations determine HTTP paths between listeners defined on the gateway and services in the mesh. You can specify weights to load balance traffic, as well as define rules for matching request paths, headers, queries, and methods to ensure that traffic is routed appropriately. You can apply filters to add, remove, and set header values on requests sent through th route. + +Routes support the following backend types: + +- `core/v1/Service` backend types when the route maps to service registered with Consul. +- `api-gateway.consul.hashicorp.com/v1alpha1/MeshService`. + +Refer to [Kubernetes `HTTPRoute` documentation](https://gateway-api.sigs.k8s.io/v1alpha2/references/spec/#gateway.networking.k8s.io/v1alpha2.HTTPRoute) for additional information. + +### `TCPRoute` + +`TCPRoute` configurations determine TCP paths between listeners defined on the gateway and services in the mesh. Routes support the following backend types: + +- `core/v1/Service` backend types when the route maps to service registered with Consul. +- `api-gateway.consul.hashicorp.com/v1alpha1/MeshService`. 
+ +Refer to [Kubernetes `TCPRoute` documentation](https://gateway-api.sigs.k8s.io/v1alpha2/references/spec/#gateway.networking.k8s.io/v1alpha2.TCPRoute) for additional information. + +### `ReferenceGrant` + +`ReferenceGrant` resources allow resources to reference resources in other namespaces. They are required to allow references from a `Gateway` to a Kubernetes `core/v1/Secret` in a different namespace. Without a `ReferenceGrant`, `backendRefs` attached to the gateway may not be permitted. As a result, the `ReferenceGrant` sets a `ResolvedRefs` status to `False` with the reason `InvalidCertificateRef`, which prevents the gateway from becoming ready. + +`ReferenceGrant` resources are also required for references from an `HTTPRoute` or `TCPRoute` to a Kubernetes `core/v1/Service` in a different namespace. Without a `ReferenceGrant`, `backendRefs` attached to the route may not be permitted. As a result, Kubernetes sets a `ResolvedRefs` status to `False` with the reason `RefNotPermitted`, which causes the gateway listener to reject the route. + +If a route `backendRefs` becomes unpermitted, the entire route is removed from the gateway listener. A `backendRefs` can become unpermitted when you delete a `ReferenceGrant` or add a new unpermitted `backendRefs` to an existing route. + +Refer to the [Kubernetes `ReferenceGrant` documentation](https://gateway-api.sigs.k8s.io/v1alpha2/references/spec/#gateway.networking.k8s.io/v1alpha2.ReferenceGrant) for additional information. + +## Consul server deployments + +- Consul Enterprise and the community edition are both supported. +- Supported Consul Server deployment types: + - Self-Managed + - HCP Consul Dedicated + +@include 'alerts/hcp-dedicated-eol.mdx' + +### Consul feature support + +API gateways on Kubernetes support all Consul features, but you can only route traffic between multiple datacenters through peered connections. Refer to [Route Traffic to Peered Services](/consul/docs/north-south/api-gateway/k8s/peer) for additional information. WAN federation is not supported. + +## Deployment Environments + +Consul API gateway can be deployed in the following Kubernetes-based environments: + +- Standard Kubernetes environments +- AWS Elastic Kubernetes Service (EKS) +- Google Kubernetes Engine (GKE) +- Azure Kubernetes Service (AKS) + +## Resource allocations + +The following resources are allocated for each component of the API gateway. + +### Gateway controller pod + +- **CPU**: None. Either the namespace or cluster default is allocated, depending on the Kubernetes cluster configuration. +- **Memory**: None. Either the namespace or cluster default is allocated, depending on the Kubernetes cluster configuration. + +### Gateway instance pod + +- **CPU**: None. Either the namespace or cluster default is allocated, depending on the Kubernetes cluster configuration. +- **Memory**: None. Either the namespace or cluster default is allocated, depending on the Kubernetes cluster configuration. \ No newline at end of file diff --git a/website/content/docs/north-south/api-gateway/secure-traffic/encrypt.mdx b/website/content/docs/north-south/api-gateway/secure-traffic/encrypt.mdx new file mode 100644 index 000000000000..0b665a5e6582 --- /dev/null +++ b/website/content/docs/north-south/api-gateway/secure-traffic/encrypt.mdx @@ -0,0 +1,89 @@ +--- +layout: docs +page_title: Encrypt API gateway traffic on virtual machines +description: Learn how to define inline certificate config entries and deploy them to Consul. 
Inline certificate configuration entries enable you to attach TLS certificates and keys to gateway listeners so that traffic between external clients and gateway listeners is encrypted.
+---
+
+# Encrypt API gateway traffic on virtual machines
+
+This topic describes how to make TLS certificates available to API gateways so that requests between the user and the gateway endpoint are encrypted.
+
+## Requirements
+
+- Consul v1.15 or later is required to use the Consul API gateway on VMs
+  - Consul v1.19 or later is required to use the [file system certificate configuration entry](/consul/docs/reference/config-entry/file-system-certificate)
+- You must have a certificate and key from your CA
+- A Consul cluster with service mesh enabled. Refer to [`connect`](/consul/docs/reference/agent/configuration-file/service-mesh#connect)
+- Network connectivity between the machine deploying the API gateway and a
+  Consul cluster agent or server
+
+### ACL requirements
+
+If ACLs are enabled, you must present a token with the following permissions to
+configure Consul and deploy API gateways:
+
+- `mesh: read`
+- `mesh: write`
+
+Refer to [Mesh Rules](/consul/docs/security/acl/acl-rules#mesh-rules) for
+additional information about configuring policies that enable you to interact
+with Consul API gateway configurations.
+
+## Define TLS certificates
+
+1. Create a [file system certificate](/consul/docs/reference/config-entry/file-system-certificate) or [inline certificate](/consul/docs/reference/config-entry/inline-certificate) and specify the following fields:
+   - `Kind`: Specifies the type of configuration entry. This must be set to `file-system-certificate` or `inline-certificate`.
+   - `Name`: Specify the name in the [API gateway listener configuration](/consul/docs/reference/config-entry/api-gateway#listeners) to bind the certificate to that listener.
+   - `Certificate`: Specifies the filepath to the certificate on the local system or the inline public certificate as plain text.
+   - `PrivateKey`: Specifies the filepath to the private key on the local system or the inline private key as plain text.
+1. Configure any additional fields necessary for your use case, such as the namespace or admin partition. Refer to the [file system certificate configuration reference](/consul/docs/reference/config-entry/file-system-certificate) or [inline certificate configuration reference](/consul/docs/reference/config-entry/inline-certificate) for more information.
+1. Save the configuration.
+
+### Examples
+
+
+
+
+The following example defines a certificate named `my-certificate`. API gateway configurations that specify `inline-certificate` in the `Certificate.Kind` field and `my-certificate` in the `Certificate.Name` field are able to use the certificate.
+
+```hcl
+Kind = "inline-certificate"
+Name = "my-certificate"
+
+Certificate = <<EOF
+-----BEGIN CERTIFICATE-----
+...
+-----END CERTIFICATE-----
+EOF
+
+PrivateKey = <<EOF
+-----BEGIN PRIVATE KEY-----
+...
+-----END PRIVATE KEY-----
+EOF
+```
+
+
+
+The following example defines a certificate named `my-certificate`. API gateway configurations that specify `file-system-certificate` in the `Certificate.Kind` field and `my-certificate` in the `Certificate.Name` field are able to use the certificate.
+
+```hcl
+Kind = "file-system-certificate"
+Name = "my-certificate"
+Certificate = "/opt/consul/tls/api-gateway.crt"
+PrivateKey = "/opt/consul/tls/api-gateway.key"
+```
+
+
+
+
+## Deploy the configuration to Consul
+
+Run the `consul config write` command to enable listeners to use the certificate.
The following example writes a configuration called `my-certificate.hcl`: + +```shell-session +$ consul config write my-certificate.hcl +``` diff --git a/website/content/docs/north-south/api-gateway/secure-traffic/jwt/k8s.mdx b/website/content/docs/north-south/api-gateway/secure-traffic/jwt/k8s.mdx new file mode 100644 index 000000000000..ae4d7db215a8 --- /dev/null +++ b/website/content/docs/north-south/api-gateway/secure-traffic/jwt/k8s.mdx @@ -0,0 +1,226 @@ +--- +layout: docs +page_title: Use JWTs to verify requests to API gateways on Kubernetes +description: Learn how to use JSON web tokens (JWT) to verify requests from external clients to listeners on an API gateway on Kubernetes-orchestrated networks. +--- + +# Use JWTs to verify requests to API gateways on Kubernetes + +This topic describes how to use JSON web tokens (JWT) to verify requests to API gateways deployed to Kubernetes-orchestrated containers. If your API gateway is deployed to virtual machines, refer to [Use JWTs to verify requests to API gateways on VMs](/consul/docs/north-south/api-gateway/secure-traffic/jwt/vm). + + This feature is available in Consul Enterprise. + +## Overview + +You can configure API gateways to use JWTs to verify incoming requests so that you can stop unverified traffic at the gateway. You can configure JWT verification at different levels: + +- Listener defaults: Define basic defaults in a GatewayPolicy resource to apply them to all routes attached to a listener. +- HTTP route-specific settings: You can define JWT authentication settings for specific HTTP routes. Route-specific JWT settings override default listener configurations. +- Listener overrides: Define override settings in a GatewayPolicy resource that take precedence over default and route-specific configurations. Use override settings to set enforceable policies for listeners. + + +Complete the following steps to use JWTs to verify requests: + +1. Define a JWTProvider that specifies the JWT provider and claims used to verify requests to the gateway. +1. Define a GatewayPolicy that specifies default and override settings for API gateway listeners and attach it to the gateway. +1. Define a RouteAuthFilter that specifies route-specific JWT verification settings. +1. Reference the RouteAuthFilter from the HTTPRoute. +1. Apply the configurations. + + +## Requirements + +- Consul v1.17+ +- Consul on Kubernetes CLI or Helm chart v1.3.0+ +- JWT details, such as claims and provider + + +## Define a JWTProvider + +Create a `JWTProvider` CRD that defines the JWT provider to verify claims against. + +In the following example, the JWTProvider CRD contains a local JWKS. In production environments, use a production-grade JWKs endpoint instead. + + + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: JWTProvider +metadata: + name: local +spec: + issuer: local + jsonWebKeySet: + local: + jwks: "" +``` + + + +For more information about the fields you can configure in this CRD, refer to [`JWTProvider` configuration reference](/consul/docs/reference/config-entry/jwt-provider). + +## Define a GatewayPolicy + +Create a `GatewayPolicy` CRD that defines default and override settings for JWT verification. + +- `kind`: Must be set to `GatewayPolicy` +- `metadata.name`: Specifies a name for the policy. +- `spec.targetRef.name`: Specifies the name of the API gateway to attach the policy to. +- `spec.targetRef.kind`: Specifies the kind of resource to attach to the policy to. Must be set to `Gateway`. +- `spec.targetRef.group`: Specifies the resource group. 
Unless you have created a custom group, this should be set to `gateway.networking.k8s.io/v1beta1`. +- `spec.targetRef.sectionName`: Specifies a part of the gateway that the policy applies to. +- `spec.targetRef.override.jwt.providers`: Specifies a list of providers and claims used to verify requests to the gateway. The override settings take precedence over the default and route-specific JWT verification settings. +- `spec.targetRef.default.jwt.providers`: Specifies a list of default providers and claims used to verify requests to the gateway. + +The following examples configure a Gateway and the GatewayPolicy being attached to it so that every request coming through the listener must meet these conditions: + +- The request must be signed by the `local` provider +- The request must have a claim of `role` with a value of `user` unless the HTTPRoute attached to the listener overrides it + + + + + + +```yaml +apiVersion: gateway.networking.k8s.io/v1beta1 +kind: Gateway +metadata: + name: api-gateway +spec: + gatewayClassName: consul + listeners: + - protocol: HTTP + port: 30002 + name: listener-one +``` + + + + + + + + + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: GatewayPolicy +metadata: + name: gw-policy +spec: + targetRef: + name: api-gateway + sectionName: listener-one + group: gateway.networking.k8s.io/v1beta1 + kind: Gateway + override: + jwt: + providers: + - name: "local" + default: + jwt: + providers: + - name: "local" + verifyClaims: + - path: + - role + value: user +``` + + + + + + +For more information about the fields you can configure, refer to [`GatewayPolicy` configuration reference](/consul/docs/reference/k8s/api-gateway/gatewaypolicy). + +## Define a RouteAuthFilter + +Create an `RouteAuthFilter` CRD that defines overrides for the default JWT verification configured in the GatewayPolicy. + +- `kind`: Must be set to `RouteAuthFilter` +- `metadata.name`: Specifies a name for the filter. +- `metadata.namespace`: Specifies the Consul namespace the filter applies to. +- `spec.jwt.providers`: Specifies a list of providers and claims used to verify requests to the gateway. The override settings take precedence over the default and route-specific JWT verification settings. + +In the following example, the RouteAuthFilter overrides default settings set in the GatewayPolicy so that every request coming through the listener must meet these conditions: + +- The request must be signed by the `local` provider +- The request must have a `role` claim +- The value of the claim must be `admin` + + + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: RouteAuthFilter +metadata: + name: auth-filter +spec: + jwt: + providers: + - name: local + verifyClaims: + - path: + - role + value: admin +``` + + + +For more information about the fields you can configure, refer to [`RouteAuthFilter` configuration reference](/consul/docs/reference/k8s/api-gateway/routeauthfilter). + +## Attach the auth filter to your HTTP routes + +In the `filters` field of your HTTPRoute configuration, define the filter behavior that results from JWT verification. + +- `type: extensionRef`: Declare list of extension references. +- `extensionRef.group`: Specifies the resource group. Unless you have created a custom group, this should be set to `gateway.networking.k8s.io/v1beta1`. +- `extensionRef.kind`: Specifies the type of extension reference to attach to the route. Must be `RouteAuthFilter` +- `extensionRef.name`: Specifies the name of the auth filter. 
+ +The following example configures an HTTPRoute so that every request to `api-gateway-fqdn:3002/admin` must meet these conditions: + +- The request be signed by the `local` provider. +- The request must have a `role` claim. +- The value of the claim must be `admin`. + +Every other request must be signed by the `local` provider and have a claim of `role` with a value of `user`, as defined in the GatewayPolicy. + + + +```yaml +apiVersion: gateway.networking.k8s.io/v1beta1 +kind: HTTPRoute +metadata: + name: http-route +spec: + parentRefs: + - name: api-gateway + rules: + - matches: + - path: + type: PathPrefix + value: /admin + filters: + - type: ExtensionRef + extensionRef: + group: consul.hashicorp.com + kind: RouteAuthFilter + name: auth-filter + backendRefs: + - kind: Service + name: admin + port: 8080 + - matches: + - path: + type: PathPrefix + value: / + backendRefs: + - kind: Service + name: user-service + port: 8081 +``` + + \ No newline at end of file diff --git a/website/content/docs/north-south/api-gateway/secure-traffic/jwt/vm.mdx b/website/content/docs/north-south/api-gateway/secure-traffic/jwt/vm.mdx new file mode 100644 index 000000000000..f61e461e0a1b --- /dev/null +++ b/website/content/docs/north-south/api-gateway/secure-traffic/jwt/vm.mdx @@ -0,0 +1,184 @@ +--- +layout: docs +page_title: Use JWTs to verify requests to API gateways on virtual machines +description: Learn how to use JSON web tokens (JWT) to verify requests from external clients to listeners on an API gateway. +--- + +# Use JWTs to verify requests to API gateways on virtual machines + +This topic describes how to use JSON web tokens (JWT) to verify requests to API gateways on virtual machines (VM). If your services are deployed to Kubernetes-orchestrated containers, refer to [Use JWTs to verify requests to API gateways on Kubernetes](/consul/docs/north-south/api-gateway/secure-traffic/jwt/k8s). + + This feature is available in Consul Enterprise. + +## Overview + +You can configure API gateways to use JWTs to verify incoming requests so that you can stop unverified traffic at the gateway. You can configure JWT verification at different levels: + +- Listener defaults: Define basic defaults that apply to all routes attached to a listener. +- HTTP route-specific settings: You can define JWT authentication settings for specific HTTP routes. Route-specific JWT settings override default configurations. +- Listener overrides: Define override settings that take precedence over default and route-specific configurations. This enables you to set enforceable policies for listeners. + +Complete the following steps to use JWTs to verify requests: + +1. Define a JWTProvider that specifies the JWT provider and claims used to verify requests to the gateway. +1. Configure default and override settings for listeners in the API gateway configuration entry. +1. Define route-specific JWT verification settings as filters in the HTTP route configuration entries. +1. Write the configuration entries to Consul to begin verifying requests using JWTs. + +## Requirements + +- Consul 1.17 or later +- JWT details, such as claims and provider + +## Define a JWTProvider + +Create a JWTProvider config entry that defines the JWT provider to verify claims against. +In the following example, the JWTProvider CRD contains a local JWKS. In production environments, use a production-grade JWKs endpoint instead. 
+ + + +```hcl +Kind = "jwt-provider" +Name = "local" + +Issuer = "local" + +JSONWebKeySet = { + Local = { + JWKS="" + } +} +``` + + + +For more information about the fields you can configure in this CRD, refer to [`JWTProvider` configuration reference](/consul/docs/reference/config-entry/jwt-provider). + +## Configure default and override settings + +Define default and override settings for JWT verification in the [API gateway configuration entry](/consul/docs/reference/config-entry/api-gateway). + +1. Add a `default.JWT` block to the listener that you want to apply JWT verification to. Consul applies these configurations to routes attached to the listener. Refer to the [`Listeners.default.JWT`](/consul/docs/reference/config-entry/api-gateway#listeners-default-jwt) configuration reference for details. +1. Add an `override.JWT` block to the listener that you want to apply JWT verification policies to. Consul applies these configurations to all routes attached to the listener, regardless of the `default` or route-specific settings. Refer to the [`Listeners.override.JWT`](/consul/docs/reference/config-entry/api-gateway#listeners-override-jwt) configuration reference for details. +1. Apply the settings in the API gateway configuration entry. You can use the [`/config` API endpoint](/consul/api-docs/config#apply-configuration) or the [`consul config write` command](/consul/commands/config/write). + +The following examples configure a Gateway so that every request coming through the listener must meet these conditions: +- The request must be signed by the `local` provider +- The request must have a claim of `role` with a value of `user` unless the HTTPRoute attached to the listener overrides it + + + +```hcl +Kind = "api-gateway" +Name = "api-gateway" +Listeners = [ + { + Name = "listener-one" + Port = 9001 + Protocol = "http" + Override = { + JWT = { + Providers = [ + { + Name = "local" + } + ] + } + } + default = { + JWT = { + Providers = [ + { + Name = "local" + VerifyClaims = [ + { + Path = ["role"] + Value = "pet" + } + ] + } + ] + } + } + } +] +``` + + + +## Configure verification for specific HTTP routes + +Define filters to enable route-specific JWT verification settings in the [HTTP route configuration entry](/consul/docs/reference/config-entry/http-route). + +1. Add a `JWT` configuration to the `rules.filter` block. Route-specific configurations that overlap the [default settings ](/consul/docs/reference/config-entry/api-gateway#listeners-default-jwt) in the API gateway configuration entry take precedence. Configurations defined in the [listener override settings](/consul/docs/reference/config-entry/api-gateway#listeners-override-jwt) take the highest precedence. +1. Apply the settings in the API gateway configuration entry. You can use the [`/config` API endpoint](/consul/api-docs/config#apply-configuration) or the [`consul config write` command](/consul/commands/config/write). + +The following example configures an HTTPRoute so that every request to `api-gateway-fqdn:3002/admin` must meet these conditions: +- The request be signed by the `local` provider. +- The request must have a `role` claim. +- The value of the claim must be `admin`. + +Every other request must be signed by the `local` provider and have a claim of `role` with a value of `user`, as defined in the Gateway listener. 
+ + + +```hcl +Kind = "http-route" +Name = "api-gateway-route" +Parents = [ + { + SectionName = "listener-one" + Name = "api-gateway" + Kind = "api-gateway" + }, +] +Rules = [ + { + Matches = [ + { + Path = { + Match = "prefix" + Value = "/admin" + } + } + ] + Filters = { + JWT = { + Providers = [ + { + Name = "local" + VerifyClaims = [ + { + Path = ["role"] + Value = "admin" + } + ] + } + ] + } + } + Services = [ + { + Name = "admin-service" + } + ] + }, + { + Matches = [ + { + Path = { + Match = "prefix" + Value = "/" + } + } + ] + Services = [ + { + Name = "user-service" + } + ] + }, +] +``` + + diff --git a/website/content/docs/north-south/api-gateway/vm/listener.mdx b/website/content/docs/north-south/api-gateway/vm/listener.mdx new file mode 100644 index 000000000000..a8619f30de0d --- /dev/null +++ b/website/content/docs/north-south/api-gateway/vm/listener.mdx @@ -0,0 +1,113 @@ +--- +layout: docs +page_title: Deploy API gateway listeners to virtual machines +description: Learn how to configure and Consul API gateways and gateway listeners on virtual machines so that you can enable ingress requests to services in your service mesh in VM environments. +--- + +# Deploy API gateway listeners to virtual machines + +This topic describes how to deploy Consul API gateway listeners to networks that operate in virtual machine (VM) environments. If you want to implement API gateway listeners in a Kubernetes environment, refer to [Deploy API gateway listeners to Kubernetes](/consul/docs/north-south/api-gateway/k8s/listener). + +## Overview + +API gateways have one or more listeners that serve as ingress points for requests to services in a Consul service mesh. Create an [API gateway configuration entry](/consul/docs/reference/config-entry/api-gateway) and define listeners that expose ports on the endpoint for ingress. + +The following steps describe the general workflow for deploying a Consul API gateway to a VM environment: + +1. Create an API gateway configuration entry. The configuration entry includes listener configurations and references to TLS certificates. +1. Deploy the API gateway configuration entry to create the listeners. + +### Encryption + +To encrypt traffic between the external client and the service that the API gateway routes traffic to, define an inline certificate configuration and attach it to your listeners. Refer to [Encrypt API gateway traffic on virtual machines](/consul/docs/north-south/api-gateway/secure-traffic/encrypt) for additional information. + +### Routes + +After deploying the gateway, attach [HTTP](/consul/docs/reference/config-entry/http-route) routes and [TCP](/consul/docs/reference/config-entry/tcp-route) routes to listeners defined in the gateway to control how requests route to services in the network. Refer to [Define API gateway routes on VMs](/consul/docs/north-south/api-gateway/vm/route) for additional information. + +## Requirements + +The following requirements must be satisfied to use API gateways on VMs: + +- Consul 1.15 or later +- A Consul cluster with service mesh enabled. 
Refer to [`connect`](//consul/docs/reference/agent/configuration-file/service-mesh#connect) +- Network connectivity between the machine deploying the API Gateway and a + Consul cluster agent or server + +### ACL requirements + +If ACLs are enabled, you must present a token with the following permissions to +configure Consul and deploy API gateways: + +- `mesh: read` +- `mesh: write` + +Refer to [Mesh Rules](/consul/docs/security/acl/acl-rules#mesh-rules) for +additional information about configuring policies that enable you to interact +with Consul API gateway configurations. + +## Define the gateway and listeners + +Create an API gateway configuration entry that defines listeners and TLS certificates +in the mesh. + +1. Specify the following fields: + - `Kind`: Specifies the type of configuration entry to implement. This must be `api-gateway`. + - `Name`: Specify a name for the gateway configuration. The name is metadata that you can use to reference the configuration entry when performing Consul operations. + - `Listeners`: Specify a list of listener configurations. Each listener is map containing the following fields: + - `Port`: Specifies the port that the listener receives traffic on. + - `Name`: Specifies a unique name for the listener. + - `Protocol`: You can set either `tcp` or `http` + - `TLS`: Defines TLS encryption configurations for the listener. + + Refer to the [API gateway configuration entry reference](/consul/docs/reference/config-entry/api-gateway) for details on how to define fields in the `Listeners` block. +1. Configure any additional fields necessary for your use case, such as the namespace or admin partition. Refer to the [API gateway configuration entry reference](/consul/docs/reference/config-entry/api-gateway) for additional information. +1. Save the configuration. + +In the following example, the API gateway specifies an HTTP listener on port `8443`. It also requires an inline-certificate configuration entry named `my-certificate` that contains a valid certificate and private key pair: + +```hcl +Kind = "api-gateway" +Name = "my-gateway" + +// Each listener configures a port which can be used to access the Consul cluster +Listeners = [ + { + Port = 8443 + Name = "my-http-listener" + Protocol = "http" + TLS = { + Certificates = [ + { + Kind = "inline-certificate" + Name = "my-certificate" + } + ] + } + } +] +``` + +Refer to [API Gateway Configuration Reference](/consul/docs/reference/config-entry/api-gateway) for +information about all configuration fields. + +Gateways and routes are eventually-consistent objects that provide feedback +about their current state through a series of status conditions. As a result, +you must manually check the route status to determine if the route +bound to the gateway successfully. + +## Deploy the API gateway and listeners + +Use the `consul config write` command to implement the API gateway configuration entry. The following command applies the configuration entry for the main gateway object: + +```shell-session +$ consul config write gateways.hcl +``` + +Run the following command to deploy an API gateway instance: + +```shell-session +$ consul connect envoy -gateway api -register -service my-api-gateway +``` + +The command directs Consul to configure Envoy as an API gateway. Gateways and routes are eventually-consistent objects that provide feedback about their current state through a series of status conditions. 
As a result, you must manually check the route status to determine if the route successfully bound to the gateway.
diff --git a/website/content/docs/north-south/api-gateway/vm/route.mdx b/website/content/docs/north-south/api-gateway/vm/route.mdx
new file mode 100644
index 000000000000..b4d527774cc8
--- /dev/null
+++ b/website/content/docs/north-south/api-gateway/vm/route.mdx
@@ -0,0 +1,121 @@
+---
+layout: docs
+page_title: Define API gateway routes on virtual machines
+description: Learn how to define and attach HTTP and TCP routes to Consul API gateway listeners so that requests from external clients can reach services in the mesh.
+---
+
+# Define API gateway routes on virtual machines
+
+This topic describes how to configure HTTP and TCP routes and attach them to Consul API gateway listeners. Routes are rule-based configurations that allow external clients to send requests to services in the mesh.
+
+## Overview
+
+The following steps describe the general workflow for defining and deploying routes:
+
+1. Define routes in an HTTP or TCP route configuration entry. The configuration entry includes rules for routing requests, target services in the mesh for the traffic, and the name of the gateway to attach to.
+1. Deploy the configuration entry to create the routes and attach them to the gateway.
+
+Routes and the gateways they are attached to are eventually-consistent objects. They provide feedback about their current state through a series of status conditions. As a result, you must manually check the route status to determine if the route is bound to the gateway successfully.
+
+## Requirements
+
+The following requirements must be satisfied to use API gateways on VMs:
+
+- Consul 1.15 or later
+- A Consul cluster with service mesh enabled. Refer to [`connect`](/consul/docs/reference/agent/configuration-file/service-mesh#connect)
+- Network connectivity between the machine deploying the API gateway and a Consul cluster agent or server
+
+### ACL requirements
+
+If ACLs are enabled, you must present a token with the following permissions to
+configure Consul and deploy API gateway routes:
+
+- `mesh: read`
+- `mesh: write`
+
+Refer to [Mesh Rules](/consul/docs/security/acl/acl-rules#mesh-rules) for
+additional information about configuring policies that enable you to interact
+with Consul API gateway configurations.
+
+## Define the routes
+
+Define route configurations and bind them to listeners configured on the gateway so that Consul can route incoming requests to services in the mesh.
+
+1. Create a route configuration entry file and specify the following settings:
+   - `Kind`: Set to `http-route` or `tcp-route`.
+   - `Name`: Specify a name for the route. The name is metadata that you can use to reference the configuration when performing Consul operations.
+   - `Parents`: Specifies a list of API gateways that the route binds to.
+   - `Rules`: If you are configuring HTTP routes, define a list of routing rules for constructing a routing table that maps listeners to services. Each member of the list is a map that may contain the following fields:
+     - `Filters`
+     - `Matches`
+     - `Services`
+
+   Refer to the [HTTP route configuration entry](/consul/docs/reference/config-entry/http-route) and [TCP route configuration entry](/consul/docs/reference/config-entry/tcp-route) reference for details about configuring routes.
+
+1. Configure any additional fields necessary for your use case, such as the namespace or admin partition.
+1. Save the configuration.
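+
+TCP routes follow the same workflow but have a simpler schema because they do not support request matching or filters. The following minimal sketch forwards all traffic from a TCP listener to a single service; the gateway, listener, and service names are illustrative placeholders:
+
+```hcl
+Kind = "tcp-route"
+Name = "my-tcp-route"
+
+# Placeholder gateway and listener names; replace them with the values from
+# your api-gateway configuration entry.
+Parents = [
+  {
+    Kind        = "api-gateway"
+    Name        = "my-gateway"
+    SectionName = "my-tcp-listener"
+  }
+]
+
+# Placeholder destination service in the mesh.
+Services = [
+  {
+    Name = "database"
+  }
+]
+```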
+ + +The following example routes requests from the listener on the API gateway at port `8443` to services in Consul based on the path of the request. When an incoming request starts at path `/`, Consul forwards 90 percent of the requests to the `ui` service and 10 percent to `experimental-ui`. Consul also forwards requests starting with `/api` to `api`. + +```hcl +Kind = "http-route" +Name = "my-http-route" + +// Rules define how requests will be routed +Rules = [ + // Send all requests to UI services with 10% going to the "experimental" UI + { + Matches = [ + { + Path = { + Match = "prefix" + Value = "/" + } + } + ] + Services = [ + { + Name = "ui" + Weight = 90 + }, + { + Name = "experimental-ui" + Weight = 10 + } + ] + }, + // Send all requests that start with the path `/api` to the API service + { + Matches = [ + { + Path = { + Match = "prefix" + Value = "/api" + } + } + ] + Services = [ + { + Name = "api" + } + ] + } +] + +Parents = [ + { + Kind = "api-gateway" + Name = "my-gateway" + SectionName = "my-http-listener" + } +] +``` + +## Deploy the route configuration + +Run the `consul config write` command to attach the routes to the specified gateways. The following example writes a configuration called `my-http-route.hcl`: + +```shell-session +$ consul config write my-http-route.hcl +``` diff --git a/website/content/docs/north-south/index.mdx b/website/content/docs/north-south/index.mdx new file mode 100644 index 000000000000..755aa0435a17 --- /dev/null +++ b/website/content/docs/north-south/index.mdx @@ -0,0 +1,33 @@ +--- +page_title: Secure network access north/south +description: API gateways and ingress gateways are types of service mesh proxies that you can use to securely allow systems outside the Consul service mesh to access services in the mesh. Learn how API and ingress gateways can help you enable access to services registered by Consul. +layout: docs +--- + +# Secure network access north/south + +This topic provides an overview of the Consul components that securely allow systems outside the service mesh to access services inside the mesh. Network traffic that connects services inside the mesh to external clients or services is referred to as _north-south traffic_. + +For information about intra-mesh, or _east-west traffic_ in your service mesh, refer to [Expand network east/west overview](/consul/docs/east-west). + +## Introduction + +You can define points of ingress to the service mesh using either API gateways or ingress gateways. These gateways allow external network clients to access applications and services running in a Consul datacenter. + +API gateways forward requests from clients to specific destinations based on path or request protocol. Ingress gateways are Consul's legacy capability for ingress. We recommend using API gateways instead of ingress gateways. 
+ +## API gateways + +@include 'text/descriptions/api-gateway.mdx' + +## Ingress gateways + +@include 'text/descriptions/ingress-gateway.mdx' + +## Terminating gateways + +@include 'text/descriptions/terminating-gateway.mdx' + +## Guidance + +@include 'text/guidance/north-south.mdx' \ No newline at end of file diff --git a/website/content/docs/north-south/ingress-controller.mdx b/website/content/docs/north-south/ingress-controller.mdx new file mode 100644 index 000000000000..78092fadb86d --- /dev/null +++ b/website/content/docs/north-south/ingress-controller.mdx @@ -0,0 +1,94 @@ +--- +layout: docs +page_title: Configure Ingress Controllers for Consul on Kubernetes +description: >- + Ingress controllers are pluggable components that must be configured in k8s in order to use the Ingress resource. Learn how to deploy sidecars with the controller to secure its communication with Consul, review common configuration issues, and find links to example configurations. +--- + +# Configure ingress controllers for Consul on Kubernetes + +-> This topic requires Consul 1.10+, Consul-k8s 0.26+, Consul-helm 0.32+ configured with [Transparent Proxy](/consul/docs/connect/transparent-proxy) mode enabled. In addition, this topic assumes that the reader is familiar with [Ingress Controllers](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/) on Kubernetes. + +~> If you are looking for a fully supported solution for ingress traffic into Consul Service Mesh, please visit [Consul API Gateway](/consul/docs/api-gateway) for instruction on how to install Consul API Gateway along with Consul on Kubernetes. + +This page describes a general approach for integrating Ingress Controllers with Consul on Kubernetes to secure traffic from the Controller +to the backend services by deploying sidecars along with your Ingress Controller. This allows Consul to transparently secure traffic from the ingress point through the entire traffic flow of the service. + +A few steps are generally required to enable an Ingress controller to join the mesh and pass traffic through to a service: + +* Enable connect-injection via an annotation on the Ingress Controller's deployment: `consul.hashicorp.com/connect-inject` is `true`. + +* Using the following annotations on the Ingress controller's deployment, set up exclusion rules for its ports. + * [`consul.hashicorp.com/transparent-proxy-exclude-inbound-ports`](/consul/docs/k8s/annotations-and-labels#consul-hashicorp-com-transparent-proxy-exclude-inbound-ports) - Provides the ability to exclude a list of ports for +inbound traffic that the service exposes from redirection. Typical configurations would require all inbound service ports +for the controller to be included in this list. + * [`consul.hashicorp.com/transparent-proxy-exclude-outbound-ports`](/consul/docs/k8s/annotations-and-labels#consul-hashicorp-com-transparent-proxy-exclude-outbound-ports) - Provides the ability to exclude a list of ports for +outbound traffic that the service exposes from redirection. These would be outbound ports used by your ingress controller + which expect to skip the mesh and talk to non-mesh services. + * [`consul.hashicorp.com/transparent-proxy-exclude-outbound-cidrs`](/consul/docs/k8s/annotations-and-labels#consul-hashicorp-com-transparent-proxy-exclude-outbound-cidrs) - Provides the ability to exclude a list of CIDRs that +the service communicates with for outbound requests from redirection. 
It is somewhat common for an Ingress controller
+to make API calls to the Kubernetes service for service and endpoint management. As such, including the ClusterIP of the
+Kubernetes service in this list is common.
+
+~> Note: Depending on which ingress controller you use, these stanzas may differ in name and layout, but it is important to apply
+these annotations to the *pods* of your *ingress controller*.
+  ```yaml
+  # An example list of pod annotations for an ingress controller. Apply these to the controller's pods, not to the deployment itself.
+  podAnnotations:
+    consul.hashicorp.com/connect-inject: "true"
+    # Add the container ports used by your ingress controller
+    consul.hashicorp.com/transparent-proxy-exclude-inbound-ports: "80,8000,9000,8443"
+    # And the CIDR of your Kubernetes API: `kubectl get svc kubernetes --output jsonpath='{.spec.clusterIP}'`
+    consul.hashicorp.com/transparent-proxy-exclude-outbound-cidrs: "10.108.0.1/32"
+  ```
+
+* If the Ingress controller acts as a LoadBalancer and routes directly to Pod IPs instead of the ClusterIP of your Kubernetes Services,
+a `ServiceDefaults` CRD must be applied to *each backend service* to allow it to use the `dialedDirectly` feature. This feature is disabled by default.
+
+  ```yaml
+  # Example ServiceDefaults config entry
+  apiVersion: consul.hashicorp.com/v1alpha1
+  kind: ServiceDefaults
+  metadata:
+    name: backend
+  spec:
+    transparentProxy:
+      dialedDirectly: true
+  ```
+
+* An intention from the Ingress Controller to the backend application must also be applied. This can be either an L4 or an L7 intention:
+
+  ```yaml
+  # An example L4 intention. An L7 intention can also be used to control access to specific routes.
+  apiVersion: consul.hashicorp.com/v1alpha1
+  kind: ServiceIntentions
+  metadata:
+    name: ingress-backend
+  spec:
+    destination:
+      name: backend
+    sources:
+      - name: ingress
+        action: allow
+  ```
+
+### Common configuration problems
+- The Ingress Controller's ServiceAccount name and Service name differ by default on some platforms. Consul on Kubernetes requires the
+ServiceAccount and Service to have the same name. To resolve this, explicitly set the ServiceAccount name to match the ingress
+controller's service name using its respective Helm configuration.
+
+- If the Ingress Controller does not have the correct inbound ports excluded, it fails to start, the ingress service is not created,
+and the controller hangs in the init container. The required container ports are not
+always readily available in the Helm charts. To resolve this, examine the ingress controller's
+underlying pod spec, look for the required container ports, and add them to the `consul.hashicorp.com/transparent-proxy-exclude-inbound-ports`
+annotation on the ingress controller deployment.
+
+### Examples
+The following example configurations can be used as reference points when setting up your own ingress controller configuration.
+They were used in development environments and are not fully supported, but they should give you an idea of how to extend the information
+above to your own use cases.
+ +- [Traefik Consul example - kschoche](https://github.com/kschoche/traefik-consul) +- [Kong and Traefik Ingress Controller examples - joatmon08](https://github.com/joatmon08/consul-k8s-ingress-controllers) +- [NGINX Ingress Controller example](https://github.com/hashicorp-education/consul-k8s-nginx-ingress-controller) + diff --git a/website/content/docs/north-south/ingress-gateway/external.mdx b/website/content/docs/north-south/ingress-gateway/external.mdx new file mode 100644 index 000000000000..7afe03726558 --- /dev/null +++ b/website/content/docs/north-south/ingress-gateway/external.mdx @@ -0,0 +1,253 @@ +--- +layout: docs +page_title: Serve custom TLS certificates from an external service +description: Learn how to configure ingress gateways to serve TLS certificates from an external service to using secret discovery service. The SDS feature is designed for developers building integrations with custom TLS management solutions. +--- + +# Serve custom TLS certificates from an external service + +This page describes how to configure ingress gateways to serve TLS certificates sourced from an external service to inbound traffic using secret discovery service (SDS). SDS is a low-level feature designed for developers building integrations with custom TLS management solutions. For instructions on more common ingress gateway implementations, refer to [Create and manage ingress gateways on virtual machines](/consul/docs/north-south/ingress-gateway/vm). + +## Overview + +The following process describes the general procedure for configuring ingress gateways to serve TLS certificates sourced from external services: + +1. Configure static SDS clusters in the ingress gateway service definition. +1. Register the service definition. +1. Configure TLS client authentication. +1. Start Envoy. +1. Configure SDS settings in an ingress gateway configuration entry. +1. Register the ingress gateway configuration entry with Consul. + +## Requirements + +- The external service must implement Envoy's [gRPC secret discovery service (SDS) API](https://www.envoyproxy.io/docs/envoy/latest/configuration/security/secret). +- You should have some familiarity with Envoy configuration and the SDS protocol. +- The [`connect.enabled` parameter](/consul/docs/reference/agent/configuration-file/service-mesh#connect) must be set to `true` for all server agents in the Consul datacenter. +- The [`ports.grpc` parameter](/consul/docs/reference/agent/configuration-file/service-mesh#connect) must be configured for all server agents in the Consul datacenter. + +### ACL requirements + +If ACLs are enabled, you must present a token when registering ingress gateways that grant the following permissions: + +- `service:write` for the ingress gateway's service name +- `service:read` for all services in the ingress gateway's configuration entry +- `node:read` for all nodes of the services in the ingress gateway's configuration entry. + +These privileges authorize the token to route communications to other services in the mesh. If the Consul client agent on the gateway's node is not configured to use the default gRPC port, `8502`, then the gateway's token must also provide `agent:read` for its node's name in order to discover the agent's gRPC port. gRPC is used to expose Envoy's xDS API to Envoy proxies. + +## Configure static SDS clusters + +You must define one or more additional static clusters in the ingress gateway service definition for each Envoy proxy associated with the gateway. 
The additional clusters define how Envoy should connect to the required SDS services. + +Configure the static clusters in the [`Proxy.Config.envoy_envoy_extra_static_clusters_json`](/consul/docs/reference/proxy/envoy#envoy_extra_static_clusters_json) parameter in the service definition. + +The clusters must provide connection information and any necessary authentication information, such as mTLS credentials. + +You must manually register the ingress gateway with Consul proxy to define extra clusters in Envoy's bootstrap configuration. You can not use the `-register` flag with `consul connect envoy -gateway=ingress` to automatically register the proxy to define static clusters. + +In the following example, the `public-ingress` gateway includes a static cluster named `sds-cluster` that specifies paths to the SDS certificate and SDS certification validation files: + + + + +```hcl +Services { + Name = "public-ingress" + Kind = "ingress-gateway" + + Proxy { + Config { + envoy_extra_static_clusters_json = < + +Refer to the [Envoy documentation](https://www.envoyproxy.io/docs/envoy/v1.17.2/api-v3/config/bootstrap/v3/bootstrap.proto#envoy-v3-api-field-config-bootstrap-v3-bootstrap-staticresources-clusters) for details about configuration parameters for SDS clusters. + +## Register the ingress gateway service definition + +Issue the `consul services register` command on the Consul agent on the Envoy proxy's node to register the service. The following example command registers an ingress gateway proxy from a `public-ingress.hcl` file: + +```shell-session +$ consul services register public-ingress.hcl +``` + +Refer to [Register services and health checks](/consul/docs/register/service/vm) for additional information about registering services in Consul. + +## Configure TLS client authentication + +Store TLS client authentication files, certificate files, and keys on disk where the Envoy proxy runs and ensure that they are available to Consul. Refer to the [Envoy documentation](https://www.envoyproxy.io/docs/envoy/latest/api-v3/bootstrap/bootstrap) for details on configuring authentication files. + +The following example specifies certificate chain: + + + + +```json +{ + "resources": [ + { + "@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.Secret", + "name": "tls_sds", + "tls_certificate": { + "certificate_chain": { + "filename": "/certs/sds-client-auth.crt" + }, + "private_key": { + "filename": "/certs/sds-client-auth.key" + } + } + } + ] +} +``` + + + +The following example specifies the validation context: + + + +```json +{ + "resources": [ + { + "@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.Secret", + "name": "validation_context_sds", + "validation_context": { + "trusted_ca": { + "filename": "/certs/sds-ca.crt" + } + } + } + ] +} +``` + + + +## Start Envoy + +Issue the `consul connect envoy` command to bootstrap Envoy. The following example starts Envoy and registers it as a service called `public-ingress`: + +```shell-session +$ ​​consul connect envoy -gateway=ingress -service public-ingress +``` + +Refer to [Consul Connect Envoy](/consul/commands/connect/envoy) for additional information about using the `consul connect envoy` command. + +## Define an ingress gateway configuration entry + +Create an ingress gateway configuration entry that enables the gateway to use certificates from SDS. The configuration entry also maps downstream ingress listeners to upstream services. 
Configure the following fields: + +- [`Kind`](/consul/docs/reference/config-entry/ingress-gateway#kind): Set the value to `ingress-gateway`. +- [`Name`](/consul/docs/reference/config-entry/ingress-gateway#name): Consul applies the configuration entry settings to ingress gateway proxies with names that match the `Name` field. +- [`TLS`](/consul/docs/reference/config-entry/ingress-gateway#tls): The main `TLS` parameter for the configuration entry holds the SDS configuration. You can also specify TLS configurations per listener and per service. + - [`TLS.SDS`](/consul/docs/reference/config-entry/ingress-gateway#tls-sds): The `SDS` map includes the following configuration settings: + - [`ClusterName`](/consul/docs/reference/config-entry/ingress-gateway#tls-sds-clustername): Specifies the name of the cluster you specified when [configuring the SDS cluster](#configure-static-SDS-clusters). + - [`CertResource`](/consul/docs/reference/config-entry/ingress-gateway#tls-sds-certresource): Specifies the name of the certificate resource to load. +- [`Listeners`](/consul/docs/reference/config-entry/ingress-gateway#listeners): Specify one or more listeners. + - [`Listeners.Port`](/consul/docs/reference/config-entry/ingress-gateway#listeners-port): Specify a port for the listener. Each listener is uniquely identified by its port number. + - [`Listeners.Protocol`](/consul/docs/reference/config-entry/ingress-gateway#listeners-protocol): The default protocol is `tcp`, but you must specify the protocol used by the services you want to allow traffic from. + - [`Listeners.Services`](/consul/docs/reference/config-entry/ingress-gateway#listeners-services): The `Services` field contains the services that you want to expose to upstream services. The field contains several options and sub-configurations that enable granular control over ingress traffic, such as health check and TLS configurations. + +For Consul Enterprise service meshes, you may also need to configure the [`Partition`](/consul/docs/reference/config-entry/ingress-gateway#partition) and [`Namespace`](/consul/docs/reference/config-entry/ingress-gateway#namespace) fields for the gateway and for each exposed service. + +Refer to [Ingress gateway configuration entry reference](/consul/docs/reference/config-entry/ingress-gateway) for details about the supported parameters. + +The following example directs Consul to retrieve `example.com-public-cert` certificates from an SDS cluster named `sds-cluster` and serve them to all listeners: + + + +```hcl +Kind = "ingress-gateway" +Name = "public-ingress" + +TLS { + SDS { + ClusterName = "sds-cluster" + CertResource = "example.com-public-cert" + } +} + +Listeners = [ + { + Port = 8443 + Protocol = "http" + Services = ["*"] + } +] +``` + + + +## Register the ingress gateway configuration entry + +You can register the configuration entry using the [`consul config` command](/consul/commands/config) or by calling the [`/config` API endpoint](/consul/api-docs/config). + +The following example registers an ingress gateway configuration entry named `public-ingress-cfg.hcl` that is stored on the local system: + +```shell-session +$ consul config write public-ingress-cfg.hcl +``` + +The Envoy instance starts a listener on the port specified in the configuration entry and fetches the TLS certificate named from the SDS server. 
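+
+To verify that Consul stored the entry, you can optionally read it back. This is a quick check rather than a required step; the following command assumes the `public-ingress` configuration entry shown above:
+
+```shell-session
+$ consul config read -kind ingress-gateway -name public-ingress
+```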
diff --git a/website/content/docs/north-south/ingress-gateway/index.mdx b/website/content/docs/north-south/ingress-gateway/index.mdx new file mode 100644 index 000000000000..2c911112f760 --- /dev/null +++ b/website/content/docs/north-south/ingress-gateway/index.mdx @@ -0,0 +1,47 @@ +--- +layout: docs +page_title: Ingress gateway overview +description: >- + Ingress gateways enable you to connect external services to services in your mesh. Ingress gateways are a type of proxy that listens for requests from external network locations and route authorized traffic to destinations in the service mesh. +--- + +# Ingress gateway overview + +This topic provides an overview of ingress gateways in Consul. An ingress gateway is a type of proxy that enables network connectivity from external services to services inside the mesh. They listen for external requests and route authorized traffic to instances in the service mesh. Refer to [Access services overview](/north-south/) for additional information about connecting external clients to services in the mesh. + + + +Ingress gateways are deprecated. Use [Consul API gateways](/north-south/api-gateway) to secure service mesh ingress instead. + + + +## Workflow + +The following workflow describes how to deploy ingress gateways to service meshes on virtual machines (VM) and Kubernetes (K8s): + +1. For networks operating on K8s, enable ingress gateways in the Helm chart configuration when installing Consul. +1. Define listeners and the services they expose to external clients. When ACLs are enabled, you must also define service intentions to allow traffic to the destination services. +1. Register the ingress gateway service with Consul and start the ingress gateway proxy service. + +You can also configure ingress gateways to retrieve and serve custom TLS certificates from external systems. This functionality is designed to help you integrate with custom TLS management software. Refer to [Serve custom TLS certificates from an external service](/consul/docs/north-south/ingress-gateway/external) for additional information. + +The following diagram describes how external traffic reaches services inside the mesh through an ingress gateway: + +![How external traffic reaches services inside the mesh through an ingress gateway](/img/ingress-gateways.png) + +## Guidance + +Refer to the following resources to help you enable secure access to service mesh resources from external clients. + +### Usage + +- [Create ingress gateways on VMs](/consul/docs/north-south/ingress-gateway/vm) +- [Create ingress gateways on K8s](/consul/docs/north-south/ingress-gateway/k8s) +- [Serve TLS certificates from external services](/consul/docs/north-south/ingress-gateway/external) + +### Reference + +- [Ingress gateway configuration](/consul/docs/reference/config-entry/ingress-gateway) +- [Helm chart configuration](/consul/docs/reference/k8s/helm) +- [Service intentions configuration](/consul/docs/reference/config-entry/service-intentions) + diff --git a/website/content/docs/north-south/ingress-gateway/k8s.mdx b/website/content/docs/north-south/ingress-gateway/k8s.mdx new file mode 100644 index 000000000000..dd2313c8656f --- /dev/null +++ b/website/content/docs/north-south/ingress-gateway/k8s.mdx @@ -0,0 +1,298 @@ +--- +layout: docs +page_title: Create and manage ingress gateways on Kubernetes +description: Learn how to create and manage an ingress gateway on K8s. 
Ingress gateways are Consul service mesh constructs that listen for requests from external network locations and route authorized traffic to destinations in the service mesh.
+---
+
+# Create and manage ingress gateways on Kubernetes
+
+This topic describes how to add ingress gateways to your Consul service mesh in Kubernetes environments. Refer to [Create and manage ingress gateways on virtual machines](/consul/docs/north-south/ingress-gateway/vm) for instructions on how to implement ingress gateways in virtual machine (VM) environments.
+
+
+
+Ingress gateways are deprecated and will no longer be updated. Ingress gateways are fully supported
+in this version, but we will remove them in a future release of Consul. Use [Consul API gateways](/consul/docs/north-south/api-gateway) instead.
+
+
+
+## Overview
+
+Ingress gateways enable services outside the service mesh to send traffic to services in the mesh. Refer to [Ingress gateways overview](/consul/docs/north-south/ingress-gateway/) for additional information about ingress gateways.
+
+Complete the following steps to add an ingress gateway:
+
+1. Enable ingress gateways in the Helm chart configuration
+1. Configure ingress gateway listeners and destinations
+1. If ACLs are enabled, define service intentions
+1. Deploy your application to Kubernetes
+1. Connect to your application
+
+## Enable ingress gateways in the Helm chart
+
+1. Create a custom YAML values file and configure the `ingressGateways` object. You can specify one or more gateways for your environment in the `ingressGateways.gateways` field. The only required field for each entry is `ingressGateways.gateways.name`. Refer to the [Helm chart reference](/consul/docs/reference/k8s/helm#ingressgateways) for details about the supported fields.
+
+   The following example configuration creates an ingress gateway as a public, unauthenticated LoadBalancer in your cluster:
+
+
+
+   ```yaml
+   global:
+     name: consul
+   connectInject:
+     enabled: true
+   ingressGateways:
+     enabled: true
+     gateways:
+       - name: ingress-gateway
+         service:
+           type: LoadBalancer
+   ```
+
+
+
+1. Deploy the Helm chart and pass the custom YAML file that contains your environment configuration. We recommend verifying that the latest Consul Helm chart is installed. Refer to [Consul on Kubernetes installation overview](/consul/docs/deploy/server/k8s) for additional information.
+
+   The following example installs Consul 1.17.0 using a `values.yaml` configuration file:
+
+   ```shell-session
+   $ helm install consul -f values.yaml hashicorp/consul --version 1.17.0 --wait --debug
+   ```
+
+## Configure gateway listeners and destinations
+
+Create an ingress gateway custom resource and specify listeners and destination services in the configuration. The `name` field must match the name specified when creating the gateway in the Helm chart.
+Refer to the [ingress gateway configuration reference](/consul/docs/reference/config-entry/ingress-gateway) for details.
+ +The following example creates an ingress gateway that listens for HTTP traffic on port `8080` and routes traffic to `static-server`: + + + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: IngressGateway +metadata: + name: ingress-gateway +spec: + listeners: + - port: 8080 + protocol: http + services: + - name: static-server +``` + + + + +Apply the `IngressGateway` resource with `kubectl apply`: + +```shell-session +$ kubectl apply --filename ingress-gateway.yaml +ingressgateway.consul.hashicorp.com/ingress-gateway created +``` + +### Configure the default protocol + +Destination services mapped to the gateway listeners must use the same protocol specified in the ingress gateway custom resource. +You can create and apply a service defaults custom resource configuration to set the default protocol for all instances of a service in your mesh. +Refer to the [service defaults configuration reference](/consul/docs/reference/config-entry/service-defaults) for additional information. + +The following example directs the `static-server` service to communicate with other services in the mesh over HTTP: + + + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: ServiceDefaults +metadata: + name: static-server +spec: + protocol: http +``` + + + +Run the `kubectl apply` command to apply the service defaults resource: + +```shell-session +$ kubectl apply --filename service-defaults.yaml +servicedefaults.consul.hashicorp.com/static-server created +``` + +### Verify the custom resources + +You can run the `kubectl get` command to verify that your custom resources have successfully synced to Consul: + +```shell-session +$ kubectl get servicedefaults +NAME SYNCED AGE +static-server True 45s + +$ kubectl get ingressgateway +NAME SYNCED AGE +ingress-gateway True 13m +``` + +### View the gateway in the Consul UI + +You can confirm the ingress gateways have been configured as expected by viewing the ingress gateway service instances +in the Consul UI. + +Run the `kubectl port-forward` command. Refer to [View the Consul UI](/consul/docs/deploy/server/k8s#viewing-the-consul-ui) +for instructions. + +After forwarding the port, you can open [http://localhost:8500/ui/dc1/services/ingress-gateway/instances](http://localhost:8500/ui/dc1/services/ingress-gateway/instances) in a web browser to view the ingress gateway instances. + +If TLS is enabled, use open [https://localhost:8501/ui/dc1/services/ingress-gateway/instances](https://localhost:8501/ui/dc1/services/ingress-gateway/instances). + +## Define service intentions + +When ACLs are enabled, you must define service intentions to allow the ingress gateway to route traffic to the destination service. Refer to [Create service intentions](/consul/docs/secure-mesh/intention/create) for additional information. + +To define an intention, create a service intention configuration and apply it to your cluster. Refer to [Service intention configuration reference](/consul/docs/reference/config-entry/service-intentions) for details. 
+ +In the following example, Consul allows the `ingress-gateway` service to send traffic to the `static-server` destination service: + + + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: ServiceIntentions +metadata: + name: static-server +spec: + destination: + name: static-server + sources: + - name: ingress-gateway + action: allow +``` + + + +Run the `kubectl apply` command to apply the service intention configuration: + +```shell-session +$ kubectl apply --filename service-intentions.yaml +serviceintentions.consul.hashicorp.com/ingress-gateway created +``` + +Refer to the [zero trust network tutorial ](/consul/tutorials/kubernetes-features/service-mesh-zero-trust-network?utm_source=docs) for additional guidance on how to configure zero-trust networking with intentions. + +## Deploy your application to Kubernetes + +Deploy your application to the cluster. Refer to the [Kubernetes documentation](https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-app/) for instructions on creating deployments. + +The following configuration defines a Kubernetes Deployment container called `static-server` that uses the `hashicorp/http-echo:latest` image to print `"hello world"`: + + + +```yaml +apiVersion: v1 +kind: Service +metadata: + name: static-server +spec: + selector: + app: static-server + ports: + - protocol: TCP + port: 80 + targetPort: 8080 +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: static-server +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: static-server +spec: + replicas: 1 + selector: + matchLabels: + app: static-server + template: + metadata: + name: static-server + labels: + app: static-server + annotations: + 'consul.hashicorp.com/connect-inject': 'true' + spec: + containers: + - name: static-server + image: hashicorp/http-echo:latest + args: + - -text="hello world" + - -listen=:8080 + ports: + - containerPort: 8080 + name: http + serviceAccountName: static-server +``` + + + +Rune the `kubectl apply` command to apply the configuration and deploy the application. + +```shell-session +$ kubectl apply --filename static-server.yaml +``` + +## Validate the service registration + +You can validate that the service is registered with Consul using the UI or by sending a request to the service. + + + + + +1. Open the Consul UI in a web browser. By default, the Consul UI is on port 8500, for example `http://localhost:8500/ui/dc1/services/static-server/instances`. When TLS is enabled, the default port number is 8501. +1. Click on the **Services** tab. +1. Click on the name of your service. + + + + +The following example sends a cURL request to the cluster: + +```shell-session +$ EXTERNAL_IP=$(kubectl get services --selector component=ingress-gateway --output jsonpath="{range .items[*]}{@.status.loadBalancer.ingress[*].ip}{end}") +$ echo "Connecting to \"$EXTERNAL_IP\"" +$ curl --header "Host: static-server.ingress.consul" "http://$EXTERNAL_IP:8080" +"hello world" +``` + + + + +!> **Security Warning:** Always delete applications and services you created as part of your test and development procedures. Open and unauthenticated load balancer that you leave alive in your cluster represent a security risk. 
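+
+For example, if you applied the test application from a file named `static-server.yaml` as shown above, a command like the following removes it. Adjust the filename to match your environment:
+
+```shell-session
+$ kubectl delete --filename static-server.yaml
+```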
+
+## Delete an ingress gateway
+
+To delete the ingress gateway, set `ingressGateways.enabled` to `false` in your Helm configuration:
+
+
+
+```yaml
+global:
+  name: consul
+connectInject:
+  enabled: true
+ingressGateways:
+  enabled: false # Set to false
+  gateways:
+    - name: ingress-gateway
+      service:
+        type: LoadBalancer
+```
+
+
+
+Run `helm upgrade` to apply the configuration:
+
+```shell-session
+$ helm upgrade consul hashicorp/consul --values values.yaml
+```
\ No newline at end of file
diff --git a/website/content/docs/north-south/ingress-gateway/vm.mdx b/website/content/docs/north-south/ingress-gateway/vm.mdx
new file mode 100644
index 000000000000..e911f10d0869
--- /dev/null
+++ b/website/content/docs/north-south/ingress-gateway/vm.mdx
+---
+layout: docs
+page_title: Create and manage ingress gateways on virtual machines
+description: Learn how to create and manage an ingress gateway on virtual machines. Ingress gateways are Consul service mesh constructs that listen for requests from external network locations and route authorized traffic to destinations in the service mesh.
+---
+
+# Create and manage ingress gateways on virtual machines
+
+This topic describes how to add ingress gateways to your Consul service mesh in virtual machine (VM) environments. Refer to [Create and manage ingress gateways on Kubernetes](/consul/docs/north-south/ingress-gateway/k8s) for instructions on how to implement ingress gateways in Kubernetes environments.
+
+
+
+Ingress gateways are deprecated and will no longer be updated. Ingress gateways are fully supported
+in this version, but we will remove them in a future release of Consul. Use [Consul API gateways](/consul/docs/north-south/api-gateway) instead.
+
+
+
+## Overview
+
+Ingress gateways enable services outside the service mesh to send traffic to services in the mesh. Refer to [Ingress gateways overview](/consul/docs/north-south/ingress-gateway/) for additional information about ingress gateways.
+
+Complete the following steps to set up an ingress gateway:
+
+1. Define listeners and the services they expose in an ingress gateway configuration entry.
+1. Register the ingress gateway configuration entry with Consul.
+1. Create a service definition for the ingress gateway proxy service and deploy it to the mesh.
+1. Start the ingress gateway. This step deploys the Envoy proxy that functions as the ingress gateway.
+
+After specifying listeners and services in the ingress gateway configuration entry, you can register the gateway service and start Envoy with a single CLI command instead of completing these steps separately. Refer to [Register an ingress service on Envoy startup](#register-an-ingress-service-on-envoy-startup) for details.
+
+## Requirements
+
+- Service mesh must be enabled for all agents. Set the [`connect.enabled` parameter](/consul/docs/reference/agent/configuration-file/service-mesh#connect) to `true` to enable service mesh, as shown in the example after this list.
+- The gRPC port must be configured for all server agents in the datacenter. Specify the gRPC port in the [`ports.grpc` parameter](/consul/docs/reference/agent/configuration-file/general#grpc_port). We recommend setting the port to `8502` to simplify configuration when ACLs are enabled. Refer to [ACL requirements](#acl-requirements) for additional information.
+- You must use Envoy for sidecar proxies in your service mesh. Refer to [Configure and deploy sidecar proxies](/consul/docs/connect/proxy/sidecar) for additional information.
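+
+The following agent configuration snippet is a minimal sketch of one way to satisfy the first two requirements. It is not a complete agent configuration:
+
+```hcl
+# Enable Consul service mesh
+connect {
+  enabled = true
+}
+
+# Expose the gRPC port that Envoy proxies use for xDS
+ports {
+  grpc = 8502
+}
+```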
+ +### ACL requirements + +If ACLs are enabled, you must present a token when registering ingress gateways that grant the following permissions: + +- `service:write` for the ingress gateway's service name +- `service:read` for all services in the ingress gateway's configuration entry +- `node:read` for all nodes of the services in the ingress gateway's configuration entry. + +These privileges authorize the token to route communications to other services in the mesh. If the Consul client agent on the gateway's node is not configured to use the default `8502` gRPC port, then the gateway's token must also provide `agent:read` for its node's name in order to discover the agent's gRPC port. gRPC is used to expose Envoy's xDS API to Envoy proxies. + +## Define listeners + +Specify listener configurations in an ingress gateway configuration entry. Listener configurations are traffic entry points to the service mesh that map to upstream services. When you register an ingress gateway proxy that matches the configuration entry name, Consul applies the settings specified in the configuration entry. Configure the following fields: + +- [`Kind`](/consul/docs/reference/config-entry/ingress-gateway#kind): Set the value to `ingress-gateway`. +- [`Name`](/consul/docs/reference/config-entry/ingress-gateway#name): Consul applies the configuration entry settings to ingress gateway proxies with names that match the `Name` field. +- [`Listeners`](/consul/docs/reference/config-entry/ingress-gateway#listeners): Specify one or more listeners. + - [`Listeners.Port`](/consul/docs/reference/config-entry/ingress-gateway#listeners-port): Specify a port for the listener. Each listener is uniquely identified by its port number. + - [`Listeners.Protocol`](/consul/docs/reference/config-entry/ingress-gateway#listeners-protocol): The default protocol is `tcp`, but you must specify the protocol used by the services you want to allow traffic from. + - [`Listeners.Services`](/consul/docs/reference/config-entry/ingress-gateway#listeners-services): The `Services` field contains the services that you want to expose to upstream services. The field contains several options and sub-configurations that enable granular control over ingress traffic, such as health check and TLS configurations. + +For Consul Enterprise service meshes, you may also need to configure the [`Partition`](/consul/docs/reference/config-entry/ingress-gateway#partition) and [`Namespace`](/consul/docs/reference/config-entry/ingress-gateway#namespace) fields for the gateway and for each exposed service. + +Refer to [Ingress gateway configuration entry reference](/consul/docs/reference/config-entry/ingress-gateway) for details about the supported parameters. + +## Register the ingress gateway + +You can either use the [`consul config` command](/consul/commands/config) or call the [`/config` API endpoint](/consul/api-docs/config) to register the gateway configuration entry. +The following example command registers an ingress gateway configuration entry named `public-ingress.hcl` that is stored on the local system: + +```shell-session +$ consul config write public-ingress.hcl +``` + +## Deploy an ingress gateway service + +To deploy an ingress gateway service, create a service definition and register it with Consul. + +You can also define an ingress gateway service and register it with Consul while starting an Envoy proxy from the command line. Refer to [Register an ingress service on Envoy startup](#register-an-ingress-service-on-envoy-startup) for details. 
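+
+As a reference point, the following sketch shows the shape of a minimal ingress gateway service definition. The service name `ingress-service` and the port are assumptions for illustration; the required fields are described in the next section:
+
+```hcl
+Services {
+  # Identifies this service as an ingress gateway proxy
+  Kind = "ingress-gateway"
+  Name = "ingress-service"
+  Port = 8888
+}
+```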
+ +### Create a service definition for the ingress gateway + +Consul applies the settings defined in the ingress gateway configuration entry to ingress gateway services that match the configuration entry name. Refer to [Define services](/consul/docs/register/service/vm/define) for additional information about defining services in Consul. + +The following fields are required for the ingress gateway service definition: + +- [`Kind`](/consul/docs/reference/service#kind): The field must be set to `ingress-gateway`. +- [`Name`](/consul/docs/reference/service#name): The name should match the value specified for the `Name` field in the configuration entry. + +All other service definition fields are optional, but we recommend defining health checks to verify the health of the gateway. Refer to [Services configuration reference](/consul/docs/reference/service) for information about defining services. + +### Register the ingress gateway proxy service + +You can register the ingress gateway using API or CLI. Refer to [Register services and health checks](/consul/docs/register/service/vm/) for instructions on registering services in Consul. + +The following example registers an ingress gateway defined in `ingress-gateway.hcl` from the Consul CLI: + +```shell-session +$ consul services register ingress-service.hcl +``` + +## Start an Envoy proxy + +Run the `consul connect envoy` command to start Envoy. Specify the name of the ingress gateway service and include the `-gateway=ingress` flag. Refer to [Consul Connect Envoy](/consul/commands/connect/envoy) for details about using the command. + +The following example starts Envoy for the `ingress-service` gateway service: + +```shell-session +$ consul connect envoy -gateway=ingress ingress-service +``` + +### Register an ingress service on Envoy startup + +You can also automatically register the ingress gateway service when starting the Envoy proxy. Specify the following flags with the `consul connect envoy` command: + +- `-gateway=ingress` +- `-register` +- `-service=` + +The following example starts Envoy and registers an ingress gateway service named `ingress-service` bound to the agent address at port `8888`: + +```shell-session +$ consul connect envoy -gateway=ingress -register \ + -service ingress-service \ + -address '{{ GetInterfaceIP "eth0" }}:8888' +``` +You cannot register the ingress gateway service and start the proxy at the same time if you configure the gateway to retrieve and serve TLS certificates from their external downstreams. Refer to [Serve custom TLS certificates from an external service](/consul/docs/north-south/ingress-gateway/external) for more information. + +## Additional Envoy configurations + +Ingress gateways support additional Envoy gateway options and escape-hatch +overrides. Specify gateway options in the ingress gateway service definition to +use them. To use escape-hatch overrides, you must add them to your global proxy +defaults configuration entry. Refer to the [Envoy configuration +reference](/consul/docs/reference/proxy/envoy) for additional information. diff --git a/website/content/docs/north-south/k8s.mdx b/website/content/docs/north-south/k8s.mdx new file mode 100644 index 000000000000..726a98492d08 --- /dev/null +++ b/website/content/docs/north-south/k8s.mdx @@ -0,0 +1,55 @@ +--- +page_title: Service mesh ingress on Kubernetes +description: API gateways and ingress gateways are types of service mesh proxies that you can use to securely allow systems outside the Consul service mesh to access services in the mesh. 
Learn how API and ingress gateways can help you enable access to services registered by Consul on Kubernetes. +layout: docs +--- + +# Secure north/south access on Kubernetes + +This topic provides an overview of how Consul securely allows systems outside the service mesh to access services inside the mesh on Kubernetes. Network traffic that connects services inside the mesh to external clients or services is referred to as _north-south traffic_. + +For information about enabling intra-mesh, or _east-west traffic_ , refer to [Expand network east/west overview](/consul/docs/east-west). + +## Introduction + +You can define points of ingress to the service mesh using either API gateways or ingress gateways. These gateways allow external network clients to access applications and services running in a Consul datacenter. + +API gateways forward requests from clients to specific destinations based on path or request protocol. Ingress gateways are Consul's legacy capability for ingress and have been deprecated in favor of API gateways. + +## API gateways + +@include 'text/descriptions/api-gateway.mdx' + +## Ingress gateways + +@include 'text/descriptions/ingress-gateway.mdx' + +## Terminating gateways + +@include 'text/descriptions/terminating-gateway.mdx' + +## Guidance + +Refer to the following resources for help setting up and using API gateways: + +### Tutorials + +- [Control access into the service mesh with Consul API gateway](/consul/tutorials/developer-mesh/kubernetes-api-gateway) + +### Usage documentation + +- [Deploy API gateway listeners to Kubernetes](/consul/docs/north-south/api-gateway/k8s/listener) +- [Deploy API gateway routes to Kubernetes](/consul/docs/north-south/api-gateway/k8s/route) +- [Reroute HTTP requests in Kubernetes](/consul/docs/north-south/api-gateway/k8s/reroute) +- [Route traffic to peered services in Kubernetes](/consul/docs/north-south/api-gateway/k8s/peer) +- [Use JWTs to verify requests to API gateways on Kubernetes](/consul/docs/north-south/api-gateway/secure-traffic/jwt/k8s) + +### Reference + +- [`Gateway`](/consul/docs/reference/k8s/api-gateway/gateway) +- [`GatewayClass`](/consul/docs/reference/k8s/api-gateway/gatewayclass) +- [`GatewayClassConfig`](/consul/docs/reference/k8s/api-gateway/gatewayclassconfig) +- [`Routes`](/consul/docs/reference/k8s/api-gateway/routes) +- [`MeshServices`](/consul/docs/reference/k8s/api-gateway/meshservice) +- [`ServiceIntentions`](/consul/docs/reference/config-entry/service-intentions) +- [Error messages](/consul/docs/error-messages/api-gateway) \ No newline at end of file diff --git a/website/content/docs/north-south/terminating-gateway.mdx b/website/content/docs/north-south/terminating-gateway.mdx new file mode 100644 index 000000000000..9312ce11c945 --- /dev/null +++ b/website/content/docs/north-south/terminating-gateway.mdx @@ -0,0 +1,130 @@ +--- +layout: docs +page_title: Terminating Gateway | Service Mesh +description: >- + Terminating gateways send requests from inside the service mesh to external network locations and services outside the mesh. Learn about requirements and terminating gateway interactions with Consul's service catalog. +--- + +# Terminating Gateways + +-> **1.8.0+:** This feature is available in Consul versions 1.8.0 and newer. + +Terminating gateways enable connectivity within your organizational network from services in the Consul service mesh to +services and [destinations](/consul/docs/reference/config-entry/service-defaults#terminating-gateway-destination) outside the mesh. 
These gateways effectively act as service mesh proxies that can +represent more than one service. They terminate service mesh mTLS connections, enforce intentions, +and forward requests to the appropriate destination. + +![Terminating Gateway Architecture](/img/terminating-gateways.png) + +For additional use cases and usage patterns, review the tutorial for +[understanding terminating gateways](/consul/tutorials/developer-mesh/service-mesh-terminating-gateways?utm_source=docs). + +~> **Known limitations:** Terminating gateways currently do not support targeting service subsets with +[L7 configuration](/consul/docs/manage-traffic). They route to all instances of a service with no capabilities +for filtering by instance. + +## Security Considerations + +~> We recommend that terminating gateways are not exposed to the WAN or open internet. This is because terminating gateways +hold certificates to decrypt Consul service mesh traffic directed at them and may be configured with credentials to connect +to linked services. Connections over the WAN or open internet should flow through [mesh gateways](/consul/docs/east-west/mesh-gateway) +whenever possible since they are not capable of decrypting traffic or connecting directly to services. + +By specifying a path to a [CA file](/consul/docs/reference/config-entry/terminating-gateway#cafile) connections +from the terminating gateway will be encrypted using one-way TLS authentication. If a path to a +[client certificate](/consul/docs/reference/config-entry/terminating-gateway#certfile) +and [private key](/consul/docs/reference/config-entry/terminating-gateway#keyfile) are also specified connections +from the terminating gateway will be encrypted using mutual TLS authentication. + +If none of these are provided, Consul will **only** encrypt connections to the gateway and not +from the gateway to the destination service. + +When certificates for linked services are rotated, the gateway must be restarted to pick up the new certificates from disk. +To avoid downtime, perform a rolling restart to reload the certificates. Registering multiple terminating gateway instances +with the same [name](/consul/commands/connect/envoy#service) provides additional fault tolerance +as well as the ability to perform rolling restarts. + +-> **Note:** If certificates and keys are configured the terminating gateway will upgrade HTTP connections to TLS. +Client applications can issue plain HTTP requests even when connecting to servers that require HTTPS. + +## Prerequisites + +Each terminating gateway needs: + +1. A local Consul client agent to manage its configuration. +2. General network connectivity to services within its local Consul datacenter. +3. General network connectivity to services and destinations outside the mesh that are part of the gateway services list. + +Terminating gateways also require that your Consul datacenters are configured correctly: + +- You'll need to use Consul version 1.8.0 or newer. +- Consul [service mesh](//consul/docs/reference/agent/configuration-file/service-mesh#connect) must be enabled on the datacenter's Consul servers. +- [gRPC](/consul/docs/reference/agent/configuration-file/general#grpc_port) must be enabled on all client agents. + +Currently, [Envoy](https://www.envoyproxy.io/) is the only proxy with terminating gateway capabilities in Consul. + +- Terminating gateway proxies receive their configuration through Consul, which + automatically generates it based on the gateway's registration. 
Currently Consul + can only translate terminating gateway registration information into Envoy + configuration, therefore the proxies acting as terminating gateways must be Envoy. + +Service mesh proxies that send upstream traffic through a gateway aren't +affected when you deploy terminating gateways. If you are using non-Envoy proxies as +Service mesh proxies they will continue to work for traffic directed at services linked to +a terminating gateway as long as they discover upstreams with the +[/health/connect](/consul/api-docs/health#list-nodes-for-connect-capable-service) endpoint. + +## Running and Using a Terminating Gateway + +For a complete example of how to enable connections from services in the Consul service mesh to +services outside the mesh, review the [terminating gateway tutorial](/consul/tutorials/developer-mesh/terminating-gateways-connect-external-services). + +## Terminating Gateway Configuration + +Terminating gateways are configured in service definitions and registered with Consul like other services, with two exceptions. +The first is that the [kind](/consul/api-docs/agent/service#kind) must be "terminating-gateway". Second, +the terminating gateway service definition may contain a `Proxy.Config` entry just like a +service mesh proxy service, to define opaque configuration parameters useful for the actual proxy software. +For Envoy there are some supported [gateway options](/consul/docs/connect/proxies/envoy#gateway-options) as well as +[escape-hatch overrides](/consul/docs/connect/proxies/envoy#escape-hatch-overrides). + +-> **Note:** If ACLs are enabled, terminating gateways must be registered with a token granting `node:read` on the nodes +of all services in its configuration entry. The token must also grant `service:write` for the terminating gateway's service name **and** +the names of all services in the terminating gateway's configuration entry. These privileges will authorize the gateway +to terminate mTLS connections on behalf of the linked services and then route the traffic to its final destination. +If the Consul client agent on the gateway's node is not configured to use the default gRPC port, 8502, then the gateway's token +must also provide `agent:read` for its node's name in order to discover the agent's gRPC port. gRPC is used to expose Envoy's xDS API to Envoy proxies. + +You can link services and destinations to a terminating gateway with a `terminating-gateway` +[configuration entry](/consul/docs/reference/config-entry/terminating-gateway). This config entry can be applied via the +[CLI](/consul/commands/config/write) or [API](/consul/api-docs/config#apply-configuration). + +Gateways with the same name in Consul's service catalog are configured with a single configuration entry. +This means that additional gateway instances registered with the same name will determine their routing based on the existing configuration entry. +Adding replicas of a gateway that routes to a particular set of services requires running the +[envoy subcommand](/consul/commands/connect/envoy#terminating-gateways) on additional hosts and specifying +the same gateway name with the `service` flag. + +~> [Configuration entries](/consul/docs/fundamentals/config-entry) are global in scope. A configuration entry for a gateway name applies +across all federated Consul datacenters. If terminating gateways in different Consul datacenters need to route to different +sets of services within their datacenter then the terminating gateways **must** be registered with different names. 
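+
+As an illustration of linking services to a gateway, the following sketch shows a minimal `terminating-gateway` configuration entry. The gateway and service names are placeholders; apply the entry with the CLI or API described above:
+
+```hcl
+Kind = "terminating-gateway"
+Name = "us-west-gateway"
+
+# Services outside the mesh that this gateway routes traffic to
+Services = [
+  {
+    Name = "billing-api"
+  }
+]
+```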
+ +The services that the terminating gateway will proxy for must be registered with Consul, even the services outside the mesh. They must also be registered +in the same Consul datacenter as the terminating gateway. Otherwise the terminating gateway will not be able to +discover the services' addresses. These services can be registered with a local Consul agent. +If there is no agent present, the services can be registered [directly in the catalog](/consul/api-docs/catalog#register-entity) +by sending the registration request to a client or server agent on a different host. + +All services registered in the Consul catalog must be associated with a node, even when their node is +not managed by a Consul client agent. All agent-less services with the same address can be registered under the same node name and address. +However, ensure that the [node name](/consul/api-docs/catalog#node) for external services registered directly in the catalog +does not match the node name of any Consul client agent node. If the node name overlaps with the node name of a Consul client agent, +Consul's [anti-entropy sync](/consul/docs/concept/consistency) will delete the services registered via the `/catalog/register` HTTP API endpoint. + +Service-defaults [destinations](/consul/docs/reference/config-entry/service-defaults#destination) let you +define endpoints external to the mesh and routable through a terminating gateway in transparent mode. +After you define a service-defaults configuration entry for each destination, you can use the service-default name as part of the terminating gateway services list. +If a service and a destination service-defaults have the same name, the terminating gateway will use the service. + +For a complete example of how to register external services review the +[external services tutorial](/consul/tutorials/developer-discovery/service-registration-external-services). diff --git a/website/content/docs/north-south/vm.mdx b/website/content/docs/north-south/vm.mdx new file mode 100644 index 000000000000..bf17165293ae --- /dev/null +++ b/website/content/docs/north-south/vm.mdx @@ -0,0 +1,51 @@ +--- +page_title: Service mesh ingress on virtual machines +description: API gateways and ingress gateways are types of service mesh proxies that you can use to securely allow systems outside the Consul service mesh to access services in the mesh. Learn how API and ingress gateways can help you enable access to services registered by Consul when running Consul on virtual machines (VMs). +layout: docs +--- + +# Secure north/south access on virtual machines + +This topic provides an overview of how Consul securely allows systems outside the service mesh to access services inside the mesh when running the Consul binary on virtual machines (VM). Network traffic that connects services inside the mesh to external clients or services is referred to as _north-south traffic_. + +For information about enabling intra-mesh, or _east-west traffic_, refer to [Expand network east/west overview](/consul/docs/east-west). + +## Introduction + +You can define points of ingress to the service mesh using either API gateways or ingress gateways. These gateways allow external network clients to access applications and services running in a Consul datacenter. + +API gateways forward requests from clients to specific destinations based on path or request protocol. Ingress gateways are Consul's legacy capability for ingress and have been deprecated in favor of API gateways. 
+
+## API gateways
+
+@include 'text/descriptions/api-gateway.mdx'
+
+## Ingress gateways
+
+@include 'text/descriptions/ingress-gateway.mdx'
+
+## Terminating gateways
+
+@include 'text/descriptions/terminating-gateway.mdx'
+
+## Guidance
+
+Refer to the following resources for help setting up and using API gateways:
+
+### Tutorials
+
+- [Control access into the service mesh with Consul API gateway](/consul/tutorials/developer-mesh/kubernetes-api-gateway)
+
+### Usage documentation
+
+- [Deploy API gateway listeners to VMs](/consul/docs/north-south/api-gateway/vm/listener)
+- [Deploy API gateway routes to VMs](/consul/docs/north-south/api-gateway/vm/route)
+- [Encrypt API gateway traffic on VMs](/consul/docs/north-south/api-gateway/secure-traffic/encrypt)
+- [Use JWTs to verify requests to API gateways on VMs](/consul/docs/north-south/api-gateway/secure-traffic/jwt/vm)
+
+### Reference
+
+- [API gateway configuration entry reference](/consul/docs/reference/config-entry/api-gateway)
+- [HTTP route configuration entry reference](/consul/docs/reference/config-entry/http-route)
+- [TCP route configuration entry reference](/consul/docs/reference/config-entry/tcp-route)
+- [Error messages](/consul/docs/error-messages/api-gateway)
\ No newline at end of file
diff --git a/website/content/docs/observe/access-log.mdx b/website/content/docs/observe/access-log.mdx
new file mode 100644
index 000000000000..fabc88b91c60
--- /dev/null
+++ b/website/content/docs/observe/access-log.mdx
+---
+layout: docs
+page_title: Access Logs
+description: >-
+  Consul can emit access logs for application connections and requests that pass through Envoy proxies in the service mesh. Learn how to configure access logs, including minimum configuration requirements and the default log format.
+---
+
+# Access Logs
+
+This topic describes configuration and usage for access logs. Consul can emit access logs to record application connections and requests that pass through proxies in a service mesh, including sidecar proxies and gateways.
+You can use the application traffic records in access logs to help you perform the following operations:
+
+ - **Diagnosing and Troubleshooting Issues**: Operators and application owners can identify configuration issues in the service mesh or the application by analyzing failed connections and requests.
+ - **Threat Detection**: Operators can review details about unauthorized attempts to access the service mesh and their origins.
+ - **Audit Compliance**: Operators can use access logs to meet security compliance requirements for traffic entering and exiting the service mesh through gateways.
+
+Consul supports access log capture for Envoy proxies started through the [`consul connect envoy`](/consul/commands/connect/envoy) CLI command and [`consul-dataplane`](/consul/docs/architecture/control-plane/dataplane). Other proxies are not supported. You can also configure the [OpenTelemetry Envoy extension](/consul/docs/envoy-extension/otel-access-logging) to capture and stream access logs.
+
+## Enable access logs
+
+Access log configurations are defined globally in the [`proxy-defaults`](/consul/docs/reference/config-entry/proxy-defaults#accesslogs) configuration entry.
+ +The following example is a minimal configuration for enabling access logs: + + + +```hcl +Kind = "proxy-defaults" +Name = "global" +AccessLogs { + Enabled = true +} +``` + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: ProxyDefaults +metadata: + name: global +spec: + accessLogs: + enabled: true +``` + +```json +{ + "Kind": "proxy-defaults", + "Name": "global", + "AccessLogs": { + "Enabled": true + } +} +``` + + + +All proxies, including sidecars and gateways, emit access logs when the behavior is enabled. +Both inbound and outbound traffic through the proxy are logged, including requests made directly to [Envoy's administration interface](https://www.envoyproxy.io/docs/envoy/latest/operations/admin.html?highlight=administration%20logs#administration-interface). + +If you enable access logs after the Envoy proxy was started, access logs for the administration interface are not captured until you restart the proxy. + +## Default log format + +Access logs use the following format when no additional customization is provided: + +~> **Security warning:** The following log format contains IP addresses which may be a data compliance issue, depending on your regulatory environment. +Operators should carefully inspect their chosen access log format to prevent leaking sensitive or personally identifiable information. + +```json +{ + "start_time": "%START_TIME%", + "route_name": "%ROUTE_NAME%", + "method": "%REQ(:METHOD)%", + "path": "%REQ(X-ENVOY-ORIGINAL-PATH?:PATH)%", + "protocol": "%PROTOCOL%", + "response_code": "%RESPONSE_CODE%", + "response_flags": "%RESPONSE_FLAGS%", + "response_code_details": "%RESPONSE_CODE_DETAILS%", + "connection_termination_details": "%CONNECTION_TERMINATION_DETAILS%", + "bytes_received": "%BYTES_RECEIVED%", + "bytes_sent": "%BYTES_SENT%", + "duration": "%DURATION%", + "upstream_service_time": "%RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)%", + "x_forwarded_for": "%REQ(X-FORWARDED-FOR)%", + "user_agent": "%REQ(USER-AGENT)%", + "request_id": "%REQ(X-REQUEST-ID)%", + "authority": "%REQ(:AUTHORITY)%", + "upstream_host": "%UPSTREAM_HOST%", + "upstream_cluster": "%UPSTREAM_CLUSTER%", + "upstream_local_address": "%UPSTREAM_LOCAL_ADDRESS%", + "downstream_local_address": "%DOWNSTREAM_LOCAL_ADDRESS%", + "downstream_remote_address": "%DOWNSTREAM_REMOTE_ADDRESS%", + "requested_server_name": "%REQUESTED_SERVER_NAME%", + "upstream_transport_failure_reason": "%UPSTREAM_TRANSPORT_FAILURE_REASON%" +} +``` + +Depending on the connection type, such TCP or HTTP, some of these fields may be empty. + +## Custom log format + +Envoy uses [command operators](https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage#command-operators) to expose information about application traffic. +You can use these fields to customize the access logs that proxies emit. + +Custom logs can be either JSON format or text format. + +### JSON format + +You can format access logs in JSON so that you can parse them with Application Monitoring Platforms (APMs). + +To use a custom access log, in the `proxy-defaults` configuration entry, set [`JSONFormat`](/consul/docs/reference/config-entry/proxy-defaults#jsonformat) to the string representation of the desired JSON. + +Nesting is supported. 
+ + + +```hcl +Kind = "proxy-defaults" +Name = "global" +AccessLogs { + Enabled = true + JSONFormat = < + +### Text format + +To use a custom access log formatted in plaintext, in the `proxy-defaults` configuration entry, set [`TextFormat`](/consul/docs/reference/config-entry/proxy-defaults#textformat) to the desired customized string. + +New lines are automatically added to the end of the log to keep each access log on its own line in the output. + + + +```hcl +Kind = "proxy-defaults" +Name = "global" +AccessLogs { + Enabled = true + TextFormat = "MY START TIME: %START_TIME%, THIS CONNECTIONS PROTOCOL IS %PROTOCOL%" +} +``` + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: ProxyDefaults +metadata: + name: global +spec: + accessLogs: + enabled: true + textFormat: "MY START TIME: %START_TIME%, THIS CONNECTIONS PROTOCOL IS %PROTOCOL%" +``` + +```json +{ + "Kind": "proxy-defaults", + "Name": "global", + "AccessLogs": { + "Enabled": true, + "JSONFormat": "MY START TIME: %START_TIME%, THIS CONNECTIONS PROTOCOL IS %PROTOCOL%" + } +} +``` + + + + +## Kubernetes + +As part of its normal operation, the Envoy debugging logs for the `consul-dataplane`, `envoy`, or `envoy-sidecar` containers are written to `stderr`. +The access log [`Type`](/consul/docs/reference/config-entry/proxy-defaults#type) is set to `stdout` by default for access logs when enabled. +Use a log aggregating solution to separate the machine-readable access logs from the Envoy process debug logs. + +## Write to a file + +You can configure Consul to write access logs to a file on the host where Envoy runs. + +Envoy does not rotate log files. A log rotation solution, such as [logrotate](https://www.redhat.com/sysadmin/setting-logrotate), can prevent access logs from consuming too much of the host's disk space when writing to a file. + + + +```hcl +Kind = "proxy-defaults" +Name = "global" +AccessLogs { + Enabled = true + Type = "file" + Path = "/var/log/envoy/access-logs.txt" +} +``` + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: ProxyDefaults +metadata: + name: global +spec: + accessLogs: + enabled: true + type: file + path: "/var/log/envoy/access-logs.txt" +``` + +```json +{ + "Kind": "proxy-defaults", + "Name": "global", + "AccessLogs": { + "Enabled": true, + "Type": "file", + "Path": "/var/log/envoy/access-logs.txt" + } +} +``` + + diff --git a/website/content/docs/observe/distributed-tracing.mdx b/website/content/docs/observe/distributed-tracing.mdx new file mode 100644 index 000000000000..7a67f9426dd1 --- /dev/null +++ b/website/content/docs/observe/distributed-tracing.mdx @@ -0,0 +1,265 @@ +--- +layout: docs +page_title: Distributed Tracing +description: >- + Distributed tracing tracks the path of a request as it traverses the service mesh. Consul supports distributed tracing for applications that have it implemented. Learn how to integrate tracing libraries in your application and configure Consul to participate in that tracing. +--- + +# Distributed Tracing + +Distributed tracing is a way to track and correlate requests across microservices. Distributed tracing must first +be implemented in each application, it cannot be added by Consul. Once implemented in your applications, adding +distributed tracing to Consul will add the sidecar proxies as spans in the request path. + +## Application Changes + +Consul alone cannot implement distributed tracing for your applications. Each application must propagate the required +headers. 
Typically this is done using a tracing library such as: + +- https://github.com/opentracing/opentracing-go +- https://github.com/DataDog/dd-trace-go +- https://github.com/openzipkin/zipkin-go + +## Configuration + +Once your applications have been instrumented with a tracing library, you are ready to configure Consul to add sidecar +proxy spans to the trace. Your eventual config will look something like: + + + +```hcl +Kind = "proxy-defaults" +Name = "global" +Config { + protocol = "http" + envoy_tracing_json = < + +-> **NOTE:** This example uses a [proxy defaults](/consul/docs/reference/config-entry/proxy-defaults) configuration entry, which applies to all proxies, +but you can also apply the configuration in the +[`proxy` block of your service configuration](/consul/docs/reference/proxy/connect-proxy#proxy-parameters). The proxy service registration is not supported on Kubernetes. + +Within the config there are two keys you need to customize: + +1. [`envoy_tracing_json`](/consul/docs/reference/proxy/envoy#envoy_tracing_json): Sets the tracing configuration for your specific tracing type. + See the [Envoy tracers documentation](https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/trace/trace) for your + specific collector's configuration. This configuration will reference the cluster name defined in `envoy_extra_static_clusters_json`. +1. [`envoy_extra_static_clusters_json`](/consul/docs/reference/proxy/envoy#envoy_extra_static_clusters_json): Defines the address + of your tracing collector where Envoy will send its spans. In this example the URL was `collector-url:9411`. + +## Applying the configuration + +This configuration only applies when proxies are _restarted_ since it changes the _bootstrap_ config for Envoy +which can only be applied on startup. This means you must restart all your proxies for changes to this +config to take effect. + +-> **Note:** On Kubernetes this is a matter of restarting your deployments, e.g. `kubectl rollout restart deploy/deploy-name`. + +## Considerations + +1. Distributed tracing is only supported for HTTP and gRPC services. You must specify the protocol either globally + via a proxy defaults config entry: + + + + ```hcl + Kind = "proxy-defaults" + Name = "global" + Config { + protocol = "http" + } + ``` + + ```yaml + apiVersion: consul.hashicorp.com/v1alpha1 + kind: ProxyDefaults + metadata: + name: global + spec: + config: + protocol: http + ``` + + ```json + { + "Kind": "proxy-defaults", + "Name": "global", + "Config": { + "protocol": "http" + } + } + ``` + + + + Or via a service defaults config entry for each service: + + + + ```hcl + Kind = "service-defaults" + Name = "service-name" + Protocol = "http" + ``` + + ```yaml + apiVersion: consul.hashicorp.com/v1alpha1 + kind: ServiceDefaults + metadata: + name: service-name + spec: + protocol: http + ``` + + ```json + { + "Kind": "service-defaults", + "Name": "service-name", + "Protocol": "http" + } + ``` + + + +1. Requests through [Ingress Gateways](/consul/docs/north-south/ingress-gateway) will not be traced unless the header + `x-client-trace-id: 1` is set (see [hashicorp/consul#6645](https://github.com/hashicorp/consul/issues/6645)). + +1. Consul's proxies do not currently support [OpenTelemetry](https://opentelemetry.io/) spans, as Envoy has not + [fully implemented](https://github.com/envoyproxy/envoy/issues/9958) it. 
Instead, you can add + OpenTelemetry libraries to your application to emit spans for other + [tracing protocols](https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/observability/tracing) + supported by Envoy, such as Zipkin or Jaeger. + +1. Tracing is only supported with Envoy proxies, not the built-in proxy. + +1. When configuring the Zipkin tracer in `envoy_tracing_json`, set [`trace_id_128bit`](https://www.envoyproxy.io/docs/envoy/v1.21.0/api-v3/config/trace/v3/zipkin.proto#envoy-v3-api-field-config-trace-v3-zipkinconfig-trace-id-128bit) to `true` if your application is configured to generate 128-bit trace IDs. For example: + + + + ```json + { + "http": { + "name": "envoy.tracers.zipkin", + "typedConfig": { + "@type": "type.googleapis.com/envoy.config.trace.v3.ZipkinConfig", + "collector_cluster": "zipkin", + "collector_endpoint_version": "HTTP_JSON", + "collector_endpoint": "/api/v2/spans", + "shared_span_context": false, + "trace_id_128bit": true + } + } + } + ``` + + diff --git a/website/content/docs/observe/docker.mdx b/website/content/docs/observe/docker.mdx new file mode 100644 index 000000000000..1eaa06e1b1f9 --- /dev/null +++ b/website/content/docs/observe/docker.mdx @@ -0,0 +1,285 @@ +--- +layout: docs +page_title: Observe service mesh on Docker containers +description: >- + Deploy a Consul service mesh observability stack with Docker Compose to gain complete insight into metrics, logs, and traces. +--- + +# Observe service mesh on Docker containers + +This page describes the configuration process for service mesh observability features when Consul runs on Docker containers. + +## Introduction + +Service mesh observability consists of three core elements: _metrics_, _logs_, and _traces_. To observe these elements in Consul's service mesh, use Prometheus to collect metrics, Loki to collect logs, and Tempo to collect traces. Then you can visualize this data with Grafana. + +The examples on this page are not properly secured for a production environment. If you are implementing this functionality in a production system, we encourage you to review the [Consul Reference Architecture](/consul/tutorials/production-deploy/reference-architecture) for Consul best practices and the [Docker Documentation](https://docs.docker.com/) for Docker best practices. + +## Prerequisites + +For Docker containers to emit Loki logs, you need to install the Loki logging drivers for Docker. Refer to the [Loki Docker driver plugin page](https://grafana.com/docs/loki/latest/clients/docker-driver/) for more information. + +```shell-session +$ docker plugin install grafana/loki-docker-driver:latest --alias loki --grant-all-permissions +``` + +## Collect metrics with Prometheus + +Configure the Consul target endpoint in Prometheus. The `prometheus.yaml` file contains the configuration for Prometheus to scrape metrics from the Consul server. The `targets` field should point to the Consul server service running in your Docker environment, and the `metrics_path` should be set to Consul's agent metrics enndpoint, `/v1/agent/metrics`. + + + +```yaml +global: + scrape_interval: 30s + scrape_timeout: 10s + +scrape_configs: + - job_name: 'consul-server' + metrics_path: '/v1/agent/metrics' + params: + format: ['prometheus'] + static_configs: + - targets: ['consul-server:8500'] +``` + + + +## Collect logs with Loki + +Loki is an advanced log aggregation system developed by Grafana Labs. The `docker-compose-loki.yaml` file contains the configuration for the Loki service to run as a Docker container. 
+ + + +```yaml + loki: + image: grafana/loki:2.1.0 + container_name: loki + command: -config.file=/etc/loki/local-config.yaml + networks: + vpcbr: + ipv4_address: 10.5.0.11 + ports: + - "3100:3100" # loki needs to be exposed so it receives logs + environment: + - JAEGER_AGENT_HOST=tempo + - JAEGER_ENDPOINT=http://tempo:14268/api/traces # send traces to Tempo + - JAEGER_SAMPLER_TYPE=const + - JAEGER_SAMPLER_PARAM=1 + logging: + driver: loki + options: + loki-url: 'http://localhost:3100/api/prom/push' +``` + + + +This configuration allows Loki to collect logs from all containers in the Docker environment. The logs are sent to Loki using the Loki logging driver. The Loki traces are also sent to Tempo for distributed tracing. + +## Collect traces with Tempo + +Tempo is a distributed tracing system developed by Grafana Labs. The `docker-compose-tempo.yaml` file contains the configuration for the Tempo service to run as a Docker container. + + + +```yaml + tempo: + image: grafana/tempo:1f1c40b3 + container_name: tempo + command: ["-config.file=/etc/tempo.yaml"] + volumes: + - ./tempo/tempo.yaml:/etc/tempo.yaml + networks: + vpcbr: + ipv4_address: 10.5.0.9 + ports: + - "14268:14268" # jaeger ingest + - "3100" # tempo + - "9411:9411" #zipkin + logging: + driver: loki + options: + loki-url: 'http://localhost:3100/api/prom/push' + + + +This Docker Compose configuration sets up Tempo to collect traces from the Docker environment. The traces are sent to Tempo using the Loki logging driver, which allows for seamless integration with Grafana. This Docker Compose configuration also refers to a `tempo/tempo.yaml` file that contains the Tempo configuration. + + + +```yaml +auth_enabled: false + +server: + http_listen_port: 3100 + +distributor: + receivers: # this configuration will listen on all ports and protocols that tempo is capable of. + jaeger: # the receives all come from the OpenTelemetry collector. more configuration information can + protocols: # be found there: https://github.com/open-telemetry/opentelemetry-collector/tree/main/receiver + thrift_http: # + grpc: # for a production deployment you should only enable the receivers you need! + thrift_binary: + thrift_compact: + zipkin: + otlp: + protocols: + http: + grpc: + opencensus: + +ingester: + trace_idle_period: 10s # the length of time after a trace has not received spans to consider it complete and flush it + max_block_bytes: 1_000_000 # cut the head block when it hits this size or ... + max_block_duration: 5m # this much time passes + +compactor: + compaction: + compaction_window: 1h # blocks in this time window will be compacted together + max_block_bytes: 100_000_000 # maximum size of compacted blocks + block_retention: 1h + compacted_block_retention: 10m + +storage: + trace: + backend: local # backend configuration to use + block: + bloom_filter_false_positive: .05 # bloom filter false positive rate. lower values create larger filters but fewer false positives + index_downsample_bytes: 1000 # number of bytes per index record + encoding: zstd # block encoding/compression. options: none, gzip, lz4-64k, lz4-256k, lz4-1M, lz4, snappy, zstd + wal: + path: /tmp/tempo/wal # where to store the the wal locally + encoding: none # wal encoding/compression. 
options: none, gzip, lz4-64k, lz4-256k, lz4-1M, lz4, snappy, zstd + local: + path: /tmp/tempo/blocks + pool: + max_workers: 100 # the worker pool mainly drives querying, but is also used for polling the blocklist + queue_depth: 10000 +``` + + + +## Configure Grafana + +You can configure Grafana can to collect and visualize metrics, logs, and traces and use Prometheus, Loki, and Tempo as data sources. + +### Grafana data source: Prometheus + +The `datasource-prometheus.yaml` file contains the configuration for Grafana to connect to Prometheus. The `url` field should point to the Prometheus service running in your Docker environment. + + + +```yaml +datasources: +- name: Prometheus + type: prometheus + access: proxy + orgId: 1 + url: http://prometheus:9090 + basicAuth: false + isDefault: false + version: 1 + editable: false +``` + + + +### Grafana data source: Loki + +The `datasource-loki.yaml` file contains the configuration for Grafana to connect to Loki. The `datasources.url` field must point to the Loki service running in your Docker environment. + + + +```yaml +datasources: +- name: Loki + type: loki + access: proxy + orgId: 1 + url: http://loki:3100 + basicAuth: false + isDefault: true + version: 1 + editable: false + apiVersion: 1 + jsonData: + derivedFields: + - datasourceUid: tempo + matcherRegex: (?:traceID|trace_id)=(\w+) + name: TraceID + url: $${__value.raw} +``` + + + +### Grafana data source: Tempo + +The `datasource-tempo.yaml` file contains the configuration for Grafana to connect to Tempo. The `url` field should point to the Tempo service running in your Docker environment. + + + +```yaml +datasources: +- name: Tempo + type: tempo + access: proxy + orgId: 1 + url: http://tempo:3100 + basicAuth: false + isDefault: false + version: 1 + editable: false + apiVersion: 1 + uid: tempo +``` + + + +### Grafana dashboards + +In order to visualize the data collected by Prometheus, Loki, and Tempo, you can import pre-built dashboards into Grafana. The configuration is applied to Grafana in the following way, and refers to a directory called `dashboards`. + + + +```yaml +apiVersion: 1 + +providers: +- name: 'dashboards' + orgId: 1 + folder: 'dashboards' + folderUid: '' + type: file + disableDeletion: true + editable: true + updateIntervalSeconds: 3600 + allowUiUpdates: false + options: + path: /var/lib/grafana/dashboards +``` + + + +For the contents of the `dashboards` directory, refer to [Dashboards for service mesh observability](/consul/docs/observe/grafana), which documents the dashboard configurations in [the `hashicorp/consul` GitHub Repository](https://github.com/hashicorp/consul/tree/main/grafana). + +You can also refer to the Docker-specific content in the [Grafana Dashboards repository](https://github.com/hashicorp-education/learn-consul-docker/tree/main/datacenter-deploy-observability/grafana/dashboards). + +## Use Grafana to explore your service mesh + +Now that you have configured the Grafana data sources, you can visualize the data collected. + +### Explore traces in Grafana + +You can explore the traces collected by Tempo in Grafana. Navigate to Grafana's **Explore** section and execute a search for traces. For example, to view traces from a container named `web`, execute a Grafana search for `{container_name="web"} |= "trace_id"`. Then open one of the log lines, locate the `TraceID` field, then click the nearby `Tempo` link to jump directly from logs to traces. 
+ +![Image of linked logs and traces in Grafana UI.](/img/docker/observability_grafana_logs_traces.png 'There are two panes within the window: a left and right pane. In the left pane, the Loki logs are shown, with one of its entries selected. On the right pane, a Tempo traces graph is shown. The Tempo traces graph shows communication details between the three services, ingress, web, and api.') + +Traces provide insight into service mesh performance. To learn more about how application communication traverses through the service mesh, refer to [our blog on distributed tracing](https://www.hashicorp.com/blog/enabling-distributed-tracing-with-hashicorp-consul), as well as the [distributed tracing documentation](/consul/docs/observe/distributed-tracing). + +### Explore metrics in Grafana + +You can explore the metrics collected by Prometheus in Grafana. Navigate to the **Dashboards** section and select the **Consul Server Monitoring** dashboard. This dashboard provides a comprehensive view of the Consul server metrics, including Raft commit time, catalog operation time, and autopilot health. + +![Image of Consul Server Monitoring metrics dashboard in Grafana UI.](/img/docker/observability_grafana_consul_metrics.png 'A dashboard is shown with four panes present. Each of the panes shows details around various metrics generated by Consul including Raft commit time, catalog operation time, and autopilot health.') + +This dashboard presents important application-level metrics for Consul and provides important insight into the health and performance of your service mesh. To learn more about the various runtime metrics reported by Consul, refer to the [Consul telemetry docs](/consul/docs/reference/agent/telemetry). diff --git a/website/content/docs/connect/observability/grafanadashboards/consulk8sdashboard.mdx b/website/content/docs/observe/grafana/consul-k8s.mdx similarity index 100% rename from website/content/docs/connect/observability/grafanadashboards/consulk8sdashboard.mdx rename to website/content/docs/observe/grafana/consul-k8s.mdx diff --git a/website/content/docs/connect/observability/grafanadashboards/consuldataplanedashboard.mdx b/website/content/docs/observe/grafana/dataplane.mdx similarity index 100% rename from website/content/docs/connect/observability/grafanadashboards/consuldataplanedashboard.mdx rename to website/content/docs/observe/grafana/dataplane.mdx diff --git a/website/content/docs/observe/grafana/index.mdx b/website/content/docs/observe/grafana/index.mdx new file mode 100644 index 000000000000..7379c78b25d1 --- /dev/null +++ b/website/content/docs/observe/grafana/index.mdx @@ -0,0 +1,91 @@ +--- +layout: docs +page_title: Service Mesh Observability - Dashboards +description: >- + This documentation provides an overview of several dashboards designed for monitoring and managing services within a Consul-managed Envoy service mesh. Learn how to enable access logs and configure key performance and operational metrics to ensure the reliability and performance of services in the service mesh. +--- + +# Dashboards for service mesh observability + +This topic describes the configuration and usage of dashboards for monitoring and managing services within a Consul-managed Envoy service mesh. These dashboards provide critical insights into the health, performance, and resource utilization of services. The dashboards described here are essential tools for ensuring the stability, efficiency, and reliability of your service mesh environment. 
+ +This page provides reference information about the Grafana dashboard configurations included in the [`grafana` directory in the `hashicorp/consul` GitHub repository](https://github.com/hashicorp/consul/tree/main/grafana). + +## Dashboards overview + +The repository includes the following dashboards: + + - **Consul service-to-service dashboard**: Provides a detailed view of service-to-service communications, monitoring key metrics like access logs, HTTP requests, error counts, response code distributions, and request success rates. The dashboard includes customizable filters for focusing on specific services and namespaces. + + - **Consul service dashboard**: Tracks key metrics for Envoy proxies at the cluster and service levels, ensuring the performance and reliability of individual services within the mesh. + + - **Consul dataplane dashboard**: Offers a comprehensive overview of service health and performance, including request success rates, resource utilization (CPU and memory), active connections, and cluster health. It helps operators maintain service reliability and optimize resource usage. + + - **Consul k8s dashboard**: Focuses on monitoring the health and resource usage of the Consul control plane within a Kubernetes environment, ensuring the stability of the control plane. + + - **Consul server dashboard**: Provides detailed monitoring of Consul servers, tracking key metrics like server health, CPU and memory usage, disk I/O, and network performance. This dashboard is critical for ensuring the stability and performance of Consul servers within the service mesh. + +## Enabling prometheus + +Add the following configurations to your Consul Helm chart to enable the prometheus tools. + + + +```yaml +global: + metrics: + enabled: true + provider: "prometheus" + enableAgentMetrics: true + agentMetricsRetentionTime: "10m" + +prometheus: + enabled: true + +ui: + enabled: true + metrics: + enabled: true + provider: "prometheus" + baseURL: http://prometheus-server.consul +``` + + + +## Enable access logs + +Access logs configurations are defined globally in the [`proxy-defaults`](/consul/docs/reference/config-entry/proxy-defaults#accesslogs) configuration entry. 
+ +The following example is a minimal configuration for enabling access logs: + + + +```hcl +Kind = "proxy-defaults" +Name = "global" +AccessLogs { + Enabled = true +} +``` + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: ProxyDefaults +metadata: + name: global +spec: + accessLogs: + enabled: true +``` + +```json +{ + "Kind": "proxy-defaults", + "Name": "global", + "AccessLogs": { + "Enabled": true + } +} +``` + + diff --git a/website/content/docs/connect/observability/grafanadashboards/consulserverdashboard.mdx b/website/content/docs/observe/grafana/server.mdx similarity index 100% rename from website/content/docs/connect/observability/grafanadashboards/consulserverdashboard.mdx rename to website/content/docs/observe/grafana/server.mdx diff --git a/website/content/docs/connect/observability/grafanadashboards/service-to-servicedashboard.mdx b/website/content/docs/observe/grafana/service-to-service.mdx similarity index 100% rename from website/content/docs/connect/observability/grafanadashboards/service-to-servicedashboard.mdx rename to website/content/docs/observe/grafana/service-to-service.mdx diff --git a/website/content/docs/connect/observability/grafanadashboards/servicedashboard.mdx b/website/content/docs/observe/grafana/service.mdx similarity index 100% rename from website/content/docs/connect/observability/grafanadashboards/servicedashboard.mdx rename to website/content/docs/observe/grafana/service.mdx diff --git a/website/content/docs/observe/index.mdx b/website/content/docs/observe/index.mdx new file mode 100644 index 000000000000..657f5c0a0dc6 --- /dev/null +++ b/website/content/docs/observe/index.mdx @@ -0,0 +1,38 @@ +--- +layout: docs +page_title: Observe your service mesh +description: >- + This topic provides an overview of Consul's service mesh observability features, including L7 telemetry visualizations, access logs, and distributed tracing. +--- + +# Observe your service mesh + +This topic provides an overview of Consul's observability features. Consul can be configured with Prometheus to provide visualizations about agent operations and service traffic between proxies in your service mesh. + +## Introduction + +Consul supports observability functions that enable you to understand the state of your service mesh while it is running. By monitoring agent, client, dataplane, gateway, mesh proxy, and sidecar proxy requests and their L7 traffic, you can diagnose network issues and effectively mitigate communication failures in your service mesh. 
Consul can be configured to expose three kinds of telemetry features to help you observe your service mesh:
+
+- Access logs
+- Distributed traces
+- Service mesh telemetry metrics
+
+## Access logs
+
+@include 'text/descriptions/access-log.mdx'
+
+## Distributed tracing
+
+@include 'text/descriptions/distributed-tracing.mdx'
+
+## Service mesh telemetry metrics
+
+@include 'text/descriptions/telemetry.mdx'
+
+## Guidance
+
+@include 'text/guidance/observe.mdx'
+
+### Constraints, limitations, and troubleshooting
+
+@include 'text/limitations/observe.mdx'
\ No newline at end of file
diff --git a/website/content/docs/observe/tech-specs.mdx b/website/content/docs/observe/tech-specs.mdx
new file mode 100644
index 000000000000..c79bf39666de
--- /dev/null
+++ b/website/content/docs/observe/tech-specs.mdx
@@ -0,0 +1,58 @@
+---
+layout: docs
+page_title: Service mesh observability technical specifications
+description: >-
+  To use Consul's observability features, configure sidecar proxies in the service mesh to collect and emit L7 metrics. Learn about configuring metrics destinations and a service's protocol and upstreams.
+---
+
+# Service mesh observability technical specifications
+
+To take advantage of the service mesh's L7 observability features, you need to:
+
+- Deploy sidecar proxies that are capable of emitting metrics with each of your
+  services. We have first-class support for Envoy.
+- Define where your proxies should send the metrics that they collect.
+- Define the protocols for each of your services.
+- Define the upstreams for each of your services.
+
+If you are using Envoy as your sidecar proxy, you will need to [enable
+gRPC](/consul/docs/reference/agent/configuration-file/general#grpc_port) on your client agents. To define the
+metrics destination and service protocol, you may want to enable [configuration
+entries](/consul/docs/reference/agent/configuration-file/general#config_entries) and [centralized service
+configuration](/consul/docs/reference/agent/configuration-file/general#enable_central_service_config).
+
+### Kubernetes
+
+If you are using Kubernetes, the Helm chart can simplify much of the configuration needed to enable observability. See
+our [Kubernetes observability docs](/consul/docs/observe/telemetry/k8s) for more information.
+
+### Metrics destination
+
+For Envoy, the metrics destination can be configured in the proxy configuration
+entry's `config` section.
+
+```hcl
+kind = "proxy-defaults"
+name = "global"
+config {
+  envoy_dogstatsd_url = "udp://127.0.0.1:9125"
+}
+```
+
+Find other possible metrics sinks in the [Envoy documentation](/consul/docs/connect/proxies/envoy#bootstrap-configuration).
+
+### Service protocol
+
+You can specify the [`protocol`](/consul/docs/reference/config-entry/service-defaults#protocol)
+for all service instances in the `service-defaults` configuration entry. You can also override the default protocol when defining and registering proxies in a service definition file. Refer to the [Expose Paths Configuration Reference](/consul/docs/connect/proxies/proxy-config-reference#expose-paths-configuration-reference) for additional information.
+
+By default, proxies only provide L4 metrics.
+Defining the protocol allows proxies to handle requests at the L7
+protocol and emit L7 metrics. It also allows proxies to make per-request
+load balancing and routing decisions.
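+
+For example, a minimal `service-defaults` entry that sets the protocol for a hypothetical service named `web` might look like the following sketch:
+
+```hcl
+Kind     = "service-defaults"
+Name     = "web"    # placeholder service name
+Protocol = "http"   # enables L7 metrics and per-request routing for this service
+```
+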
+ +### Service upstreams + +You can set the upstream for each service using the proxy's +[`upstreams`](/consul/docs/connect/proxies/proxy-config-reference#upstreams) +sidecar parameter, which can be defined in a service's [sidecar registration](/consul/docs/connect/proxy/sidecar). diff --git a/website/content/docs/observe/telemetry/k8s.mdx b/website/content/docs/observe/telemetry/k8s.mdx new file mode 100644 index 000000000000..7a3b5a09c54f --- /dev/null +++ b/website/content/docs/observe/telemetry/k8s.mdx @@ -0,0 +1,147 @@ +--- +layout: docs +page_title: Observe service mesh telemetry on Kubernetes +description: >- + Use the `connectInject.metrics` Helm values to enable Prometheus and Grafana integrations and capture metrics. Consul can collect metrics from the service mesh, sidecar proxies, agents, and gateways in a k8s cluster and then display service traffic metrics in Consul’s UI for additional observability. +--- + +# Observe service mesh telemetry on Kubernetes + +Consul on Kubernetes integrates with Prometheus and Grafana to provide metrics for Consul service mesh. The metrics +available are: + +- Mesh service metrics +- Mesh sidecar proxy metrics +- Consul agent metrics +- Ingress, terminating, and mesh gateway metrics + +Specific sidecar proxy metrics can also be seen in the Consul UI Topology Visualization view. This section documents how to enable each of these. + +## Mesh Service and Sidecar Metrics with Metrics Merging + +Prometheus annotations are used to instruct Prometheus to scrape metrics from Pods. Prometheus annotations only support +scraping from one endpoint on a Pod, so Consul on Kubernetes supports metrics merging whereby service metrics and +sidecar proxy metrics are merged into one endpoint. If there are no service metrics, it also supports just scraping the +sidecar proxy metrics. + + + +Metrics for services in the mesh can be configured with the Helm values nested under [`connectInject.metrics`](/consul/docs/reference/k8s/helm#v-connectinject-metrics). + +Metrics and metrics merging can be enabled by default for all connect-injected Pods with the following Helm values: + +```yaml +connectInject: + metrics: + defaultEnabled: true # by default, this inherits from the value global.metrics.enabled + defaultEnableMerging: true +``` + +They can also be overridden on a per-Pod basis using the annotations `consul.hashicorp.com/enable-metrics` and +`consul.hashicorp.com/enable-metrics-merging`. + +~> In most cases, the default settings are sufficient. If you encounter issues with colliding ports or service +metrics not being merged, you may need to change the defaults. + +The Prometheus annotations specify which endpoint to scrape the metrics from. The annotations point to a listener on `0.0.0.0:20200` on the Envoy sidecar. You can configure the listener and the corresponding Prometheus annotations using the following Helm values. 
Alternatively, you can specify the `consul.hashicorp.com/prometheus-scrape-port` and `consul.hashicorp.com/prometheus-scrape-path` Consul annotations to override them on a per-Pod basis: + +```yaml +connectInject: + metrics: + defaultPrometheusScrapePort: 20200 + defaultPrometheusScrapePath: "/metrics" +``` + +The Helm values specified in the previous example result in the following Prometheus annotations being automatically added to the Pod for scraping: + +```yaml +metadata: + annotations: + prometheus.io/scrape: "true" + prometheus.io/path: "/metrics" + prometheus.io/port: "20200" +``` + +When metrics and metrics merging are both enabled, metrics are combined from the service and the sidecar proxy, and +exposed through a local server on the Consul Dataplane sidecar for scraping. This endpoint is called the merged metrics endpoint and +defaults to `127.0.0.1:20100/stats/prometheus`. The listener targets the merged metrics endpoint in the above case. +It can be configured with the following Helm values (or overridden on a per-Pod basis with +`consul.hashicorp.com/merged-metrics-port`: + +```yaml +connectInject: + metrics: + defaultMergedMetricsPort: 20100 +``` + +The endpoint to scrape service metrics from can be configured only on a per-Pod basis with the Pod annotations `consul.hashicorp.com/service-metrics-port` and `consul.hashicorp.com/service-metrics-path`. If these are not configured, the service metrics port defaults to the port used to register the service with Consul (`consul.hashicorp.com/connect-service-port`), which in turn defaults to the first port on the first container of the Pod. The service metrics path defaults to `/metrics`. + +## Consul Agent Metrics + +Metrics from the Consul server Pods can be scraped with Prometheus by setting the field `global.metrics.enableAgentMetrics` to `true`. Additionally, one can configure the metrics retention time on the agents by configuring +the field `global.metrics.agentMetricsRetentionTime` which expects a duration and defaults to `"1m"`. This value must be greater than `"0m"` for the Consul servers to emit metrics at all. As the Prometheus deployment currently does not support scraping TLS endpoints, agent metrics are currently unsupported when TLS is enabled. + +```yaml +global: + metrics: + enabled: true + enableAgentMetrics: true + agentMetricsRetentionTime: "1m" +``` + +## Gateway Metrics + +Metrics from the Consul ingress, terminating, and mesh gateways can be scraped +with Prometheus by setting the field `global.metrics.enableGatewayMetrics` to `true`. The gateways emit standard Envoy proxy +metrics. To ensure that the metrics are not exposed to the public internet, as mesh and ingress gateways can have public +IPs, their metrics endpoints are exposed on the Pod IP of the respective gateway instance, rather than on all +interfaces on `0.0.0.0`. + +```yaml +global: + metrics: + enabled: true + enableGatewayMetrics: true +``` + +## Metrics in the UI Topology Visualization + +Consul's built-in UI has a topology visualization for services that are part of the Consul service mesh. The topology visualization has the ability to fetch basic metrics from a metrics provider for each service and display those metrics as part of the [topology visualization](/consul/docs/observe/telemetry/vm). + +The diagram below illustrates how the UI displays service metrics for a sample application: + +![UI Topology View](/img/ui-service-topology-view-hover.png) + +The topology view is configured under `ui.metrics`. 
This configuration enables the Consul UI to query the provider specified by +`ui.metrics.provider` at the URL of the Prometheus server `ui.metrics.baseURL`, and then display sidecar proxy metrics for the +service. The UI displays some specific sidecar proxy Prometheus metrics when `ui.metrics.enabled` is `true` and +`ui.enabled` is true. The value of `ui.metrics.enabled` defaults to `"-"` which means it inherits from the value of +`global.metrics.enabled.` + +```yaml +ui: + enabled: true + metrics: + enabled: true # by default, this inherits from the value global.metrics.enabled + provider: "prometheus" + baseURL: http://prometheus-server +``` + +## Deploying Prometheus (_for demo and non-production use-cases only_) + +The Helm chart contains demo manifests for deploying Prometheus. It can be installed with Helm with `prometheus.enabled`. This manifest is based on the community manifest for Prometheus. +The Prometheus deployment is designed to allow quick bootstrapping for trial and demo use cases, and is not recommended for production use-cases. + +Prometheus is be installed in the same namespace as Consul, and gets installed +and uninstalled along with the Consul installation. + +Grafana can optionally be utilized with Prometheus to display metrics. The installation and configuration of Grafana must be managed separately from the Consul Helm chart. The [Layer 7 Observability with Prometheus, Grafana, and Kubernetes](/consul/tutorials/kubernetes/kubernetes-layer7-observability) tutorial provides an installation walkthrough using Helm. + +```yaml +prometheus: + enabled: true +``` diff --git a/website/content/docs/observe/telemetry/vm.mdx b/website/content/docs/observe/telemetry/vm.mdx new file mode 100644 index 000000000000..d513fe544860 --- /dev/null +++ b/website/content/docs/observe/telemetry/vm.mdx @@ -0,0 +1,726 @@ +--- +layout: docs +page_title: Observe service mesh telemetry on virtual machines (VMs) +description: >- + Consul's UI can display a service's topology and associated metrics from the service mesh. Learn how to configure the UI to collect metrics from your metrics provider, modify access for metrics proxies, and integrate custom metrics providers. +--- + +# Observe service mesh telemetry on virtual machines (VMs) + +-> Coming here from "Configure metrics dashboard" or "Configure dashboard"? See [Configuring Dashboard URLs](#configuring-dashboard-urls). + +Since Consul 1.9.0, Consul's built in UI includes a topology visualization to +show a service's immediate connectivity at a glance. It is not intended as a +replacement for dedicated monitoring solutions, but rather as a quick overview +of the state of a service and its connections within the Service Mesh. + +The topology visualization requires services to be using [service mesh](/consul/docs/connect) via [sidecar proxies](/consul/docs/connect/proxy). + +The visualization may optionally be configured to include a link to an external +per-service dashboard. This is designed to provide convenient deep links to your +existing monitoring or Application Performance Monitoring (APM) solution for +each service. More information can be found in [Configuring Dashboard +URLs](#configuring-dashboard-urls). + +It is possible to configure the UI to fetch basic metrics from your metrics +provider storage to augment the visualization as displayed below. 
+ +![Consul UI Service Mesh Visualization](/img/ui-service-topology-view-hover.png) + +Consul has built-in support for overlaying metrics from a +[Prometheus](https://prometheus.io) backend. Alternative metrics providers may +be supported using a new and experimental JavaScript API. See [Custom Metrics +Providers](#custom-metrics-providers). + +## Kubernetes + +If running Consul in Kubernetes, the Helm chart can automatically configure Consul's UI to display topology +visualizations. See our [Kubernetes observability docs](/consul/docs/observe/telemetry/k8s) for more information. + +## Configuring the UI To Display Metrics + +To configure Consul's UI to fetch metrics there are two required configuration settings. +These need to be set on each Consul Agent that is responsible for serving the +UI. If there are multiple clients with the UI enabled in a datacenter for +redundancy these configurations must be added to all of them. + +We assume that the UI is already enabled by setting +[`ui_config.enabled`](/consul/docs/reference/agent/configuration-file/ui#ui_config_enabled) to `true` in the +agent's configuration file. + +To use the built-in Prometheus provider +[`ui_config.metrics_provider`](/consul/docs/reference/agent/configuration-file/ui#ui_config_metrics_provider) +must be set to `prometheus`. + +The UI must query the metrics provider through a proxy endpoint. This simplifies +deployment where Prometheus is not exposed externally to UI user's browsers. + +To set this up, provide the URL that the _Consul agent_ should use to reach the +Prometheus server in +[`ui_config.metrics_proxy.base_url`](/consul/docs/reference/agent/configuration-file/ui#ui_config_metrics_proxy_base_url). +For example in Kubernetes, the Prometheus helm chart by default installs a +service named `prometheus-server` so each Consul agent can reach it on +`http://prometheus-server` (using Kubernetes' DNS resolution). + +A full configuration to enable Prometheus is given below. + + + + + +```hcl +ui_config { + enabled = true + metrics_provider = "prometheus" + metrics_proxy { + base_url = "http://prometheus-server" + } +} +``` + + + + + +```yaml +ui: + enabled: true + metrics: + enabled: true # by default, this inherits from the value global.metrics.enabled + provider: "prometheus" + baseURL: http://prometheus-server +``` + + + + + +```json +{ + "ui_config": { + "enabled": true, + "metrics_provider": "prometheus", + "metrics_proxy": { + "base_url": "http://prometheus-server" + } + } +} +``` + + + + + +-> **Note**: For more information on configuring the observability UI on Kubernetes, use this [reference](/consul/docs/observe/telemetry/k8s). + +## Configuring Dashboard URLs + +Since Consul's visualization is intended as an overview of your mesh and not a +comprehensive monitoring tool, you can configure a service dashboard URL +template which allows users to click directly through to the relevant +service-specific dashboard in an external tool like +[Grafana](https://grafana.com) or a hosted provider. + +To configure this, you must provide a URL template in the [agent configuration +file](/consul/docs/reference/agent/configuration-file/ui#ui_config_dashboard_url_templates) for all agents that +have the UI enabled. The template is essentially the URL to the external +dashboard, but can have placeholder values which will be replaced with the +service name, namespace and datacenter where appropriate to allow deep-linking +to the relevant information. + +An example with Grafana is shown below. 
+ + + + + +```hcl +ui_config { + enabled = true + dashboard_url_templates { + service = "https://grafana.example.com/d/lDlaj-NGz/service-overview?orgId=1&var-service={{Service.Name}}&var-namespace={{Service.Namespace}}&var-partition={{Service.Partition}}&var-dc={{Datacenter}}" + } +} +``` + + + + + +```yaml +# The UI is enabled by default so this stanza is not required. +ui: + enabled: true + # This configuration requires version 0.40.0 or later of the Helm chart. + dashboardURLTemplates: + service: "https://grafana.example.com/d/lDlaj-NGz/service-overview?orgId=1&var-service={{Service.Name}}&var-namespace={{Service.Namespace}}&var-dc={{Datacenter}}" + +# If you are using a version of the Helm chart older than 0.40.0, you must +# configure the dashboard URL template using the `server.extraConfig` parameter +# in the Helm chart's values file. +server: + extraConfig: | + { + "ui_config": { + "dashboard_url_templates": { + "service": "https://grafana.example.com/d/lDlaj-NGz/service-overview?orgId=1&var-service={{ "{{" }}Service.Name}}&var-namespace={{ "{{" }}Service.Namespace}}&var-dc={{ "{{" }}Datacenter}}" + } + } + } +``` + + + + + +```json +{ + "ui_config": { + "enabled": true, + "dashboard_url_templates": { + "service": "https://grafana.example.com/d/lDlaj-NGz/service-overview?orgId=1\u0026var-service={{Service.Name}}\u0026var-namespace={{Service.Namespace}}\u0026var-partition={{Service.Partition}}\u0026var-dc={{Datacenter}}" + } + } +} +``` + + + + + +~> **Note**: On Kubernetes, the Consul Server configuration set in the Helm config's +[`server.extraConfig`](/consul/docs/reference/k8s/helm#v-server-extraconfig) key must be specified +as JSON. The `{{` characters in the URL must be escaped using `{{ "{{" }}` so that Helm +doesn't try to template them. + +![Consul UI Service Dashboard Link](/img/ui-dashboard-url-template.png) + +### Metrics Proxy + +In many cases the metrics backend may be inaccessible to UI user's browsers or +may be on a different domain and so subject to CORS restrictions. To make it +simpler to serve the metrics to the UI in these cases, the Consul agent can +proxy requests for metrics from the UI to the backend. + +**This is intended to simplify setup in test and demo environments. Careful +consideration should be given towards using this in production.** + +The simplest configuration is described in [Configuring the UI for +metrics](#configuring-the-ui-for-metrics). + +#### Metrics Proxy Security + +~> **Security Note**: Exposing a backend metrics service to potentially +un-authenticated network traffic via the proxy should be _carefully_ considered +in production. + +The metrics proxy endpoint is internal and intended only for UI use. However by +enabling it anyone with network access to the agent's API port may use it to +access metrics from the backend. + +**If ACLs are not enabled, full access to metrics will be exposed to +un-authenticated workloads on the network**. 
+ +With ACLs enabled, the proxy endpoint requires a valid token with read access +to all nodes and services (across all namespaces in Enterprise): + + + + + + +```hcl +service_prefix "" { + policy = "read" +} +node_prefix "" { + policy = "read" +} +``` + +```json +{ + "service_prefix": { + "": { + "policy": "read" + } + }, + "node_prefix": { + "": { + "policy": "read" + } + } +} +``` + + + + + + + +```hcl +namespace_prefix "" { + service_prefix "" { + policy = "read" + } + node_prefix "" { + policy = "read" + } +} +``` + +```json +{ + "namespace_prefix": { + "": { + "service_prefix": { + "": { + "policy": "read" + } + }, + "node_prefix": { + "": { + "policy": "read" + } + } + } + } +} +``` + + + + + + +It's typical for most authenticated users to have this level of access in Consul +as it's required for viewing the catalog or discovering services. If you use a +[Single Sign-On integration](/consul/docs/secure/acl/auth-method/oidc) (Consul +Enterprise) users of the UI can be automatically issued an ACL token with the +privileges above to be allowed access to the metrics through the proxy. + +Even with ACLs enabled, the proxy endpoint doesn't deeply understand the query +language of the backend so there is no way it can enforce least-privilege access +to only specific service-related metrics. + +_If you are not comfortable with all users of Consul having full access to the +metrics backend, you should not use the proxy and find an alternative like using +a custom provider that can query the metrics backend directly_. + +##### Path Allowlist + +To limit exposure of the metrics backend, paths must be explicitly added to an +allowlist to avoid exposing unintended parts of the API. For example with +Prometheus, both the `/api/v1/query_range` and `/api/v1/query` endpoints are +needed to load time-series and individual stats. If the proxy had the `base_url` +set to `http://prometheus-server` then the proxy would also expose read access +to several other endpoints such as `/api/v1/status/config` which includes all +Prometheus configuration which might include sensitive information. + +If you use the built-in `prometheus` provider the proxy is limited to the +essential endpoints. The default value for `metrics_proxy.path_allowlist` is +`["/api/v1/query_range", "/api/v1/query"]` as required by the built-in +`prometheus` provider . + +If you use a custom provider that uses the metrics proxy, you'll need to +explicitly set the allowlist based on the endpoints the provider needs to +access. + +#### Adding Headers + +It is also possible to configure the proxy to add one or more headers to +requests as they pass through. This is useful when the metrics backend requires +authentication. For example if your metrics are shipped to a hosted provider, +you could provision an API token specifically for the Consul UI and configure +the proxy to add it as in the example below. This keeps the API token only +visible to Consul operators in the configuration file while UI users can query +the metrics they need without separately obtaining a token for that provider or +having a token exposed to them that they might be able to use elsewhere. 
+ + + + + +```hcl +ui_config { + enabled = true + metrics_provider = "example-apm" + metrics_proxy { + base_url = "https://example-apm.com/api/v1/metrics" + add_headers = [ + { + name = "Authorization" + value = "Bearer " + } + ] + } +} +``` + + + + + +```json +{ + "ui_config": { + "enabled": true, + "metrics_provider": "example-apm", + "metrics_proxy": { + "base_url": "https://example-apm.com/api/v1/metrics", + "add_headers": [ + { + "name": "Authorization", + "value": "Bearer \u003ctoken\u003e" + } + ] + } + } +} +``` + + + + + +## Custom Metrics Providers + +Consul 1.9.0 includes a built-in provider for fetching metrics from +[Prometheus](https://prometheus.io). To enable the UI visualization feature +to work with other existing metrics stores and hosted services, we created a +"metrics provider" interface in JavaScript. A custom provider may be written and +the JavaScript file served by the Consul agent. + +~> **Note**: this interface is _experimental_ and may change in breaking ways or +be removed entirely as we discover the needs of the community. Please provide +feedback on [GitHub](https://github.com/hashicorp/consul) or +[Discuss](https://discuss.hashicorp.com/) on how you'd like to use this. + +The template for a complete provider JavaScript file is given below. + + + +```javascript +(function () { + var provider = { + /** + * init is called when the provider is first loaded. + * + * options.providerOptions contains any operator configured parameters + * specified in the `metrics_provider_options_json` field of the Consul + * agent configuration file. + * + * Consul will provide: + * + * 1. A boolean field options.metrics_proxy_enabled to indicate whether the + * agent has a metrics proxy configured. + * + * 2. A function options.fetch which is a thin wrapper around the browser's + * [Fetch API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API) + * that prefixes any url with the url of Consul's internal metrics proxy + * endpoint and adds your current Consul ACL token to the request + * headers. Otherwise it functions like the browser's native fetch. + * + * The provider should throw an Exception if the options are not valid, for + * example because it requires a metrics proxy and one is not configured. + */ + init: function(options) {}, + + /** + * serviceRecentSummarySeries should return time series for a recent time + * period summarizing the usage of the named service in the indicated + * datacenter. In Consul Enterprise a non-empty namespace is also provided. + * + * If these metrics aren't available then an empty series array may be + * returned. + * + * The period may (later) be specified in options.startTime and + * options.endTime. + * + * The service's protocol must be given as one of Consul's supported + * protocols e.g. "tcp", "http", "http2", "grpc". If it is empty or the + * provider doesn't recognize the protocol, it should treat it as "tcp" and + * provide basic connection stats. + * + * The expected return value is a JavaScript promise which resolves to an + * object that should look like the following: + * + * { + * // The unitSuffix is shown after the value in tooltips. Values will be + * // rounded and shortened. Larger values will already have a suffix + * // like "10k". The suffix provided here is concatenated directly + * // allowing for suffixes like "mbps/kbps" by using a suffix of "bps". + * // If the unit doesn't make sense in this format, include a + * // leading space for example " rps" would show as "1.2k rps". 
+ * unitSuffix: " rps", + * + * // The set of labels to graph. The key should exactly correspond to a + * // property of every data point in the array below except for the + * // special case "Total" which is used to show the sum of all the + * // stacked graph values. The key is displayed in the tooltip so it + * // should be human-friendly but as concise as possible. The value is a + * // longer description that is displayed in the graph's key on request + * // to explain exactly what the metrics mean. + * labels: { + * "Total": "Total inbound requests per second.", + * "Successes": "Successful responses (with an HTTP response code ...", + * "Errors": "Error responses (with an HTTP response code in the ...", + * }, + * + * data: [ + * { + * time: 1600944516286, // milliseconds since Unix epoch + * "Successes": 1234.5, + * "Errors": 2.3, + * }, + * ... + * ] + * } + * + * Every data point object should have a value for every series label + * (except for "Total") otherwise it will be assumed to be "0". + */ + serviceRecentSummarySeries: function(serviceDC, namespace, serviceName, protocol, options) {}, + + /** + * serviceRecentSummaryStats should return four summary statistics for a + * recent time period for the named service in the indicated datacenter. In + * Consul Enterprise a non-empty namespace is also provided. + * + * If these metrics aren't available then an empty array may be returned. + * + * The period may (later) be specified in options.startTime and + * options.endTime. + * + * The service's protocol must be given as one of Consul's supported + * protocols e.g. "tcp", "http", "http2", "grpc". If it is empty or the + * provider doesn't recognize it it should treat it as "tcp" and provide + * just basic connection stats. + * + * The expected return value is a JavaScript promise which resolves to an + * object that should look like the following: + * + * { + // stats is an array of stats to show. The first four of these will be + // displayed. Fewer may be returned if not available. + * stats: [ + * { + * // label should be 3 chars or fewer as an abbreviation + * label: "SR", + * + * // desc describes the stat in a tooltip + * desc: "Success Rate - the percentage of all requests that were not 5xx status", + * + * // value is a string allowing the provider to format it and add + * // units as appropriate. It should be as compact as possible. + * value: "98%", + * } + * ] + * } + */ + serviceRecentSummaryStats: function(serviceDC, namespace, serviceName, protocol, options) {}, + + /** + * upstreamRecentSummaryStats should return four summary statistics for each + * upstream service over a recent time period, relative to the named service + * in the indicated datacenter. In Consul Enterprise a non-empty namespace + * is also provided. + * + * Note that the upstreams themselves might be in different datacenters but + * we only pass the target service DC since typically these metrics should + * be from the outbound listener of the target service in this DC even if + * the requests eventually end up in another DC. + * + * If these metrics aren't available then an empty array may be returned. + * + * The period may (later) be specified in options.startTime and + * options.endTime. + * + * The expected return value is a JavaScript promise which resolves to an + * object that should look like the following: + * + * { + * stats: { + * // Each upstream will appear as an entry keyed by the upstream + * // service name. 
The value is an array of stats with the same + * // format as serviceRecentSummaryStats response.stats. Note that + * // different upstreams might show different stats depending on + * // their protocol. + * "upstream_name": [ + * {label: "SR", desc: "...", value: "99%"}, + * ... + * ], + * ... + * } + * } + */ + upstreamRecentSummaryStats: function(serviceDC, namespace, serviceName, upstreamName, options) {}, + + /** + * downstreamRecentSummaryStats should return four summary statistics for + * each downstream service over a recent time period, relative to the named + * service in the indicated datacenter. In Consul Enterprise a non-empty + * namespace is also provided. + * + * Note that the service may have downstreams in different datacenters. For + * some metrics systems which are per-datacenter this makes it hard to query + * for all downstream metrics from one source. For now the UI will only show + * downstreams in the same datacenter as the target service. In the future + * this method may be called multiple times, once for each DC that contains + * downstream services to gather metrics from each. In that case a separate + * option for target datacenter will be used since the target service's DC + * is still needed to correctly identify the outbound clusters that will + * route to it from the remote DC. + * + * If these metrics aren't available then an empty array may be returned. + * + * The period may (later) be specified in options.startTime and + * options.endTime. + * + * The expected return value is a JavaScript promise which resolves to an + * object that should look like the following: + * + * { + * stats: { + * // Each downstream will appear as an entry keyed by the downstream + * // service name. The value is an array of stats with the same + * // format as serviceRecentSummaryStats response.stats. Different + * // downstreams may display different stats if required although the + * // protocol should be the same for all as it is the target + * // service's protocol that matters here. + * "downstream_name": [ + * {label: "SR", desc: "...", value: "99%"}, + * ... + * ], + * ... + * } + * } + */ + downstreamRecentSummaryStats: function(serviceDC, namespace, serviceName, options) {} + } + + // Register the provider with Consul for use. This example would be usable by + // configuring the agent with `ui_config.metrics_provider = "example-provider". + window.consul.registerMetricsProvider("example-provider", provider) + +}()); +``` + + + +Additionally, the built in [Prometheus +provider code](https://github.com/hashicorp/consul/blob/main/ui/packages/consul-ui/vendor/metrics-providers/prometheus.js) +can be used as a reference. + +### Configuring the Agent With a Custom Metrics Provider. + +In the example below, we configure the Consul agent to use a metrics provider +named `example-provider`, which is defined in +`/usr/local/bin/example-metrics-provider.js`. The name `example-provider` must +have been specified in the call to `consul.registerMetricsProvider` as in the +code listing in the last section. 
+
+
+
+
+
+```hcl
+ui_config {
+  enabled = true
+  metrics_provider = "example-provider"
+  metrics_provider_files = ["/usr/local/bin/example-metrics-provider.js"]
+  metrics_provider_options_json = <<-EOT
+  {
+    "foo": "bar"
+  }
+  EOT
+}
+```
+
+
+
+
+
+```json
+{
+  "ui_config": {
+    "enabled": true,
+    "metrics_provider": "example-provider",
+    "metrics_provider_files": ["/usr/local/bin/example-metrics-provider.js"],
+    "metrics_provider_options_json": "{\"foo\":\"bar\"}"
+  }
+}
+```
+
+
+
+
+More than one JavaScript file may be specified in
+[`metrics_provider_files`](/consul/docs/reference/agent/configuration-file/ui#ui_config_metrics_provider_files)
+and all will be served, allowing flexibility to include dependencies if needed.
+Only one metrics provider can be configured and used at one time.
+
+The
+[`metrics_provider_options_json`](/consul/docs/reference/agent/configuration-file/ui#ui_config_metrics_provider_options_json)
+field is an optional literal JSON object which is passed to the provider's
+`init` method at startup time. This allows configuring arbitrary parameters for
+the provider in config rather than hard coding them into the provider itself to
+make providers more reusable.
+
+The provider may fetch metrics directly from another source although in this
+case the agent will probably need to serve the correct CORS headers to prevent
+browsers from blocking these requests. These may be configured with
+[`http_config.response_headers`](/consul/docs/reference/agent/configuration-file/general#response_headers).
+
+Alternatively, the provider may choose to use the [built-in metrics
+proxy](#metrics-proxy) to avoid cross domain issues or to inject additional
+authorization headers without requiring each UI user to be separately
+authenticated to the metrics backend.
+
+A function that behaves like the browser's [Fetch
+API](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API) is provided to
+the metrics provider JavaScript during `init` as `options.fetch`. This is a thin
+wrapper that prefixes any URL with the URL of Consul's metrics proxy endpoint
+and adds your current Consul ACL token to the request headers. Otherwise it
+functions like the browser's native fetch and will forward your request on to the
+metrics backend. The response will be returned without any modification to be
+interpreted by the provider and converted into the format described in the
+interface above.
+
+Provider authors should make it clear to users which paths are required so they
+can correctly configure the [path allowlist](#path-allowlist) in the metrics
+proxy to avoid exposing more of the metrics backend than needed.
+
+### Custom Provider Security Model
+
+Since the JavaScript file(s) are included in Consul's UI verbatim, the code in
+them must be treated as fully trusted by the operator. Typically the operator
+will have authored the code or will need to carefully vet providers written by
+third parties.
+
+This is equivalent to using the existing `-ui-dir` flag to serve an alternative
+version of the UI - in either model the operator takes full responsibility for
+the provenance of the code being served since it has the power to intercept ACL
+tokens, access cookies and local storage for the Consul UI domain and possibly
+more.
+
+## Current Limitations
+
+Currently there are some limitations to this feature.
+
+- **No cross-datacenter support** The initial metrics provider integration is
+  with Prometheus which is popular and easy to set up within one Kubernetes
+  cluster.
  However, when using the Consul UI in a multi-datacenter deployment, the UI
+  allows users to select any datacenter to view.
+
+  This means that the Prometheus server that the Consul agent serving the UI can
+  access likely only has metrics for the local datacenter. A full solution
+  would need additional proxying or exposing remote Prometheus servers on the
+  network in remote datacenters. Later we may support an easy way to set this up
+  via Consul service mesh, but initially the UI doesn't attempt to fetch metrics
+  if you are browsing a remote datacenter.
+
+- **Built-in provider requires metrics proxy** Initially the built-in
+  `prometheus` provider only supports querying Prometheus via the [metrics
+  proxy](#metrics-proxy). Later it may be possible to configure it for direct
+  access to an exposed Prometheus.
diff --git a/website/content/docs/openshift.mdx b/website/content/docs/openshift.mdx
new file mode 100644
index 000000000000..089e256c4055
--- /dev/null
+++ b/website/content/docs/openshift.mdx
@@ -0,0 +1,31 @@
+---
+layout: docs
+page_title: Consul on RedHat OpenShift
+description: >-
+  Consul has native OpenShift support, allowing you to deploy Consul sidecars to an OpenShift cluster facilitating a service mesh. Learn how to install Consul on OpenShift with Helm or the Consul K8s CLI and get started with tutorials.
+---
+
+# Consul on RedHat OpenShift
+
+Consul supports the OpenShift runtime. You can deploy Consul to an OpenShift cluster with [the Consul Helm chart](/consul/docs/k8s/installation/install#helm-chart-installation) or the [Consul K8s CLI](/consul/docs/k8s/installation/install-cli#consul-k8s-cli-installation).
+
+For help configuring and running OpenShift, refer to the external [RedHat OpenShift Documentation](https://docs.redhat.com/en).
+
+## Benefits
+
+Consul provides the following capabilities to networks with OpenShift clusters:
+
+- **Hybrid and multi-cloud service discovery**: Consul can automatically sync OpenShift services with its own service registry. Consul makes the OpenShift services discoverable to the rest of the cluster in its network, which you can expand across hardware and runtimes. At the same time, Consul makes the rest of its services discoverable to OpenShift, with options to limit access according to security requirements.
+- **Dedicated CLI tool**: The [`consul-k8s` CLI](/consul/docs/reference/cli/consul-k8s) is a dedicated tool for managing Consul deployments on Kubernetes runtimes, including OpenShift clusters. It can help you deploy clusters, update their configurations, and interact with sidecar proxies in your service mesh.
+
+## Guidance
+
+The following documentation for Consul and RedHat OpenShift is available:
+
+- [Install Consul on OpenShift](/consul/docs/deploy/server/k8s/platform/openshift) describes the process to deploy Consul in an OpenShift environment using the Helm chart.
+- [DNS forwarding on OpenShift](/consul/docs/manage/dns/forwarding/k8s#openshift-clusters) describes the process to enable the `consul` sub-domain in your cluster's DNS operations.
+- [Consul DNS views on OpenShift](/consul/docs/manage/dns/views#openshift-clusters) describes the process to deploy a lightweight Consul process in your cluster to return Consul DNS results without bypassing your existing DNS configuration.
+- [Troubleshoot Consul on OpenShift](/consul/docs/troubleshoot#kubernetes-and-openshift-deployments) includes steps to resolve issues that may occur.
+- [Upgrade Consul on OpenShift](/consul/docs/upgrade/k8s/openshift) + +For more information about Helm configurations for Consul on OpenShift, refer to the [Helm Chart reference documentation](/consul/docs/reference/k8s/helm). \ No newline at end of file diff --git a/website/content/docs/security/acl/auth-methods/aws-iam.mdx b/website/content/docs/reference/acl/auth-method/aws-iam.mdx similarity index 97% rename from website/content/docs/security/acl/auth-methods/aws-iam.mdx rename to website/content/docs/reference/acl/auth-method/aws-iam.mdx index bf4433a1a18e..9dca886e94d0 100644 --- a/website/content/docs/security/acl/auth-methods/aws-iam.mdx +++ b/website/content/docs/reference/acl/auth-method/aws-iam.mdx @@ -1,11 +1,11 @@ --- layout: docs -page_title: AWS Identity and Access Management (IAM) Auth Method +page_title: Access Control List (ACL) AWS Identity and Access Management (IAM) auth method configuration reference description: >- Use the AWS IAM auth method to authenticate to Consul through Amazon Web Service Identity Access Management role and user identities. Learn how to configure the auth method parameters using this reference page and example configuration. --- -# AWS Identity and Access Management (IAM) Auth Method +# ACL AWS Identity and Access Management (IAM) auth method configuration reference The AWS Identity and Access Management (IAM) auth method type allows for AWS IAM Roles and Users to be used to authenticate to Consul in order to obtain @@ -14,7 +14,7 @@ a Consul token. This page assumes general knowledge of [AWS IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) and the concepts described in the main [auth method -documentation](/consul/docs/security/acl/auth-methods). +documentation](/consul/docs/secure/acl/auth-method). ## Overview @@ -33,7 +33,7 @@ method with a strong guarantee of the client's identity. The auth method compare Resource Name (ARN) of the client with the `BoundIAMPrincipalARNs` list to determine if the client is permitted to login. -## Config Parameters +## Config parameters The following are the auth method [`Config`](/consul/api-docs/acl/auth-methods#config) parameters for an auth method of type `aws-iam`: @@ -102,7 +102,7 @@ parameters for an auth method of type `aws-iam`: } ``` -## Trusted Identity Attributes +## Trusted identity attributes The authentication step returns the following trusted identity attributes for use in binding rule selectors and bind name interpolation. All of these attributes are strings that can be interpolated @@ -116,7 +116,7 @@ and support the following selector operations: `Equal, Not Equal, In, Not In, Ma | `entity_path` | The path of the IAM role or user | `EnableIAMEntityDetails=true` | | `entity_tags.` | Value of a tag on the IAM role or user | `EnableIAMEntityDetails=true` and `IAMEntityTags` contains `` | -## IAM Policies +## IAM policies When `EnableIAMEntityDetails=false`, no specific IAM policies are needed. @@ -158,7 +158,7 @@ the client must have permission to fetch the role or user, respectively. } ``` -## Authentication Procedure +## Authentication procedure If `EnableIAMEntityDetails=false`, a client must log in with the following `consul login` command. 
diff --git a/website/content/docs/reference/acl/auth-method/jwt.mdx b/website/content/docs/reference/acl/auth-method/jwt.mdx new file mode 100644 index 000000000000..93e4dff3b8d5 --- /dev/null +++ b/website/content/docs/reference/acl/auth-method/jwt.mdx @@ -0,0 +1,177 @@ +--- +layout: docs +page_title: Access Control List (ACL) JSON Web Token (JWT) auth method configuration reference +description: >- + Use the JWT auth method to authenticate to Consul with a JSON web token and receive an ACL token with privileges based on JWT identity attributes. Learn how to configure the auth method parameters using this reference page and example configuration. +--- + +# ACL JSON Web Token (JWT) auth method configuration reference + + This feature is available in Consul versions 1.8.0 and newer. + +The `jwt` auth method can be used to authenticate with Consul by providing a +[JWT](https://en.wikipedia.org/wiki/JSON_Web_Token) directly. The JWT is +cryptographically verified using locally-provided keys, or, if configured, an +OIDC Discovery service can be used to fetch the appropriate keys. + +This page assumes general knowledge of JWTs and the concepts described in the +main [auth method documentation](/consul/docs/secure/acl/auth-method). + +Both the [`jwt`](/consul/docs/secure/acl/auth-method/jwt) and the +[`oidc`](/consul/docs/secure/acl/auth-method/oidc) auth method types allow additional +processing of the claims data in the JWT. + +@include 'legacy/jwt_or_oidc.mdx' + +## Config parameters + +The following auth method [`Config`](/consul/api-docs/acl/auth-methods#config) +parameters are required to properly configure an auth method of type +`jwt`: + +- `JWTValidationPubKeys` `(array)` - A list of PEM-encoded public keys + to use to authenticate signatures locally. + + Exactly one of `JWKSURL` `JWTValidationPubKeys`, or `OIDCDiscoveryURL` is required. + +- `OIDCDiscoveryURL` `(string: "")` - The OIDC Discovery URL, without any + .well-known component (base path). + + Exactly one of `JWKSURL` `JWTValidationPubKeys`, or `OIDCDiscoveryURL` is required. + +- `OIDCDiscoveryCACert` `(string: "")` - PEM encoded CA cert for use by the TLS + client used to talk with the OIDC Discovery URL. NOTE: Every line must end + with a newline (`\n`). If not set, system certificates are used. + +- `JWKSURL` `(string: "")` - The JWKS URL to use to authenticate signatures. + + Exactly one of `JWKSURL` `JWTValidationPubKeys`, or `OIDCDiscoveryURL` is required. + +- `JWKSCACert` `(string: "")` - PEM encoded CA cert for use by the TLS client + used to talk with the JWKS URL. NOTE: Every line must end with a newline + (`\n`). If not set, system certificates are used. + +- `ClaimMappings` `(map[string]string)` - Mappings of claims (key) that + [will be copied to a metadata field](#trusted-identity-attributes-via-claim-mappings) + (value). Use this if the claim you are capturing is singular (such as an attribute). + + When mapped, the values can be any of a number, string, or boolean and will + all be stringified when returned. + +- `ListClaimMappings` `(map[string]string)` - Mappings of claims (key) + [will be copied to a metadata field](#trusted-identity-attributes-via-claim-mappings) + (value). Use this if the claim you are capturing is list-like (such as groups). + + When mapped, the values in each list can be any of a number, string, or + boolean and will all be stringified when returned. + +- `JWTSupportedAlgs` `(array)` - JWTSupportedAlgs is a list of + supported signing algorithms. Defaults to `RS256`. 
+ +- `BoundAudiences` `(array)` - List of `aud` claims that are valid for + login; any match is sufficient. + +- `BoundIssuer` `(string: "")` - The value against which to match the `iss` + claim in a JWT. + +- `ExpirationLeeway` `(duration: 0s)` - Duration in seconds of leeway when + validating expiration of a token to account for clock skew. Defaults to 150 + (2.5 minutes) if set to 0 and can be disabled if set to -1. + +- `NotBeforeLeeway` `(duration: 0s)` - Duration in seconds of leeway when + validating not before values of a token to account for clock skew. Defaults + to 150 (2.5 minutes) if set to 0 and can be disabled if set to -1. + +- `ClockSkewLeeway` `(duration: 0s)` - Duration in seconds of leeway when + validating all claims to account for clock skew. Defaults to 60 (1 minute) + if set to 0 and can be disabled if set to -1. + +### Sample configs + +#### Static keys + +```json +{ + "Name": "example-jwt-auth-static-keys", + "Type": "jwt", + "Description": "Example JWT auth method with static keys", + "Config": { + "BoundIssuer": "corp-issuer", + "JWTValidationPubKeys": [ + "" + ], + "ClaimMappings": { + "http://example.com/first_name": "first_name", + "http://example.com/last_name": "last_name" + }, + "ListClaimMappings": { + "http://example.com/groups": "groups" + } + } +} +``` + +#### JWKS + +```json +{ + "Name": "example-jwt-auth-jwks", + "Type": "jwt", + "Description": "Example JWT auth method with JWKS", + "Config": { + "JWKSURL": "https://my-corp-jwks-url.example.com/", + "ClaimMappings": { + "http://example.com/first_name": "first_name", + "http://example.com/last_name": "last_name" + }, + "ListClaimMappings": { + "http://example.com/groups": "groups" + } + } +} +``` + +#### OIDC discovery + +```json +{ + "Name": "example-oidc-auth", + "Type": "oidc", + "Description": "Example OIDC auth method", + "Config": { + "BoundAudiences": [ + "V1RPi2MYptMV1RPi2MYptMV1RPi2MYpt" + ], + "OIDCDiscoveryURL": "https://my-corp-app-name.auth0.com/", + "ClaimMappings": { + "http://example.com/first_name": "first_name", + "http://example.com/last_name": "last_name" + }, + "ListClaimMappings": { + "http://example.com/groups": "groups" + } + } +} +``` + +## JWT verification + +JWT signatures will be verified against public keys from the issuer. This +process can be done one of three ways: + +- **Static Keys** - A set of public keys is stored directly in the + configuration. + +- **JWKS** - A JSON Web Key Set ([JWKS](https://tools.ietf.org/html/rfc7517)) + URL (and optional certificate chain) is configured. Keys will be fetched from + this endpoint during authentication. + +- **OIDC Discovery** - An OIDC Discovery URL (and optional certificate chain) + is configured. Keys will be fetched from this URL during authentication. When + OIDC Discovery is used, OIDC validation criteria (e.g. `iss`, `aud`, etc.) + will be applied. + +If multiple methods are needed, another auth method of this type may be created +with a different name. 
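+
+As a rough sketch of how these pieces fit together, the static-keys sample above
+(saved as `jwt-auth-method.json`) could be registered through the ACL HTTP API and
+then used with `consul login`. The file names and the bearer token path are
+illustrative, and a suitable binding rule must also exist before a login returns a
+token:
+
+```shell-session
+$ curl --request PUT \
+    --header "X-Consul-Token: $CONSUL_HTTP_TOKEN" \
+    --data @jwt-auth-method.json \
+    http://127.0.0.1:8500/v1/acl/auth-method
+
+$ consul login -method=example-jwt-auth-static-keys \
+    -bearer-token-file=/path/to/issued.jwt \
+    -token-sink-file=consul.token
+```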
+
+@include 'legacy/jwt_claim_mapping_details.mdx'
diff --git a/website/content/docs/reference/acl/auth-method/k8s.mdx b/website/content/docs/reference/acl/auth-method/k8s.mdx
new file mode 100644
index 000000000000..bcbd87da550f
--- /dev/null
+++ b/website/content/docs/reference/acl/auth-method/k8s.mdx
@@ -0,0 +1,163 @@
+---
+layout: docs
+page_title: Access Control List (ACL) Kubernetes auth method configuration reference
+description: >-
+  Use the Kubernetes auth method type to authenticate to Consul with a Kubernetes service account token and receive an ACL token with privileges based on JWT identity attributes. Learn how to configure auth method parameters using this reference page and example configuration.
+---
+
+# ACL Kubernetes auth method configuration reference
+
+ This feature is available in Consul versions 1.5.0 and newer.
+
+The `kubernetes` auth method type allows for a Kubernetes service account token
+to be used to authenticate to Consul. This method of authentication makes it
+easy to introduce a Consul token into a Kubernetes pod.
+
+This page assumes general knowledge of [Kubernetes](https://kubernetes.io/) and
+the concepts described in the main [auth method
+documentation](/consul/docs/secure/acl/auth-method).
+
+## Config parameters
+
+The following auth method [`Config`](/consul/api-docs/acl/auth-methods#config)
+parameters are required to properly configure an auth method of type
+`kubernetes`:
+
+- `Host` `(string: )` - Must be a host string, a host:port pair, or a
+  URL to the base of the Kubernetes API server.
+
+- `CACert` `(string: )` - PEM encoded CA cert for use by the TLS
+  client used to talk with the Kubernetes API. NOTE: Every line must end with a
+  newline (`\n`). If not set, system certificates are used.
+
+- `ServiceAccountJWT` `(string: )` - A Service Account Token
+  ([JWT](https://jwt.io/ 'JSON Web Token')) used by the Consul leader to
+  validate application JWTs during login.
+
+- `MapNamespaces` `(bool: )` -
+  **Deprecated in Consul 1.8.0 in favor of [namespace rules](/consul/api-docs/acl/auth-methods#namespacerules).**
+  Indicates whether the auth method should attempt to map the Kubernetes namespace to a Consul
+  namespace instead of creating tokens in the auth method's own namespace. Note
+  that mapping namespaces requires the auth method to reside within the
+  `default` namespace.
+
+- `ConsulNamespacePrefix` `(string: )` -
+  **Deprecated in Consul 1.8.0 in favor of [namespace rules](/consul/api-docs/acl/auth-methods#namespacerules).**
+  When `MapNamespaces` is enabled, this value will be prefixed to the Kubernetes
+  namespace to determine the Consul namespace to create the new token within.
+
+- `ConsulNamespaceOverrides` `(map: )` -
+  **Deprecated in Consul 1.8.0 in favor of [namespace rules](/consul/api-docs/acl/auth-methods#namespacerules).**
+  This field is a mapping of Kubernetes namespace names to Consul namespace
+  names. If a Kubernetes namespace is present within this map, the value will
+  be used without adding the `ConsulNamespacePrefix`. If the value in the map
+  is `""` then the auth method's namespace will be used instead of attempting
+  to determine an alternate namespace.
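+
+For example, a rough sketch of registering an auth method of this type from the
+command line, using illustrative values similar to the sample config below:
+
+```shell-session
+$ consul acl auth-method create -type kubernetes \
+    -name example-k8s-auth \
+    -description "Example Kubernetes auth method" \
+    -kubernetes-host "https://192.0.2.42:8443" \
+    -kubernetes-ca-cert @/path/to/kube.ca.crt \
+    -kubernetes-service-account-jwt "eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9..."
+```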
+ +### Sample config + +```json +{ + "Name": "example-k8s-auth", + "Type": "kubernetes", + "Description": "Example JWT auth method", + "Config": { + "Host": "https://192.0.2.42:8443", + "CACert": "-----BEGIN CERTIFICATE-----\n...-----END CERTIFICATE-----\n", + "ServiceAccountJWT": "eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9..." + } +} +``` + +## RBAC + +The Kubernetes service account corresponding to the configured +[`ServiceAccountJWT`](/consul/docs/secure/acl/auth-method/k8s#serviceaccountjwt) +needs to have access to two Kubernetes APIs: + +- [**TokenReview**](https://kubernetes.io/docs/reference/kubernetes-api/authentication-resources/token-review-v1/) + + + + Kubernetes should be running with `--service-account-lookup`. This is + defaulted to true in Kubernetes 1.7, but any versions prior should ensure + the Kubernetes API server is started with this setting. + + + +- [**ServiceAccount**](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#service-account-tokens) + +The following is an example +[RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) +configuration snippet to grant the necessary permissions to a service account +named `consul-auth-method-example`: + +```yaml +--- +kind: ClusterRoleBinding +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: review-tokens + namespace: default +subjects: + - kind: ServiceAccount + name: consul-auth-method-example + namespace: default +roleRef: + kind: ClusterRole + name: system:auth-delegator + apiGroup: rbac.authorization.k8s.io +--- +kind: ClusterRole +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: service-account-getter + namespace: default +rules: + - apiGroups: [''] + resources: ['serviceaccounts'] + verbs: ['get'] +--- +kind: ClusterRoleBinding +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: get-service-accounts + namespace: default +subjects: + - kind: ServiceAccount + name: consul-auth-method-example + namespace: default +roleRef: + kind: ClusterRole + name: service-account-getter + apiGroup: rbac.authorization.k8s.io +``` + +## Kubernetes authentication details + +Initially the +[`ServiceAccountJWT`](/consul/docs/secure/acl/auth-method/k8s#serviceaccountjwt) +given to the Consul leader uses the TokenReview API to validate the provided +JWT. The trusted attributes of `serviceaccount.namespace`, +`serviceaccount.name`, and `serviceaccount.uid` are populated directly from the +Service Account metadata. + +The Consul leader makes an additional query, this time to the ServiceAccount +API to check for the existence of an annotation of +`consul.hashicorp.com/service-name` on the ServiceAccount object. If one is +found its value will override the trusted attribute of `serviceaccount.name` +for the purposes of evaluating any binding rules. + +## Trusted identity attributes + +The authentication step returns the following trusted identity attributes for +use in binding rule selectors and bind name interpolation. 
+ +| Attributes | Supported Selector Operations | Can be Interpolated | +| -------------------------- | -------------------------------------------------- | ------------------- | +| `serviceaccount.namespace` | Equal, Not Equal, In, Not In, Matches, Not Matches | yes | +| `serviceaccount.name` | Equal, Not Equal, In, Not In, Matches, Not Matches | yes | +| `serviceaccount.uid` | Equal, Not Equal, In, Not In, Matches, Not Matches | yes | diff --git a/website/content/docs/reference/acl/auth-method/oidc.mdx b/website/content/docs/reference/acl/auth-method/oidc.mdx new file mode 100644 index 000000000000..b936a589e672 --- /dev/null +++ b/website/content/docs/reference/acl/auth-method/oidc.mdx @@ -0,0 +1,215 @@ +--- +layout: docs +page_title: Access Control List (ACL) OpenID Connect (OIDC) auth method configuration reference +description: >- + Use the OIDC auth method type to authenticate to Consul through a web browser with an OpenID Connect provider. Learn how to configure the auth method parameters using this reference page and example configuration. +--- + +# ACL OpenID Connect (OIDC) auth method configuration reference + + + +This feature requires version 1.8.0+ of +self-managed Consul Enterprise. +Refer to the [enterprise feature matrix](/consul/docs/enterprise#consul-enterprise-feature-availability) for additional information. + + + +The `oidc` auth method can be used to authenticate with Consul using +[OIDC](https://en.wikipedia.org/wiki/OpenID_Connect). This method allows +authentication via a configured OIDC provider using the user's web browser. +This method may be initiated from the Consul UI or the command line. + +This page assumes general knowledge of [OIDC +concepts](https://developer.okta.com/blog/2017/07/25/oidc-primer-part-1) and +the concepts described in the main [auth method +documentation](/consul/docs/secure/acl/auth-method). + +Both the [`jwt`](/consul/docs/secure/acl/auth-method/jwt) and the +[`oidc`](/consul/docs/secure/acl/auth-method/oidc) auth method types allow additional +processing of the claims data in the JWT. + +@include 'legacy/jwt_or_oidc.mdx' + +## Config parameters + +The following auth method [`Config`](/consul/api-docs/acl/auth-methods#config) +parameters are required to properly configure an auth method of type +`oidc`: + +- `OIDCDiscoveryURL` `(string: )` - The OIDC Discovery URL, without any + .well-known component (base path). + +- `OIDCDiscoveryCACert` `(string: "")` - PEM encoded CA cert for use by the TLS + client used to talk with the OIDC Discovery URL. NOTE: Every line must end + with a newline (`\n`). If not set, system certificates are used. + +- `OIDCClientID` `(string: )` - The OAuth Client ID configured with + your OIDC provider. + +- `OIDCClientSecret` `(string: )` - The OAuth Client Secret configured with + your OIDC provider. + +- `AllowedRedirectURIs` `(array)` - A list of allowed + values for `redirect_uri`. Must be non-empty. + +- `ClaimMappings` `(map[string]string)` - Mappings of claims (key) that + [will be copied to a metadata field](#trusted-identity-attributes-via-claim-mappings) + (value). Use this if the claim you are capturing is singular (such as an attribute). + + When mapped, the values can be any of a number, string, or boolean and will + all be stringified when returned. + +- `ListClaimMappings` `(map[string]string)` - Mappings of claims (key) + [will be copied to a metadata field](#trusted-identity-attributes-via-claim-mappings) + (value). Use this if the claim you are capturing is list-like (such as groups). 
+ + When mapped, the values in each list can be any of a number, string, or + boolean and will all be stringified when returned. + +- `OIDCScopes` `(array)` - A list of OIDC scopes. + +- `OIDCACRValues` `(array)` - A list of Authentication Context Class Reference values to use for the authentication request. Refer to [OIDC reference](https://openid.net/specs/openid-connect-core-1_0.html#rfc.section.3.1.2.1) for more info on this parameter. Added in v1.11.0. + +- `JWTSupportedAlgs` `(array)` - JWTSupportedAlgs is a list of + supported signing algorithms. Defaults to `RS256`. ([Available + algorithms](https://github.com/hashicorp/consul/blob/main/internal/go-sso/oidcauth/jwt.go)) + +- `BoundAudiences` `(array)` - List of `aud` claims that are valid for + login; any match is sufficient. + +- `VerboseOIDCLogging` `(bool: false)` - Log received OIDC tokens and claims when + debug-level logging is active. Not recommended in production since sensitive + information may be present in OIDC responses. + +### Sample config + +```json +{ + "Name": "example-oidc-auth", + "Type": "oidc", + "Description": "Example OIDC auth method", + "Config": { + "AllowedRedirectURIs": [ + "http://localhost:8550/oidc/callback", + "http://localhost:8500/ui/oidc/callback" + ], + "BoundAudiences": [ + "V1RPi2MYptMV1RPi2MYptMV1RPi2MYpt" + ], + "ClaimMappings": { + "http://example.com/first_name": "first_name", + "http://example.com/last_name": "last_name" + }, + "ListClaimMappings": { + "http://consul.com/groups": "groups" + }, + "OIDCClientID": "V1RPi2MYptMV1RPi2MYptMV1RPi2MYpt", + "OIDCClientSecret": "...(omitted)...", + "OIDCDiscoveryURL": "https://my-corp-app-name.auth0.com/" + } +} +``` + +## JWT verification + +JWT signatures will be verified against public keys from the issuer via OIDC +discovery. Keys will be fetched from the OIDC Discovery URL during +authentication and OIDC validation criteria (e.g. `iss`, `aud`, etc.) will be +applied. + +## OIDC authentication + +Consul includes two built-in OIDC login flows: the Consul UI, and the CLI using +[`consul login`](/consul/commands/login). + +### Redirect URIs + +An important part of OIDC auth method configuration is properly setting +redirect URIs. This must be done both in Consul and with the OIDC provider, and +these configurations must align. The redirect URIs are specified for an auth +method with the [`AllowedRedirectURIs`](#allowedredirecturis) parameter. There +are different redirect URIs to configure the Consul UI and CLI flows, so one or +both will need to be set up depending on the installation. + +#### Consul UI + +Logging in via the Consul UI requires a redirect URI of the form: +`http://localhost:8500/ui/oidc/callback` or +`https://{host:port}/ui/oidc/callback` + +The "host:port" must be correct for the Consul agent serving the Consul UI. + +#### CLI + +If you plan to support authentication via `consul login -type=oidc -method=`, a localhost redirect URI must be set (usually this is +`http://localhost:8550/oidc/callback`). Logins via the CLI may specify a +different host and/or listening port if needed, and a URI with this host/port +must match one of the configured redirected URIs. These same "localhost" URIs +must be added to the provider as well. + +### OIDC login + +#### Consul UI + +1. Click the "Log in" link at the top right of the menu bar. +2. Click one of the "Continue with..." buttons for your OIDC auth method of choice. +3. Complete the authentication with the configured provider. 
+
+#### CLI
+
+```shell-session
+$ consul login -method=oidc -type=oidc -token-sink-file=consul.token
+
+Complete the login via your OIDC provider. Launching browser to:
+
+    https://myco.auth0.com/authorize?redirect_uri=http%3A%2F%2Flocalhost%3A8550%2Foidc%2Fcallback&client_id=r3qXc2bix9eF...
+```
+
+The browser will open to the generated URL to complete the provider's login.
+The URL may be entered manually if the browser cannot be automatically opened.
+
+The callback listener may be customized with optional parameters, though these
+are typically not required. By default, the callback listener listens on
+`localhost:8550`. If you want to customize that address, use the optional
+[`-oidc-callback-listen-addr=`](/consul/commands/login#oidc-callback-listen-addr) flag.
+
+## Troubleshoot OIDC configuration
+
+The amount of configuration required for OIDC is relatively small, but it can
+be tricky to debug why things aren't working. Some tips for setting up OIDC:
+
+- Monitor the log output for the Consul servers. Important information about
+  OIDC validation failures will be emitted.
+
+- Ensure Redirect URIs are correct in Consul and on the provider. They need to
+  match exactly. Check: http/https, 127.0.0.1/localhost, port numbers, whether
+  trailing slashes are present.
+
+- [`BoundAudiences`](#boundaudiences) is optional and typically
+  not required. OIDC providers will use the `client_id` as the audience and
+  OIDC validation expects this.
+
+- Check your provider for what scopes are required in order to receive all of
+  the information you need. The scopes "profile" and "groups" often need to be
+  requested, and can be added by setting
+  `[OIDCScopes](#oidcscopes)="profile,groups"` on the auth method.
+
+- If you're seeing claim-related errors in logs, review the provider's docs
+  very carefully to see how they're naming and structuring their claims.
+  Depending on the provider, you may be able to construct a simple `curl`
+  [implicit grant](https://developer.okta.com/blog/2018/05/24/what-is-the-oauth2-implicit-grant-type)
+  request to obtain a JWT that you can inspect. An example of how to decode the
+  JWT (in this case located in the `access_token` field of a JSON response):
+
+      jq --raw-output '.access_token / "." | .[1] | @base64d' jwt.json
+
+- The [`VerboseOIDCLogging`](#verboseoidclogging) option is available which
+  will log the received OIDC token if debug level logging is enabled. This can
+  be helpful when debugging provider setup and verifying that the received
+  claims are what you expect. Since claims data is logged verbatim and may
+  contain sensitive information, this option should not be used in production.
+
+@include 'legacy/jwt_claim_mapping_details.mdx'
diff --git a/website/content/docs/reference/acl/policy.mdx b/website/content/docs/reference/acl/policy.mdx
new file mode 100644
index 000000000000..412445ba672c
--- /dev/null
+++ b/website/content/docs/reference/acl/policy.mdx
@@ -0,0 +1,511 @@
+---
+layout: docs
+page_title: Access Control List (ACL) policy configuration reference
+description: >-
+  ACL policies define access control rules for resources in Consul. When an ACL token is submitted with a request, Consul authorizes access based on the token's associated policies. Learn how to format and combine rules into policies and apply them to tokens.
+---
+
+# Access Control List (ACL) policy configuration reference
+
+This topic describes policies, which are components in Consul's access control list (ACL) system.
Policies define which services and agents are authorized to interact with resources in the network.
+
+## Introduction
+
+A policy is a group of one or more ACL rules that are linked to [ACL tokens](/consul/docs/secure/acl/token). The following diagram describes the relationships between rules, policies, and tokens:
+
+![ACL system component relationships](/img/acl-token-policy-rule-relationship.png)
+
+The term "policy" should not be confused with the keyword `policy`. The keyword is a rule-level element that determines access to a resource (see [Policy Dispositions](#policy-dispositions)).
+
+## Rules
+
+Rules are one of several [attributes that form a policy](#policy-attributes). They are building blocks that define access to resources.
+
+This section describes how to assemble rules into policies. Refer to the [ACL Rules Reference](/consul/docs/reference/acl/rule) for additional details about how to configure rules and how they affect access to resources.
+
+### Rule Specification
+
+A rule is composed of a resource declaration and an access level defined with the `policy` keyword and a [policy disposition](#policy-dispositions). The following syntax describes the basic structure of a rule:
+
+```hcl
+<resource> {
+  policy = "<policy disposition>"
+}
+```
+
+```json
+{
+  "<resource>": {
+    "policy": "<policy disposition>"
+  }
+}
+```
+
+Access to the specified resource is granted or denied based on the policy disposition.
+
+### Resource Labels
+
+Many resources take an additional value that limits the scope of the rule to resources with the same label. A resource label can be the name of a specific set of resources, such as nodes configured with the same `name` value.
+
+The following syntax describes how to include a resource label in the rule:
+
+```hcl
+<resource> "<label>" {
+  policy = "<policy disposition>"
+}
+```
+
+Labels provide operators with more granular control over access to the resource, but the following resource types do not take a label:
+
+- `acl`
+- `keyring`
+- `mesh`
+- `operator`
+
+Use the following syntax to create rules for these resources:
+
+```hcl
+<resource> = "<policy disposition>"
+```
+
+```json
+{
+  "<resource>": "<policy disposition>"
+}
+```
+
+### Policy Dispositions
+
+Use the `policy` keyword and one of the following access levels to set a policy disposition:
+
+- `read`: Allows the resource to be read but not modified.
+- `write`: Allows the resource to be read and modified.
+- `deny`: Denies read and write access to the resource.
+
+The special `list` access level provides access to all keys with the specified resource label in the [Consul KV](/consul/commands/kv/). The `list` access level can only be used with the `key_prefix` resource. The [`acl.enable_key_list_policy`](/consul/docs/reference/agent/configuration-file/acl#acl_enable_key_list_policy) setting must be set to `true`.
+
+### Matching and Prefix Values
+
+You can define rules for labeled resources based on exact matches or by using resource prefixes to match several resource labels beginning with the same value. Matching resource labels on exact values is described in the [Resource Labels](#resource-labels) section.
+
+The following example rule is an exact match that denies access to services labeled `web-prod`:
+
+```hcl
+service "web-prod" {
+  policy = "deny"
+}
+```
+
+```json
+{
+  "service": {
+    "web-prod": {
+      "policy": "deny"
+    }
+  }
+}
+```
+
+You can append the resource with `_prefix` to match all resource labels beginning with the same value.
The following example rule allows `write` access to all services with labels that begin with "web": + + + +```hcl +service_prefix "web" { + policy = "write" +} +``` + +```json +{ + "service_prefix": { + "web": { + "policy": "write" + } + } +} +``` + + + +Prefix-based resource labels can also contain an empty string, which configures the rule to apply to all resources of the declared type. The following example rule allows `read` access to all `service` resources: + + + +```hcl +service_prefix "" { + policy = "read" +} +``` + +```json +{ + "service_prefix": { + "": { + "policy":"read" + } + } +} +``` + + + +When using prefix-based rules, the most specific prefix match determines the action. In a real-world scenario, a combination of rules would be combined to create a flexible policy. Each team or business unit would use tokens based on policies that enforce several rules, for example: + +- A rule that denies access to a specific resource label +- A prefix-based rule that allows write access to a class of resources +- An empty prefix that grants read-only access to all resources within the declared class + +#### Matching Precedence + +Exact matching rules will only apply to the exact resource specified. The order of precedence for matching rules are: + +1. `deny` (highest priority) +1. `write` +1. `read` + +## Policy Format + +Define policies using the +[HashiCorp Configuration Language (HCL)](https://github.com/hashicorp/hcl/). +HCL is human readable and interoperable with JSON, making it easy to automate policy generation. +The following examples show the same policy formatted in HCL and JSON: + + + +```hcl +# These control access to the key/value store. +key_prefix "" { + policy = "read" +} +key_prefix "foo/" { + policy = "write" +} +key_prefix "foo/private/" { + policy = "deny" +} +# Or for exact key matches +key "foo/bar/secret" { + policy = "deny" +} + +# This controls access to cluster-wide Consul operator information. +operator = "read" +``` + +```json +{ + "key_prefix": { + "": { + "policy": "read" + }, + "foo/": { + "policy": "write" + }, + "foo/private/": { + "policy": "deny" + } + }, + "key": { + "foo/bar/secret": { + "policy": "deny" + } + }, + "operator": "read" +} +``` + + + +## Rule Scope + +The rules from all policies, including roles and service identities, linked with a token are combined to form that token's effective rule set. +Policy rules can be defined in either an `allowlist` or `denylist` mode, depending on the configuration of the [`acl_default_policy`](/consul/docs/reference/agent/configuration-file/acl#acl_default_policy). +If the default policy is configured to deny access to all resources, then you can specify `allowlist` in policy rules to explicitly allow access to resources. +Conversely, if the default policy is configured to allow access to all resources, then you can specify `denylist` in policy rules to explicitly deny access to resources. + +## Implementing Policies + +After defining policies, the person responsible for administrating ACLs in your organization can implement them through the command line or by calling the ACL HTTP API endpoint and including rules in the payload. + +### Command Line + +Use the `consul acl policy` command to manage policies. Refer to the [ACL command line documentation](/consul/commands/acl/policy) for details. 
+
+The following example creates a policy called `my-app-policy` and applies the rules defined in `rules.hcl`:
+
+```shell-session
+$ consul acl policy create -name "my-app-policy" -description "Human-readable description of my policy" -rules @rules.hcl -token ""
+```
+
+Note that the command must present a token with permissions to use the ACL system. If the command is issued successfully, the console will print information about the policy:
+
+```shell-session
+ID:
+Name:         my-app-policy
+Description:  Human-readable description of my policy
+Datacenters:
+Rules:
+
+```
+
+You can define several attributes that attach additional metadata and specify the scope of the policy. See [Policy Attributes](#policy-attributes) for details.
+
+### HTTP API Endpoint
+
+The endpoint takes data formatted in HCL or JSON. Refer to the [ACL HTTP API endpoint documentation](/consul/api-docs/acl) for details about the API.
+
+The following example adds a set of rules to a policy called `my-app-policy`. The policy defines access to the `key` resource (Consul K/V). The rules are formatted in HCL, but they are wrapped in JSON so that the data can be sent using cURL:
+
+```shell-session
+$ curl \
+    --request PUT \
+    --header "X-Consul-Token: " \
+    --data \
+'{
+  "Name": "my-app-policy",
+  "Rules": "key \"\" { policy = \"read\" } key \"foo/\" { policy = \"write\" } key \"foo/private/\" { policy = \"deny\" } operator = \"read\""
+}' http://127.0.0.1:8500/v1/acl/policy
+```
+
+The following call performs the same operation as the previous example using JSON:
+
+```shell-session
+$ curl \
+    --request PUT \
+    --header "X-Consul-Token: " \
+    --data \
+'{
+  "Name": "my-app-policy",
+  "Rules": "{\"key\":{\"\":{\"policy\":\"read\"},\"foo/\":{\"policy\":\"write\"},\"foo/private\":{\"policy\":\"deny\"}},\"operator\":\"read\"}"
+}' http://127.0.0.1:8500/v1/acl/policy
+```
+
+The policy configuration is returned when the call is successfully performed:
+
+```json
+{
+  "CreateIndex": 7,
+  "Hash": "UMG6QEbV40Gs7Cgi6l/ZjYWUwRS0pIxxusFKyKOt8qI=",
+  "ID": "5f423562-aca1-53c3-e121-cb0eb2ea1cd3",
+  "ModifyIndex": 7,
+  "Name": "my-app-policy",
+  "Rules": "key \"\" { policy = \"read\" } key \"foo/\" { policy = \"write\" } key \"foo/private/\" { policy = \"deny\" } operator = \"read\""
+}
+```
+
+### Linking Policies to Tokens
+
+A policy that has been implemented must still be linked to a token before the policy has an effect. A service or agent presents the token when interacting with resources in the network. The ACL system evaluates the policies linked to the token to determine if the requester has access to the requested resource.
+
+The person responsible for administrating ACLs can use the command line or call the API endpoint to link policies to tokens. Tokens can also be generated dynamically from an external system using Consul's [auth methods](/consul/docs/secure/acl/auth-method) functionality.
+
+Refer to the [tokens documentation](/consul/docs/secure/acl/token), as well as the [ACL tutorial](/consul/tutorials/security/access-control-setup-production#create-the-agent-token), for details about creating and linking policies to tokens.
+
+## Policy Attributes
+
+Policies may have several attributes that enable you to perform specific functions. For example, you can configure the policy's scope by specifying the name of a datacenter, namespace (Consul Enterprise), or administrative partition (Consul Enterprise) when you interact with or create policies.
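+
+For example, the following sketch creates a policy that is only valid in the `dc1`
+datacenter by setting the `Datacenters` attribute in the payload; the policy name
+and rule string are illustrative:
+
+```shell-session
+$ curl \
+    --request PUT \
+    --header "X-Consul-Token: <management token>" \
+    --data \
+'{
+  "Name": "dc1-key-read",
+  "Description": "Read-only KV access, valid only in dc1",
+  "Datacenters": ["dc1"],
+  "Rules": "key_prefix \"\" { policy = \"read\" }"
+}' http://127.0.0.1:8500/v1/acl/policy
+```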
+ +Additional metadata, such as the values of the `ID` and `name` fields, provide handles for updating and managing policies. + +Refer to the following topics for additional information: + +- [Namespaces](/consul/docs/multi-tenant/namespace) +- [Admin Partitions](/consul/docs/multi-tenant/admin-partition) + +ACL policies can have the following attributes: + +| Attribute | Description | Required | Default | +| ------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -------- | --------- | +| `ID` | The policy's public identifier. Present the `ID` (or the `name`) value when interacting with policies. You can specify a value when creating policies or use the value auto-generated by Consul. | N/A | N/A | +| `name` | Unique name for the policy. | Required | none | +| `description` | Human readable description of the policy. | Optional | none | +| `rules` | Set of rules granting or denying permissions. See the [Rule Specification](/consul/docs/security/acl/acl-rules#rule-specification) documentation for more details. | Optional | none | +| `datacenter` | Datacenter in which the policy is valid. More than one datacenter can be specified. | Optional | none | +| `namespace` | Namespace in which the policy is valid. Added in Consul Enterprise 1.7.0. | Optional | `default` | +| `partition` | Admin partition in which the policy is valid. Added in Consul Enterprise 1.11.0 | Optional | `default` | + +-> **Non-default Namespaces and Partitions** - Rules defined in a policy tied to an namespace or admin partition other than `default` can only grant a subset of privileges that affect the namespace or partition. See [Namespace Rules](/consul/docs/security/acl/acl-rules#namespace-rules) and [Admin Partition Rules](/consul/docs/security/acl/acl-rules#admin-partition-rules) for additional information. + +You can view the current ACL policies on the command line or through the API. The following example demonstrates the command line usage: + +```shell-session +$ consul acl policy list -format json -token +[ + { + "ID": "56595ec1-52e4-d6de-e566-3b78696d5459", + "Name": "b-policy", + "Description": "", + "Datacenters": null, + "Hash": "ULwaXlI6Ecqb9YSPegXWgVL1LlwctY9TeeAOhp5HGBA=", + "CreateIndex": 126, + "ModifyIndex": 126, + "Namespace": "default", + "Partition": "default" + }, + { + "ID": "00000000-0000-0000-0000-000000000001", + "Name": "global-management", + "Description": "Builtin Policy that grants unlimited access", + "Datacenters": null, + "Hash": "W1bQuDAlAlxEb4ZWwnVHplnt3I5oPKOZJQITh79Xlog=", + "CreateIndex": 70, + "ModifyIndex": 70, + "Namespace": "default", + "Partition": "default" + } +] +``` + +The `Hash`, `CreateIndex`, and `ModifyIndex` attributes are also printed. These attributes are printed for all responses and are not specific to ACL policies. + +## Built-in Policies + +New installations of Consul ship with the following built-in policies. + +### Global Management + +The `global-management` policy grants unrestricted privileges to any token linked to it. The policy is assigned the reserved ID of `00000000-0000-0000-0000-000000000001`. You can rename the global management policy, but Consul prevents you from modifying any other attributes, including the rule set and datacenter scope. + +### Global Read-Only + +The `builtin/global-read-only` policy grants unrestricted _read-only_ privileges to any token linked to it. 
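+
+You can also inspect a single policy by name or ID. For example, a brief sketch
+using the CLI to read back the `my-app-policy` example from earlier:
+
+```shell-session
+$ consul acl policy read -name my-app-policy
+```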
The policy is assigned the reserved ID of `00000000-0000-0000-0000-000000000002`. You can rename the global read-only policy, but Consul prevents you from modifying any other attributes, including the rule set and datacenter scope. + +### Namespace Management + +The `namespace-management` policy will be injected into all namespaces you create. The policy will be assigned a randomized UUID and can be managed as a normal, user-defined policy within the namespace. This feature was added in Consul Enterprise 1.7.0. + +## Example Policies + +This section includes example policy configurations for achieving specific use-cases. + +### Enable the Snapshot Agent to Run on a Specific Node + +The `consul snapshot agent` command starts a process that takes snapshots of the state of the Consul servers and either saves them locally or pushes them to a remote storage service. Refer to [Consul Snapshot Agent](/consul/commands/snapshot/agent) for additional information. + +In the following example, the ACL policy enables the snapshot agent to run on a node named `server-1234`. + + + +```hcl +# Required to read and snapshot ACL data +acl = "write" +# Allow the snapshot agent to create the key consul-snapshot/lock which will +# serve as a leader election lock when multiple snapshot agents are running in +# an environment +key "consul-snapshot/lock" { + policy = "write" +} +# Allow the snapshot agent to create sessions on the specified node +session "server-1234" { + policy = "write" +} +# Allow the snapshot agent to register itself into the catalog +service "consul-snapshot" { + policy = "write" +} +``` + +```json +{ + "acl": "write", + "key": { + "consul-snapshot/lock": { + "policy": "write" + } + }, + "session": { + "server-1234": { + "policy": "write" + } + }, + "service": { + "consul-snapshot": { + "policy": "write" + } + } +} +``` + + + +### Enable Vault to Access the Consul Storage Backend + +If you are using [Vault](/vault/docs) to manage secrets in your infrastructure, you can configure Vault to use Consul's key/value (KV) store as backend storage to persist Vault's data. Refer to the [Consul KV documentation](/consul/docs/automate/kv) and the [Vault storage documentation](/vault/docs/configuration/storage) for additional information. + +In the following example, the ACL policy enables Vault to register as a service +and provides access to the `vault/` path in Consul's KV store. + + + +```hcl +# Provide KV visibility to all agents. +agent_prefix "" { + policy = "read" +} +# Enable resources prefixed with 'vault/' to write to the KV +key_prefix "vault/" { + policy = "write" +} +# Enable the vault service to write to the KV +service "vault" { + policy = "write" +} +# Enable the agent to initialize a new session. +session_prefix "" { + policy = "write" +} +``` + +```json +{ + "agent_prefix": { + "": { + "policy": "read" + } + }, + "key_prefix": { + "vault/": { + "policy": "write" + } + }, + "service": { + "vault": { + "policy": "write" + } + }, + "session_prefix": { + "": { + "policy": "write" + } + } +} +``` + + diff --git a/website/content/docs/reference/acl/role.mdx b/website/content/docs/reference/acl/role.mdx new file mode 100644 index 000000000000..b0abb09ad5d6 --- /dev/null +++ b/website/content/docs/reference/acl/role.mdx @@ -0,0 +1,401 @@ +--- +layout: docs +page_title: Access Control List (ACL) role configuration reference +description: >- + Roles are a named collection of ACL policies, service identities, and node identities. 
Learn how roles allow you to reuse and update access control policies without needing to distribute new tokens to users.
+---
+
+# Access Control List (ACL) role configuration reference
+
+A role is a collection of policies that your ACL administrator can link to a token.
+They enable you to reuse policies by decoupling the policies from the token distributed to team members.
+Instead, the token is linked to the role, which is able to hold several policies that can be updated asynchronously without distributing new tokens to users.
+As a result, roles can provide a more convenient authentication infrastructure than creating unique policies and tokens for each requester.
+
+## Workflow overview
+
+Roles are configurations linking several policies to a token. The following procedure describes the workflow for implementing roles.
+
+1. Assemble rules into policies (see [Policies](/consul/docs/secure/acl/policy)) and register them in Consul.
+1. Define a role and include the policy IDs or names.
+1. Register the role in Consul and link it to a token.
+1. Distribute the tokens to users for implementation.
+
+## Creating roles
+
+Creating roles is commonly the responsibility of the Consul ACLs administrator.
+Roles have several attributes, including service identities and node identities.
+Refer to the following documentation for details:
+
+- [Role Attributes](#role-attributes)
+- [Templated Policies](#templated-policies)
+- [Service Identities](#service-identities)
+- [Node Identities](#node-identities)
+
+Use the Consul command line or API endpoint to create roles.
+
+### Command line
+
+Issue the `consul acl role create` command to create roles. In the following example, a role named `crawler` is created that contains a policy named `crawler-kv` and a policy named `crawler-key`.
+
+```shell-session
+$ consul acl role create -name "crawler" -description "web crawler role" -policy-name "crawler-kv" -policy-name "crawler-key"
+```
+
+Refer to the [command line documentation](/consul/commands/acl/role) for details.
+
+### API
+
+Make a `PUT` call to the `acl/role` endpoint and specify the role configuration in the payload to create roles. You can save the role definition in a JSON file or use escaped JSON in the call. In the following example call, the payload is defined externally.
+
+```shell-session
+$ curl --request PUT --data @payload.json http://127.0.0.1:8500/v1/acl/role
+```
+
+Refer to the [API documentation](/consul/api-docs/acl/roles) for details.
+
+## Role attributes
+
+Roles may contain the following attributes:
+
+- `ID`: The `ID` is an auto-generated public identifier. You can specify the role `ID` when linking it to tokens.
+- `Name`: A unique meaningful name for the role. You can specify the role `Name` when linking it to tokens.
+- `Description`: (Optional) A human-readable description of the role.
+- `Policies`: Specifies the list of policies that are applicable for the role. The object can reference the policy `ID` or `Name` attribute.
+- `TemplatedPolicies`: Specifies a list of templated policies that are applicable for the role. See [Templated Policies](#templated-policies) for details.
+- `ServiceIdentities`: Specifies a list of services that are applicable for the role. See [Service Identities](#service-identities) for details.
+- `NodeIdentities`: Specifies a list of nodes that are applicable for the role. See [Node Identities](#node-identities) for details.
+- `Namespace`: The namespace that the policy resides in.
Roles can only be linked to policies that are defined in the same namespace. See [Namespaces](/consul/docs/multi-tenant/namespace) for additional information. Requires Consul Enterprise 1.7.0+ +- `Partition`: The admin partition that the policy resides in. Roles can only be linked to policies that are defined in the same admin partition. See [Admin Partitions](/consul/docs/multi-tenant/admin-partition) for additional information. Requires Consul Enterprise 1.10.0+. + +## Templated policies + +You can specify a templated policy when configuring roles or linking tokens to policies. Templated policies enable you to quickly construct policies for common Consul use cases, rather than creating identical policies for each use cases. + +Consul uses templated policies during the authorization process to automatically generate a policy for the use case specified. Consul links the generated policy to the role or token so that it will have permission for the specific use case. + +### Templated policy specification + +The following templated policy example configuration uses the `builtin/service` templated policy to give a service named `api` and its sidecar proxy the required permissions to register into the catalog. + +```json +{ + "TemplatedPolicies": [ + { + "TemplateName": "builtin/service", + "TemplateVariables": { + "Name": "api" + } + } + ] +} +``` + +- `TemplatedPolicies`: Declares a templated policy block. +- `TemplatedPolicies.TemplateName`: String value that specifies the name of the templated policy you want to use. +- `TemplatedPolicies.TemplateVariables`: Map that specifies the required variables for the templated policies. This field is optional as not all templated policies require variables. + +Refer to the [API documentation for roles](/consul/api-docs/acl/roles#sample-payload) for additional information and examples. + +-> In Consul Enterprise, templated policies inherit the namespace or admin partition scope of the corresponding ACL token or role. + +The `builtin/service` sample templated policy generates the following policy for a service named `api`: + +```hcl +# Allow the service and its sidecar proxy to register into the catalog. +service "api" { + policy = "write" +} +service "api-sidecar-proxy" { + policy = "write" +} + +# Allow for any potential upstreams to be resolved. +service_prefix "" { + policy = "read" +} +node_prefix "" { + policy = "read" +} +``` + +Refer to the [rules reference](/consul/docs/reference/acl/rule) for information about the rules in the policy. + +### Example + +The following role configuration contains a templated policy that gives the role required permission for a service named `web` to register itself and its sidecar into the catalog. + + + +```json +{ + "Name": "example-role", + "Description": "Showcases all input parameters", + "TemplatedPolicies": [ + { + "TemplateName": "builtin/service", + "TemplateVariables": { + "Name": "web" + } + } + ] +} +``` + + + +During the authorization process, Consul generates the following policies for the `web` services and links it to the token: + + + +```hcl +# Allow the service and its sidecar proxy to register into the catalog. +service "web" { + policy = "write" +} +service "web-sidecar-proxy" { + policy = "write" +} + +# Allow for any potential upstreams to be resolved. +service_prefix "" { + policy = "read" +} +node_prefix "" { + policy = "read" +} +``` + + +## Service Identities + +You can specify a service identity when configuring roles or linking tokens to policies. 
Service identities enable you to quickly construct policies for services, rather than creating identical polices for each service. + +Service identities are used during the authorization process to automatically generate a policy for the service(s) specified. The policy will be linked to the role or token so that the service(s) can _be discovered_ and _discover other healthy service instances_ in a service mesh. Refer to the [service mesh](/consul/docs/connect) topic for additional information about Consul service mesh. + +### Service identity specification + +Use the following syntax to define a service identity: + +```json +{ + "ServiceIdentities": [ + { + "ServiceName": "", + "Datacenters": [""] + } + ] +} +``` + +- `ServiceIdentities`: Declares a service identity block. +- `ServiceIdentities.ServiceName`: String value that specifies the name of the service you want to associate with the policy. +- `ServiceIdentities.Datacenters`: Array that specifies the names of datacenters in which the service identity applies. This field is optional. + +Refer to the [API documentation for roles](/consul/api-docs/acl/roles#sample-payload) for additional information and examples. + +-> **Scope for Namespace and Admin Partition** - In Consul Enterprise, service identities inherit the namespace or admin partition scope of the corresponding ACL token or role. + +The following policy is generated for each service when a service identity is declared: + +```hcl +# Allow the service and its sidecar proxy to register into the catalog. +service "" { + policy = "write" +} +service "-sidecar-proxy" { + policy = "write" +} + +# Allow for any potential upstreams to be resolved. +service_prefix "" { + policy = "read" +} +node_prefix "" { + policy = "read" +} +``` + +Refer to the [rules reference](/consul/docs/reference/acl/rule) for information about the rules in the policy. + +### Example + +The following role configuration contains service identities for the `web` and `db` services. Note that the `db` service is also scoped to the `dc1` datacenter so that the policy will only be applied to instances of `db` in `dc1`. + + + +```json +{ + "Name": "example-role", + "Description": "Showcases all input parameters", + "Policies": [ + { + "ID": "783beef3-783f-f41f-7422-7087dc272765" + }, + { + "Name": "node-read" + } + ], + "ServiceIdentities": [ + { + "ServiceName": "web" + }, + { + "ServiceName": "db", + "Datacenters": ["dc1"] + } + ], + "NodeIdentities": [ + { + "NodeName": "node-1", + "Datacenter": "dc2" + } + ] +} +``` + + + +During the authorization process, the following policies for the `web` and `db` services will be generated and linked to the token: + + + +```hcl +# Allow the service and its sidecar proxy to register into the catalog. +service "web" { + policy = "write" +} +service "web-sidecar-proxy" { + policy = "write" +} + +# Allow for any potential upstreams to be resolved. +service_prefix "" { + policy = "read" +} +node_prefix "" { + policy = "read" +} +``` + + + +Per the `ServiceIdentities.Datacenters` configuration, the `db` policy is scoped to resources in the `dc1` datacenter. + + + +```hcl +# Allow the service and its sidecar proxy to register into the catalog. +service "db" { + policy = "write" +} +service "db-sidecar-proxy" { + policy = "write" +} + +# Allow for any potential upstreams to be resolved. 
+service_prefix "" { + policy = "read" +} +node_prefix "" { + policy = "read" +} +``` + + + +## Node Identities + +You can specify a node identity when configuring roles or linking tokens to policies. _Node_ commonly refers to a Consul agent, but a node can also be a physical server, cloud instance, virtual machine, or container. + +Node identities enable you to quickly construct policies for nodes, rather than manually creating identical polices for each node. They are used during the authorization process to automatically generate a policy for the node(s) specified. You can specify the token linked to the policy in the [`acl_tokens_agent`](/consul/docs/reference/agent/configuration-file/acl#acl_tokens_agent) field when configuring the agent. + +### Node identity specification + +Use the following syntax to define a node identity: + +```json +{ + "NodeIdentities": [ + { + "NodeName": "", + "Datacenter": "" + } + ] +} +``` + +- `NodeIdentities`: Declares a node identity block. +- `NodeIdentities.NodeName`: String value that specifies the name of the node you want to associate with the policy. +- `NodeIdentities.Datacenter`: String value that specifies the name of the datacenter in which the node identity applies. + +Refer to the [API documentation for roles](/consul/api-docs/acl/roles#sample-payload) for additional information and examples. + +-> **Consul Enterprise Namespacing** - Node Identities can only be applied to tokens and roles in the `default` namespace. The generated policy rules allow for `service:read` permissions on all services in all namespaces. + +The following policy is generated for each node when a node identity is declared: + +```hcl +# Allow the agent to register its own node in the Catalog and update its network coordinates +node "" { + policy = "write" +} + +# Allows the agent to detect and diff services registered to itself. This is used during +# anti-entropy to reconcile difference between the agents knowledge of registered +# services and checks in comparison with what is known in the Catalog. +service_prefix "" { + policy = "read" +} +``` + +Refer to the [rules reference](/consul/docs/reference/acl/rule) for information about the rules in the policy. + +### Example + +The following role configuration contains a node identity for `node-1`. Note that the node identity is also scoped to the `dc2` datacenter. As a result, the policy will only be applied to nodes named `node-1` in `dc2`. + + + +```json +{ + "Name": "example-role", + "Description": "Showcases all input parameters", + "Policies": [ + { + "ID": "783beef3-783f-f41f-7422-7087dc272765" + }, + { + "Name": "node-read" + } + ], + "NodeIdentities": [ + { + "NodeName": "node-1", + "Datacenter": "dc2" + } + ] +} +``` + + + +During the authorization process, the following policy will be generated and linked to the token: + + + +```hcl +# Allow the agent to register its own node in the Catalog and update its network coordinates +node "node-1" { + policy = "write" +} + +# Allows the agent to detect and diff services registered to itself. This is used during +# anti-entropy to reconcile differences between the agent's knowledge of registered +# services and checks in comparison with what is known in the Catalog. 
+service_prefix "" { + policy = "read" +} +``` + + diff --git a/website/content/docs/reference/acl/rule.mdx b/website/content/docs/reference/acl/rule.mdx new file mode 100644 index 000000000000..fbe5ba94187d --- /dev/null +++ b/website/content/docs/reference/acl/rule.mdx @@ -0,0 +1,960 @@ +--- +layout: docs +page_title: Access Control List (ACL) rule configuration reference +description: >- + Consul documentation provides reference material for all features and options available in Consul. +--- + +# ACL rule configuration reference + +This topic provides reference information for the types of access control list (ACL) rules you can create and how they affect access to datacenter resources. For details on how to create rules and group them into policies, refer to [Policies](/consul/docs/secure/acl/policy). + +## Overview + +The following table provides an overview of the resources you can use to create ACL rules. + +| Resource | Description | Labels | +| ---------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------ | +| `acl` | Controls access to ACL operations in the [ACL API](/consul/api-docs/acl).
    Refer to [ACL Resource Rules](#acl-resource-rules) for details. | No | +| `partition`
    `partition_prefix` | Controls access to one or more admin partitions.
    Refer to [Admin Partition Rules](#admin-partition-rules) for details. | Yes | +| `agent`
    `agent_prefix` | Controls access to the utility operations in the [Agent API](/consul/api-docs/agent), such as `join` and `leave`.
    Refer to [Agent Rules](#agent-rules) for details. | Yes | +| `event`
    `event_prefix` | Controls access to event operations in the [Event API](/consul/api-docs/event), such as firing and listing events.
    Refer to [Event Rules](#event-rules) for details. | Yes | +| `key`
    `key_prefix`   | Controls access to key/value store operations in the [KV API](/consul/api-docs/kv).
    Can also use the `list` access level when setting the policy disposition.
    Has additional value options in Consul Enterprise for integrating with [Sentinel](https://docs.hashicorp.com/sentinel/consul).
    Refer to [Key/Value Rules](#key-value-rules) for details. | Yes | +| `keyring`       | Controls access to keyring operations in the [Keyring API](/consul/api-docs/operator/keyring).
    Refer to [Keyring Rules](#keyring-rules) for details. | No | +| `mesh`       | Provides operator-level permissions for resources in the admin partition, such as ingress gateways or mesh proxy defaults. Refer to [Mesh Rules](#mesh-rules) for details. | No | +| `peering`       | Controls access to cluster peerings in the [Cluster Peering API](/consul/api-docs/peering). For more details, refer to [Peering Rules](#peering-rules). | No | +| `namespace`
    `namespace_prefix` | Controls access to one or more namespaces.
    Refer to [Namespace Rules](#namespace-rules) for details. | Yes | +| `node`
    `node_prefix`   | Controls access to node-level operations in the [Catalog API](/consul/api-docs/catalog), [Health API](/consul/api-docs/health), [Prepared Query API](/consul/api-docs/query), [Network Coordinate API](/consul/api-docs/coordinate), and [Agent API](/consul/api-docs/agent).
    Refer to [Node Rules](#node-rules) for details. | Yes | +| `operator`       | Controls access to cluster-level operations available in the [Operator API](/consul/api-docs/operator) excluding keyring API endpoints.
    Refer to [Operator Rules](#operator-rules) for details. | No | +| `query`
    `query_prefix` | Controls access to create, update, and delete prepared queries in the [Prepared Query API](/consul/api-docs/query). Access to the [node](#node-rules) and [service](#service-rules) resources must also be granted.
    Refer to [Prepared Query Rules](#prepared-query-rules) for details. | Yes | +| `service`
    `service_prefix` | Controls service-level operations in the [Catalog API](/consul/api-docs/catalog), [Health API](/consul/api-docs/health), [Intentions API](/consul/api-docs/connect/intentions), [Prepared Query API](/consul/api-docs/query), and [Agent API](/consul/api-docs/agent).
    Refer to [Service Rules](#service-rules) for details. | Yes | +| `session`
    `session_prefix` | Controls access to operations in the [Session API](/consul/api-docs/session).
    Refer to [Session Rules](#session-rules) for details. | Yes | + +The following resources are not covered by ACL policies: + +- The [Status API](/consul/api-docs/status) is used by servers when bootstrapping and exposes basic IP and port information about the servers, and does not allow modification of any state. +- The datacenter listing operation of the [Catalog API](/consul/api-docs/catalog#list-datacenters) similarly exposes the names of known Consul datacenters, and does not allow modification of any state. +- The [service mesh CA roots endpoint](/consul/api-docs/connect/ca#list-ca-root-certificates) exposes just the public TLS certificate which other systems can use to verify the TLS connection with Consul. + +-> **Consul Enterprise Namespace** - In addition to directly-linked policies, roles, and service identities, Consul Enterprise enables ACL policies and roles to be defined in the [Namespaces definition](/consul/docs/multi-tenant/namespace#namespace-definition) (Consul Enterprise 1.7.0+). + +The following topics provide additional details about the available resources. + +## ACL Resource Rules + +The `acl` resource controls access to ACL operations in the [ACL API](/consul/api-docs/acl). Only one `acl` rule is allowed per policy. The value is set to one of the [policy dispositions](#policy-dispositions). + +The `acl = "write"` rule is also required to create snapshots. This is because all token secrets are contained within the snapshot. + +Rules for ACL resources do not use labels. + +In the following example, the `acl` rule is configured with `write` access to the ACL API. +The rule enables the operator to read or write ACLs, as well as discover the secret ID of any token. + + + +```hcl +acl = "write" +``` + +```json +{ + "acl": "write" +} +``` + + + +## Admin Partition Rules + +The `partition` and `partition_prefix` resource controls access to one or more admin partitions. +You can include any number of namespace rules inside the admin partition. + +In the following example, the policy grants `write` access to the `ex-namespace` +namespace, as well as namespaces prefixed with `exns-` in the `example` partition. +The `mesh` and `peering` resources are also scoped to the admin partition rule, which grants +`write` access to the `mesh` and `peering` resources in the `example` partition. + +In addition, the policy grants `read` access to the `ex-namespace` namespace, as +well as namespaces prefixed with `exns-` in all partitions containing the +`example-` prefix. Read access is granted for the `mesh` and `peering` resources +scoped within the associated partition. 
+ + + +```hcl +partition "example" { + mesh = "write" + peering = "write" + + node "my-node" { + policy = "write" + } + + namespace "ex-namespace" { + policy = "write" + } + + namespace_prefix "exns-" { + policy = "write" + } +} + +partition_prefix "example-" { + mesh = "read" + peering = "read" + + node "my-node" { + policy = "read" + } + + namespace "ex-namespace" { + policy = "read" + } +} +``` + +```json +{ + "partition": { + "example": { + "mesh": "write", + "node": { + "my-node": { + "policy": "write" + } + }, + "namespace": { + "ex-namespace": { + "policy": "write" + } + }, + "namespace_prefix": { + "exns-": { + "policy": "write" + } + } + } + }, + "partition_prefix": { + "example-": { + "mesh": "read", + "node": { + "my-node": { + "policy": "read" + } + }, + "namespace": { + "ex-namespace": { + "policy": "read" + } + } + } + } +} +``` + + + +## Agent Rules + +The `agent` and `agent_prefix` resources control access to the utility operations in the [Agent API](/consul/api-docs/agent), +such as join and leave. All of the catalog-related operations are covered by the [`node` or `node_prefix`](#node-rules) +and [`service` or `service_prefix`](#service-rules) policies instead. + + + +```hcl +agent "foo" { + policy = "write" +} +agent_prefix "" { + policy = "read" +} +agent_prefix "bar" { + policy = "deny" +} +``` + +```json +{ + "agent": { + "foo": { + "policy": "write" + } + }, + "agent_prefix": { + "": { + "policy": "read" + }, + "bar": { + "policy": "deny" + } + } +} +``` + + + +Agent rules are keyed by the node name they apply to. In the example above the rules +allow read-write access to the node with the _exact_ name `foo`, read-only access +to any node name by using the empty prefix, and denies all access to any node +name that starts with `bar`. + +Since [Agent API](/consul/api-docs/agent) utility operations may be required before an agent is joined to +a cluster, or during an outage of the Consul servers or ACL datacenter, a special token may be +configured with [`acl.tokens.agent_recovery`](/consul/docs/reference/agent/configuration-file/acl#acl_tokens_agent_recovery) to allow +write access to these operations even if no ACL resolution capability is available. + +## Event Rules + +The `event` and `event_prefix` resources control access to event operations in the [Event API](/consul/api-docs/event), such as +firing events and listing events. + + + +```hcl +event_prefix "" { + policy = "read" +} +event "deploy" { + policy = "write" +} +``` + +```json +{ + "event_prefix": { + "": { + "policy": "read" + } + }, + "event": { + "deploy": { + "policy": "write" + } + } +} +``` + + + +Event rules are labeled with the event name they apply to. In the example above, the rules allow +read-only access to any event, and firing of the "deploy" event. + +The [`consul exec`](/consul/commands/exec) command uses events with the "\_rexec" prefix during +operation, so to enable this feature in a Consul environment with ACLs enabled, you will need to +give agents a token with access to this event prefix, in addition to configuring +[`disable_remote_exec`](/consul/docs/reference/agent/configuration-file/general#disable_remote_exec) to `false`. + +## Identity Rules + +Specify the resource label in identity rules to set the scope of the rule. +The resource label in the following example is empty. As a result, the rules allow read-only access to any workload identity name with the empty prefix. 
+The rules also allow read-write access to the `app` identity and deny all access to the `admin` identity: + + + +```hcl +identity_prefix "" { + policy = "read" +} +identity "app" { + policy = "write" +} +identity "admin" { + policy = "deny" +} +``` + +```json +{ + "identity_prefix": { + "": { + "policy": "read" + } + }, + "identity": { + "app": { + "policy": "write" + }, + "admin": { + "policy": "deny" + } + } +} +``` + + + + +## Key/Value Rules + +The `key` and `key_prefix` resources control access to key/value store operations in the [KV API](/consul/api-docs/kv). + + + +```hcl +key_prefix "" { + policy = "read" +} +key "foo" { + policy = "write" +} +key "bar" { + policy = "deny" +} +``` + +```json +{ + "key_prefix": { + "": { + "policy": "read" + } + }, + "key": { + "foo": { + "policy": "write" + }, + "bar": { + "policy": "deny" + } + } +} +``` + + + +Key rules are labeled with the key name they apply to. In the example above, the rules allow read-only access +to any key name with the empty prefix rule, allow read-write access to the "foo" key, and deny access to the "bar" key. + +### List Policy for Keys + +Enable the `list` policy disposition (Consul 1.0+) by setting the +`acl.enable_key_list_policy` parameter to `true`. The disposition provides +recursive access to `key` entries. Refer to the [KV API](/consul/api-docs/kv#recurse) +documentation for additional information. + +In the following example, `key` resources that start with `bar` may be listed. + + + +```hcl +key_prefix "" { + policy = "deny" +} + +key_prefix "bar" { + policy = "list" +} + +key_prefix "baz" { + policy = "read" +} +``` + +```json +{ + "key_prefix": { + "": { + "policy": "deny" + }, + "bar": { + "policy": "list" + }, + "baz": { + "policy": "read" + } + } +} +``` + + + +In the example above, the rules allow reading the key "baz", and only allow recursive reads on the prefix "bar". + +A token with `write` access on a prefix also has `list` access. A token with `list` access on a prefix also has `read` access on all its suffixes. + +#### Sentinel Integration + +Consul Enterprise supports additional optional fields for key write policies for +[Sentinel](https://docs.hashicorp.com/sentinel/consul) integration. + +```hcl +key "foo" { + policy = "write" + sentinel { + code = < + +```hcl +keyring = "write" +``` + +```json +{ + "keyring": "write" +} +``` + +
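+ +As an illustration of what this rule permits, a token with `keyring = "write"` could be used with the [`consul keyring`](/consul/commands/keyring) command to list and rotate the gossip encryption keys, while `keyring = "read"` would only allow the `-list` operation. The key value below is a placeholder, not a real encryption key. + +```shell-session +$ consul keyring -list +$ consul keyring -install="J2ZKTG7qkhPuINFlEE+UqQ==" +```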
    + +## Mesh Rules + +The `mesh` resource controls access to ingress gateways, terminating gateways, and mesh configuration entries. + +In Consul Enterprise, mesh rules are scoped to an admin partition. Therefore, they can be nested in an +[admin partition rule](#admin-partition-rules) but not a [namespace rule](#namespace-rules). + +The following rule grants read and write access: + + + +```hcl +mesh = "write" +``` + +```json +{ + "mesh": "write" +} +``` + + + +Refer to [Admin Partition Rules](#admin-partition-rules) for another example rule that uses the `mesh` resource. + +## Namespace Rules + +The `namespace` and `namespace_prefix` resource controls access to Consul namespaces. Namespaces define a scope of resources for which ACL rules apply. ACL rules, themselves, can then be defined to only to apply to specific namespaces. + + + +The ability to add many types of resources to separate namespaces was added to [Consul Enterprise](https://www.hashicorp.com/consul) 1.7.0. + + + +The following examples describe how namespace rules can be defined in a policy: + + + +```hcl +namespace_prefix "" { + # grants permission to create and edit all namespaces + policy = "write" + + # grant service:read for all services in all namespaces + service_prefix "" { + policy = "read" + } + + # grant node:read for all nodes in all namespaces + node_prefix "" { + policy = "read" + } +} + +namespace "foo" { + # grants permission to manage ACLs only for the foo namespace + acl = "write" + + # grants permission to create and edit the foo namespace + policy = "write" + + # grants write permissions to the KV for namespace foo + key_prefix "" { + policy = "write" + } + + # grants write permissions for sessions for namespace foo + session_prefix "" { + policy = "write" + } + + # grants service:write for all services in the foo namespace + service_prefix "" { + policy = "write" + } + + # grants node:read for all nodes + node_prefix "" { + policy = "read" + } +} +``` + +```json +{ + "namespace_prefix": { + "": { + "policy": "write", + "service_prefix": { + "": { + "policy": "read" + } + }, + "node_prefix": { + "": { + "policy": "read" + } + } + } + }, + "namespace": { + "foo": { + "acl": "write", + "policy": "write", + "key_prefix": { + "": { + "policy": "write" + } + }, + "session_prefix": { + "": { + "policy": "write" + } + }, + "service_prefix": { + "": { + "policy": "write" + } + }, + "node_prefix": { + "": { + "policy": "read" + } + } + } + } +} +``` + + + +### Restrictions + +The following restrictions apply when a rule is defined in any user-created namespace: + +1. `operator` rules are not allowed. +2. `event` rules are not allowed. +3. `keyring` rules are not allowed. +4. `query` rules are not allowed. +5. `node` rules that attempt to grant `write` privileges are not allowed. + +These restrictions do not apply to the `default` namespace created by Consul. In general all of the +above are permissions that only an operator should have and thus granting these permissions can +only be done within the default namespace. + +### Implicit Namespacing + +Rules and policies created within a namespace will inherit the namespace configuration. +This means that rules and policies will be implicitly namespaced and do not need additional configuration. +The restrictions outlined above will apply to these rules and policies. Additionally, rules and policies within a +specific namespace are prevented from accessing resources in another namespace. 
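+ +For illustration, the following command shows one way to create a policy inside a namespace; the namespace name `foo` and the rules file `team-kv-rules.hcl` are placeholder values for this sketch. Because the policy is created within the `foo` namespace, it is implicitly scoped to that namespace and subject to the restrictions listed above. + +```shell-session +$ consul acl policy create -namespace "foo" -name "team-kv" -rules @team-kv-rules.hcl +```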
+ +## Node Rules + +The `node` and `node_prefix` resources control access to the following API behaviors: + +- node-level registration and read access to the [Catalog API](/consul/api-docs/catalog) +- service discovery with the [Health API](/consul/api-docs/health) +- filtering results in [Agent API](/consul/api-docs/agent) operations, such as fetching the list of cluster members. + +You can use resource labels to scope the rule to a specific resource or set of resources. + +The following example rule uses an empty prefix label, which provides read-only access to all nodes. +The rule also provides read-write access to the `app` node and denies all access to the `admin` node: + + + +```hcl +node_prefix "" { + policy = "read" +} +node "app" { + policy = "write" +} +node "admin" { + policy = "deny" +} +``` + +```json +{ + "node_prefix": { + "": { + "policy": "read" + } + }, + "node": { + "app": { + "policy": "write" + }, + "admin": { + "policy": "deny" + } + } +} +``` + + + +### Registering and Querying Node Information + +Agents must be configured with `write` privileges for their own node name so that the agent can register their node metadata, tagged addresses, and other information in the catalog. +If configured incorrectly, the agent will print an error to the console when it tries to sync its state with the catalog. +Configure `write` access in the [`acl.tokens.agent`](/consul/docs/reference/agent/configuration-file/acl#acl_tokens_agent) parameter. + +The [`acl.token.default`](/consul/docs/reference/agent/configuration-file/acl#acl_tokens_default) used by the agent should have `read` access to a given node so that the DNS interface can be queried. + +Node rules are used to filter query results when reading from the catalog or retrieving information from the health endpoints. This allows for configurations where a token has access to a given service name, but only on an allowed subset of node names. + +Consul agents check tokens locally when health checks are registered and when Consul performs periodic [anti-entropy](/consul/docs/concept/consistency) syncs. +These actions may required an ACL token to complete. Use the following methods to configure ACL tokens for registration events: + +* Configure a global token in the [acl.tokens.default](/consul/docs/reference/agent/configuration-file/acl#acl_tokens_default) parameter. + This allows a single token to be used during all check registration operations. +* Provide an ACL token with `service` and `check` definitions at registration time. + This allows for greater flexibility and enables the use of multiple tokens on the same agent. + Refer to the [services](/consul/docs/register/service/vm/define) and [checks](/consul/docs/register/health-check/vm) documentation for examples. You can also pass tokens to the [HTTP API](/consul/api-docs) for operations that require them. + +### Reading Imported Nodes + +Nodes rules affect read access to nodes with services exported by [`exported-services` configuration entries](/consul/docs/reference/config-entry/exported-services#reading-services), including nodes imported from [cluster peerings](/consul/docs/east-west/cluster-peering) or [admin partitions](/consul/docs/multi-tenant/admin-partition) (Enterprise-only). +Read access to all imported nodes is granted when either of the following rule sets are attached to a token: +- `service:write` is granted to any service. +- `node:read` is granted to all nodes. 
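+ +As a sketch of the second option, the following rule grants `node:read` on all nodes, which also grants read access to all imported nodes: + +```hcl +node_prefix "" { + policy = "read" +} +```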
+ +For Consul Enterprise, either set of rules must be scoped to the requesting services's partition and at least one namespace. + +You may need similarly scoped [Service Rules](#reading-imported-services) to read Consul data, depending on the endpoint (e.g. `/v1/health/service/:name`). +These permissions are satisfied when using a [service identity](/consul/docs/reference/acl/rule#service-identities). + +Refer to [Reading Services](/consul/docs/reference/config-entry/exported-services#reading-services) for example ACL policies used to read imported services using the health endpoint. + +## Operator Rules + +The `operator` resource controls access to cluster-level operations in the +[Operator API](/consul/api-docs/operator), other than the [Keyring API](/consul/api-docs/operator/keyring). + +Only one operator rule allowed per rule set. In the following example, the token may be used to query the operator endpoints for +diagnostic purposes but it will not make changes. + + + +```hcl +operator = "read" +``` + +```json +{ + "operator": "read" +} +``` + + + +## Peering Rules +The `peering` resource controls access to cluster peerings in the [Cluster Peering API](/consul/api-docs/peering). + +In Consul Enterprise, peering rules are scoped to an admin partition. Therefore, they can be nested in an +[admin partition rule](#admin-partition-rules) but not a [namespace rule](#namespace-rules). + +The following rule grants read and write access: + + + +```hcl +peering = "write" +``` + +```json +{ + "peering": "write" +} +``` + + + +For an example of how to apply rules for the `peering` resource alongside other rules, refer to the example configuration in [Admin Partition Rules](#admin-partition-rules). + +## Prepared Query Rules + +The `query` and `query_prefix` resources control access to create, update, and delete prepared queries in the +[Prepared Query API](/consul/api-docs/query). Specify the resource label in query rules to determine the scope of the rule. +The resource label in the following example is empty. As a result, the rules allow read-only access to query resources with any name. +The rules also grant read-write access to the query named `foo`, which allows control of the query namespace to be delegated based on ACLs: + + + +```hcl +query_prefix "" { + policy = "read" +} +query "foo" { + policy = "write" +} +``` + +```json +{ + "query_prefix": { + "": { + "policy": "read" + } + }, + "query": { + "foo": { + "policy": "write" + } + } +} +``` + + + +Executing queries is subject to `node`/`node_prefix` and `service`/`service_prefix` +policies. + +There are a few variations when using ACLs with prepared queries, each of which uses ACLs in one of two +ways: open, protected by unguessable IDs or closed, managed by ACL policies. These variations are covered +here, with examples: + +- Static queries with no `Name` defined are not controlled by any ACL policies. + These types of queries are meant to be ephemeral and not shared to untrusted + clients, and they are only reachable if the prepared query ID is known. Since + these IDs are generated using the same random ID scheme as ACL Tokens, it is + infeasible to guess them. When listing all prepared queries, only a management + token will be able to refer to these types, though clients can read instances for + which they have an ID. An example use for this type is a query built by a + startup script, tied to a session, and written to a configuration file for a + process to use via DNS. 
+ +- Static queries with a `Name` defined are controlled by the `query` and `query_prefix` + ACL resources. Clients are required to have an ACL token with permissions on to + access that query name. Clients can list or read queries for + which they have "read" access based on their prefix, and similar they can + update any queries for which they have "write" access. An example use for + this type is a query with a well-known name (eg. `prod-primary-customer-db`) + that is used and known by many clients to provide geo-failover behavior for + a database. + +- [Template queries](/consul/api-docs/query#prepared-query-templates) + queries work like static queries with a `Name` defined, except that a catch-all + template with an empty `Name` requires an ACL token that can write to any query + prefix. + +When prepared queries are executed via DNS lookups or HTTP requests, the ACL +checks are run against the service being queried, similar to how ACLs work with +other service lookups. There are several ways the ACL token is selected for this +check: + +- If an ACL Token was captured when the prepared query was defined, it will be + used to perform the service lookup. This allows queries to be executed by + clients with lesser or even no ACL Token, so this should be used with care. + +- If no ACL Token was captured, then the client's ACL Token will be used to + perform the service lookup. + +- If no ACL Token was captured and the client has no ACL Token, then the + anonymous token will be used to perform the service lookup. + +In the common case, the ACL Token of the invoker is used +to test the ability to look up a service. If a `Token` was specified when the +prepared query was created, the behavior changes and now the captured +ACL Token set by the definer of the query is used when looking up a service. + +Capturing ACL Tokens is analogous to +[PostgreSQL's](http://www.postgresql.org/docs/current/static/sql-createfunction.html) +`SECURITY DEFINER` attribute which can be set on functions, and using the client's ACL +Token is similar to the complementary `SECURITY INVOKER` attribute. + +Prepared queries were originally introduced in Consul 0.6.0. The ACL behavior remained +unchanged through version 0.6.3, but versions after 0.6.3 included changes that improve management of the +prepared query namespace. + +These differences are outlined in the table below: + +| Operation | Version <= 0.6.3 | Version > 0.6.3 | +| ---------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Create static query without `Name` | The ACL Token used to create the prepared query is checked to make sure it can access the service being queried. This token is captured as the `Token` to use when executing the prepared query. | No ACL policies are used as long as no `Name` is defined. No `Token` is captured by default unless specifically supplied by the client when creating the query. | +| Create static query with `Name` | The ACL Token used to create the prepared query is checked to make sure it can access the service being queried. This token is captured as the `Token` to use when executing the prepared query. 
| The client token's `query` ACL policy is used to determine if the client is allowed to register a query for the given `Name`. No `Token` is captured by default unless specifically supplied by the client when creating the query. | +| Manage static query without `Name` | The ACL Token used to create the query or a token with management privileges must be supplied in order to perform these operations. | Any client with the ID of the query can perform these operations. | +| Manage static query with a `Name` | The ACL token used to create the query or a token with management privileges must be supplied in order to perform these operations. | Similar to create, the client token's `query` ACL policy is used to determine if these operations are allowed. | +| List queries | A token with management privileges is required to list any queries. | The client token's `query` ACL policy is used to determine which queries they can refer to . Only tokens with management privileges can refer to prepared queries without `Name`. | +| Execute query | Since a `Token` is always captured when a query is created, that is used to check access to the service being queried. Any token supplied by the client is ignored. | The captured token, client's token, or anonymous token is used to filter the results, as described above. | + +## Service Rules + +The `service` and `service_prefix` resources control service-level registration and read access to the [Catalog API](/consul/api-docs/catalog) and service discovery with the [Health API](/consul/api-docs/health). +Specify the resource label in service rules to set the scope of the rule. +The resource label in the following example is empty. As a result, the rules allow read-only access to any service name with the empty prefix. +The rules also allow read-write access to the `app` service and deny all access to the `admin` service: + + + +```hcl +service_prefix "" { + policy = "read" +} +service "app" { + policy = "write" +} +service "admin" { + policy = "deny" +} +``` + +```json +{ + "service_prefix": { + "": { + "policy": "read" + } + }, + "service": { + "app": { + "policy": "write" + }, + "admin": { + "policy": "deny" + } + } +} +``` + + + +Consul's DNS interface is affected by restrictions on service rules. If the +[`acl.tokens.default`](/consul/docs/reference/agent/configuration-file/acl#acl_tokens_default) used by the agent does not have `read` access to a +given service, then the DNS interface will return no records when queried for it. + +When reading from the catalog or retrieving information from the health endpoints, service rules are +used to filter the results of the query. + +Service rules come into play when using the [Agent API](/consul/api-docs/agent) to register services or +checks. The agent will check tokens locally as a service or check is registered, and Consul also +performs periodic [anti-entropy](/consul/docs/concept/consistency) syncs, which may require an +ACL token to complete. To accommodate this, Consul provides two methods of configuring ACL tokens +to use for registration events: + +1. Using the [acl.tokens.default](/consul/docs/reference/agent/configuration-file/acl#acl_tokens_default) configuration + directive. This allows a single token to be configured globally and used + during all service and check registration operations. +2. Providing an ACL token with service and check definitions at registration + time. This allows for greater flexibility and enables the use of multiple + tokens on the same agent. 
Examples of what this looks like are available for + both [services](/consul/docs/register/service/vm/define) and + [checks](/consul/docs/register/health-check/vm). Tokens may also be passed to the [HTTP + API](/consul/api-docs) for operations that require them. Note that all tokens + passed to an agent are persisted on local disk to allow recovery from + restarts. Refer to [`-data-dir` flag + documentation](/consul/docs/architecture/backend) for information about securing + access. + +In addition to ACLs, in Consul 0.9.0 and later, the agent must be configured with +[`enable_script_checks`](/consul/docs/reference/agent/configuration-file/general#enable_script_checks) or +[`enable_local_script_checks`](/consul/docs/reference/agent/configuration-file/general#enable_local_script_checks) +set to `true` in order to enable script checks. + +### Reading Imported Services + +Service rules affect read access to services exported by [`exported-services` configuration entries](/consul/docs/reference/config-entry/exported-services#reading-services), including services exported between [cluster peerings](/consul/docs/east-west/cluster-peering) or [admin partitions](/consul/docs/multi-tenant/admin-partition) (Enterprise-only). +Read access to all imported services is granted when either of the following rule sets are attached to a token: +- `service:write` is granted to any service. +- `service:read` is granted to all services. + +For Consul Enterprise, either set of rules must be scoped to the requesting services's partition and at least one namespace. + +You may need similarly scoped [Node Rules](#reading-imported-nodes) to read Consul data, depending on the endpoint (e.g. `/v1/health/service/:name`). +These permissions are satisfied when using a [service identity](/consul/docs/reference/acl/rule#service-identities). + +Refer to [Reading Services](/consul/docs/reference/config-entry/exported-services#reading-services) for example ACL policies used to read imported services using the health endpoint. + +### Intentions + +Service rules are also used to grant read or write access to intentions. The +following policy provides read-write access to the "app" service, and explicitly +grants `intentions:read` access to view intentions associated with the "app" service. + + + +```hcl +service "app" { + policy = "write" + intentions = "read" +} +``` + +```json +{ + "service": { + "app": { + "policy": "write", + "intentions": "read" + } + } +} +``` + + + +Refer to [ACL requirements for intentions](/consul/docs/secure-mesh/intention/create#acl-requirements) +for more information about managing intentions access with service rules. + +## Session Rules + +The `session` and `session_prefix` resources controls access to [Session API](/consul/api-docs/session) operations. + +Specify the resource label in session rules to set the scope of the rule. +The resource label in the following example is empty. As a result, the rules allow read-only access to all sessions. 
+The rules also allow creating sessions on the node named `app` and deny all access to any sessions on the `admin` node: + + + +```hcl +session_prefix "" { + policy = "read" +} +session "app" { + policy = "write" +} +session "admin" { + policy = "deny" +} +``` + +```json +{ + "session_prefix": { + "": { + "policy": "read" + } + }, + "session": { + "app": { + "policy": "write" + }, + "admin": { + "policy": "deny" + } + } +} +``` + + diff --git a/website/content/docs/reference/acl/token.mdx b/website/content/docs/reference/acl/token.mdx new file mode 100644 index 000000000000..18fcc0575535 --- /dev/null +++ b/website/content/docs/reference/acl/token.mdx @@ -0,0 +1,29 @@ +--- +layout: docs +page_title: Access Control List (ACL) token configuration reference +description: >- + Consul documentation provides reference material for all features and options available in Consul. +--- + +# ACL token configuration reference + +This topic provides reference information for the types of access control list (ACL) rules you can create and how they affect access to datacenter resources. For details on how to create rules and group them into policies, refer to [Policies](/consul/docs/secure/acl/policy). + +## Token attributes + +The following table is a partial list of attributes that a token may contain. +Refer to the [API](/consul/api-docs/acl/tokens) or [command line](/consul/commands/acl/token) documentation for all attributes that can be assigned or generated for a token: + +| Attribute | Description | Type | Default | +| ------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------- | -------------- | +| `AccessorID` | Used for [audit logging](/consul/docs/monitor/log/audit). The accessor ID is also returned in API responses to identify the token without revealing the secret ID. | String | auto-generated | +| `SecretID` | Used to request access to resources, data, and APIs. | String | auto-generated | +| `Partition` | Specifies the name of the admin partition in which the token is valid. Refer to [Admin Partitions](/consul/docs/multi-tenant/admin-partition) for additional information. | String | `default` | +| `Namespace` | Specifies the name of the Consul namespace in which the token is valid. Refer to [Namespaces](/consul/docs/multi-tenant/namespace) for additional information. | String | `default` | +| `Description` | Human-readable description for documenting the purpose of the token. | String | none | +| `Local` | Indicates whether the token should be replicated globally or local to the datacenter.
    Set to `false` to replicate globally across all reachable datacenters.
    Setting to `true` configures the token to functional in the local datacenter only. | Boolean | `false` | +| `TemplatedPolicies` | Specifies a list of templated policies to apply to the token. Refer to [Templated Policies](/consul/docs/reference/acl/rule#templated-policies) in the "Roles" topic for additional information. | Array | none | +| `ServiceIdentities` | Specifies a list of service identities to apply to the token. Refer to [Service Identities](/consul/docs/reference/acl/rule#service-identities) in the "Roles" topic for additional information. | Array | none | +| `NodeIdentities` | Specifies a list of node identities to apply to the token. Refer to [Node Identities](/consul/docs/reference/acl/rule#node-identities) in the "Roles" topic for additional information. | Array | none | +| `Policies` | List of policies linked to the token, including the policy ID and name. | String | none | +| `Roles` | List of roles linked to the token, including the role ID and name. | String | none | \ No newline at end of file diff --git a/website/content/docs/reference/agent/configuration-file/acl.mdx b/website/content/docs/reference/agent/configuration-file/acl.mdx new file mode 100644 index 000000000000..25a5147e84bb --- /dev/null +++ b/website/content/docs/reference/agent/configuration-file/acl.mdx @@ -0,0 +1,284 @@ +--- +layout: docs +page_title: ACL parameters for Consul agent configuration files +description: >- + Use agent configuration files to assign attributes to agents and configure multiple agents at once. Learn about agent configuration file parameters and formatting with this reference page and sample code. +--- + +# ACL parameters for Consul agent configuration files + +The page provides reference information for ACL parameters in a Consul agent configuration file. + +## ACL Parameters + +- `acl` ((#acl)) - This object allows a number of sub-keys to be set which + controls the ACL system. Configuring the ACL system within the ACL stanza was added + in Consul 1.4.0 + + The following sub-keys are available: + + - `enabled` ((#acl_enabled)) - Enables ACLs. + + - `policy_ttl` ((#acl_policy_ttl)) - Used to control Time-To-Live caching + of ACL policies. By default, this is 30 seconds. This setting has a major performance + impact: reducing it will cause more frequent refreshes while increasing it reduces + the number of refreshes. However, because the caches are not actively invalidated, + ACL policy may be stale up to the TTL value. + + - `role_ttl` ((#acl_role_ttl)) - Used to control Time-To-Live caching + of ACL roles. By default, this is 30 seconds. This setting has a major performance + impact: reducing it will cause more frequent refreshes while increasing it reduces + the number of refreshes. However, because the caches are not actively invalidated, + ACL role may be stale up to the TTL value. + + - `token_ttl` ((#acl_token_ttl)) - Used to control Time-To-Live caching + of ACL tokens. By default, this is 30 seconds. This setting has a major performance + impact: reducing it will cause more frequent refreshes while increasing it reduces + the number of refreshes. However, because the caches are not actively invalidated, + ACL token may be stale up to the TTL value. + + - `down_policy` ((#acl_down_policy)) - Either "allow", "deny", "extend-cache" + or "async-cache"; "extend-cache" is the default. 
In the case that a policy or + token cannot be read from the [`primary_datacenter`](/consul/docs/reference/agent/configuration-file/general#primary_datacenter) or + leader node, the down policy is applied. In "allow" mode, all actions are permitted, + "deny" restricts all operations, and "extend-cache" allows any cached objects + to be used, ignoring the expiry time of the cached entry. If the request uses an + ACL that is not in the cache, "extend-cache" falls back to the behavior of + `default_policy`. + The value "async-cache" acts the same way as "extend-cache" + but performs updates asynchronously when ACL is present but its TTL is expired, + thus, if latency is bad between the primary and secondary datacenters, latency + of operations is not impacted. + + - `default_policy` ((#acl_default_policy)) - Either "allow" or "deny"; + defaults to "allow" but this will be changed in a future major release. The default + policy controls the behavior of a token when there is no matching rule. In "allow" + mode, ACLs are a denylist: any operation not specifically prohibited is allowed. + In "deny" mode, ACLs are an allowlist: any operation not specifically + allowed is blocked. **Note**: this will not take effect until you've enabled ACLs. + + - `enable_key_list_policy` ((#acl_enable_key_list_policy)) - Boolean value, defaults to false. + When true, the `list` permission will be required on the prefix being recursively read from the KV store. + Regardless of being enabled, the full set of KV entries under the prefix will be filtered + to remove any entries that the request's ACL token does not grant at least read + permissions. This option is only available in Consul 1.0 and newer. + + - `enable_token_replication` ((#acl_enable_token_replication)) - By default + secondary Consul datacenters will perform replication of only ACL policies and + roles. Setting this configuration will will enable ACL token replication and + allow for the creation of both [local tokens](/consul/api-docs/acl/tokens#local) and + [auth methods](/consul/docs/secure/acl/auth-method) in connected secondary datacenters. + + ~> **Warning:** When enabling ACL token replication on the secondary datacenter, + global tokens already present in the secondary datacenter will be lost. For + production environments, consider configuring ACL replication in your initial + datacenter bootstrapping process. + + - `enable_token_persistence` ((#acl_enable_token_persistence)) - Either + `true` or `false`. When `true` tokens set using the API will be persisted to + disk and reloaded when an agent restarts. + + - `tokens` ((#acl_tokens)) - This object holds all of the configured + ACL tokens for the agents usage. + + - `initial_management` ((#acl_tokens_initial_management)) - This is available in + Consul 1.11 and later. In prior versions, use [`acl.tokens.master`](#acl_tokens_master). + + Only used for servers in the [`primary_datacenter`](/consul/docs/reference/agent/configuration-file/general#primary_datacenter). + This token will be created with management-level permissions if it does not exist. + It allows operators to bootstrap the ACL system with a token Secret ID that is + well-known. + + The `initial_management` token is only installed when a server acquires cluster + leadership. If you would like to install or change it, set the new value for + `initial_management` in the configuration for all servers. Once this is done, + restart the current leader to force a leader election. 
If the `initial_management` + token is not supplied, then the servers do not create an initial management token. + When you provide a value, it should be a UUID. To maintain backwards compatibility + and an upgrade path this restriction is not currently enforced but will be in a + future major Consul release. + + - `master` ((#acl_tokens_master)) **Renamed in Consul 1.11 to + [`acl.tokens.initial_management`](#acl_tokens_initial_management).** + + - `default` ((#acl_tokens_default)) - When provided, this agent will + use this token by default when making requests to the Consul servers + instead of the [anonymous token](/consul/docs/security/acl/tokens#anonymous-token). + Consul HTTP API requests can provide an alternate token in their authorization header + to override the `default` or anonymous token on a per-request basis, + as described in [HTTP API Authentication](/consul/api-docs/api-structure#authentication). + + - `agent` ((#acl_tokens_agent)) - Used for clients and servers to perform + internal operations. If this isn't specified, then the + [`default`](#acl_tokens_default) will be used. + + This token must at least have write access to the node name it will + register as in order to set any of the node-level information in the + catalog such as metadata, or the node's tagged addresses. + + - `agent_recovery` ((#acl_tokens_agent_recovery)) - This is available in Consul 1.11 + and later. In prior versions, use [`acl.tokens.agent_master`](#acl_tokens_agent_master). + + Used to access [agent endpoints](/consul/api-docs/agent) that require agent read or write privileges, + or node read privileges, even if Consul servers aren't present to validate any tokens. + This should only be used by operators during outages, regular ACL tokens should normally + be used by applications. + + - `agent_master` ((#acl_tokens_agent_master)) **Renamed in Consul 1.11 to + [`acl.tokens.agent_recovery`](#acl_tokens_agent_recovery).** + + - `config_file_service_registration` ((#acl_tokens_config_file_service_registration)) - Specifies the ACL + token the agent uses to register services and checks from [service](/consul/docs/register/service/vm/define) and [check](/consul/docs/register/health-check/vm) definitions + specified in configuration files or fragments passed to the agent using the `-hcl` + flag. + + If the `token` field is defined in the service or check definition, then that token is used to + register the service or check instead. If the `config_file_service_registration` token is not + defined and if the `token` field is not defined in the service or check definition, then the + agent uses the [`default`](#acl_tokens_default) token to register the service or check. + + This token needs write permission to register all services and checks defined in this agent's + configuration. For example, if there are two service definitions in the agent's configuration + files for services "A" and "B", then the token needs `service:write` permissions for both + services "A" and "B" in order to successfully register both services. If the token is missing + `service:write` permissions for service "B", the agent will successfully register service "A" + and fail to register service "B". Failed registration requests are eventually retried as part + of [anti-entropy enforcement](/consul/docs/concept/consistency). 
If a registration request is + failing due to missing permissions, the token for this agent can be updated with + additional policy rules or the `config_file_service_registration` token can be replaced using + the [Set Agent Token](/consul/commands/acl/set-agent-token) CLI command. + + - `dns` ((#acl_tokens_dns)) - Specifies the token that agents use to request information needed to respond to DNS queries. + If the `dns` token is not set, the `default` token is used instead. + Because the `default` token allows unauthenticated HTTP API access to list nodes and services, we + strongly recommend using the `dns` token. Create DNS tokens using the [templated policy](/consul/docs/secure/acl/token/dns) + option to ensure that the token has the permissions needed to respond to all DNS queries. + + - `replication` ((#acl_tokens_replication)) - Specifies the token that the agent uses to + authorize secondary datacenters with the primary datacenter for replication + operations. This token is required for servers outside the [`primary_datacenter`](/consul/docs/reference/agent/configuration-file/general#primary_datacenter) when ACLs are enabled. This token may be provided later using the [agent token API](/consul/api-docs/agent#update-acl-tokens) on each server. This token must have at least "read" permissions on ACL data but if ACL token replication is enabled then it must have "write" permissions. This also enables service mesh data replication, for which the token will require both operator "write" and intention "read" permissions for replicating CA and Intention data. + + ~> **Warning:** When enabling ACL token replication on the secondary datacenter, + policies and roles already present in the secondary datacenter will be lost. For + production environments, consider configuring ACL replication in your initial + datacenter bootstrapping process. + + - `managed_service_provider` ((#acl_tokens_managed_service_provider)) - An + array of ACL tokens used by Consul managed service providers for cluster + operations. Refer to the [managed service provider + example](#managed-service-provider) for more information. + +- `acl_datacenter` - **This field is deprecated in Consul 1.4.0. See the [`primary_datacenter`](/consul/docs/reference/agent/configuration-file/general#primary_datacenter) field instead.** + + This designates the datacenter which is authoritative for ACL information. It must be provided to enable ACLs. All servers and datacenters must agree on the ACL datacenter. Setting it on the servers is all you need for cluster-level enforcement, but for the APIs to forward properly from the clients, + it must be set on them too. In Consul 0.8 and later, this also enables agent-level enforcement + of ACLs. Please review the [ACL tutorial](/consul/tutorials/security/access-control-setup-production) for more details. + +- `acl_default_policy` ((#acl_default_policy_legacy)) - **Deprecated in Consul 1.4.0. See the [`acl.default_policy`](#acl_default_policy) field instead.** + Either "allow" or "deny"; defaults to "allow". The default policy controls the + behavior of a token when there is no matching rule. In "allow" mode, ACLs are a + denylist: any operation not specifically prohibited is allowed. In "deny" mode, + ACLs are an allowlist: any operation not specifically allowed is blocked. **Note**: + this will not take effect until you've set `primary_datacenter` to enable ACL support. + +- `acl_down_policy` ((#acl_down_policy_legacy)) - **Deprecated in Consul + 1.4.0. 
See the [`acl.down_policy`](#acl_down_policy) field instead.** Either "allow", + "deny", "extend-cache" or "async-cache"; "extend-cache" is the default. In the + case that the policy for a token cannot be read from the [`primary_datacenter`](/consul/docs/reference/agent/configuration-file/general#primary_datacenter) + or leader node, the down policy is applied. In "allow" mode, all actions are permitted, + "deny" restricts all operations, and "extend-cache" allows any cached ACLs to be + used, ignoring their TTL values. If a non-cached ACL is used, "extend-cache" acts + like "deny". The value "async-cache" acts the same way as "extend-cache" but performs + updates asynchronously when ACL is present but its TTL is expired, thus, if latency + is bad between ACL authoritative and other datacenters, latency of operations is + not impacted. + +- `acl_agent_master_token` ((#acl_agent_master_token_legacy)) - **Deprecated + in Consul 1.4.0. See the [`acl.tokens.agent_master`](#acl_tokens_agent_master) + field instead.** Used to access [agent endpoints](/consul/api-docs/agent) that + require agent read or write privileges, or node read privileges, even if Consul + servers aren't present to validate any tokens. This should only be used by operators + during outages, regular ACL tokens should normally be used by applications. This + was added in Consul 0.7.2 and is only used when [`acl_enforce_version_8`](#acl_enforce_version_8) is set to true. + +- `acl_agent_token` ((#acl_agent_token_legacy)) - **Deprecated in Consul + 1.4.0. See the [`acl.tokens.agent`](#acl_tokens_agent) field instead.** Used for + clients and servers to perform internal operations. If this isn't specified, then + the [`acl_token`](#acl_token) will be used. This was added in Consul 0.7.2. + + This token must at least have write access to the node name it will register as in order to set any + of the node-level information in the catalog such as metadata, or the node's tagged addresses. + +- `acl_enforce_version_8` - **Deprecated in + Consul 1.4.0 and removed in 1.8.0.** Used for clients and servers to determine if enforcement should + occur for new ACL policies being previewed before Consul 0.8. Added in Consul 0.7.2, + this defaults to false in versions of Consul prior to 0.8, and defaults to true + in Consul 0.8 and later. This helps ease the transition to the new ACL features + by allowing policies to be in place before enforcement begins. + +- `acl_master_token` ((#acl_master_token_legacy)) - **Deprecated in Consul + 1.4.0. See the [`acl.tokens.master`](#acl_tokens_master) field instead.** + +- `acl_replication_token` ((#acl_replication_token_legacy)) - **Deprecated + in Consul 1.4.0. See the [`acl.tokens.replication`](#acl_tokens_replication) field + instead.** Only used for servers outside the [`primary_datacenter`](/consul/docs/reference/agent/configuration-file/general#primary_datacenter) + running Consul 0.7 or later. When provided, this will enable [ACL replication](/consul/tutorials/security-operations/access-control-replication-multiple-datacenters) + using this ACL replication using this token to retrieve and replicate the ACLs + to the non-authoritative local datacenter. In Consul 0.9.1 and later you can enable + ACL replication using [`acl.enable_token_replication`](#acl_enable_token_replication) and then + set the token later using the [agent token API](#update-acl-tokens) + on each server. 
If the `acl_replication_token` is set in the config, it will automatically + set [`acl.enable_token_replication`](#acl_enable_token_replication) to true for backward compatibility. + + If there's a partition or other outage affecting the authoritative datacenter, and the + [`acl_down_policy`](#acl_down_policy) is set to "extend-cache", tokens not + in the cache can be resolved during the outage using the replicated set of ACLs. + +- `acl_token` ((#acl_token_legacy)) - **Deprecated in Consul 1.4.0. See + the [`acl.tokens.default`](#acl_tokens_default) field instead.** + +- `acl_ttl` ((#acl_ttl_legacy)) - **Deprecated in Consul 1.4.0. See the + [`acl.token_ttl`](#acl_token_ttl) field instead.** Used to control Time-To-Live + caching of ACLs. By default, this is 30 seconds. This setting has a major performance + impact: reducing it will cause more frequent refreshes while increasing it reduces + the number of refreshes. However, because the caches are not actively invalidated, + ACL policy may be stale up to the TTL value. + +- `enable_acl_replication` **Deprecated in Consul 1.11. Use the [`acl.enable_token_replication`](#acl_enable_token_replication) field instead.** + When set on a Consul server, enables ACL replication without having to set + the replication token via [`acl_replication_token`](#acl_replication_token). Instead, enable ACL replication + and then introduce the token using the [agent token API](/consul/api-docs/agent#update-acl-tokens) on each server. + See [`acl_replication_token`](#acl_replication_token) for more details. + + ~> **Warning:** When enabling ACL token replication on the secondary datacenter, + policies and roles already present in the secondary datacenter will be lost. For + production environments, consider configuring ACL replication in your initial + datacenter bootstrapping process. + + +## Examples + +The following examples demonstrate common ACL configurations for Consul agents. + +### Managed service provider + +

    + + + +```hcl +managed_service_provider { + accessor_id = "ed22003b-0832-4e48-ac65-31de64e5c2ff" + secret_id = "cb6be010-bba8-4f30-a9ed-d347128dde17" +} +``` + +```json +"managed_service_provider": [ + { + "accessor_id": "ed22003b-0832-4e48-ac65-31de64e5c2ff", + "secret_id": "cb6be010-bba8-4f30-a9ed-d347128dde17" + } +] +``` + diff --git a/website/content/docs/reference/agent/configuration-file/address.mdx b/website/content/docs/reference/agent/configuration-file/address.mdx new file mode 100644 index 000000000000..02726d2290e3 --- /dev/null +++ b/website/content/docs/reference/agent/configuration-file/address.mdx @@ -0,0 +1,45 @@ +--- +layout: docs +page_title: Advertise address parameters for Consul agent configuration files +description: >- + Use agent configuration files to assign attributes to agents and configure multiple agents at once. Learn about agent configuration file parameters and formatting with this reference page and sample code. +--- + +# Advertise address parameters for Consul agent configuration files + +The page provides reference information for address advertisement parameters in a Consul agent configuration file. + +## Advertise Address Parameters + +- `advertise_addr` ((#\_advertise)) - The advertise address is used to change + the address advertised to other nodes in the cluster. By default, the bind + address is advertised. However, in some cases, there may be a routable address + that cannot be bound. This parameter enables gossiping a different address to support + this. If this address is not routable, the node will be in a constant flapping + state as other nodes will treat the non-routability as a failure. In Consul v1.1.0 and later this can be dynamically defined with a [go-sockaddr] + template that is resolved at runtime. Equivalent to the [`-advertise` command-line flag](/consul/commands/agent#_advertise). + +- `advertise_addr_ipv4` - This was added together with [`advertise_addr_ipv6`](#advertise_addr_ipv6) to support dual stack IPv4/IPv6 environments. Using this, both IPv4 and IPv6 addresses can be specified and requested during eg service discovery. + +- `advertise_addr_ipv6` - This was added together with [`advertise_addr_ipv4`](#advertise_addr_ipv4) to support dual stack IPv4/IPv6 environments. Using this, both IPv4 and IPv6 addresses can be specified and requested during eg service discovery. + +- `advertise_addr_wan` ((#\_advertise-wan)) - The advertise WAN address is used + to change the address that we advertise to server nodes joining through the WAN. + This can also be set on client agents when used in combination with the [`translate_wan_addrs`](/consul/docs/reference/agent#translate_wan_addrs) configuration option. By default, the [`advertise_addr`](#_advertise) address + is advertised. However, in some cases all members of all datacenters cannot be + on the same physical or virtual network, especially on hybrid setups mixing cloud + and private datacenters. This flag enables server nodes gossiping through the public + network for the WAN while using private VLANs for gossiping to each other and their + client agents, and it allows client agents to be reached at this address when being + accessed from a remote datacenter if the remote datacenter is configured with [`translate_wan_addrs`](/consul/docs/reference/agent#translate_wan_addrs). In Consul 1.1.0 and later this can be dynamically defined with a [go-sockaddr] + template that is resolved at runtime. 
Equivalent to the [`-advertise-wan` command-line flag](/consul/commands/agent#_advertise-wan). + +- `advertise_addr_wan_ipv4` - This was added together with [`advertise_addr_wan_ipv6`](#advertise_addr_wan_ipv6) to support dual stack IPv4/IPv6 environments. Using this, both IPv4 and IPv6 addresses can be specified and requested during service discovery, for example. + +- `advertise_addr_wan_ipv6` - This was added together with [`advertise_addr_wan_ipv4`](#advertise_addr_wan_ipv4) to support dual stack IPv4/IPv6 environments. Using this, both IPv4 and IPv6 addresses can be specified and requested during service discovery, for example. + +- `advertise_reconnect_timeout` - This is a per-agent setting of the [`reconnect_timeout`](/consul/docs/reference/agent/configuration-file/general#reconnect_timeout) parameter. + This agent will advertise to all other nodes in the cluster that after this timeout, the node may be completely + removed from the cluster. This may only be set on client agents and if unset then other nodes will use the main + `reconnect_timeout` setting when determining when this node may be removed + from the cluster. \ No newline at end of file diff --git a/website/content/docs/reference/agent/configuration-file/auto-config.mdx b/website/content/docs/reference/agent/configuration-file/auto-config.mdx new file mode 100644 index 000000000000..b859b7102c47 --- /dev/null +++ b/website/content/docs/reference/agent/configuration-file/auto-config.mdx @@ -0,0 +1,172 @@ +--- +layout: docs +page_title: Auto config parameters for Consul agent configuration +description: >- + Enable the `auto_config` parameter in the Consul agent configuration so that a Consul client agent fetches configuration settings from a specified server upon startup. +--- + +# Auto config parameters for Consul agent configuration + +The page provides reference information for the `auto_config` parameters in a +Consul agent configuration file. + +When you enable `auto_config`, the client agent upon startup makes an RPC to the +configured server addresses to request configuration settings, such as the ACL +token, TLS certificates, and gossip encryption key. These configuration settings +are merged in as defaults, and any user-supplied configuration on the client +agent can override them. The initial RPC uses a JWT specified with either +`intro_token`, `intro_token_file`, or the `CONSUL_INTRO_TOKEN` environment +variable to authorize the request. How the JWT token is verified is controlled +by the `auto_config.authorizer` object available for use on Consul servers. + +Enabling this option also enables service mesh, because the service mesh CA and +certificate infrastructure is required for `auto_config`. + + + +Enabling `auto_config` conflicts with the [`auto_encrypt.tls`](/consul/docs/reference/agent/configuration-file/encryption#tls) feature. +Only one option may be specified. + + + +## Parameters + +- `auto_config` This object allows setting options for the `auto_config` feature. + + - `enabled` (Defaults to `false`) This option enables `auto_config` on a client + agent. + + - `intro_token` (Defaults to `""`) This specifies the JWT to use for the initial + `auto_config` RPC to the Consul servers. This can be overridden with the + `CONSUL_INTRO_TOKEN` environment variable. + + - `intro_token_file` (Defaults to `""`) This specifies a file containing the JWT + to use for the initial `auto_config` RPC to the Consul servers.
This token + from this file is only loaded if the `intro_token` configuration is unset as + well as the `CONSUL_INTRO_TOKEN` environment variable + + - `server_addresses` (Defaults to `[]`) This specifies the addresses of servers in + the local datacenter to use for the initial RPC. These addresses support + [Cloud Auto-Joining](/consul/commands/agent#cloud-auto-joining) and can optionally include a port to + use when making the outbound connection. If no port is provided, the `server_port` + will be used. + + - `dns_sans` (Defaults to `[]`) This is a list of extra DNS SANs to request in the + client agent's TLS certificate. The `localhost` DNS SAN is always requested. + + - `ip_sans` (Defaults to `[]`) This is a list of extra IP SANs to request in the + client agent's TLS certificate. The `::1` and `127.0.0.1` IP SANs are always requested. + + - `authorization` This object controls how a Consul server will authorize `auto_config` + requests and in particular how to verify the JWT intro token. + + - `enabled` (Defaults to `false`) This option enables `auto_config` authorization + capabilities on the server. + + - `static` This object controls configuring the static authorizer setup in the Consul + configuration file. Almost all sub-keys are identical to those provided by the [JWT + Auth Method](/consul/docs/secure/acl/auth-method/jwt). + + - `jwt_validation_pub_keys` (Defaults to `[]`) A list of PEM-encoded public keys + to use to authenticate signatures locally. + + Exactly one of `jwks_url` `jwt_validation_pub_keys`, or `oidc_discovery_url` is required. + + - `oidc_discovery_url` (Defaults to `""`) The OIDC Discovery URL, without any + .well-known component (base path). + + Exactly one of `jwks_url` `jwt_validation_pub_keys`, or `oidc_discovery_url` is required. + + - `oidc_discovery_ca_cert` (Defaults to `""`) PEM encoded CA cert for use by the TLS + client used to talk with the OIDC Discovery URL. NOTE: Every line must end + with a newline (`\n`). If not set, system certificates are used. + + - `jwks_url` (Defaults to `""`) The JWKS URL to use to authenticate signatures. + + Exactly one of `jwks_url` `jwt_validation_pub_keys`, or `oidc_discovery_url` is required. + + - `jwks_ca_cert` (Defaults to `""`) PEM encoded CA cert for use by the TLS client + used to talk with the JWKS URL. NOTE: Every line must end with a newline + (`\n`). If not set, system certificates are used. + + - `claim_mappings` (Defaults to `(map[string]string)`) Mappings of claims (key) that + will be copied to a metadata field (value). Use this if the claim you are capturing + is singular (such as an attribute). + + When mapped, the values can be any of a number, string, or boolean and will + all be stringified when returned. + + - `list_claim_mappings` (Defaults to `(map[string]string)`) Mappings of claims (key) + will be copied to a metadata field (value). Use this if the claim you are capturing + is list-like (such as groups). + + When mapped, the values in each list can be any of a number, string, or + boolean and will all be stringified when returned. + + - `jwt_supported_algs` (Defaults to `["RS256"]`) JWTSupportedAlgs is a list of + supported signing algorithms. + + - `bound_audiences` (Defaults to `[]`) List of `aud` claims that are valid for + login; any match is sufficient. + + - `bound_issuer` (Defaults to `""`) The value against which to match the `iss` + claim in a JWT. + + - `expiration_leeway` (Defaults to `"0s"`) Duration of leeway when + validating expiration of a token to account for clock skew. 
Defaults to 150s + (2.5 minutes) if set to 0s and can be disabled if set to -1ns. + + - `not_before_leeway` (Defaults to `"0s"`) Duration of leeway when + validating not before values of a token to account for clock skew. Defaults + to 150s (2.5 minutes) if set to 0s and can be disabled if set to -1. + + - `clock_skew_leeway` (Defaults to `"0s"`) Duration of leeway when + validating all claims to account for clock skew. Defaults to 60s (1 minute) + if set to 0s and can be disabled if set to -1ns. + + - `claim_assertions` (Defaults to `[]`) List of assertions about the mapped + claims required to authorize the incoming RPC request. The syntax uses + [github.com/hashicorp/go-bexpr](https://github.com/hashicorp/go-bexpr) which is shared with the + [API filtering feature](/consul/api-docs/features/filtering). + The assertions are lightly templated using [HIL syntax](https://github.com/hashicorp/hil) + to interpolate some values from the RPC request. The variables that can be interpolated + are: + + - `node` - The node name the client agent is requesting. + - `segment` - The network segment name the client is requesting. + - `partition` - The admin partition name the + client is requesting. + + Refer to the [claim assertions example](#claim-assertions) for more information. + +## Examples + +The following examples demonstrate common configurations for a Consul agent's auto config settings. + +### Claim assertions + +When combined, the following configurations ensure that the JWT `sub` +matches the node name requested by the client. + + + +```hcl +claim_mappings { + sub = "node_name" +} +claim_assertions = [ + "value.node_name == \"${node}\"" +] +``` + +```json +{ + "claim_mappings": { + "sub": "node_name" + }, + "claim_assertions": ["value.node_name == \"${node}\""] +} +``` + + \ No newline at end of file diff --git a/website/content/docs/reference/agent/configuration-file/bootstrap.mdx b/website/content/docs/reference/agent/configuration-file/bootstrap.mdx new file mode 100644 index 000000000000..66cd32703506 --- /dev/null +++ b/website/content/docs/reference/agent/configuration-file/bootstrap.mdx @@ -0,0 +1,71 @@ +--- +layout: docs +page_title: Bootstrap parameters for Consul agent configuration files +description: >- + Use agent configuration files to assign attributes to agents and configure multiple agents at once. Learn about agent configuration file parameters and formatting with this reference page and sample code. +--- + +# Bootstrap configuration parameters for Consul agent configuration files + +This page provides reference information for bootstrapping parameters in a Consul agent configuration file. + +## Bootstrap parameters + +- `autopilot` This object allows a number of sub-keys to be set which can configure operator-friendly settings for + Consul servers. When these keys are provided as configuration, they will only be + respected on bootstrapping. If they are not provided, the defaults will be used. + In order to change the value of these options after bootstrapping, you will need + to use the [Consul Operator Autopilot](/consul/commands/operator/autopilot) + command. For more information about Autopilot, review the [Autopilot tutorial](/consul/tutorials/datacenter-operations/autopilot-datacenter-operations). + + The following sub-keys are available: + + - `cleanup_dead_servers` - This controls the + automatic removal of dead server nodes periodically and whenever a new server + is added to the cluster. Defaults to `true`.
+ + - `last_contact_threshold` - Controls the + maximum amount of time a server can go without contact from the leader before + being considered unhealthy. Must be a duration value such as `10s`. Defaults + to `200ms`. + + - `max_trailing_logs` - Controls the maximum number + of log entries that a server can trail the leader by before being considered + unhealthy. Defaults to 250. + + - `min_quorum` - Sets the minimum number of servers necessary + in a cluster. Autopilot will stop pruning dead servers when this minimum is reached. There is no default. + + - `server_stabilization_time` - Controls + the minimum amount of time a server must be stable in the 'healthy' state before + being added to the cluster. Only takes effect if all servers are running Raft + protocol version 3 or higher. Must be a duration value such as `30s`. Defaults + to `10s`. + + - `redundancy_zone_tag` - + This controls the [`node_meta`](/consul/docs/reference/agent/configuration-file/node#node_meta) key to use when Autopilot is separating + servers into zones for redundancy. Only one server in each zone can be a voting + member at one time. If left blank (the default), this feature will be disabled. + + - `disable_upgrade_migration` - + If set to `true`, this setting will disable Autopilot's upgrade migration strategy + in Consul Enterprise of waiting until enough newer-versioned servers have been + added to the cluster before promoting any of them to voters. Defaults to `false`. + + - `upgrade_version_tag` - + The node_meta tag to use for version info when performing upgrade migrations. + If this is not set, the Consul version will be used. + +- `bootstrap` ((#\_bootstrap)) - This parameter controls if a server + is in "bootstrap" mode. It is important that no more than one server **per** datacenter + be running in this mode. Technically, a server in bootstrap mode is allowed to + self-elect as the Raft leader. It is important that only a single node is in this + mode; otherwise, consistency cannot be guaranteed as multiple nodes are able to + self-elect. It is not recommended to use this flag after a cluster has been bootstrapped. + +- `bootstrap_expect` ((#\_bootstrap_expect)) - This parameter provides the number + of expected servers in the datacenter. Either this value should not be provided + or the value must agree with other servers in the cluster. When provided, Consul + waits until the specified number of servers are available and then bootstraps the + cluster. This allows an initial leader to be elected automatically. This parameter requires + the agent to be configured for [`server` modes](/consul/docs/reference/agent/configuration-file/general#_server). \ No newline at end of file diff --git a/website/content/docs/reference/agent/configuration-file/dns.mdx b/website/content/docs/reference/agent/configuration-file/dns.mdx new file mode 100644 index 000000000000..1beb62fc5d67 --- /dev/null +++ b/website/content/docs/reference/agent/configuration-file/dns.mdx @@ -0,0 +1,148 @@ +--- +layout: docs +page_title: DNS parameters for Consul agent configuration files +description: >- + Use agent configuration files to assign attributes to agents and configure multiple agents at once. Learn about agent configuration file parameters and formatting with this reference page and sample code. +--- + +# DNS parameters for Consul agent configuration files + +The page provides reference information for DNS parameters in a Consul agent configuration file. 
+ +## DNS and Domain Parameters + +- `alt_domain` ((#\_alt_domain)) - This parameter allows Consul to respond to DNS queries in an alternate domain, in addition to the primary domain. If unset, + no alternate domain is used. Equivalent to the [`-alt-domain` command-line flag](/consul/commands/agent#_alt_domain). + +- `dns_config` This object allows a number of sub-keys to be set which can tune how DNS queries are serviced. Refer to [DNS caching](/consul/docs/discover/dns/scale) for more information. + + The following sub-keys are available: + + - `allow_stale` - Enables a stale query for DNS information. + This allows any Consul server, rather than only the leader, to service the request. + The advantage of this is you get linear read scalability with Consul servers. + In versions of Consul prior to 0.7, this defaulted to false, meaning all requests + are serviced by the leader, providing stronger consistency but less throughput + and higher latency. In Consul 0.7 and later, this defaults to true for better + utilization of available servers. + + - `max_stale` - When [`allow_stale`](#allow_stale) is + specified, this is used to limit how stale results are allowed to be. If a Consul + server is behind the leader by more than `max_stale`, the query will be re-evaluated + on the leader to get more up-to-date results. Prior to Consul 0.7.1 this defaulted + to 5 seconds; in Consul 0.7.1 and later this defaults to 10 years ("87600h") + which effectively allows DNS queries to be answered by any server, no matter + how stale. In practice, servers are usually only milliseconds behind the leader, + so this lets Consul continue serving requests in long outage scenarios where + no leader can be elected. + + - `node_ttl` - By default, this is "0s", so all node lookups + are served with a 0 TTL value. DNS caching for node lookups can be enabled by + setting this value. This should be specified with the "s" suffix for second or + "m" for minute. + + - `service_ttl` - This is a sub-object which allows + for setting a TTL on service lookups with a per-service policy. The "\*" wildcard + service can be used when there is no specific policy available for a service. + By default, all services are served with a 0 TTL value. DNS caching for service + lookups can be enabled by setting this value. + + - `enable_truncate` ((#enable_truncate)) - If set to true, a UDP DNS + query that would return more than 3 records, or more than would fit into a valid + UDP response, will set the truncated flag, indicating to clients that they should + re-query using TCP to get the full set of records. + + - `only_passing` - If set to true, any nodes whose + health checks are warning or critical will be excluded from DNS results. If false, + the default, only nodes whose health checks are failing as critical will be excluded. + For service lookups, the health checks of the node itself, as well as the service-specific + checks are considered. For example, if a node has a health check that is critical + then all services on that node will be excluded because they are also considered + critical. + + - `recursor_strategy` - If set to `sequential`, Consul will query recursors in the + order listed in the [`recursors`](/consul/docs/reference/agent/configuration-file/general#recursors) option. If set to `random`, + Consul will query an upstream DNS resolvers in a random order. Defaults to + `sequential`. + + - `recursor_timeout` - Timeout used by Consul when + recursively querying an upstream DNS server. 
See [`recursors`](/consul/docs/reference/agent/configuration-file/general#recursors) for more details. Default is 2s. This is available in Consul 0.7 and later. + + - `disable_compression` - If set to true, DNS + responses will not be compressed. Compression was added and enabled by default + in Consul 0.7. + + - `udp_answer_limit` - Limit the number of resource + records contained in the answer section of a UDP-based DNS response. This parameter + applies only to UDP DNS queries that are less than 512 bytes. This setting is + deprecated and replaced in Consul 1.0.7 by [`a_record_limit`](#a_record_limit). + + - `a_record_limit` - Limit the number of resource + records contained in the answer section of an A, AAAA, or ANY DNS response (both + TCP and UDP). When answering a question, Consul will use the complete list of + matching hosts, shuffle the list randomly, and then limit the number of answers + to `a_record_limit` (default: no limit). This limit does not apply to SRV records. + + In environments where [RFC 3484 Section 6](https://tools.ietf.org/html/rfc3484#section-6) Rule 9 + is implemented and enforced (i.e. DNS answers are always sorted and + therefore never random), clients may need to set this value to `1` to + preserve the expected randomized distribution behavior (note: + [RFC 3484](https://tools.ietf.org/html/rfc3484) has been obsoleted by + [RFC 6724](https://tools.ietf.org/html/rfc6724) and as a result it should + be increasingly uncommon to need to change this value with modern + resolvers). + + - `enable_additional_node_meta_txt` - When set to true, Consul + will add TXT records for Node metadata into the Additional section of the DNS responses for several query types such as SRV queries. When set to false those records are not emitted. This does not impact the behavior of those same TXT records when they would be added to the Answer section of the response like when querying with type TXT or ANY. This defaults to true. + + - `soa` - Allows tuning of the settings in the returned SOA record. Unspecified + values fall back to their defaults. All values are integers and expressed + as seconds. + + The following settings are available: + + - `expire` ((#soa_expire)) - Configures the SOA Expire duration in seconds. The + default value is 86400, i.e. 24 hours. + + - `min_ttl` ((#soa_min_ttl)) - Configures the SOA DNS minimum TTL. As explained + in [RFC-2308](https://tools.ietf.org/html/rfc2308) this also controls negative + cache TTL in most implementations. The default value is 0, i.e. no minimum delay + or negative TTL. + + - `refresh` ((#soa_refresh)) - Configures the SOA Refresh duration in seconds. The + default value is `3600`, i.e. 1 hour. + + - `retry` ((#soa_retry)) - Configures the Retry duration expressed + in seconds. The default value is 600, i.e. 10 minutes. + + - `use_cache` ((#dns_use_cache)) - When set to true, DNS resolution will + use the agent cache described in [agent caching](/consul/api-docs/features/caching). + This setting affects all service and prepared query DNS requests. Implies [`allow_stale`](#allow_stale). + + - `cache_max_age` ((#dns_cache_max_age)) - When [use_cache](#dns_use_cache) + is enabled, the agent will attempt to re-fetch the result from the servers if + the cached value is older than this duration. See: [agent caching](/consul/api-docs/features/caching). + + **Note** that unlike the `max-age` HTTP header, a value of 0 for this field is + equivalent to "no max age". To get a fresh value from the cache use a very small value + of `1ns` instead of 0.
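+
+    Taken together, these `dns_config` sub-keys form a single nested block in the
+    agent configuration file. The following is only a sketch with illustrative
+    values, not recommended settings:
+
+    ```hcl
+    dns_config {
+      allow_stale     = true
+      max_stale       = "87600h"
+      node_ttl        = "30s"
+      enable_truncate = true
+      only_passing    = false
+      use_cache       = true
+      cache_max_age   = "10s"
+
+      # Per-service TTLs; "*" is the wildcard entry.
+      service_ttl = {
+        "*"   = "5s"
+        "web" = "30s"
+      }
+
+      # SOA values are integers expressed in seconds.
+      soa {
+        expire  = 86400
+        refresh = 3600
+        retry   = 600
+        min_ttl = 0
+      }
+    }
+    ```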
+ + - `prefer_namespace` ((#dns_prefer_namespace)) **Deprecated in Consul 1.11. + Use the [canonical DNS format for enterprise service lookups](/consul/docs/services/discovery/dns-static-lookups#service-lookups-for-consul-enterprise) instead.** - + When set to `true`, in a DNS query for a service, a single label between the domain + and the `service` label is treated as a namespace name instead of a datacenter. + When set to `false`, the default, the behavior is the same as non-Enterprise + versions and treats the single label as the datacenter. + +- `domain` ((#\_domain)) - By default, Consul responds to DNS queries in + the `consul.` domain. This flag can be used to change that domain. All queries + in this domain are assumed to be handled by Consul and will not be recursively + resolved. + +- `recursors` ((#\_recursors)) - Provides addresses of upstream DNS +servers that are used to recursively resolve queries if they are not inside the +service domain for Consul. For example, a node can use Consul directly as a DNS +server, and if the record is outside of the "consul." domain, the query will be +resolved upstream. As of Consul 1.0.1 recursors can be provided as IP addresses +or as go-sockaddr templates. IP addresses are resolved in order, and duplicates +are ignored. \ No newline at end of file diff --git a/website/content/docs/reference/agent/configuration-file/encryption.mdx b/website/content/docs/reference/agent/configuration-file/encryption.mdx new file mode 100644 index 000000000000..e08ba5a10db3 --- /dev/null +++ b/website/content/docs/reference/agent/configuration-file/encryption.mdx @@ -0,0 +1,73 @@ +--- +layout: docs +page_title: Encryption parameters for Consul agent configuration files +description: >- + Use agent configuration files to assign attributes to agents and configure multiple agents at once. Learn about agent configuration file parameters and formatting with this reference page and sample code. +--- + +# Encryption parameters for Consul agent configuration files + +The page provides reference information for encryption parameters in a Consul agent configuration file. + +## Encryption parameters + +- `auto_encrypt` This object allows setting options for the `auto_encrypt` feature. + + The following sub-keys are available: + + - `allow_tls` (Defaults to `false`) This option enables + `auto_encrypt` on the servers and allows them to automatically distribute certificates + from the service mesh CA to the clients. If enabled, the server can accept incoming + connections from both the built-in CA and the service mesh CA, as well as their certificates. + Note that the server will only present the built-in CA and certificate, which the + client can verify using the CA it received from the `auto_encrypt` endpoint. If disabled, + a client configured with `auto_encrypt.tls` will be unable to start. + + - `tls` (Defaults to `false`) Allows the client to request the + service mesh CA and certificates from the servers, for encrypting RPC communication. + The client will make the request to any servers listed in the `-retry-join` + option. This requires every server to have `auto_encrypt.allow_tls` enabled. + When both `auto_encrypt` options are used, it allows clients to receive certificates + that are generated on the servers. If the `-server-port` is not the default one, + it has to be provided to the client as well. Usually this is discovered through + LAN gossip, but `auto_encrypt` provisioning happens before the information can be + distributed through gossip.
The most secure `auto_encrypt` setup is when the + client is provided with the built-in CA, `verify_server_hostname` is turned on, + and when an ACL token with `node.write` permissions is set up. It is also possible + to use `auto_encrypt` with a CA and ACL, but without `verify_server_hostname`, + or only with an ACL enabled, or only with a CA and `verify_server_hostname`, or + only with a CA, or finally without a CA and without ACL enabled. In any case, + the communication to the `auto_encrypt` endpoint is always TLS encrypted. + + ~> **Warning:** Enabling `auto_encrypt.tls` conflicts with the [`auto_config`](/consul/docs/reference/agent/configuration-file/auto-config) feature. + Only one option may be specified. + + - `dns_san` (Defaults to `[]`) When this option is being + used, the certificates requested by `auto_encrypt` from the server have these + `dns_san` set as DNS SAN. + + - `ip_san` (Defaults to `[]`) When this option is being used, + the certificates requested by `auto_encrypt` from the server have these `ip_san` + set as IP SAN. + +- `encrypt` ((#\_encrypt)) - Specifies the secret key to use for encryption + of Consul network traffic. This key must be 32 bytes, Base64-encoded. The + easiest way to create an encryption key is to use [`consul keygen`](/consul/commands/keygen). + All nodes within a cluster must share the same encryption key to communicate. The + provided key is automatically persisted to the data directory and loaded automatically + whenever the agent is restarted. This means that to encrypt Consul's gossip protocol, + this option only needs to be provided once on each agent's initial startup sequence. + If it is provided after Consul has been initialized with an encryption key, then + the provided key is ignored and a warning will be displayed. + +- `encrypt_verify_incoming` - This is an optional + parameter that can be used to disable enforcing encryption for incoming gossip + in order to upshift from unencrypted to encrypted gossip on a running cluster. + See [this section](/consul/docs/security/encryption#configuring-gossip-encryption-on-an-existing-cluster) + for more information. Defaults to true. + +- `encrypt_verify_outgoing` - This is an optional + parameter that can be used to disable enforcing encryption for outgoing gossip + in order to upshift from unencrypted to encrypted gossip on a running cluster. + See [this section](/consul/docs/security/encryption#configuring-gossip-encryption-on-an-existing-cluster) + for more information. Defaults to true. diff --git a/website/content/docs/reference/agent/configuration-file/general.mdx b/website/content/docs/reference/agent/configuration-file/general.mdx new file mode 100644 index 000000000000..77c2559d5d0c --- /dev/null +++ b/website/content/docs/reference/agent/configuration-file/general.mdx @@ -0,0 +1,551 @@ +--- +layout: docs +page_title: General parameters for Consul agent configuration +description: >- + Use agent configuration files to assign attributes to agents and configure multiple agents at once. Learn about agent configuration file parameters and formatting with this reference page and sample code. +--- + +# General parameters for Consul agent configuration + +The page provides reference information for general parameters in a Consul agent configuration file. + +## General parameters + +- `addresses` ((#\_addresses)) - This is a nested object that allows setting + bind addresses.
In Consul 1.0 and later these can be set to a space-separated list + of addresses to bind to, or a go-sockaddr template that can potentially resolve to multiple addresses. + + `http`, `https` and `grpc` all support binding to a Unix domain socket. A + socket can be specified in the form `unix:///path/to/socket`. A new domain + socket will be created at the given path. If the specified file path already + exists, Consul will attempt to clear the file and create the domain socket + in its place. The permissions of the socket file are tunable via the + [`unix_sockets` config construct](#unix_sockets). + + When running Consul agent commands against Unix socket interfaces, use the + `-http-addr` argument to specify the path to the socket. You can also place + the desired values in the `CONSUL_HTTP_ADDR` environment variable. + + For TCP addresses, the environment variable value should be an IP address + _with the port_. For example: `10.0.0.1:8500` and not `10.0.0.1`. However, + ports are set separately in the [`ports`](#ports) structure when + defining them in a configuration file. + + The following keys are valid: + + - `dns` - The DNS server. Defaults to `client_addr` + - `http` - The HTTP API. Defaults to `client_addr` + - `https` - The HTTPS API. Defaults to `client_addr` + - `grpc` - The gRPC API. Defaults to `client_addr` + - `grpc_tls` - The gRPC API with TLS. Defaults to `client_addr` + +- `auto_reload_config` ((#\_auto_reload_config)) - This option directs Consul to automatically reload the [reloadable configuration options](/consul/docs/agent/config#reloadable-configuration) when configuration files change. Consul also watches the certificate and key files specified with the `cert_file` and `key_file` parameters and reloads the configuration if the files are updated. Equivalent to the [`-auto-reload-config` command-line flag](/consul/commands/agent#_auto_reload_config). + +- `bind_addr` ((#\_bind)) - The address to bind to for internal + cluster communications. This is an IP address that should be reachable by all other + nodes in the cluster. By default, this is `0.0.0.0`, meaning Consul will bind to + all addresses on the local machine and will [advertise](#_advertise) + the private IPv4 address to the rest of the cluster. If there are multiple private + IPv4 addresses available, Consul will exit with an error at startup. If you specify + `"[::]"`, Consul will [advertise](#_advertise) the public + IPv6 address. If there are multiple public IPv6 addresses available, Consul will + exit with an error at startup. Consul uses both TCP and UDP and the same port for + both. If you have any firewalls, be sure to allow both protocols. In Consul 1.1.0 and later this can be dynamically defined with a `go-sockaddr` + template that must resolve at runtime to a single address. Special characters such as backslashes `\` or double quotes `"` + within a double quoted string value must be escaped with a backslash `\`. + Refer to the [bind address example](#bind-address) to review HCL and JSON templates. + +- `cache` - Configures caching behavior for client agents. The configurable values are the following: + + - `entry_fetch_max_burst` The size of the token bucket used to recharge the rate-limit per + cache entry. The default value is 2 and means that when cache has not been updated + for a long time, 2 successive queries can be made as long as the rate-limit is not + reached. + + - `entry_fetch_rate` configures the rate-limit at which the cache may refresh a single + entry. 
On a cluster with many changes per second, watching changes in the cache might put high + pressure on the servers. This ensures the number of requests for a single cache entry + will never go beyond this limit, even when a given service changes every 1/100s. + Since this is a per cache entry limit, having a highly unstable service will only rate + limit the watches on this service, but not the other services/entries. + The value is strictly positive, expressed in queries per second as a float: + 1 means 1 query per second, and 0.1 means 1 request every 10s at most. + The default value is "no limit" and should be tuned on large + clusters to avoid performing too many RPCs on entries changing a lot. + +- `check_update_interval` ((#check_update_interval)) + This interval controls how often check output from checks in a steady state is + synchronized with the server. By default, this is set to 5 minutes (`"5m"`). Many + checks which are in a steady state produce slightly different output per run (timestamps, + etc.), which causes constant writes. This configuration allows deferring the sync + of check output for a given interval to reduce write pressure. If a check ever + changes state, the new state and associated output is synchronized immediately. + To disable this behavior, set the value to `"0s"`. + +- `client_addr` ((#\_client)) - The address Consul binds client + interfaces to, including the HTTP and DNS servers. By default, this is `127.0.0.1`, + allowing only loopback connections. + +- `config_entries` This object allows setting options for centralized config entries. + + The following sub-keys are available: + + - `bootstrap` ((#config_entries_bootstrap)) + This is a list of inlined config entries to insert into the state store when + the Consul server gains leadership. This option is only applicable to server + nodes. Each bootstrap entry will be created only if it does not exist. When reloading, + any new entries that have been added to the configuration will be processed. + Refer to the [configuration entry docs](/consul/docs/fundamentals/config-entry) for more + details about the contents of each entry. + +- `datacenter` ((#\_datacenter)) - This parameter controls the datacenter in + which the agent is running. If not provided, it defaults to `dc1`. Consul has first-class + support for multiple datacenters, but it relies on proper configuration. Nodes + in the same datacenter should be on a single LAN. + +~> **Warning:** This `datacenter` string must conform to [RFC 1035 DNS label requirements](https://datatracker.ietf.org/doc/html/rfc1035#section-2.3.1), + consisting solely of letters, digits, and hyphens, with a maximum + length of 63 characters, and no hyphens at the beginning or end of the label. + Non-compliant names create Consul DNS entries incompatible with PKI X.509 certificate generation. + +- `data_dir` ((#\_data_dir)) - This parameter provides a data directory for + the agent to store state. This is required for all agents. The directory should + be durable across reboots. This is especially critical for agents that are running + in server mode as they must be able to persist cluster state. Additionally, the + directory must support the use of filesystem locking, meaning some types of mounted + folders (e.g. VirtualBox shared folders) may not be suitable. + +- `disable_anonymous_signature` Disables providing an anonymous + signature for de-duplication with the update check. Refer to [`disable_update_check`](#disable_update_check).
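+
+As a quick illustration of how several of the general parameters above fit
+together, the following sketch shows a minimal agent configuration file. The
+paths and values are examples only, not recommendations:
+
+```hcl
+datacenter            = "dc1"
+data_dir              = "/opt/consul/data"
+client_addr           = "127.0.0.1"
+check_update_interval = "5m"
+
+cache {
+  entry_fetch_rate      = 10
+  entry_fetch_max_burst = 4
+}
+```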
+ +- `disable_coordinates` - Disables sending of [network coordinates](/consul/docs/architecture/coordinates). + When network coordinates are disabled, the `near` query param will not work to sort the nodes, + and the [`consul rtt`](/consul/commands/rtt) command will not be able to provide round trip time between nodes. + +- `disable_http_unprintable_char_filter` Defaults to false. Consul v1.0.3 fixed a potential security vulnerability where malicious users could craft KV keys with unprintable chars that would confuse operators using the CLI or UI into taking wrong actions. Users who had data written in older versions of Consul that did not have this restriction will be unable to delete those values by default in 1.0.3 or later. This setting enables those users to **temporarily** disable the filter such that delete operations can work on those keys again to get back to a healthy state. It is strongly recommended that this filter is not disabled permanently as it exposes the original security vulnerability. + +- `disable_keyring_file` ((#\_disable_keyring_file)) - If set, the keyring + will not be persisted to a file. Any installed keys will be lost on shutdown, and + only the given encryption key will be available on startup. Defaults to `false`. Equivalent to the + [`-disable-keyring-file` command-line flag](/consul/commands/agent#_disable_keyring_file). + +- `disable_remote_exec` ((#disable_remote_exec)) - Disables support for remote execution. When set to true, the agent will ignore + any incoming remote exec requests. In versions of Consul prior to 0.8, this defaulted + to false. In Consul 0.8 the default was changed to true, to make remote exec opt-in + instead of opt-out. + +- `disable_update_check` Disables automatic checking for security bulletins and new version releases. This is disabled in Consul Enterprise. + +- `discard_check_output` Discards the output of health checks before storing them. This reduces the number of writes to the Consul raft log in environments where health checks have volatile output such as timestamps or process IDs. + +- `discovery_max_stale` - Enables stale requests for all service discovery HTTP endpoints. This is + equivalent to the [`max_stale`](/consul/docs/reference/agent/configuration-file/dns#max_stale) configuration for DNS requests. If this value is zero (default), all service discovery HTTP endpoints are forwarded to the leader. If this value is greater than zero, any Consul server can handle the service discovery request. If a Consul server is behind the leader by more than `discovery_max_stale`, the query will be re-evaluated on the leader to get more up-to-date results. Consul agents also add a new `X-Consul-Effective-Consistency` response header which indicates if the agent did a stale read. `discovery_max_stale` was introduced in Consul 1.0.7 as a way for Consul operators to force stale requests from clients at the agent level, and defaults to zero which matches default consistency behavior in earlier Consul versions. + +- `enable_agent_tls_for_checks` When set, uses a subset of the agent's TLS configuration (`key_file`, + `cert_file`, `ca_file`, `ca_path`, and `server_name`) to set up the client for HTTP or gRPC health checks. This allows services requiring 2-way TLS to be checked using the agent's credentials. This was added in Consul v1.0.1 and defaults to `false`.
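+
+For example, a client agent that should let any server answer service discovery
+requests within a small staleness bound, and that should reuse its own TLS client
+settings for HTTPS health checks, might set the two options above as in this
+sketch (the values are illustrative):
+
+```hcl
+# Let any server, not only the leader, answer service discovery HTTP requests,
+# as long as it is no more than 5 seconds behind the leader.
+discovery_max_stale = "5s"
+
+# Reuse the agent's TLS client configuration when running HTTP or gRPC health
+# checks against services that require mutual TLS.
+enable_agent_tls_for_checks = true
+```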
+ +- `enable_central_service_config` ((#enable_central_service_config)) - When set, the Consul agent will look for any + [centralized service configuration](/consul/docs/fundamentals/config-entry) + that match a registering service instance. If it finds any, the agent will merge the centralized defaults with the service instance configuration. This allows for things like service protocol or proxy configuration to be defined centrally and inherited by any affected service registrations. + This defaults to `false` in versions of Consul prior to 1.9.0, and defaults to `true` in Consul 1.9.0 and later. + +- `enable_debug` (boolean, default is `false`): When set to `true`, enables Consul to report additional debugging information, including runtime profiling (`pprof`) data. This setting is only required for clusters without ACL [enabled](/consul/docs/reference/agent/configuration-file/acl#acl_enabled). If you change this setting, you must restart the agent for the change to take effect. + +- `enable_script_checks` ((#\_enable_script_checks)) This parameter controls whether + [health checks that execute scripts](/consul/docs/register/health-check/vm) are enabled on this + agent, and defaults to `false` so operators must opt-in to allowing these. + + ACLs must be enabled for agents and the `enable_script_checks` option must be set to `true` to enable script checks in Consul 0.9.0 and later. See [Registering and Querying Node Information](/consul/docs/secure/acl/rule#registering-and-querying-node-information) for related information. + + + + Enabling script checks in some configurations may introduce a known remote execution vulnerability targeted by malware. We strongly recommend `enable_local_script_checks` instead. Refer to the following article for additional guidance: [_Protecting Consul from RCE Risk in Specific Configurations_](https://www.hashicorp.com/blog/protecting-consul-from-rce-risk-in-specific-configurations) + for more details. + + + +- `enable_local_script_checks` ((#\_enable_local_script_checks)) + Enables local script checks only when they are defined in the local configuration files. Script checks defined in HTTP + API registrations will still not be allowed. Equivalent to the [`-enable-local-script-checks` command-line flag](/consul/commands/agent#_enable_local_script_checks). + + + +- `enable_xds_load_balancing` - Controls load balancing of xDS sessions across servers. When enabled, xDS sessions are evenly distributed across available Consul servers. If a server reaches its session limit, new connections are rejected, allowing clients to retry with another server. When disabled, servers accept all xDS sessions without enforcing a limit, which is recommended when an external load balancer is used to distribute connections. + +- `http_config` This object allows setting options for the HTTP API and UI. + + The following sub-keys are available: + + - `block_endpoints` + This object is a list of HTTP API endpoint prefixes to block on the agent, and + defaults to an empty list, meaning all endpoints are enabled. Any endpoint that + has a common prefix with one of the entries on this list will be blocked and + will return a 403 response code when accessed. For example, to block all of the + V1 ACL endpoints, set this to `["/v1/acl"]`, which will block `/v1/acl/create`, + `/v1/acl/update`, and the other ACL endpoints that begin with `/v1/acl`. This + only works with API endpoints, not `/ui` or `/debug`, those must be disabled + with their respective configuration options. 
Any CLI commands that use disabled + endpoints will no longer function as well. For more general access control, Consul's + [ACLs system](/consul/docs/secure/acl) + should be used, but this option is useful for removing access to HTTP API endpoints + completely, or on specific agents. This is available in Consul 0.9.0 and later. + + - `response_headers` This object allows adding headers to the HTTP API and UI + responses. Refer to the [CORS example](#cors) for how to enable CORS on HTTP + API endpoints. + + - `allow_write_http_from` This object is a list of networks in CIDR notation (eg "127.0.0.0/8") that are allowed to call the agent write endpoints. It defaults to an empty list, which means all networks are allowed. This is used to make the agent read-only, except for select ip ranges. - To block write calls from anywhere, use `[ "255.255.255.255/32" ]`. - To only allow write calls from localhost, use `[ "127.0.0.0/8" ]` - To only allow specific IPs, use `[ "10.0.0.1/32", "10.0.0.2/32" ]` + + - `use_cache` ((#http_config_use_cache)) Defaults to true. If disabled, the agent won't be using [agent caching](/consul/api-docs/features/caching) to answer the request. Even when the url parameter is provided. + + - `max_header_bytes` This setting controls the maximum number of bytes the consul http server will read parsing the request header's keys and values, including the request line. It does not limit the size of the request body. If zero, or negative, http.DefaultMaxHeaderBytes is used, which equates to 1 Megabyte. + +- `leave_on_terminate` If enabled, when the agent receives a TERM signal, it will send a `Leave` message to the rest of the cluster and gracefully leave. The default behavior for this feature varies based on whether or not the agent is running as a client or a server (prior to Consul 0.7 the default value was unconditionally set to `false`). On agents in client-mode, this defaults to `true` and for agents in server-mode, this defaults to `false`. + +- `license_path` This specifies the path to a file that contains the Consul Enterprise license. Alternatively the license may also be specified in either the `CONSUL_LICENSE` or `CONSUL_LICENSE_PATH` environment variables. Refer to the [licensing documentation](/consul/docs/enterprise/license) for more information about Consul Enterprise license management. Added in versions 1.10.0, 1.9.7 and 1.8.13. Prior to version 1.10.0 the value may be set for all agents to facilitate forwards compatibility with 1.10 but will only actually be used by client agents. + +- `limits`: This block specifies various types of limits that the Consul server agent enforces. + + - `http_max_conns_per_client` - Configures a limit of how many concurrent TCP connections a single client IP address is allowed to open to the agent's HTTP(S) server. This affects the HTTP(S) servers in both client and server agents. Default value is `200`. + - `https_handshake_timeout` - Configures the limit for how long the HTTPS server in both client and server agents will wait for a client to complete a TLS handshake. This should be kept conservative as it limits how many connections an unauthenticated attacker can open if `verify_incoming` is being using to authenticate clients (strongly recommended in production). Default value is `5s`. + - `request_limits` - This object specifies configurations that limit the rate of RPC and gRPC requests on the Consul server. Limiting the rate of gRPC and RPC requests also limits HTTP requests to the Consul server. 
+ - `mode` - String value that specifies an action to take if the rate of requests exceeds the limit. You can specify the following values: + - `permissive`: The server continues to allow requests and records an error in the logs. + - `enforcing`: The server stops accepting requests and records an error in the logs. + - `disabled`: Limits are not enforced or tracked. This is the default value for `mode`. + - `read_rate` - Integer value that specifies the number of read requests per second. Default is `-1` which represents infinity. + - `write_rate` - Integer value that specifies the number of write requests per second. Default is `-1` which represents infinity. + - `rpc_handshake_timeout` - Configures the limit for how long servers will wait after a client TCP connection is established before they complete the connection handshake. When TLS is used, the same timeout applies to the TLS handshake separately from the initial protocol negotiation. All Consul clients should perform this immediately on establishing a new connection. This should be kept conservative as it limits how many connections an unauthenticated attacker can open if `verify_incoming` is being using to authenticate clients (strongly recommended in production). When `verify_incoming` is true on servers, this limits how long the connection socket and associated goroutines will be held open before the client successfully authenticates. Default value is `5s`. + - `rpc_client_timeout` - Configures the limit for how long a client is allowed to read from an RPC connection. This is used to set an upper bound for calls to eventually terminate so that RPC connections are not held indefinitely. Blocking queries can override this timeout. Default is `60s`. + - `rpc_max_conns_per_client` - Configures a limit of how many concurrent TCP connections a single source IP address is allowed to open to a single server. It affects both clients connections and other server connections. In general Consul clients multiplex many RPC calls over a single TCP connection so this can typically be kept low. It needs to be more than one though since servers open at least one additional connection for raft RPC, possibly more for WAN federation when using network areas, and snapshot requests from clients run over a separate TCP conn. A reasonably low limit significantly reduces the ability of an unauthenticated attacker to consume unbounded resources by holding open many connections. You may need to increase this if WAN federated servers connect via proxies or NAT gateways or similar causing many legitimate connections from a single source IP. Default value is `100` which is designed to be extremely conservative to limit issues with certain deployment patterns. Most deployments can probably reduce this safely. 100 connections on modern server hardware should not cause a significant impact on resource usage from an unauthenticated attacker though. + - `rpc_rate` - Configures the RPC rate limiter on Consul _clients_ by setting the maximum request rate that this agent is allowed to make for RPC requests to Consul servers, in requests per second. Defaults to infinite, which disables rate limiting. + - `rpc_max_burst` - The size of the token bucket used to recharge the RPC rate limiter on Consul _clients_. Defaults to 1000 tokens, and each token is good for a single RPC call to a Consul server. Refer to https://en.wikipedia.org/wiki/Token_bucket for more details about how token bucket rate limiters operate. 
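+
+  A `limits` block that combines several of the settings above might look like the
+  following sketch; the values are illustrative, not recommendations:
+
+  ```hcl
+  limits {
+    http_max_conns_per_client = 200
+    rpc_max_conns_per_client  = 100
+    rpc_client_timeout        = "60s"
+
+    request_limits {
+      mode       = "permissive"
+      read_rate  = 500
+      write_rate = 250
+    }
+  }
+  ```
+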
+ - `kv_max_value_size` ((#kv_max_value_size)) - **(Advanced)** Configures the maximum number of bytes for a kv request body to the [`/v1/kv`](/consul/api-docs/kv) endpoint. This limit defaults to [raft's](https://github.com/hashicorp/raft) suggested max size (512KB). **Note that tuning these improperly can cause Consul to fail in unexpected ways**, it may potentially affect leadership stability and prevent timely heartbeat signals by increasing RPC IO duration. This option affects the txn endpoint too, but Consul 1.7.2 introduced `txn_max_req_len` which is the preferred way to set the limit for the txn endpoint. If both limits are set, the higher one takes precedence. + - `txn_max_req_len` - **(Advanced)** Configures the maximum number of bytes for a transaction request body to the [`/v1/txn`](/consul/api-docs/txn) endpoint. This limit defaults to [raft's](https://github.com/hashicorp/raft) suggested max size (512KB). **Note that tuning these improperly can cause Consul to fail in unexpected ways**, it may potentially affect leadership stability and prevent timely heartbeat signals by increasing RPC IO duration. + +- `default_query_time` ((#\_default_query_time)) - This parameter controls the + amount of time a blocking query waits before Consul will force a response. + This value can be overridden by the `wait` query parameter. Note that Consul applies + some jitter on top of this time. Defaults to `300s`. Equivalent to the [`-default-query-time` command-line flag](/consul/commands/agent#_default_query_time). + +- `max_query_time` ((#\_max_query_time)) - This parameter controls the maximum + amount of time a blocking query can wait before Consul will force a response. Consul + applies jitter to the wait time. The jittered time will be capped to this time. + Defaults to `600s`. Equivalent to the [`-max-query-time` command-line flag](/consul/commands/agent#_max_query_time). + +- `peering` This object allows setting options for cluster peering. + + The following sub-keys are available: + + - `enabled` ((#peering_enabled)) (Defaults to `true`) Controls whether cluster peering is enabled. + When disabled, the UI won't show peering, all peering APIs will return + an error, any peerings stored in Consul already will be ignored (but they will not be deleted), + and all peering connections from other clusters will be rejected. This was added in Consul 1.13.0. + +- `partition` - This flag is used to set + the name of the admin partition the agent belongs to. An agent can only join + and communicate with other agents within its admin partition. Review the + [Admin Partitions documentation](/consul/docs/multi-tenant/admin-partition) for more + details. By default, this is an empty string, which is the `default` admin + partition. This cannot be set on a server agent. + + + + The `partition` option cannot be used either the + [`segment`](/consul/docs/reference/agent/configuration-file/general#segment) option or [`-segment`](/consul/commands/agent#_segment) flag. + + + +- `performance` ((#\_performance)) - Available in Consul v0.7 and later, this nested object allows tuning the performance of different subsystems in Consul. Refer to the [Server Performance](/consul/docs/deploy/server/vm/requirements) documentation for more details. The following parameters are available: + + - `leave_drain_time` - A duration that a server will dwell during a graceful leave in order to allow requests to be retried against other Consul servers. 
Under normal circumstances, this can prevent clients from experiencing "no leader" errors when performing a rolling update of the Consul servers. This was added in Consul 1.0. Must be a duration value such as 10s. Defaults to 5s. + + - `raft_multiplier` ((#\_raft_multiplier)) - An integer multiplier used by Consul servers to scale key Raft timing parameters. Omitting this value or setting it to 0 uses default timing described below. Lower values are used to tighten timing and increase sensitivity while higher values relax timings and reduce sensitivity. Tuning this affects the time it takes Consul to detect leader failures and to perform leader elections, at the expense of requiring more network and CPU resources for better performance. + + By default, Consul will use a lower-performance timing that's suitable + for [minimal Consul servers](/consul/docs/deploy/server/vm/requirements#minimum), currently equivalent + to setting this to a value of 5 (this default may be changed in future versions of Consul, + depending if the target minimum server profile changes). Setting this to a value of 1 will + configure Raft to its highest-performance mode, equivalent to the default timing of Consul + prior to 0.7, and is recommended for [production Consul servers](/consul/docs/deploy/server/vm/requirements#production). + + Refer to the note on [last contact](/consul/docs/deploy/server/vm/requirements#production-server-requirements) timing for more + details on tuning this parameter. The maximum allowed value is 10. + + - `rpc_hold_timeout` - A duration that a client + or server will retry internal RPC requests during leader elections. Under normal + circumstances, this can prevent clients from experiencing "no leader" errors. + This was added in Consul 1.0. Must be a duration value such as 10s. Defaults + to 7s. + + - `grpc_keepalive_interval` - A duration that determines the frequency that Consul servers send keep-alive messages to inactive gRPC clients. Configure this setting to modify how quickly Consul detects and removes improperly closed xDS or peering connections. Default is `30s`. + + - `grpc_keepalive_timeout` - A duration that determines how long a Consul server waits for a reply to a keep-alive message. If the server does not receive a reply before the end of the duration, Consul flags the gRPC connection as unhealthy and forcibly removes it. Defaults to `20s`. + + - `enable_xds_load_balancing` - Controls load balancing of xDS sessions across servers. When enabled, xDS sessions are distributed evenly across available Consul servers. If an individual server reaches its session limit, it rejects new connections and clients retry with another server. When disabled, servers accept all xDS sessions without enforcing a limit. We recommend disabling this parameter when you use an external load balancer in front of the Consul servers. + +- `pid_file` ((#\_pid_file)) - This parameter provides the file path for the agent to store its PID. Use it to send signals to the agent such as `SIGINT` to close the agent or `SIGHUP` to update check definitions. Equivalent to the [`-pid-file` command line flag](/consul/commands/agent#_pid_file). + +- `ports` ((#ports)) - This is a nested object that allows setting the bind ports for the following keys: + + - `dns` ((#dns_port)) - The DNS server, -1 to disable. Default 8600. + TCP and UDP. + - `http` ((#http_port)) - The HTTP API, -1 to disable. Default 8500. + TCP only. + - `https` ((#https_port)) - The HTTPS API, -1 to disable. Default -1 + (disabled). 
**We recommend using `8501`** for `https` by convention as some tooling + will work automatically with this. + - `grpc` ((#grpc_port)) - The gRPC API, -1 to disable. Default -1 (disabled). + **We recommend using `8502` for `grpc`** as your conventional gRPC port number, as it allows some + tools to work automatically. This parameter is set to `8502` by default when the agent runs + in `-dev` mode. The `grpc` port only supports plaintext traffic starting in Consul 1.14. + Refer to `grpc_tls` for more information on configuring a TLS-enabled port. + - `grpc_tls` ((#grpc_tls_port)) - The gRPC API with TLS connections, -1 to disable. gRPC_TLS is enabled by default on port 8503 for Consul servers. + **We recommend using `8503` for `grpc_tls`** as your conventional gRPC port number, as it allows some + tools to work automatically. `grpc_tls` is always guaranteed to be encrypted. Both `grpc` and `grpc_tls` + can be configured at the same time, but they may not utilize the same port number. This field was added in Consul 1.14. + - `serf_lan` ((#serf_lan_port)) - The Serf LAN port. Default 8301. TCP + and UDP. Equivalent to the [`-serf-lan-port` command line flag](/consul/commands/agent#_serf_lan_port). + - `serf_wan` ((#serf_wan_port)) - The Serf WAN port. Default 8302. + Equivalent to the [`-serf-wan-port` command line flag](/consul/commands/agent#_serf_wan_port). Set + to -1 to disable. **Note**: this will disable WAN federation which is not recommended. + Various catalog and WAN related endpoints will return errors or empty results. + TCP and UDP. + - `server` ((#server_rpc_port)) - Server RPC address. Default 8300. TCP + only. + - `sidecar_min_port` ((#sidecar_min_port)) - Inclusive minimum port number + to use for automatically assigned [sidecar service registrations](/consul/docs/reference/proxy/sidecar). + Default 21000. Set to `0` to disable automatic port assignment. + - `sidecar_max_port` ((#sidecar_max_port)) - Inclusive maximum port number + to use for automatically assigned [sidecar service registrations](/consul/docs/reference/proxy/sidecar). + Default 21255. Set to `0` to disable automatic port assignment. + - `expose_min_port` ((#expose_min_port)) - Inclusive minimum port number + to use for automatically assigned [exposed check listeners](/consul/docs/reference/proxy/connect-proxy#expose-paths-configuration-reference). + Default 21500. Set to `0` to disable automatic port assignment. + - `expose_max_port` ((#expose_max_port)) - Inclusive maximum port number + to use for automatically assigned [exposed check listeners](/consul/docs/reference/proxy/connect-proxy#expose-paths-configuration-reference). + Default 21755. Set to `0` to disable automatic port assignment. + +- `primary_datacenter` ((#primary_datacenter)) - This designates the datacenter + which is authoritative for ACL information, intentions and is the root Certificate + Authority for service mesh. It must be provided to enable ACLs. All servers and datacenters + must agree on the primary datacenter. Setting it on the servers is all you need + for cluster-level enforcement, but for the APIs to forward properly from the clients, + it must be set on them too. In Consul 0.8 and later, this also enables agent-level + enforcement of ACLs. + +- `primary_gateways` ((#primary_gateways)) - Takes a list of addresses to use as the + mesh gateways for the primary datacenter when authoritative replicated catalog + data is not present. 
Discovery happens every [`primary_gateways_interval`](#primary_gateways_interval) + until at least one primary mesh gateway is discovered. This was added in Consul + 1.8.0. Equivalent to the [`-primary-gateway` command-line flag](/consul/commands/agent#_primary_gateway). + +- `primary_gateways_interval` Time to wait + between [`primary_gateways`](#primary_gateways) discovery attempts. Defaults to + `"30s"`. This was added in Consul 1.8.0. + +- `protocol` ((#protocol)) - The Consul protocol version to use. Consul + agents speak protocol `2` by default, however agents will automatically use protocol > 2 when speaking to compatible agents. This should be set only when [upgrading](/consul/docs/upgrade). You can view the protocol versions supported by Consul by running `consul version`. Equivalent to the [`-protocol` command-line + flag](/consul/commands/agent#_protocol). + +- `reap` This controls Consul's automatic reaping of child processes, + which is useful if Consul is running as PID 1 in a Docker container. If this isn't + specified, then Consul will automatically reap child processes if it detects it + is running as PID 1. If this is set to true or false, then it controls reaping + regardless of Consul's PID (forces reaping on or off, respectively). This option + was removed in Consul 0.7.1. For later versions of Consul, you will need to reap + processes using a wrapper, please refer to the [Consul Docker image entry point script](https://github.com/hashicorp/docker-consul/blob/master/0.X/docker-entrypoint.sh) + for an example. If you are using Docker 1.13.0 or later, you can use the new `--init` + option of the `docker run` command and docker will enable an init process with + PID 1 that reaps child processes for the container. More info on [Docker docs](https://docs.docker.com/engine/reference/commandline/run/#options). + +- `reconnect_timeout` This controls how long it + takes for a failed node to be completely removed from the cluster. This defaults + to 72 hours and it is recommended that this is set to at least double the maximum + expected recoverable outage time for a node or network partition. WARNING: Setting + this time too low could cause Consul servers to be removed from quorum during an + extended node failure or partition, which could complicate recovery of the cluster. + The value is a time with a unit suffix, which can be "s", "m", "h" for seconds, + minutes, or hours. The value must be >= 8 hours. + +- `reconnect_timeout_wan` This is the WAN equivalent + of the [`reconnect_timeout`](#reconnect_timeout) parameter, which controls + how long it takes for a failed server to be completely removed from the WAN pool. + This also defaults to 72 hours, and must be >= 8 hours. + +- `rpc` configuration for Consul servers. + + - `enable_streaming` ((#rpc_enable_streaming)) defaults to true. If set to false it will disable + the gRPC subscribe endpoint on a Consul Server. All + servers in all federated datacenters must have this enabled before any client can use + [`use_streaming_backend`](#use_streaming_backend). + +- `reporting` - This option allows options for HashiCorp reporting. + - `license` - The license object allows users to control automatic reporting of license utilization metrics to HashiCorp. + - `enabled`: (Defaults to `true`) Enables automatic license utilization reporting. + +- `segment` ((#\_segment)) - This parameter sets the name of the network segment the agent belongs to. An agent can only join and + communicate with other agents within its network segment. 
Ensure the [join operation uses the correct port for this segment](/consul/docs/multi-tenant/network-segment/vm#configure-clients-to-join-segments). Review the [Network Segments documentation](/consul/docs/multi-tenant/network-segment/vm)
+ for more details. By default, this is an empty string, which is the `<default>`
+ network segment. Equivalent to the [`-segment` command-line flag](/consul/commands/agent#_segment).
+
+
+
+ The `segment` option cannot be used with the [`partition`](#partition) option.
+
+
+
+- `segments` - (Server agents only) This is a list of nested objects
+ that specifies user-defined network segments, not including the `<default>` segment, which is
+ created automatically. Refer to the [network segments documentation](/consul/docs/multi-tenant/network-segment/vm) for additional information.
+
+ - `name` ((#segment_name)) - The name of the segment. Must be a string
+ between 1 and 64 characters in length.
+ - `bind` ((#segment_bind)) - The bind address to use for the segment's
+ gossip layer. Defaults to the [`-bind`](/consul/commands/agent#_bind) value if not provided.
+ - `port` ((#segment_port)) - The port to use for the segment's gossip
+ layer (required).
+ - `advertise` ((#segment_advertise)) - The advertise address to use for
+ the segment's gossip layer. Defaults to the [`-advertise`](/consul/commands/agent#_advertise) value
+ if not provided.
+ - `rpc_listener` ((#segment_rpc_listener)) - If true, a separate RPC
+ listener will be started on this segment's [`-bind`](/consul/commands/agent#_bind) address on the rpc
+ port. Only valid if the segment's bind address differs from the [`-bind`](/consul/commands/agent#_bind)
+ address. Defaults to false.
+
+- `server` ((#\_server)) - This parameter controls whether an agent is
+ in server or client mode. When provided, an agent will act as a Consul server.
+ Each Consul cluster must have at least one server and ideally no more than 5 per
+ datacenter. All servers participate in the Raft consensus algorithm to ensure that
+ transactions occur in a consistent, linearizable manner. Transactions modify cluster
+ state, which is maintained on all server nodes to ensure availability in the case
+ of node failure. Server nodes also participate in a WAN gossip pool with server
+ nodes in other datacenters. Servers act as gateways to other datacenters and forward
+ RPC traffic as appropriate.
+
+- `server_rejoin_age_max` - Controls the allowed maximum age of a stale server attempting to rejoin a cluster.
+ If the server has not run during this period, it will refuse to start up again until an operator intervenes by manually deleting the `server_metadata.json`
+ file located in the data dir.
+ This is to protect clusters from instability caused by decommissioned servers accidentally being started again.
+ The default value is 168h (equal to 7d) and the minimum value is 6h.
+
+- `read_replica` ((#\_read_replica)) - This
+ parameter makes the server not participate in the Raft quorum and only
+ receive the data replication stream. This can be used to add read scalability
+ to a cluster in cases where a high volume of reads to servers is needed.
+
+- `session_ttl_min` The minimum allowed session TTL. This ensures sessions are not created with TTLs
+ shorter than the specified limit. It is recommended to keep this limit at or above
+ the default to encourage clients to send infrequent heartbeats. Defaults to 10s.
+
+- `skip_leave_on_interrupt` This is similar
+ to [`leave_on_terminate`](#leave_on_terminate) but only affects interrupt handling.
+ When Consul receives an interrupt signal (such as hitting Control-C in a terminal),
+ Consul will gracefully leave the cluster. Setting this to `true` disables that
+ behavior. The default behavior for this feature varies based on whether
+ the agent is running as a client or a server (prior to Consul 0.7 the default value
+ was unconditionally set to `false`). On agents in client-mode, this defaults to
+ `false` and for agents in server-mode, this defaults to `true` (i.e. Ctrl-C on
+ a server will keep the server in the cluster and therefore quorum, and Ctrl-C on
+ a client will gracefully leave).
+
+- `translate_wan_addrs` ((#translate_wan_addrs)) If set to true, Consul
+ will prefer a node's configured [WAN address](/consul/commands/agent#_advertise-wan)
+ when servicing DNS and HTTP requests for a node in a remote datacenter. This allows
+ the node to be reached within its own datacenter using its local address, and reached
+ from other datacenters using its WAN address, which is useful in hybrid setups
+ with mixed networks. This is disabled by default.
+
+ In Consul 0.7 and later, node addresses in responses to HTTP requests will also prefer a
+ node's configured [WAN address](/consul/commands/agent#_advertise-wan) when querying for a node in a remote
+ datacenter. An [`X-Consul-Translate-Addresses`](/consul/api-docs/api-structure#translated-addresses) header
+ will be present on all responses when translation is enabled to help clients know that the addresses
+ may be translated. The `TaggedAddresses` field in responses also has a `lan` address for clients that
+ need knowledge of that address, regardless of translation.
+
+ The following endpoints translate addresses:
+
+ - [`/v1/catalog/nodes`](/consul/api-docs/catalog#list-nodes)
+ - [`/v1/catalog/node/<node>`](/consul/api-docs/catalog#retrieve-map-of-services-for-a-node)
+ - [`/v1/catalog/service/<service>`](/consul/api-docs/catalog#list-nodes-for-service)
+ - [`/v1/health/service/<service>`](/consul/api-docs/health#list-nodes-for-service)
+ - [`/v1/query/<query or name>/execute`](/consul/api-docs/query#execute-prepared-query)
+
+- `unix_sockets` - This allows tuning the ownership and
+ permissions of the Unix domain socket files created by Consul. Domain sockets are
+ only used if the HTTP address is configured with the `unix://` prefix.
+
+ It is important to note that this option may have different effects on
+ different operating systems. Linux generally observes socket file permissions
+ while many BSD variants ignore permissions on the socket file itself. It is
+ important to test this feature on your specific distribution. This feature is
+ currently not functional on Windows hosts.
+
+ The following options are valid within this construct and apply globally to all
+ sockets created by Consul:
+
+ - `user` - The name or ID of the user who will own the socket file.
+ - `group` - The group ID ownership of the socket file. This option
+ currently only supports numeric IDs.
+ - `mode` - The permission bits to set on the file.
+
+- `use_streaming_backend` defaults to true. When enabled, Consul client agents will use
+ streaming RPC, instead of the traditional blocking queries, for endpoints which support
+ streaming. All servers must have [`rpc.enable_streaming`](#rpc_enable_streaming)
+ enabled before any client can enable `use_streaming_backend`.
+ +- `watches` - Watches is a list of watch specifications which + allow an external process to be automatically invoked when a particular data view + is updated. Refer to the [watch documentation](/consul/docs/automate/watch) for more detail. + Watches can be modified when the configuration is reloaded. + + +## Examples + +The following examples demonstrate common Consul agent configuration patterns. + +### Bind address + +Example templates: + + + +```hcl +bind_addr = "{{ GetPrivateInterfaces | include \"network\" \"10.0.0.0/8\" | attr \"address\" }}" +``` + +```json +{ + "bind_addr": "{{ GetPrivateInterfaces | include \"network\" \"10.0.0.0/8\" | attr \"address\" }}" +} +``` + + + +### CORS + +Use the following config to enable [CORS](https://en.wikipedia.org/wiki/Cross-origin_resource_sharing) on the HTTP API endpoints. + + + + ```hcl + http_config { + response_headers { + Access-Control-Allow-Origin = "*" + } + } + ``` + + ```json + { + "http_config": { + "response_headers": { + "Access-Control-Allow-Origin": "*" + } + } + } + ``` + + + diff --git a/website/content/docs/reference/agent/configuration-file/gossip.mdx b/website/content/docs/reference/agent/configuration-file/gossip.mdx new file mode 100644 index 000000000000..db30334a4b7a --- /dev/null +++ b/website/content/docs/reference/agent/configuration-file/gossip.mdx @@ -0,0 +1,96 @@ +--- +layout: docs +page_title: Gossip parameters for Consul agent configuration files +description: >- + Use agent configuration files to assign attributes to agents and configure multiple agents at once. Learn about agent configuration file parameters and formatting with this reference page and sample code. +--- + +# Gossip parameters for Consul agent configuration files + +The page provides reference information for gossip parameters in a Consul agent configuration file. + +## Gossip parameters + +- `gossip_lan` - **(Advanced)** This object contains a + number of sub-keys which can be set to tune the LAN gossip communications. These + are only provided for users running especially large clusters that need fine tuning + and are prepared to spend significant effort correctly tuning them for their environment + and workload. **Tuning these improperly can cause Consul to fail in unexpected + ways**. The default values are appropriate in almost all deployments. + + - `gossip_nodes` - The number of random nodes to send + gossip messages to per gossip_interval. Increasing this number causes the gossip + messages to propagate across the cluster more quickly at the expense of increased + bandwidth. The default is 3. + + - `gossip_interval` - The interval between sending + messages that need to be gossiped that haven't been able to piggyback on probing + messages. If this is set to zero, non-piggyback gossip is disabled. By lowering + this value (more frequent) gossip messages are propagated across the cluster + more quickly at the expense of increased bandwidth. The default is 200ms. + + - `probe_interval` - The interval between random + node probes. Setting this lower (more frequent) will cause the cluster to detect + failed nodes more quickly at the expense of increased bandwidth usage. The default + is 1s. + + - `probe_timeout` - The timeout to wait for an ack + from a probed node before assuming it is unhealthy. This should be at least the + 99-percentile of RTT (round-trip time) on your network. The default is 500ms + and is a conservative value suitable for almost all realistic deployments. 
+ + - `retransmit_mult` - The multiplier for the number + of retransmissions that are attempted for messages broadcasted over gossip. The + number of retransmits is scaled using this multiplier and the cluster size. The + higher the multiplier, the more likely a failed broadcast is to converge at the + expense of increased bandwidth. The default is 4. + + - `suspicion_mult` - The multiplier for determining + the time an inaccessible node is considered suspect before declaring it dead. + The timeout is scaled with the cluster size and the probe_interval. This allows + the timeout to scale properly with expected propagation delay with a larger cluster + size. The higher the multiplier, the longer an inaccessible node is considered + part of the cluster before declaring it dead, giving that suspect node more time + to refute if it is indeed still alive. The default is 4. + +- `gossip_wan` - **(Advanced)** This object contains a + number of sub-keys which can be set to tune the WAN gossip communications. These + are only provided for users running especially large clusters that need fine tuning + and are prepared to spend significant effort correctly tuning them for their environment + and workload. **Tuning these improperly can cause Consul to fail in unexpected + ways**. The default values are appropriate in almost all deployments. + + - `gossip_nodes` - The number of random nodes to send + gossip messages to per gossip_interval. Increasing this number causes the gossip + messages to propagate across the cluster more quickly at the expense of increased + bandwidth. The default is 4. + + - `gossip_interval` - The interval between sending + messages that need to be gossiped that haven't been able to piggyback on probing + messages. If this is set to zero, non-piggyback gossip is disabled. By lowering + this value (more frequent) gossip messages are propagated across the cluster + more quickly at the expense of increased bandwidth. The default is 500ms. + + - `probe_interval` - The interval between random + node probes. Setting this lower (more frequent) will cause the cluster to detect + failed nodes more quickly at the expense of increased bandwidth usage. The default + is 5s. + + - `probe_timeout` - The timeout to wait for an ack + from a probed node before assuming it is unhealthy. This should be at least the + 99-percentile of RTT (round-trip time) on your network. The default is 3s + and is a conservative value suitable for almost all realistic deployments. + + - `retransmit_mult` - The multiplier for the number + of retransmissions that are attempted for messages broadcasted over gossip. The + number of retransmits is scaled using this multiplier and the cluster size. The + higher the multiplier, the more likely a failed broadcast is to converge at the + expense of increased bandwidth. The default is 4. + + - `suspicion_mult` - The multiplier for determining + the time an inaccessible node is considered suspect before declaring it dead. + The timeout is scaled with the cluster size and the probe_interval. This allows + the timeout to scale properly with expected propagation delay with a larger cluster + size. The higher the multiplier, the longer an inaccessible node is considered + part of the cluster before declaring it dead, giving that suspect node more time + to refute if it is indeed still alive. The default is 6. 
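+
+## Examples
+
+The following example shows one possible way to tune LAN gossip for slightly faster message propagation at the cost of additional bandwidth. The values are illustrative only; the defaults described above are appropriate for almost all deployments, and tuning these parameters improperly can cause Consul to fail in unexpected ways.
+
+```hcl
+gossip_lan {
+  gossip_nodes    = 4
+  gossip_interval = "150ms"
+}
+```
+
+```json
+{
+  "gossip_lan": {
+    "gossip_nodes": 4,
+    "gossip_interval": "150ms"
+  }
+}
+```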
\ No newline at end of file diff --git a/website/content/docs/reference/agent/configuration-file/hcp.mdx b/website/content/docs/reference/agent/configuration-file/hcp.mdx new file mode 100644 index 000000000000..baa5112eff0a --- /dev/null +++ b/website/content/docs/reference/agent/configuration-file/hcp.mdx @@ -0,0 +1,22 @@ +--- +layout: docs +page_title: HCP Consul Dedicated parameters for Consul agent configuration files +description: >- + Use agent configuration files to assign attributes to agents and configure multiple agents at once. Learn about agent configuration file parameters and formatting with this reference page and sample code. +--- + +# HCP Consul Dedicated parameters for Consul agent configuration files + +@include 'alerts/hcp-dedicated-eol.mdx' + +This page provides reference information for HCP Consul Dedicated in a Consul agent configuration file. + +## HCP Consul Dedicated parameters + +- `cloud` This object specifies settings for connecting self-managed clusters to HCP. This was added in Consul 1.14 + + - `client_id` The OAuth2 client ID for authentication with HCP. This can be overridden using the `HCP_CLIENT_ID` environment variable. + + - `client_secret` The OAuth2 client secret for authentication with HCP. This can be overridden using the `HCP_CLIENT_SECRET` environment variable. + + - `resource_id` The HCP resource identifier. This can be overridden using the `HCP_RESOURCE_ID` environment variable. \ No newline at end of file diff --git a/website/content/docs/reference/agent/configuration-file/index.mdx b/website/content/docs/reference/agent/configuration-file/index.mdx new file mode 100644 index 000000000000..a87f130ab557 --- /dev/null +++ b/website/content/docs/reference/agent/configuration-file/index.mdx @@ -0,0 +1,100 @@ +--- +layout: docs +page_title: Consul agents configuration file reference +description: >- + This section contains reference information for individual agent configuration files. Configure multiple agents at once. Assign attributes to agents. Learn about agent configuration file parameters and formatting. Review example configuration. +--- + +# Consul agents configuration file reference + +This topic describes the parameters for configuring Consul agents. + +Refer to the [Configure a Consul agent](/consul/docs/fundamentals/agent) guide for information on the following: + +- Configuration files location +- Common configuration settings +- Reloadable configurations +- Starting and stopping a Consul agent + +## Overview + +Create one or more files to configure the Consul agent on startup. We recommend +grouping similar configurations, such as ACL parameters, into separate files to +better manage configuration changes. + +Write configuration files in HCL or JSON. Both humans and computers can read and +edit JSON configuration files. JSON configuration consists of a single JSON +object with multiple configuration keys specified within it. 
+ + + +```hcl +datacenter = "east-aws" +data_dir = "/opt/consul" +log_level = "INFO" +node_name = "foobar" +server = true +watches = [ + { + type = "checks" + handler = "/usr/bin/health-check-handler.sh" + } +] + +telemetry { + statsite_address = "127.0.0.1:2180" +} +``` + +```json +{ + "datacenter": "east-aws", + "data_dir": "/opt/consul", + "log_level": "INFO", + "node_name": "foobar", + "server": true, + "watches": [ + { + "type": "checks", + "handler": "/usr/bin/health-check-handler.sh" + } + ], + "telemetry": { + "statsite_address": "127.0.0.1:2180" + } +} +``` + + + +### Time-to-live values + +Consul uses the Go `time` package to parse all time-to-live (TTL) values used in Consul agent configuration files. Specify integer and float values as a string and include one or more of the following units of time: + +- `ns` +- `us` +- `µs` +- `ms` +- `s` +- `m` +- `h` + +Examples: + +- `'300ms'` +- `'1.5h'` +- `'2h45m'` + +Refer to the [formatting specification](https://golang.org/pkg/time/#ParseDuration) for additional information. + +## Examples + +The following configuration examples demonstrate scenarios for server and client agent configuration files. + +@include 'examples/agent/server/encrypted.mdx' + +@include 'examples/agent/server/service-mesh.mdx' + +@include 'examples/agent/client/multi-interface.mdx' + +@include 'examples/agent/client/register-service.mdx' diff --git a/website/content/docs/reference/agent/configuration-file/join.mdx b/website/content/docs/reference/agent/configuration-file/join.mdx new file mode 100644 index 000000000000..bcbc41f3f1af --- /dev/null +++ b/website/content/docs/reference/agent/configuration-file/join.mdx @@ -0,0 +1,38 @@ +--- +layout: docs +page_title: Join parameters for Consul agent configuration files +description: >- + Use agent configuration files to assign attributes to agents and configure multiple agents at once. Learn about agent configuration file parameters and formatting with this reference page and sample code. +--- + +# Join parameters for Consul agent configuration files + +The page provides reference information for join parameters in a Consul agent configuration file. + +## Join Parameters + +- `rejoin_after_leave` ((#\_rejoin)) - When provided, Consul ignores a previous + leave and attempts to rejoin the cluster when starting. By default, Consul treats + leave as a permanent intent and does not attempt to join the cluster again automatically when + it starts. This parameter allows the previous state to be used to rejoin the cluster. Equivalent to the [`-rejoin` command-line flag](/consul/commands/agent#_rejoin). + +- `retry_join` ((#\_retry_join)) - Address of another agent to join on start up. Joining is + retried until success. Once the agent joins successfully as a member, it will not attempt to join + again. After joining, the agent solely maintains its membership through gossip. By default, the Consul agent does not join any + nodes when it starts up. The value can contain IPv4, IPv6, or DNS addresses. Literal IPv6 + addresses must be enclosed in square brackets. If multiple values are given, they are tried and + retried in the order listed until the first succeeds. Equivalent to the [`-retry-join`](/consul/commands/agent#retry-join) command-line flag. + +- `retry_interval` ((#\_retry_interval)) - Time to wait between join attempts. + Defaults to `30s`.Equivalent to the [`-retry-interval` command-line flag](/consul/commands/agent#_retry_interval). 
+
+- `retry_max` ((#\_retry_max)) - The maximum number of join attempts when using
+ [`retry-join`](#_retry_join) before exiting with return code 1. By default, this parameter is set
+ to `0`, which is interpreted as infinite retries. Equivalent to the [`-retry-max`](/consul/commands/agent#_retry_max) command-line flag.
+
+- `retry_join_wan` ((#\_retry_join_wan)) - Address of another WAN agent to join on start up.
+ WAN joining is retried until success. This can be specified as a list of addresses to specify multiple WAN
+ agents to join. If multiple values are given, they are tried and retried in the order listed
+ until the first succeeds. By default, the agent does not WAN join any nodes when it starts up. Equivalent to the [`-retry-join-wan` command-line flag](/consul/commands/agent#_retry_join_wan).
+
+- `retry_interval_wan` Time to wait between [`retry-join-wan`](#_retry_join_wan) attempts. Defaults to `30s`. Equivalent to the [`-retry-interval-wan` command-line flag](/consul/commands/agent#_retry_interval_wan).
\ No newline at end of file
diff --git a/website/content/docs/reference/agent/configuration-file/log.mdx b/website/content/docs/reference/agent/configuration-file/log.mdx
new file mode 100644
index 000000000000..0bcedefcf9f1
--- /dev/null
+++ b/website/content/docs/reference/agent/configuration-file/log.mdx
@@ -0,0 +1,132 @@
+---
+layout: docs
+page_title: Log parameters for Consul agent configuration files
+description: >-
+ Use agent configuration files to assign attributes to agents and configure multiple agents at once. Learn about agent configuration file parameters and formatting with this reference page and sample code.
+---
+
+# Log parameters for Consul agent configuration files
+
+This page provides reference information for log parameters in a Consul agent configuration file.
+
+## Log parameters
+
+- `audit` - The audit object allows users to enable
+ auditing and configure a sink and filters for their audit logs. For more
+ information, review the [audit configuration example](#audit-configuration)
+ and the [audit log
+ tutorial](/consul/tutorials/datacenter-operations/audit-logging).
+
+ The following sub-keys are available:
+
+ - `enabled` - Controls whether Consul logs out each time a user
+ performs an operation. ACLs must be enabled to use this feature. Defaults to `false`.
+
+ - `sink` - This object provides configuration for the destination to which
+ Consul will log auditing events. Sink is an object containing keys to sink objects, where the key is the name of the sink.
+
+ - `type` - Type specifies what kind of sink this is.
+ The following keys are valid:
+ - `file` - Currently only file sinks are available; they take the following keys.
+ - `format` - Format specifies what format the events will
+ be emitted with.
+ The following keys are valid:
+ - `json` - Currently only json events are offered.
+ - `path` - The directory and filename to write audit events to.
+ - `delivery_guarantee` - Specifies
+ the rules governing how audit events are written.
+ The following keys are valid:
+ - `best-effort` - Consul only supports `best-effort` event delivery.
+ - `mode` - The permissions to set on the audit log files.
+ - `rotate_duration` - Specifies the
+ interval by which the system rotates to a new log file. At least one of `rotate_duration` or `rotate_bytes`
+ must be configured to enable audit logging.
+ - `rotate_max_files` - Defines the
+ limit that Consul should follow before it deletes old log files.
+ - `rotate_bytes` - Specifies how large an + individual log file can grow before Consul rotates to a new file. At least one of `rotate_bytes` or + `rotate_duration` must be configured to enable audit logging. + +- `log_file` ((#\_log_file)) - Writes all the Consul agent log messages + to a file at the path indicated by this flag. The filename defaults to `consul.log`. + When the log file rotates, this value is used as a prefix for the path to the log and the current timestamp is + appended to the file name. If the value ends in a path separator, `consul-` + will be appended to the value. If the file name is missing an extension, `.log` + is appended. For example, setting `log-file` to `/var/log/` would result in a log + file path of `/var/log/consul.log`. `log-file` can be combined with + [`-log-rotate-bytes`](#_log_rotate_bytes) and [`-log-rotate-duration`](#_log_rotate_duration) + for a fine-grained log rotation experience. After rotation, the path and filename take the following form: + `/var/log/consul-{timestamp}.log` + +- `log_rotate_duration` ((#\_log_rotate_duration)) - Specifies the maximum + duration a log should be written to before it needs to be rotated. Must be a duration + value such as 30s. Defaults to 24h. + +- `log_rotate_bytes` ((#\_log_rotate_bytes)) - Specifies the number of + bytes that should be written to a log before it needs to be rotated. Unless specified, + there is no limit to the number of bytes that can be written to a log file. + +- `log_rotate_max_files` ((#\_log_rotate_max_files)) - Specifies the maximum + number of older log file archives to keep. Defaults to 0 (no files are ever deleted). + Set to -1 to discard old log files when a new one is created. + +- `log_level` ((#\_log_level)) - The level of logging to show after the + Consul agent has started. This defaults to "info". The available log levels are + "trace", "debug", "info", "warn", and "error". You can always connect to an agent + via [`consul monitor`](/consul/commands/monitor) and use any log level. Also, + the log level can be changed during a config reload. + +- `log_json`((#\_log_json)) - Enables the agent to output logs + in a JSON format. By default this is `false`. + +- `enable_syslog` ((#\_syslog)) - Enables logging to syslog. This is + only supported on Linux and macOS. It will result in an error if provided on Windows. Equivalent to the [`-syslog` command-line flag](/consul/commands/agent#_syslog). + +- `syslog_facility` When [`enable_syslog`](#enable_syslog) + is provided, this controls to which facility messages are sent. By default, `LOCAL0` will be used. + +## Examples + +The following examples demonstrate common logging configuration patterns for Consul agents. + +### Audit log configuration + +

    + + + +```hcl +audit { + enabled = true + sink "My sink" { + type = "file" + format = "json" + path = "data/audit/audit.json" + delivery_guarantee = "best-effort" + rotate_duration = "24h" + rotate_max_files = 15 + rotate_bytes = 25165824 + } +} +``` + +```json +{ + "audit": { + "enabled": true, + "sink": { + "My sink": { + "type": "file", + "format": "json", + "path": "data/audit/audit.json", + "delivery_guarantee": "best-effort", + "rotate_duration": "24h", + "rotate_max_files": 15, + "rotate_bytes": 25165824 + } + } + } +} +``` + + \ No newline at end of file diff --git a/website/content/docs/reference/agent/configuration-file/node.mdx b/website/content/docs/reference/agent/configuration-file/node.mdx new file mode 100644 index 000000000000..b201ac22cbaf --- /dev/null +++ b/website/content/docs/reference/agent/configuration-file/node.mdx @@ -0,0 +1,61 @@ +--- +layout: docs +page_title: Node parameters for Consul agent configuration files +description: >- + Use agent configuration files to assign attributes to agents and configure multiple agents at once. Learn about agent configuration file parameters and formatting with this reference page and sample code. +--- + +# Node parameters for Consul agent configuration files + +The page provides reference information for node parameters in a Consul agent configuration file. + +## Node Parameters + +- `node_id` ((#\_node_id)) - Specifies a unique identifier for this node across all time, even if the name of the node + or address changes. This must be in the form of a hex string, 36 characters long, + such as `adf4238a-882b-9ddc-4a9d-5b6758e4159e`. If this isn't supplied, which is + the most common case, then the agent will generate an identifier at startup and + persist it in the data directory so that it will remain the same + across agent restarts. Information from the host will be used to generate a deterministic + node ID if possible, unless [`disable-host-node-id`](#_disable_host_node_id) is + set to true. + +- `node_name`((#\_node)) - The name of this node in the cluster. This name must + be unique within the cluster. By default, it assumes the hostname of the machine. + The node name cannot contain whitespace or quotation marks. To query the node from DNS, the name must only contain alphanumeric characters and hyphens (`-`). Equivalent to the [`-node` command-line flag](/consul/commands/agent#_node). + +- `node_meta` This object allows + associating arbitrary metadata key/value pairs with the local node, which can + then be used for filtering results from certain catalog endpoints. Refer to the + [`-node-meta` command-line flag](/consul/commands/agent#_node_meta) for more + information. + +- `disable_host_node_id` ((#\_disable_host_node_id)) - Set this parameter to `true` to prevent Consul from using information from the host to generate a deterministic + node ID. Instead, Consul generates a random node ID that it persists in the data directory. This parameter is useful when running multiple Consul agents on the same + host for testing. The default value is `false`, so you must opt-in for host-based IDs. Host-based IDs are generated using [gopsutil](https://github.com/shirou/gopsutil/), which + is shared with [HashiCorp Nomad](/nomad/docs), so if you opt-in to host-based IDs, then Consul and Nomad will use information on the host to automatically assign the same ID in both systems. Equivalent to the [`-disable-host-node-id` command-line flag](/consul/commands/agent#_disable_host_node_id). 
+ +## Examples + +### Node meta + +This example illustrates `node_meta` configuration. + + + +```hcl +node_meta { + instance_type = "t2.medium" +} +``` + +```json +{ + "node_meta": { + "instance_type": "t2.medium" + } +} +``` + + + diff --git a/website/content/docs/reference/agent/configuration-file/raft.mdx b/website/content/docs/reference/agent/configuration-file/raft.mdx new file mode 100644 index 000000000000..d63ed4c9ff02 --- /dev/null +++ b/website/content/docs/reference/agent/configuration-file/raft.mdx @@ -0,0 +1,144 @@ +--- +layout: docs +page_title: Raft parameters for Consul agent configuration files +description: >- + Use agent configuration files to assign attributes to agents and configure multiple agents at once. Learn about agent configuration file parameters and formatting with this reference page and sample code. +--- + +# Raft parameters for Consul agent configuration files + +The page provides reference information for Raft parameters in a Consul agent configuration file. + +## Raft parameters + +- `raft_logstore` ((#raft_logstore)) This is a nested object that allows + configuring options for Raft's LogStore component which is used to persist + logs and crucial Raft state on disk during writes. This was added in Consul + v1.15.0. + + - `backend` ((#raft_logstore_backend)) Specifies which storage + engine to use to persist logs. Valid options are `boltdb` or `wal`. Default + is `boltdb`. The `wal` option specifies an experimental backend that + should be used with caution. Refer to + [Experimental WAL LogStore backend](/consul/docs/deploy/server/wal) + for more information. + + - `disable_log_cache` ((#raft_logstore_disable_log_cache)) Disables the in-memory cache for recent logs. We recommend using it for performance testing purposes, as no significant improvement has been measured when the cache is disabled. While the in-memory log cache theoretically prevents disk reads for recent logs, recent logs are also stored in the OS page cache, which does not slow either the `boltdb` or `wal` backend's ability to read them. + + - `verification` ((#raft_logstore_verification)) This is a nested object that + allows configuring the online verification of the LogStore. Verification + provides additional assurances that LogStore backends are correctly storing + data. It imposes low overhead on servers and is safe to run in + production. It is most useful when evaluating a new backend + implementation. + + Verification must be enabled on the leader to have any effect and can be + used with any backend. When enabled, the leader periodically writes a + special "checkpoint" log message that includes the checksums of all log entries + written to Raft since the last checkpoint. Followers that have verification + enabled run a background task for each checkpoint that reads all logs + directly from the LogStore and then recomputes the checksum. A report is output + as an INFO level log for each checkpoint. + + Checksum failure should never happen and indicate unrecoverable corruption + on that server. The only correct response is to stop the server, remove its + data directory, and restart so it can be caught back up with a correct + server again. Please report verification failures including details about + your hardware and workload via GitHub issues. Refer to + [Experimental WAL LogStore backend](/consul/docs/deploy/server/wal) + for more information. 
+ + - `enabled` ((#raft_logstore_verification_enabled)) - Set to `true` to + allow this Consul server to write and verify log verification checkpoints + when elected leader. + + - `interval` ((#raft_logstore_verification_interval)) - Specifies the time + interval between checkpoints. There is no default value. You must + configure the `interval` and set [`enabled`](#raft_logstore_verification_enabled) + to `true` to correctly enable intervals. We recommend using an interval + between `30s` and `5m`. The performance overhead is insignificant when the + interval is set to `5m` or less. + + - `boltdb` ((#raft_logstore_boltdb)) - Object that configures options for + Raft's `boltdb` backend. It has no effect if the `backend` is not `boltdb`. + + - `no_freelist_sync` ((#raft_logstore_boltdb_no_freelist_sync)) - Set to + `true` to disable storing BoltDB's freelist to disk within the + `raft.db` file. Disabling freelist syncs reduces the disk IO required + for write operations, but could potentially increase start up time + because Consul must scan the database to find free space + within the file. + + - `wal` ((#raft_logstore_wal)) - Object that configures the `wal` backend. + Refer to [Experimental WAL LogStore backend](/consul/docs/deploy/server/wal) + for more information. + + - `segment_size_mb` ((#raft_logstore_wal_segment_size_mb)) - Integer value + that represents the target size in MB for each segment file before + rolling to a new segment. The default value is `64` and is suitable for + most deployments. While a smaller value may use less disk space because you + can reclaim space by deleting old segments sooner, the smaller segment that results + may affect performance because safely rotating to a new file more + frequently can impact tail latencies. Larger values are unlikely + to improve performance significantly. We recommend using this + configuration for performance testing purposes. + +- `raft_protocol` ((#raft_protocol)) Equivalent to the [`-raft-protocol` + command-line flag](/consul/commands/agent#_raft_protocol). + +- `raft_snapshot_threshold` ((#\_raft_snapshot_threshold)) This controls the + minimum number of raft commit entries between snapshots that are saved to + disk. This is a low-level parameter that should rarely need to be changed. + Very busy clusters experiencing excessive disk IO may increase this value to + reduce disk IO, and minimize the chances of all servers taking snapshots at + the same time. Increasing this trades off disk IO for disk space since the log + will grow much larger and the space in the raft.db file can't be reclaimed + till the next snapshot. Servers may take longer to recover from crashes or + failover if this is increased significantly as more logs will need to be + replayed. In Consul 1.1.0 and later this defaults to 16384, and in prior + versions it was set to 8192. + + Since Consul 1.10.0 this can be reloaded using `consul reload` or sending the + server a `SIGHUP` to allow tuning snapshot activity without a rolling restart + in emergencies. + +- `raft_snapshot_interval` ((#\_raft_snapshot_interval)) This controls how often + servers check if they need to save a snapshot to disk. This is a low-level + parameter that should rarely need to be changed. Very busy clusters + experiencing excessive disk IO may increase this value to reduce disk IO, and + minimize the chances of all servers taking snapshots at the same time. 
+ Increasing this trades off disk IO for disk space since the log will grow much
+ larger and the space in the raft.db file can't be reclaimed till the next
+ snapshot. Servers may take longer to recover from crashes or failover if this
+ is increased significantly as more logs will need to be replayed. In Consul
+ 1.1.0 and later this defaults to `30s`, and in prior versions it was set to
+ `5s`.
+
+ Since Consul 1.10.0 this can be reloaded using `consul reload` or sending the
+ server a `SIGHUP` to allow tuning snapshot activity without a rolling restart
+ in emergencies.
+
+- `raft_trailing_logs` - This controls how many log entries are left in the log
+ store on disk after a snapshot is made. This should only be adjusted when
+ followers cannot catch up to the leader due to a very large snapshot size
+ and high write throughput causing log truncation before a snapshot can be
+ fully installed on a follower. If you need to use this to recover a cluster,
+ consider reducing write throughput or the amount of data stored on Consul as
+ it is likely under a load it is not designed to handle. The default value is
+ 10000 which is suitable for all normal workloads. Added in Consul 1.5.3.
+
+ Since Consul 1.10.0 this can be reloaded using `consul reload` or sending the
+ server a `SIGHUP` to allow recovery without downtime when followers can't keep
+ up.
+
+- `raft_boltdb` ((#raft_boltdb)) **This field was deprecated in Consul v1.15.0.
+ Use [`raft_logstore`](#raft_logstore) instead.** This is a nested
+ object that allows configuring options for Raft's BoltDB-based log store.
+
+ - `NoFreelistSync` **This field was deprecated in Consul v1.15.0. Use the
+ [`raft_logstore.boltdb.no_freelist_sync`](#raft_logstore_boltdb_no_freelist_sync) field
+ instead.** Setting this to `true` disables syncing the BoltDB freelist
+ to disk within the raft.db file. Not syncing the freelist to disk
+ reduces disk IO required for write operations at the expense of potentially
+ increasing start up time due to needing to scan the db to discover where the
+ free space resides within the file.
\ No newline at end of file
diff --git a/website/content/docs/reference/agent/configuration-file/serf.mdx b/website/content/docs/reference/agent/configuration-file/serf.mdx
new file mode 100644
index 000000000000..a5eece8652d3
--- /dev/null
+++ b/website/content/docs/reference/agent/configuration-file/serf.mdx
@@ -0,0 +1,25 @@
+---
+layout: docs
+page_title: Serf parameters for Consul agent configuration files
+description: >-
+ Use agent configuration files to assign attributes to agents and configure multiple agents at once. Learn about agent configuration file parameters and formatting with this reference page and sample code.
+---
+
+# Serf parameters for Consul agent configuration files
+
+This page provides reference information for Serf parameters in a Consul agent configuration file.
+
+## Serf parameters
+
+- `serf_lan` ((#serf_lan_bind)) - The address to bind to for Serf LAN gossip communications. The value is an IP address that should be reachable
+ by all other LAN nodes in the cluster. By default, the value follows the same rules
+ as [`bind`](/consul/docs/reference/agent/configuration-file/general#_bind), and if this is not specified, the value of `bind` is used. You can dynamically define this parameter with a [go-sockaddr] template that is resolved at runtime. Equivalent to the [`-serf-lan-bind` command-line flag](/consul/commands/agent#_serf_lan_bind).
Do not mistake this parameter for [`ports.serf_lan`](/consul/docs/reference/agent/configuration-file/general#serf_lan_port).
+
+- `serf_lan_allowed_cidrs` ((#serf_lan_allowed_cidrs)) - Allowed CIDRs to accept incoming connections for LAN Serf from several networks. This parameter supports multiple values.
+ Specify networks with CIDR notation, such as `192.168.1.0/24`. Equivalent to the [`-serf-lan-allowed-cidrs` command-line flag](/consul/commands/agent#_serf_lan_allowed_cidrs).
+
+- `serf_wan` ((#serf_wan_bind)) - The address to bind to for Serf WAN gossip communications. By default, the value follows the same rules
+ as [`bind`](/consul/docs/reference/agent/configuration-file/general#_bind), and if this is not specified, the value of `bind` is used. You can dynamically define this parameter with a [go-sockaddr] template that is resolved at runtime. Equivalent to the [`-serf-wan-bind` command-line flag](/consul/commands/agent#_serf_wan_bind).
+
+- `serf_wan_allowed_cidrs` ((#serf_wan_allowed_cidrs)) - Allowed CIDRs to accept incoming connections for WAN Serf from several networks. This parameter supports multiple values. Specify networks with CIDR notation, such as `192.168.1.0/24`. Equivalent to the [`-serf-wan-allowed-cidrs` command-line flag](/consul/commands/agent#_serf_wan_allowed_cidrs).
diff --git a/website/content/docs/reference/agent/configuration-file/service-mesh.mdx b/website/content/docs/reference/agent/configuration-file/service-mesh.mdx
new file mode 100644
index 000000000000..f6bcd3f838fd
--- /dev/null
+++ b/website/content/docs/reference/agent/configuration-file/service-mesh.mdx
@@ -0,0 +1,187 @@
+---
+layout: docs
+page_title: Service mesh parameters for Consul agent configuration files
+description: >-
+ Use agent configuration files to assign attributes to agents and configure multiple agents at once. Learn about agent configuration file parameters and formatting with this reference page and sample code.
+---
+
+# Service mesh parameters for Consul agent configuration files
+
+This page provides reference information for service mesh parameters in a Consul agent configuration file.
+
+## Service Mesh Parameters ((#connect-parameters))
+
+The noun _connect_ is used throughout this documentation to refer to the connect
+subsystem that provides Consul's service mesh capabilities.
+
+- `connect` This object allows setting options for the Connect feature.
+
+ The following sub-keys are available:
+
+ - `enabled` ((#connect_enabled)) (Defaults to `true`) Controls whether Connect features are
+ enabled on this agent. Should be enabled on all servers in the cluster
+ in order for service mesh to function properly.
+ Will be set to `true` automatically if `auto_config.enabled` or `auto_encrypt.allow_tls` is `true`.
+
+ - `enable_mesh_gateway_wan_federation` ((#connect_enable_mesh_gateway_wan_federation)) (Defaults to `false`) Controls whether cross-datacenter federation traffic between servers is funneled
+ through mesh gateways. This was added in Consul 1.8.0.
+
+ - `ca_provider` ((#connect_ca_provider)) Controls which CA provider to
+ use for the service mesh's CA. Currently only the `aws-pca`, `consul`, and `vault` providers are supported.
+ This is only used when initially bootstrapping the cluster. For an existing cluster,
+ use the [Update CA Configuration Endpoint](/consul/api-docs/connect/ca#update-ca-configuration).
+
+ - `ca_config` ((#connect_ca_config)) An object which allows setting different
+ config options based on the CA provider chosen.
This is only used when initially + bootstrapping the cluster. For an existing cluster, use the [Update CA Configuration + Endpoint](/consul/api-docs/connect/ca#update-ca-configuration). + + The following providers are supported: + + #### AWS ACM Private CA Provider (`ca_provider = "aws-pca"`) + + - `existing_arn` ((#aws_ca_existing_arn)) The Amazon Resource Name (ARN) of + an existing private CA in your ACM account. If specified, Consul will + attempt to use the existing CA to issue certificates. + + #### Consul CA Provider (`ca_provider = "consul"`) + + - `private_key` ((#consul_ca_private_key)) The PEM contents of the + private key to use for the CA. + + - `root_cert` ((#consul_ca_root_cert)) The PEM contents of the root + certificate to use for the CA. + + #### Vault CA Provider (`ca_provider = "vault"`) + + - `address` ((#vault_ca_address)) The address of the Vault server to + connect to. + + - `token` ((#vault_ca_token)) The Vault token to use. In Consul 1.8.5 and later, if + the token has the [renewable](/vault/api-docs/auth/token#renewable) + flag set, Consul will attempt to renew its lease periodically after half the + duration has expired. + + - `root_pki_path` ((#vault_ca_root_pki)) The path to use for the root + CA pki backend in Vault. This can be an existing backend with a CA already + configured, or a blank/unmounted backend in which case Consul will automatically + mount/generate the CA. The Vault token given above must have `sudo` access + to this backend, as well as permission to mount the backend at this path if + it is not already mounted. + + - `intermediate_pki_path` ((#vault_ca_intermediate_pki)) + The path to use for the temporary intermediate CA pki backend in Vault. **Consul + will overwrite any data at this path in order to generate a temporary intermediate + CA**. The Vault token given above must have `write` access to this backend, + as well as permission to mount the backend at this path if it is not already + mounted. + + - `auth_method` ((#vault_ca_auth_method)) + Vault auth method to use for logging in to Vault. + Please see [Vault Auth Methods](/vault/docs/auth) for more information + on how to configure individual auth methods. If auth method is provided, Consul will obtain a + new token from Vault when the token can no longer be renewed. + + - `type` The type of Vault auth method. + + - `mount_path` The mount path of the auth method. + If not provided the auth method type will be used as the mount path. + + - `params` The parameters to configure the auth method. + Please see [Vault Auth Methods](/vault/docs/auth) for information on how + to configure the auth method you wish to use. If using the Kubernetes auth method, Consul will + read the service account token from the default mount path `/var/run/secrets/kubernetes.io/serviceaccount/token` + if the `jwt` parameter is not provided. + + #### Common CA Config Options + + There are also a number of common configuration options supported by all providers: + + - `csr_max_concurrent` ((#ca_csr_max_concurrent)) Sets a limit on the number + of Certificate Signing Requests that can be processed concurrently. Defaults + to 0 (disabled). This is useful when you want to limit the number of CPU cores + available to the server for certificate signing operations. For example, on an + 8 core server, setting this to 1 will ensure that no more than one CPU core + will be consumed when generating or rotating certificates. 
Setting this is + recommended **instead** of `csr_max_per_second` when you want to limit the + number of cores consumed since it is simpler to reason about limiting CSR + resources this way without artificially slowing down rotations. Added in 1.4.1. + + - `csr_max_per_second` ((#ca_csr_max_per_second)) Sets a rate limit + on the maximum number of Certificate Signing Requests (CSRs) the servers will + accept. This is used to prevent CA rotation from causing unbounded CPU usage + on servers. It defaults to 50 which is conservative – a 2017 Macbook can process + about 100 per second using only ~40% of one CPU core – but sufficient for deployments + up to ~1500 service instances before the time it takes to rotate is impacted. + For larger deployments we recommend increasing this based on the expected number + of server instances and server resources, or use `csr_max_concurrent` instead + if servers have more than one CPU core. Setting this to zero disables rate limiting. + Added in 1.4.1. + + - `leaf_cert_ttl` ((#ca_leaf_cert_ttl)) Specifies the upper bound on the expiry + of a leaf certificate issued for a service. In most cases a new leaf + certificate will be requested by a proxy before this limit is reached. This + is also the effective limit on how long a server outage can last (with no leader) + before network connections will start being rejected. Defaults to `72h`. + + You can specify a range from one hour (minimum) up to one year (maximum) using + the following units: `h`, `m`, `s`, `ms`, `us` (or `µs`), `ns`, or a combination + of those units, e.g. `1h5m`. + + This value is also used when rotating out old root certificates from + the cluster. When a root certificate has been inactive (rotated out) + for more than twice the _current_ `leaf_cert_ttl`, it will be removed + from the trusted list. + + - `intermediate_cert_ttl` ((#ca_intermediate_cert_ttl)) Specifies the expiry for the + intermediate certificates. Defaults to `8760h` (1 year). Must be at least 3 times `leaf_cert_ttl`. + + - `root_cert_ttl` ((#ca_root_cert_ttl)) Specifies the expiry for a root certificate. + Defaults to 10 years as `87600h`. This value, if provided, needs to be higher than the + intermediate certificate TTL. + + This setting applies to all Consul CA providers. + + For the Vault provider, this value is only used if the backend is not initialized at first. + + This value is also applied on the `ca set-config` command. + + - `private_key_type` ((#ca_private_key_type)) The type of key to generate + for this CA. This is only used when the provider is generating a new key. If + `private_key` is set for the Consul provider, or existing root or intermediate + PKI paths given for Vault then this will be ignored. Currently supported options + are `ec` or `rsa`. Default is `ec`. + + It is required that all servers in a datacenter have + the same config for the CA. It is recommended that servers in + different datacenters use the same key type and size, + although the built-in CA and Vault provider will both allow mixed CA + key types. + + Some CA providers (currently Vault) will not allow cross-signing a + new CA certificate with a different key type. This means that if you + migrate from an RSA-keyed Vault CA to an EC-keyed CA from any + provider, you may have to proceed without cross-signing which risks + temporary connection issues for workloads during the new certificate + rollout. 
We highly recommend testing this outside of production to + understand the impact and suggest sticking to the same key type where + possible. + + Note that this only affects _CA_ keys generated by the provider. + Leaf certificate keys are always EC 256 regardless of the CA + configuration. + + - `private_key_bits` ((#ca_private_key_bits)) The length of key to + generate for this CA. This is only used when the provider is generating a new + key. If `private_key` is set for the Consul provider, or existing root or intermediate + PKI paths given for Vault then this will be ignored. + + Currently supported values are: + + - `private_key_type = ec` (default): `224, 256, 384, 521` + corresponding to the NIST P-\* curves of the same name. + - `private_key_type = rsa`: `2048, 4096` + +- `locality`: Specifies a map of configurations that set the region and zone of the Consul agent. When specified on server agents, `locality` applies to all partitions on the server. When specified on clients, `locality` applies to all services registered to the client. Configure this field to enable Consul to route traffic to the nearest physical service instance. This field is intended for use primarily with VM and Nomad workloads. Refer to [Route traffic to local upstreams](/consul/docs/manage-traffic/route-local) for additional information. + - `region`: String value that specifies the region where the Consul agent is running. Consul assigns this value to services registered to that agent. When service proxy regions match, Consul is able to prioritize routes between service instances in the same region over instances in other regions. You must specify values that are consistent with how regions are defined in your network, for example `us-west-1` for networks in AWS. + - `zone`: String value that specifies the availability zone where the Consul agent is running. Consul assigns this value to services registered to that agent. When service proxy regions match, Consul is able to prioritize routes between service instances in the same region and zone over instances in other regions and zones. When healthy service instances are available in multiple zones within the most-local region, Consul prioritizes instances that also match the downstream proxy's `zone`. You must specify values that are consistent with how zones are defined in your network, for example `us-west-1a` for networks in AWS. diff --git a/website/content/docs/reference/agent/configuration-file/telemetry.mdx b/website/content/docs/reference/agent/configuration-file/telemetry.mdx new file mode 100644 index 000000000000..4099630c60d9 --- /dev/null +++ b/website/content/docs/reference/agent/configuration-file/telemetry.mdx @@ -0,0 +1,197 @@ +--- +layout: docs +page_title: Telemetry parameters for Consul agent configuration files +description: >- + Use agent configuration files to assign attributes to agents and configure multiple agents at once. Learn about agent configuration file parameters and formatting with this reference page and sample code. +--- + +# Telemetry parameters for Consul agent configuration files + +This page provides reference information for telemetry parameters in a Consul agent configuration file. + +## Telemetry parameters + +- `telemetry` This is a nested object that configures where + Consul sends its runtime telemetry, and contains the following keys: + + - `circonus_api_token` ((#telemetry-circonus_api_token)) A valid API + Token used to create/manage check. If provided, metric management is + enabled. + + - `circonus_api_app` ((#telemetry-circonus_api_app)) A valid app name + associated with the API token. By default, this is set to "consul". + + - `circonus_api_url` ((#telemetry-circonus_api_url)) + The base URL to use for contacting the Circonus API. By default, this is set + to "https://api.circonus.com/v2". + + - `circonus_submission_interval` ((#telemetry-circonus_submission_interval)) The interval at which metrics are submitted to Circonus. By default, this is set to "10s" (ten seconds). + + - `circonus_submission_url` ((#telemetry-circonus_submission_url)) + The `check.config.submission_url` field, of a Check API object, from a previously + created HTTPTrap check. + + - `circonus_check_id` ((#telemetry-circonus_check_id)) + The Check ID (not **check bundle**) from a previously created HTTPTrap check. + The numeric portion of the `check._cid` field in the Check API object. + + - `circonus_check_force_metric_activation` ((#telemetry-circonus_check_force_metric_activation)) Force activation of metrics which already exist and are not currently active. + If check management is enabled, the default behavior is to add new metrics as + they are encountered. If the metric already exists in the check, it will **not** + be activated. This setting overrides that behavior. By default, this is set to + false. + + - `circonus_check_instance_id` ((#telemetry-circonus_check_instance_id)) Uniquely identifies the metrics coming from this **instance**. It can be used to + maintain metric continuity with transient or ephemeral instances as they move + around within an infrastructure. By default, this is set to hostname:application + name (e.g. "host123:consul"). + + - `circonus_check_search_tag` ((#telemetry-circonus_check_search_tag)) A special tag which, when coupled with the instance id, helps to narrow down + the search results when neither a Submission URL nor a Check ID is provided. By + default, this is set to service:application name (e.g. "service:consul"). + + - `circonus_check_display_name` ((#telemetry-circonus_check_display_name)) Specifies a name to give a check when it is created. This name is displayed in + the Circonus UI Checks list. Available in Consul 0.7.2 and later. + + - `circonus_check_tags` ((#telemetry-circonus_check_tags)) + Comma-separated list of additional tags to add to a check when it is created. + Available in Consul 0.7.2 and later. + + - `circonus_broker_id` ((#telemetry-circonus_broker_id)) + The ID of a specific Circonus Broker to use when creating a new check. The numeric + portion of `broker._cid` field in a Broker API object. If metric management is + enabled and neither a Submission URL nor Check ID is provided, an attempt will + be made to search for an existing check using Instance ID and Search Tag. If + one is not found, a new HTTPTrap check will be created. By default, this is not + used and a random Enterprise Broker is selected, or the default Circonus Public + Broker. + + - `circonus_broker_select_tag` ((#telemetry-circonus_broker_select_tag)) A special tag which will be used to select a Circonus Broker when a Broker ID + is not provided. The best use of this is as a hint for which broker should + be used based on **where** this particular instance is running (e.g. a specific + geo location or datacenter, dc:sfo). By default, this is left blank and not used. + + - `disable_hostname` ((#telemetry-disable_hostname)) + Set to `true` to stop prepending the machine's hostname to gauge-type metrics. Default is `false`.
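+
+    For example, the following minimal agent configuration snippet is an illustrative sketch that stops the hostname prefix from being added to gauge metrics:
+
+    ```hcl
+    telemetry {
+      # Do not prepend the machine's hostname to gauge-type metric names.
+      disable_hostname = true
+    }
+    ```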
+ + - `disable_per_tenancy_usage_metrics` ((#telemetry-disable_per_tenancy_usage_metrics)) + Set to `true` to exclude tenancy labels from usage metrics. This significantly decreases CPU utilization in clusters with many admin partitions or namespaces. + + - `dogstatsd_addr` ((#telemetry-dogstatsd_addr)) This provides the address + of a DogStatsD instance in the format `host:port`. DogStatsD is a protocol-compatible + flavor of statsd, with the added ability to decorate metrics with tags and event + information. If provided, Consul will send various telemetry information to that + instance for aggregation. This can be used to capture runtime information. + + - `dogstatsd_tags` ((#telemetry-dogstatsd_tags)) This provides a list + of global tags that will be added to all telemetry packets sent to DogStatsD. + It is a list of strings, where each string looks like "my_tag_name:my_tag_value". + + - `enable_host_metrics` ((#telemetry-enable_host_metrics)) + This enables reporting of host metrics about system resources, defaults to false. + + - `filter_default` ((#telemetry-filter_default)) + This controls whether to allow metrics that have not been specified by the filter. + Defaults to `true`, which will allow all metrics when no filters are provided. + When set to `false` with no filters, no metrics will be sent. + + - `metrics_prefix` ((#telemetry-metrics_prefix)) + The prefix used while writing all telemetry data. By default, this is set to + "consul". This was added in Consul 1.0. For previous versions of Consul, use + the config option `statsite_prefix` in this same structure. This was renamed + in Consul 1.0 since this prefix applied to all telemetry providers, not just + statsite. + + - `prefix_filter` ((#telemetry-prefix_filter)) + This is a list of filter rules to apply for allowing or blocking metrics by + prefix in the following formats: + + - `"+"` + - `"-"` + + A leading "**+**" enables any metrics with the given prefix, and a leading + "**-**" blocks them. If there is overlap between two rules, the more + specific rule takes precedence. Blocking takes priority if the same prefix + is listed multiple times. + + Refer to the [prefix filter example](#prefix-filter) for more information. + + - `prometheus_retention_time` ((#telemetry-prometheus_retention_time)) If the + value is greater than `0s` (the default), this enables + [Prometheus](https://prometheus.io/) export of metrics. You may express the + duration with the duration semantics. This aggregates all counters for + the duration specified, which might have an impact on Consul's memory usage. + + A good value for `prometheus_retention_time` is at least 2 times the + interval of scrape of Prometheus, but you might also put a very high + retention time such as a few days (for instance 744h to enable retention to + 31 days). + + Fetch the metrics for Prometheus with the + [`/v1/agent/metrics?format=prometheus`](/consul/api-docs/agent#view-metrics) + endpoint. The format is compatible natively with Prometheus. When running in + this mode, we recommend that you also enable the option + [`disable_hostname`](#telemetry-disable_hostname) to avoid having prefixed + metrics with hostname. + + Consul does not use the default Prometheus path, so you must configure + Prometheus to scrape metrics as in the following example. + + ```yaml + metrics_path: '/v1/agent/metrics' + params: + format: ['prometheus'] + ``` + + Note that using `?format=prometheus` in the metrics path does not work. 
+ Since `?` is escaped, you must specify the format as a parameter. Refer to + the [Prometheus configuration + documentation](https://prometheus.io/docs/prometheus/latest/configuration/configuration/) + for details. + + - `statsd_address` ((#telemetry-statsd_address)) This provides the address + of a statsd instance in the format `host:port`. If provided, Consul will send + various telemetry information to that instance for aggregation. This can be used + to capture runtime information. This sends UDP packets only and can be used with + statsd or statsite. + + - `statsite_address` ((#telemetry-statsite_address)) This provides the + address of a statsite instance in the format `host:port`. If provided, Consul + will stream various telemetry information to that instance for aggregation. This + can be used to capture runtime information. This streams via TCP and can only + be used with statsite. + +## Examples + +The following examples demonstrate common agent telemetry configuration patterns. + +### Prefix filter + +This example allows `consul.raft.apply` and `consul.http.GET` but blocks `consul.http`. + + + +```hcl +telemetry { + prefix_filter = ["+consul.raft.apply", "-consul.http", "+consul.http.GET"] +} +``` + +```json +{ + "telemetry": { + "prefix_filter": [ + "+consul.raft.apply", + "-consul.http", + "+consul.http.GET" + ] + } +} +``` + + + + + + + diff --git a/website/content/docs/reference/agent/configuration-file/tls.mdx b/website/content/docs/reference/agent/configuration-file/tls.mdx new file mode 100644 index 000000000000..1884d8c0abbc --- /dev/null +++ b/website/content/docs/reference/agent/configuration-file/tls.mdx @@ -0,0 +1,303 @@ +--- +layout: docs +page_title: TLS configuration parameters for Consul agent configuration files +description: >- + Use agent configuration files to assign attributes to agents and configure multiple agents at once. Learn about agent configuration file parameters and formatting with this reference page and sample code. +--- + +# TLS configuration parameters for Consul agent configuration files + +This page provides reference information for TLS configuration parameters in a Consul agent configuration file. + +## TLS configuration parameters + +This section documents all of the configuration settings that apply to Agent TLS. Agent +TLS is used by the HTTP API, internal RPC, and gRPC/xDS interfaces. Some of these settings +may also be applied automatically by auto_config or auto_encrypt. + +~> **Security Note:** The Certificate Authority (CA) configured on the internal RPC interface +(either explicitly by `tls.internal_rpc` or implicitly by `tls.defaults`) should be a private +CA, not a public one. We recommend using a dedicated CA which should not be used with any other +systems. Any certificate signed by the CA will be allowed to communicate with the cluster and a +specially crafted certificate signed by the CA can be used to gain full access to Consul. + +- `tls` Added in Consul 1.12, for previous versions see + [Deprecated Options](#tls_deprecated_options). + + - `defaults` ((#tls_defaults)) Provides default settings that will be applied + to every interface unless explicitly overridden by `tls.grpc`, `tls.https`, + or `tls.internal_rpc`. + + - `ca_file` ((#tls_defaults_ca_file)) This provides a file path to a + PEM-encoded certificate authority. 
The certificate authority is used to + check the authenticity of client and server connections with the + appropriate [`verify_incoming`](#tls_defaults_verify_incoming) or + [`verify_outgoing`](#tls_defaults_verify_outgoing) flags. + + - `ca_path` ((#tls_defaults_ca_path)) This provides a path to a directory + of PEM-encoded certificate authority files. These certificate authorities + are used to check the authenticity of client and server connections with + the appropriate [`verify_incoming`](#tls_defaults_verify_incoming) or + [`verify_outgoing`](#tls_defaults_verify_outgoing) flags. + + - `cert_file` ((#tls_defaults_cert_file)) This provides a file path to a + PEM-encoded certificate. The certificate is provided to clients or servers + to verify the agent's authenticity. It must be provided along with + [`key_file`](#tls_defaults_key_file). + + - `key_file` ((#tls_defaults_key_file)) This provides the file path to a + PEM-encoded private key. The key is used with the certificate to verify + the agent's authenticity. This must be provided along with + [`cert_file`](#tls_defaults_cert_file). + + - `tls_min_version` ((#tls_defaults_tls_min_version)) This specifies the + minimum supported version of TLS. The following values are accepted: + * `TLSv1_0` + * `TLSv1_1` + * `TLSv1_2` (default) + * `TLSv1_3` + + **WARNING: TLS 1.1 and lower are generally considered less secure and + should not be used if possible.** + + The following values are also valid, but only when using the + [deprecated top-level `tls_min_version` config](#tls_deprecated_options), + and will be removed in a future release: + + * `tls10` + * `tls11` + * `tls12` + * `tls13` + + A warning message will appear if a deprecated value is specified. + + - `verify_server_hostname` ((#tls_internal_rpc_verify_server_hostname)) When + set to true, Consul verifies the TLS certificate presented by the servers + matches the hostname `server.<datacenter>.<domain>`. By default this is false, + and Consul does not verify the hostname of the certificate, only that it + is signed by a trusted CA. + + - `tls_cipher_suites` ((#tls_defaults_tls_cipher_suites)) This specifies + the list of supported ciphersuites as a comma-separated list. Applicable + to TLS 1.2 and below only. The list of all ciphersuites supported by Consul is + available in [the TLS configuration source code](https://github.com/hashicorp/consul/search?q=%22var+goTLSCipherSuites%22). + + ~> **Note:** The ordering of cipher suites will not be guaranteed from + Consul 1.11 onwards. See this [post](https://go.dev/blog/tls-cipher-suites) + for details. + + - `verify_incoming` - ((#tls_defaults_verify_incoming)) If set to true, + Consul requires that all incoming connections make use of TLS and that + the client provides a certificate signed by a Certificate Authority from + the [`ca_file`](#tls_defaults_ca_file) or [`ca_path`](#tls_defaults_ca_path). + By default, this is false, and Consul will not enforce the use of TLS or + verify a client's authenticity. + + - `verify_outgoing` - ((#tls_defaults_verify_outgoing)) If set to true, + Consul requires that all outgoing connections from this agent make use + of TLS and that the server provides a certificate that is signed by a + Certificate Authority from the [`ca_file`](#tls_defaults_ca_file) or + [`ca_path`](#tls_defaults_ca_path). By default, this is false, and Consul + will not make use of TLS for outgoing connections. This applies to clients + and servers as both will make outgoing connections.
This setting does not + apply to the gRPC interface as Consul makes no outgoing connections on this + interface. When set to true for the HTTPS interface, this parameter applies to [watches](/consul/docs/automate/watch), which operate by making HTTPS requests to the local agent. + + - `grpc` ((#tls_grpc)) Provides settings for the gRPC/xDS interface. To enable + the gRPC interface you must define a port via [`ports.grpc_tls`](/consul/docs/reference/agent/configuration-file/general#grpc_tls_port). + + - `ca_file` ((#tls_grpc_ca_file)) Overrides [`tls.defaults.ca_file`](#tls_defaults_ca_file). + + - `ca_path` ((#tls_grpc_ca_path)) Overrides [`tls.defaults.ca_path`](#tls_defaults_ca_path). + + - `cert_file` ((#tls_grpc_cert_file)) Overrides [`tls.defaults.cert_file`](#tls_defaults_cert_file). + + - `key_file` ((#tls_grpc_key_file)) Overrides [`tls.defaults.key_file`](#tls_defaults_key_file). + + - `tls_min_version` ((#tls_grpc_tls_min_version)) Overrides [`tls.defaults.tls_min_version`](#tls_defaults_tls_min_version). + + - `tls_cipher_suites` ((#tls_grpc_tls_cipher_suites)) Overrides [`tls.defaults.tls_cipher_suites`](#tls_defaults_tls_cipher_suites). + + - `verify_incoming` - ((#tls_grpc_verify_incoming)) Overrides [`tls.defaults.verify_incoming`](#tls_defaults_verify_incoming). + + - `use_auto_cert` - (Defaults to `false`) Enables or disables TLS on gRPC servers. Set to `true` to allow [`auto_encrypt`](/consul/docs/reference/agent/configuration-file/encryption#auto_encrypt) TLS settings to apply to gRPC listeners. We recommend disabling TLS on gRPC servers if you are using `auto_encrypt` for other TLS purposes, such as enabling HTTPS. + + - `https` ((#tls_https)) Provides settings for the HTTPS interface. To enable + the HTTPS interface you must define a port via [`ports.https`](/consul/docs/reference/agent/configuration-file/general#https_port). + + - `ca_file` ((#tls_https_ca_file)) Overrides [`tls.defaults.ca_file`](#tls_defaults_ca_file). + + - `ca_path` ((#tls_https_ca_path)) Overrides [`tls.defaults.ca_path`](#tls_defaults_ca_path). + + - `cert_file` ((#tls_https_cert_file)) Overrides [`tls.defaults.cert_file`](#tls_defaults_cert_file). + + - `key_file` ((#tls_https_key_file)) Overrides [`tls.defaults.key_file`](#tls_defaults_key_file). + + - `tls_min_version` ((#tls_https_tls_min_version)) Overrides [`tls.defaults.tls_min_version`](#tls_defaults_tls_min_version). + + - `tls_cipher_suites` ((#tls_https_tls_cipher_suites)) Overrides [`tls.defaults.tls_cipher_suites`](#tls_defaults_tls_cipher_suites). + + - `verify_incoming` - ((#tls_https_verify_incoming)) Overrides [`tls.defaults.verify_incoming`](#tls_defaults_verify_incoming). + + - `verify_outgoing` - ((#tls_https_verify_outgoing)) Overrides [`tls.defaults.verify_outgoing`](#tls_defaults_verify_outgoing). + + - `internal_rpc` ((#tls_internal_rpc)) Provides settings for the internal + "server" RPC interface configured by [`ports.server`](/consul/docs/reference/agent/configuration-file/general#server_rpc_port). + + - `ca_file` ((#tls_internal_rpc_ca_file)) Overrides [`tls.defaults.ca_file`](#tls_defaults_ca_file). + + - `ca_path` ((#tls_internal_rpc_ca_path)) Overrides [`tls.defaults.ca_path`](#tls_defaults_ca_path). + + - `cert_file` ((#tls_internal_rpc_cert_file)) Overrides [`tls.defaults.cert_file`](#tls_defaults_cert_file). + + - `key_file` ((#tls_internal_rpc_key_file)) Overrides [`tls.defaults.key_file`](#tls_defaults_key_file). 
+ + - `tls_min_version` ((#tls_internal_rpc_tls_min_version)) Overrides [`tls.defaults.tls_min_version`](#tls_defaults_tls_min_version). + + - `tls_cipher_suites` ((#tls_internal_rpc_tls_cipher_suites)) Overrides [`tls.defaults.tls_cipher_suites`](#tls_defaults_tls_cipher_suites). + + - `verify_incoming` - ((#tls_internal_rpc_verify_incoming)) Overrides [`tls.defaults.verify_incoming`](#tls_defaults_verify_incoming). + + ~> **Security Note:** `verify_incoming` *must* be set to true to prevent + anyone with access to the internal RPC port from gaining full access to + the Consul cluster. + + - `verify_outgoing` ((#tls_internal_rpc_verify_outgoing)) Overrides [`tls.defaults.verify_outgoing`](#tls_defaults_verify_outgoing). + + ~> **Security Note:** Servers that specify `verify_outgoing = true` will + always talk to other servers over TLS, but they still _accept_ non-TLS + connections to allow for a transition of all clients to TLS. Currently the + only way to enforce that no client can communicate with a server unencrypted + is to also enable `verify_incoming` which requires client certificates too. + + - `verify_server_hostname` Overrides [tls.defaults.verify_server_hostname](#tls_internal_rpc_verify_server_hostname). When + set to true, Consul verifies the TLS certificate presented by the servers + match the hostname `server..`. By default this is false, + and Consul does not verify the hostname of the certificate, only that it + is signed by a trusted CA. + + ~> **Security Note:** `verify_server_hostname` *must* be set to true to prevent a + compromised client from gaining full read and write access to all cluster + data *including all ACL tokens and service mesh CA root keys*. + +- `server_name` When provided, this overrides the [`node_name`](/consul/docs/reference/agent/configuration-file/node#node_name) + for the TLS certificate. It can be used to ensure that the certificate name matches + the hostname we declare. + +### Deprecated Options ((#tls_deprecated_options)) + +The following options were deprecated in Consul 1.12, please use the +`tls` stanza instead. + +- `ca_file` See: [`tls.defaults.ca_file`](#tls_defaults_ca_file). + +- `ca_path` See: [`tls.defaults.ca_path`](#tls_defaults_ca_path). + +- `cert_file` See: [`tls.defaults.cert_file`](#tls_defaults_cert_file). + +- `key_file` See: [`tls.defaults.key_file`](#tls_defaults_key_file). + +- `tls_min_version` Added in Consul 0.7.4. + See: [`tls.defaults.tls_min_version`](#tls_defaults_tls_min_version). + +- `tls_cipher_suites` Added in Consul 0.8.2. + See: [`tls.defaults.tls_cipher_suites`](#tls_defaults_tls_cipher_suites). + +- `tls_prefer_server_cipher_suites` Added in Consul 0.8.2. This setting will + be ignored (see [this post](https://go.dev/blog/tls-cipher-suites) for details). + +- `verify_incoming` See: [`tls.defaults.verify_incoming`](#tls_defaults_verify_incoming). + +- `verify_incoming_rpc` See: [`tls.internal_rpc.verify_incoming`](#tls_internal_rpc_verify_incoming). + +- `verify_incoming_https` See: [`tls.https.verify_incoming`](#tls_https_verify_incoming). + +- `verify_outgoing` See: [`tls.defaults.verify_outgoing`](#tls_defaults_verify_outgoing). + +- `verify_server_hostname` See: [`tls.internal_rpc.verify_server_hostname`](#tls_internal_rpc_verify_server_hostname). + +## Examples + +The following examples demonstrate common agent TLS configuration patterns. 
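+### Relaxed HTTPS verification with secure internal RPC
+
+The following sketch shows one way to combine `tls.defaults` with an interface-specific
+override: incoming client certificates are required on every interface except the HTTPS
+API, which browsers and API clients often reach without a client certificate. The file
+paths are placeholders, and you should adapt the exact policy to your environment.
+
+```hcl
+tls {
+  defaults {
+    ca_file         = "/etc/pki/tls/certs/ca-bundle.crt"
+    cert_file       = "/etc/pki/tls/certs/my.crt"
+    key_file        = "/etc/pki/tls/private/my.key"
+    verify_incoming = true
+    verify_outgoing = true
+  }
+
+  https {
+    # Allow UI and API clients to connect over TLS without presenting a client certificate.
+    verify_incoming = false
+  }
+
+  internal_rpc {
+    # Require server certificates that match server.<datacenter>.<domain>.
+    verify_server_hostname = true
+  }
+}
+```
+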
+ +### Secure mTLS configuration + +All three verify options should be set as `true` to enable +secure mTLS communication, enabling both encryption and authentication. Failing +to set [`verify_incoming`](#tls_defaults_verify_incoming) or +[`verify_outgoing`](#tls_defaults_verify_outgoing) either in the +interface-specific stanza (e.g. `tls.internal_rpc`, `tls.https`) or in +`tls.defaults` results in TLS not being enabled at all, even when specifying +a [`ca_file`](#tls_defaults_ca_file), [`cert_file`](#tls_defaults_cert_file), +and [`key_file`](#tls_defaults_key_file). + +Review, especially, the use of the `ports` setting in the highlighted code +lines. + + + + + +```hcl +datacenter = "east-aws" +data_dir = "/opt/consul" +log_level = "INFO" +node_name = "foobar" +server = true + +addresses = { + https = "0.0.0.0" +} +ports { + https = 8501 +} + +tls { + defaults { + key_file = "/etc/pki/tls/private/my.key" + cert_file = "/etc/pki/tls/certs/my.crt" + ca_file = "/etc/pki/tls/certs/ca-bundle.crt" + verify_incoming = true + verify_outgoing = true + verify_server_hostname = true + } +} +``` + + + + + +```json +{ + "datacenter": "east-aws", + "data_dir": "/opt/consul", + "log_level": "INFO", + "node_name": "foobar", + "server": true, + "addresses": { + "https": "0.0.0.0" + }, + "ports": { + "https": 8501 + }, + "tls": { + "defaults": { + "key_file": "/etc/pki/tls/private/my.key", + "cert_file": "/etc/pki/tls/certs/my.crt", + "ca_file": "/etc/pki/tls/certs/ca-bundle.crt", + "verify_incoming": true, + "verify_outgoing": true, + "verify_server_hostname": true + } + } +} +``` + + + + + +Consul does not enable TLS for the HTTP or gRPC API unless the `https` port has +been assigned a port number `> 0`. We recommend using `8501` for `https` as this +default automatically works with some tooling. diff --git a/website/content/docs/reference/agent/configuration-file/ui.mdx b/website/content/docs/reference/agent/configuration-file/ui.mdx new file mode 100644 index 000000000000..c2e6ab966f65 --- /dev/null +++ b/website/content/docs/reference/agent/configuration-file/ui.mdx @@ -0,0 +1,133 @@ +--- +layout: docs +page_title: UI parameters for Consul agent configuration files +description: >- + Use agent configuration files to assign attributes to agents and configure multiple agents at once. Learn about agent configuration file parameters and formatting with this reference page and sample code. +--- + +# UI parameters for Consul agent configuration files + +This page provides reference information for UI parameters in a Consul agent configuration file. + +## UI parameters + +- `ui_config` - This object allows a number of sub-keys to be set which controls + the display or features available in the UI. + + The following sub-keys are available: + + - `enabled` ((#ui_config_enabled)) - This enables the service of the web UI + from this agent. Boolean value, defaults to false. In `-dev` mode this + defaults to true. Replaces `ui` from before 1.9.0. Equivalent to the + [`-ui`](/consul/commands/agent#_ui) command-line flag. + + - `dir` ((#ui_config_dir)) - This specifies that the web UI should be served + from an external dir rather than the build in one. This allows for + customization or development. Replaces `ui_dir` from before 1.9.0. + Equivalent to the [`-ui-dir`](/consul/commands/agent#_ui_dir) command-line flag. + + - `content_path` ((#ui_config_content_path)) - This specifies the HTTP path + that the web UI should be served from. Defaults to `/ui/`. 
Equivalent to the + [`-ui-content-path`](/consul/commands/agent#_ui_content_path) flag. + + - `metrics_provider` ((#ui_config_metrics_provider)) - Specifies a named + metrics provider implementation the UI should use to fetch service metrics. + By default metrics are disabled. Consul 1.9.0 includes a built-in provider + named `prometheus` that can be enabled explicitly here. It also requires the + `metrics_proxy` to be configured below and direct queries to a Prometheus + instance that has Envoy metrics for all services in the datacenter. + + - `metrics_provider_files` ((#ui_config_metrics_provider_files)) - An optional array + of absolute paths to javascript files on the Agent's disk which will be + served as part of the UI. These files should contain metrics provider + implementations and registration enabling UI metric queries to be customized + or implemented for an alternative time-series backend. + + ~> **Security Note:** These javascript files are included in the UI with no + further validation or sand-boxing. By configuring them here the operator is + fully trusting anyone able to write to them as well as the original authors + not to include malicious code in the UI being served. + + - `metrics_provider_options_json` ((#ui_config_metrics_provider_options_json)) - + This is an optional raw JSON object as a string which is passed to the + provider implementation's `init` method at startup to allow arbitrary + configuration to be passed through. + + - `metrics_proxy` ((#ui_config_metrics_proxy)) - This object configures an + internal agent API endpoint that will proxy GET requests to a metrics + backend to allow querying metrics data in the UI. This simplifies deployment + where the metrics backend is not exposed externally to UI users' browsers. + It may also be used to augment requests with API credentials to allow + serving graphs to UI users without them needing individual access tokens for + the metrics backend. + + ~> **Security Note:** Exposing your metrics backend via Consul in this way + should be carefully considered in production. As Consul doesn't understand + the requests, it can't limit access to only specific resources. For example + **this might make it possible for a malicious user on the network to query + for arbitrary metrics about any server or workload in your infrastructure, + or overload the metrics infrastructure with queries**. See [Metrics Proxy + Security](/consul/docs/connect/observability/ui-visualization#metrics-proxy-security) + for more details. + + The following sub-keys are available: + + - `base_url` ((#ui_config_metrics_provider_base_url)) - This is required to + enable the proxy. It should be set to the base URL that the Consul agent + should proxy requests for metrics to. For example a value of + `http://prometheus-server` would target a Prometheus instance with local + DNS name "prometheus-server" on port 80. This may include a path prefix + which will then not be necessary in provider requests to the backend and + the proxy will prevent any access to paths without that prefix on the + backend. + + - `path_allowlist` ((#ui_config_metrics_provider_path_allowlist)) - This + specifies the paths that may be proxied to when appended to the + `base_url`. It defaults to `["/api/v1/query_range", "/api/v1/query"]` + which are the endpoints required for the built-in Prometheus provider. If + a [custom + provider](/consul/docs/connect/observability/ui-visualization#custom-metrics-providers) + is used that requires the metrics proxy, the correct allowlist must be + specified to enable proxying to necessary endpoints. See [Path + Allowlist](/consul/docs/connect/observability/ui-visualization#path-allowlist) + for more information. + + - `add_headers` ((#ui_config_metrics_proxy_add_headers)) - This is an + optional list of headers to add to requests that are proxied to the + metrics backend. It may be used to inject Authorization tokens within the + agent without exposing those to UI users. + + Each item in the list is an object with the following keys: + + - `name` ((#ui_config_metrics_proxy_add_headers_name)) - Specifies the + HTTP header name to inject into proxied requests. + + - `value` ((#ui_config_metrics_proxy_add_headers_value)) - Specifies the + value to inject into proxied requests. + + - `dashboard_url_templates` ((#ui_config_dashboard_url_templates)) - This map + specifies URL templates that may be used to render links to external + dashboards in various contexts in the UI. It is a map with the name of the + template as a key. The value is a string URL with optional placeholders. + + Each template may contain placeholders which will be substituted for the + correct values in content when rendered in the UI. The placeholders + available are listed for each template. + + For more information and examples see [UI + Visualization](/consul/docs/connect/observability/ui-visualization#configuring-dashboard-urls). + + The following named templates are defined: + + - `service` ((#ui_config_dashboard_url_templates_service)) - This is the URL + to use when linking to the dashboard for a specific service. It is shown + as part of the [Topology + Visualization](/consul/docs/observe/telemetry/vm). + + The placeholders available are: + + - `{{Service.Name}}` - Replaced with the current service's name. + - `{{Service.Namespace}}` - Replaced with the current service's namespace or empty if namespaces are not enabled. + - `{{Service.Partition}}` - Replaced with the current service's admin + partition or empty if admin partitions are not enabled. + - `{{Datacenter}}` - Replaced with the current service's datacenter. \ No newline at end of file diff --git a/website/content/docs/reference/agent/configuration-file/xds.mdx b/website/content/docs/reference/agent/configuration-file/xds.mdx new file mode 100644 index 000000000000..b83cb8d3f112 --- /dev/null +++ b/website/content/docs/reference/agent/configuration-file/xds.mdx @@ -0,0 +1,22 @@ +--- +layout: docs +page_title: xDS parameters for Consul agent configuration files +description: >- + Use agent configuration files to assign attributes to agents and configure multiple agents at once. Learn about agent configuration file parameters and formatting with this reference page and sample code. +--- + +# xDS configuration parameters for Consul agent configuration files + +This page provides reference information for xDS parameters in a Consul agent configuration file. + +## xDS server parameters + +- `xds`: This object allows you to configure the behavior of Consul's +[xDS protocol](https://www.envoyproxy.io/docs/envoy/latest/api-docs/xds_protocol) +server. + + - `update_max_per_second`: Specifies the number of proxy configuration updates across all connected xDS streams that are allowed per second.
This configuration prevents updates to global resources, such as wildcard intentions, from consuming system resources at the expense of other processes, such as Raft and Gossip, which could cause general cluster instability. + + The default value is `250`. It is based on a load test of 5,000 streams connected to a single server with two CPU cores. + + If necessary, you can lower or increase the limit without a rolling restart by using the `consul reload` command or by sending the server a `SIGHUP`. diff --git a/website/content/docs/reference/agent/telemetry.mdx b/website/content/docs/reference/agent/telemetry.mdx new file mode 100644 index 000000000000..14ec9dbdcc3f --- /dev/null +++ b/website/content/docs/reference/agent/telemetry.mdx @@ -0,0 +1,430 @@ +--- +layout: docs +page_title: Consul agent telemetry metrics reference +description: >- + Configure agent telemetry to collect operations metrics you can use to debug and observe Consul behavior and performance. Learn about configuration options, the metrics you can collect, and why they're important. +--- + +# Consul agent telemetry reference + +This page provides reference information for Consul agent events and the metrics they produce. + +For information about service mesh traffic metrics, refer to [Observe service mesh telemetry](/consul/docs/observe). + +## Agent metrics + +The following table describes the metrics that Consul agents emit. + +| Metric | Description | Unit | Type | +|--------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------|---------| +| `consul.acl.blocked.{check,service}.deregistration` | Increments whenever a deregistration fails for an entity (check or service) is blocked by an ACL. | requests | counter | +| `consul.acl.blocked.{check,node,service}.registration` | Increments whenever a registration fails for an entity (check, node or service) is blocked by an ACL. | requests | counter | +| `consul.api.http` | This samples how long it takes to service the given HTTP request for the given verb and path. Includes labels for `path` and `method`. `path` does not include details like service or key names, for these an underscore will be present as a placeholder (eg. path=`v1.kv._`) | ms | timer | +| `consul.client.rpc` | Increments whenever a Consul agent makes an RPC request to a Consul server. This gives a measure of how much a given agent is loading the Consul servers. Currently, this is only generated by agents in client mode, not Consul servers. | requests | counter | +| `consul.client.rpc.exceeded` | Increments whenever a Consul agent makes an RPC request to a Consul server gets rate limited by that agent's [`limits`](/consul/docs/reference/agent/configuration-file/general#limits) configuration. This gives an indication that there's an abusive application making too many requests on the agent, or that the rate limit needs to be increased. Currently, this only applies to agents in client mode, not Consul servers. | rejected requests | counter | +| `consul.client.rpc.failed` | Increments whenever a Consul agent makes an RPC request to a Consul server and fails. 
| requests | counter | +| `consul.client.api.catalog_register` | Increments whenever a Consul agent receives a catalog register request. | requests | counter | +| `consul.client.api.success.catalog_register` | Increments whenever a Consul agent successfully responds to a catalog register request. | requests | counter | +| `consul.client.rpc.error.catalog_register` | Increments whenever a Consul agent receives an RPC error for a catalog register request. | errors | counter | +| `consul.client.api.catalog_deregister` | Increments whenever a Consul agent receives a catalog deregister request. | requests | counter | +| `consul.client.api.success.catalog_deregister` | Increments whenever a Consul agent successfully responds to a catalog deregister request. | requests | counter | +| `consul.client.rpc.error.catalog_deregister` | Increments whenever a Consul agent receives an RPC error for a catalog deregister request. | errors | counter | +| `consul.client.api.catalog_datacenters` | Increments whenever a Consul agent receives a request to list datacenters in the catalog. | requests | counter | +| `consul.client.api.success.catalog_datacenters` | Increments whenever a Consul agent successfully responds to a request to list datacenters. | requests | counter | +| `consul.client.rpc.error.catalog_datacenters` | Increments whenever a Consul agent receives an RPC error for a request to list datacenters. | errors | counter | +| `consul.client.api.catalog_nodes` | Increments whenever a Consul agent receives a request to list nodes from the catalog. | requests | counter | +| `consul.client.api.success.catalog_nodes` | Increments whenever a Consul agent successfully responds to a request to list nodes. | requests | counter | +| `consul.client.rpc.error.catalog_nodes` | Increments whenever a Consul agent receives an RPC error for a request to list nodes. | errors | counter | +| `consul.client.api.catalog_services` | Increments whenever a Consul agent receives a request to list services from the catalog. | requests | counter | +| `consul.client.api.success.catalog_services` | Increments whenever a Consul agent successfully responds to a request to list services. | requests | counter | +| `consul.client.rpc.error.catalog_services` | Increments whenever a Consul agent receives an RPC error for a request to list services. | errors | counter | +| `consul.client.api.catalog_service_nodes` | Increments whenever a Consul agent receives a request to list nodes offering a service. | requests | counter | +| `consul.client.api.success.catalog_service_nodes` | Increments whenever a Consul agent successfully responds to a request to list nodes offering a service. | requests | counter | +| `consul.client.api.error.catalog_service_nodes` | Increments whenever a Consul agent receives an RPC error for request to list nodes offering a service. | requests | counter | +| `consul.client.rpc.error.catalog_service_nodes` | Increments whenever a Consul agent receives an RPC error for a request to list nodes offering a service.   | errors | counter | +| `consul.client.api.catalog_node_services` | Increments whenever a Consul agent receives a request to list services registered in a node.   | requests | counter | +| `consul.client.api.success.catalog_node_services` | Increments whenever a Consul agent successfully responds to a request to list services in a node.   
| requests | counter | +| `consul.client.rpc.error.catalog_node_services` | Increments whenever a Consul agent receives an RPC error for a request to list services in a node.   | errors | counter | +| `consul.client.api.catalog_node_service_list` | Increments whenever a Consul agent receives a request to list a node's registered services. | requests | counter | +| `consul.client.rpc.error.catalog_node_service_list` | Increments whenever a Consul agent receives an RPC error for request to list a node's registered services. | errors | counter | +| `consul.client.api.success.catalog_node_service_list` | Increments whenever a Consul agent successfully responds to a request to list a node's registered services. | requests | counter | +| `consul.client.api.catalog_gateway_services` | Increments whenever a Consul agent receives a request to list services associated with a gateway. | requests | counter | +| `consul.client.api.success.catalog_gateway_services` | Increments whenever a Consul agent successfully responds to a request to list services associated with a gateway. | requests | counter | +| `consul.client.rpc.error.catalog_gateway_services` | Increments whenever a Consul agent receives an RPC error for a request to list services associated with a gateway. | errors | counter | +| `consul.runtime.num_goroutines` | Tracks the number of running goroutines and is a general load pressure indicator. This may burst from time to time but should return to a steady state value. | number of goroutines | gauge | +| `consul.runtime.alloc_bytes` | Measures the number of bytes allocated by the Consul process. This may burst from time to time but should return to a steady state value. | bytes | gauge | +| `consul.runtime.heap_objects` | Measures the number of objects allocated on the heap and is a general memory pressure indicator. This may burst from time to time but should return to a steady state value. | number of objects | gauge | +| `consul.state.nodes` | Measures the current number of nodes registered with Consul. It is only emitted by Consul servers. Added in v1.9.0. | number of objects | gauge | +| `consul.state.peerings` | Measures the current number of peerings registered with Consul. It is only emitted by Consul servers. Added in v1.13.0. | number of objects | gauge | +| `consul.state.services` | Measures the current number of unique services registered with Consul, based on service name. It is only emitted by Consul servers. Added in v1.9.0. | number of objects | gauge | +| `consul.state.service_instances` | Measures the current number of unique service instances registered with Consul. It is only emitted by Consul servers. Added in v1.9.0. | number of objects | gauge | +| `consul.state.kv_entries` | Measures the current number of entries in the Consul KV store. It is only emitted by Consul servers. Added in v1.10.3. | number of objects | gauge | +| `consul.state.connect_instances` | Measures the current number of unique mesh service instances registered with Consul labeled by Kind (e.g. connect-proxy, connect-native, etc). Added in v1.10.4 | number of objects | gauge | +| `consul.state.config_entries` | Measures the current number of configuration entries registered with Consul labeled by Kind (e.g. service-defaults, proxy-defaults, etc). See [Configuration Entries](/consul/docs/fundamentals/config-entry) for more information. Added in v1.10.4 | number of objects | gauge | +| `consul.members.clients` | Measures the current number of client agents registered with Consul. 
It is only emitted by Consul servers. Added in v1.9.6. | number of clients | gauge | +| `consul.members.servers` | Measures the current number of server agents registered with Consul. It is only emitted by Consul servers. Added in v1.9.6. | number of servers | gauge | +| `consul.dns.stale_queries` | Increments when an agent serves a query within the allowed stale threshold. | queries | counter | +| `consul.dns.ptr_query` | Measures the time spent handling a reverse DNS query for the given node. | ms | timer | +| `consul.dns.domain_query` | Measures the time spent handling a domain query for the given node. | ms | timer | +| `consul.system.licenseExpiration` | This measures the number of hours remaining on the agents license. | hours | gauge | +| `consul.version` | Represents the Consul version. | agents | gauge | + +## Server Health + +These metrics are used to monitor the health of the Consul servers. + +| Metric | Description | Unit | Type | +|-----------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------|---------| +| `consul.acl.ResolveToken` | Measures the time it takes to resolve an ACL token. | ms | timer | +| `consul.acl.ResolveTokenToIdentity` | Measures the time it takes to resolve an ACL token to an Identity. This metric was removed in Consul 1.12. The time will now be reflected in `consul.acl.ResolveToken`. | ms | timer | +| `consul.acl.token.cache_hit` | Increments if Consul is able to resolve a token's identity from the cache. | cache read op | counter | +| `consul.acl.token.cache_miss` | Increments if Consul cannot resolve a token's identity from the cache. | cache read op | counter | +| `consul.cache.bypass` | Counts how many times a request bypassed the cache because no cache-key was provided. | counter | counter | +| `consul.cache.fetch_success` | Counts the number of successful fetches by the cache. | counter | counter | +| `consul.cache.fetch_error` | Counts the number of failed fetches by the cache. | counter | counter | +| `consul.cache.evict_expired` | Counts the number of expired entries that are evicted. | counter | counter | +| `consul.raft.applied_index` | Represents the raft applied index. | index | gauge | +| `consul.raft.apply` | Counts the number of Raft transactions occurring over the interval, which is a general indicator of the write load on the Consul servers. | raft transactions / interval | counter | +| `consul.raft.barrier` | Counts the number of times the agent has started the barrier i.e the number of times it has issued a blocking call, to ensure that the agent has all the pending operations that were queued, to be applied to the agent's FSM. | blocks / interval | counter | +| `consul.raft.boltdb.freelistBytes` | Represents the number of bytes necessary to encode the freelist metadata. 
When [`raft_logstore.boltdb.no_freelist_sync`](/consul/docs/reference/agent/configuration-file/raft#raft_logstore_boltdb_no_freelist_sync) is set to `false` these metadata bytes must also be written to disk for each committed log. | bytes | gauge | +| `consul.raft.boltdb.freePageBytes` | Represents the number of bytes of free space within the raft.db file. | bytes | gauge | +| `consul.raft.boltdb.getLog` | Measures the amount of time spent reading logs from the db. | ms | timer | +| `consul.raft.boltdb.logBatchSize` | Measures the total size in bytes of logs being written to the db in a single batch. | bytes | sample | +| `consul.raft.boltdb.logsPerBatch` | Measures the number of logs being written per batch to the db. | logs | sample | +| `consul.raft.boltdb.logSize` | Measures the size of logs being written to the db. | bytes | sample | +| `consul.raft.boltdb.numFreePages` | Represents the number of free pages within the raft.db file. | pages | gauge | +| `consul.raft.boltdb.numPendingPages` | Represents the number of pending pages within the raft.db that will soon become free. | pages | gauge | +| `consul.raft.boltdb.openReadTxn` | Represents the number of open read transactions against the db | transactions | gauge | +| `consul.raft.boltdb.totalReadTxn` | Represents the total number of started read transactions against the db | transactions | gauge | +| `consul.raft.boltdb.storeLogs` | Measures the amount of time spent writing logs to the db. | ms | timer | +| `consul.raft.boltdb.txstats.cursorCount` | Counts the number of cursors created since Consul was started. | cursors | counter | +| `consul.raft.boltdb.txstats.nodeCount` | Counts the number of node allocations within the db since Consul was started. | allocations | counter | +| `consul.raft.boltdb.txstats.nodeDeref` | Counts the number of node dereferences in the db since Consul was started. | dereferences | counter | +| `consul.raft.boltdb.txstats.pageAlloc` | Represents the number of bytes allocated within the db since Consul was started. Note that this does not take into account space having been freed and reused. In that case, the value of this metric will still increase. | bytes | gauge | +| `consul.raft.boltdb.txstats.pageCount` | Represents the number of pages allocated since Consul was started. Note that this does not take into account space having been freed and reused. In that case, the value of this metric will still increase. | pages | gauge | +| `consul.raft.boltdb.txstats.rebalance` | Counts the number of node rebalances performed in the db since Consul was started. | rebalances | counter | +| `consul.raft.boltdb.txstats.rebalanceTime` | Measures the time spent rebalancing nodes in the db. | ms | timer | +| `consul.raft.boltdb.txstats.spill` | Counts the number of nodes spilled in the db since Consul was started. | spills | counter | +| `consul.raft.boltdb.txstats.spillTime` | Measures the time spent spilling nodes in the db. | ms | timer | +| `consul.raft.boltdb.txstats.split` | Counts the number of nodes split in the db since Consul was started. | splits | counter | +| `consul.raft.boltdb.txstats.write` | Counts the number of writes to the db since Consul was started. | writes | counter | +| `consul.raft.boltdb.txstats.writeTime` | Measures the amount of time spent performing writes to the db. | ms | timer | +| `consul.raft.boltdb.writeCapacity` | Theoretical write capacity in terms of the number of logs that can be written per second. 
Each sample outputs what the capacity would be if future batched log write operations were similar to this one. This similarity encompasses 4 things: batch size, byte size, disk performance and boltdb performance. While none of these will be static and its highly likely individual samples of this metric will vary, aggregating this metric over a larger time window should provide a decent picture into how this BoltDB store can perform | logs/second | sample | +| `consul.raft.commitNumLogs` | Measures the count of logs processed for application to the FSM in a single batch. | logs | gauge | +| `consul.raft.commitTime` | Measures the time it takes to commit a new entry to the Raft log on the leader. | ms | timer | +| `consul.raft.fsm.lastRestoreDuration` | Measures the time taken to restore the FSM from a snapshot on an agent restart or from the leader calling installSnapshot. This is a gauge that holds it's value since most servers only restore during restarts which are typically infrequent. | ms | gauge | +| `consul.raft.fsm.snapshot` | Measures the time taken by the FSM to record the current state for the snapshot. | ms | timer | +| `consul.raft.fsm.apply` | Measures the time to apply a log to the FSM. | ms | timer | +| `consul.raft.fsm.enqueue` | Measures the amount of time to enqueue a batch of logs for the FSM to apply. | ms | timer | +| `consul.raft.fsm.restore` | Measures the time taken by the FSM to restore its state from a snapshot. | ms | timer | +| `consul.raft.last_index` | Represents the raft applied index. | index | gauge | +| `consul.raft.leader.dispatchLog` | Measures the time it takes for the leader to write log entries to disk. | ms | timer | +| `consul.raft.leader.dispatchNumLogs` | Measures the number of logs committed to disk in a batch. | logs | gauge | +| `consul.raft.logstore.verifier.checkpoints_written` | Counts the number of checkpoint entries written to the LogStore. | checkpoints | counter | +| `consul.raft.logstore.verifier.dropped_reports` | Counts how many times the verifier routine was still busy when the next checksum came in and so verification for a range was skipped. If you see this happen, consider increasing the interval between checkpoints with [`raft_logstore.verification.interval`](/consul/docs/reference/agent/configuration-file/raft#raft_logstore_verification) | reports dropped | counter | +| `consul.raft.logstore.verifier.ranges_verified` | Counts the number of log ranges for which a verification report has been completed. Refer to [Monitor Raft metrics and logs for WAL](/consul/docs/deploy/server/wal/monitor-raft) for more information. | log ranges verifications | counter | +| `consul.raft.logstore.verifier.read_checksum_failures` | Counts the number of times a range of logs between two check points contained at least one disk corruption. Refer to [Monitor Raft metrics and logs for WAL](/consul/docs/deploy/server/wal/monitor-raft) for more information. | disk corruptions | counter | +| `consul.raft.logstore.verifier.write_checksum_failures` | Counts the number of times a follower has a different checksum to the leader at the point where it writes to the log. This could be caused by either a disk-corruption on the leader (unlikely) or some other corruption of the log entries in-flight. | in-flight corruptions | counter | +| `consul.raft.leader.lastContact` | Measures the time since the leader was last able to contact the follower nodes when checking its leader lease. 
It can be used as a measure for how stable the Raft timing is and how close the leader is to timing out its lease.The lease timeout is 500 ms times the [`raft_multiplier` configuration](/consul/docs/reference/agent/configuration-file/raft#raft_multiplier), so this telemetry value should not be getting close to that configured value, otherwise the Raft timing is marginal and might need to be tuned, or more powerful servers might be needed. See the [Server Performance](/consul/docs/deploy/server/vm/requirements) guide for more details. | ms | timer | +| `consul.raft.leader.oldestLogAge` | The number of milliseconds since the _oldest_ log in the leader's log store was written. This can be important for replication health where write rate is high and the snapshot is large as followers may be unable to recover from a restart if restoring takes longer than the minimum value for the current leader. Compare this with `consul.raft.fsm.lastRestoreDuration` and `consul.raft.rpc.installSnapshot` to monitor. In normal usage this gauge value will grow linearly over time until a snapshot completes on the leader and the log is truncated. Note: this metric won't be emitted until the leader writes a snapshot. After an upgrade to Consul 1.10.0 it won't be emitted until the oldest log was written after the upgrade. | ms | gauge | +| `consul.raft.replication.heartbeat` | Measures the time taken to invoke appendEntries on a peer, so that it doesn't timeout on a periodic basis. | ms | timer | +| `consul.raft.replication.appendEntries` | Measures the time it takes to replicate log entries to followers. This is a general indicator of the load pressure on the Consul servers, as well as the performance of the communication between the servers. | ms | timer | +| `consul.raft.replication.appendEntries.rpc` | Measures the time taken by the append entries RPC to replicate the log entries of a leader agent onto its follower agent(s). | ms | timer | +| `consul.raft.replication.appendEntries.logs` | Counts the number of logs replicated to an agent to bring it up to speed with the leader's logs. | logs appended/ interval | counter | +| `consul.raft.restore` | Counts the number of times the restore operation has been performed by the agent. Here, restore refers to the action of raft consuming an external snapshot to restore its state. | operation invoked / interval | counter | +| `consul.raft.restoreUserSnapshot` | Measures the time taken by the agent to restore the FSM state from a user's snapshot | ms | timer | +| `consul.raft.rpc.appendEntries` | Measures the time taken to process an append entries RPC call from an agent. | ms | timer | +| `consul.raft.rpc.appendEntries.storeLogs` | Measures the time taken to add any outstanding logs for an agent, since the last appendEntries was invoked | ms | timer | +| `consul.raft.rpc.appendEntries.processLogs` | Measures the time taken to process the outstanding log entries of an agent. | ms | timer | +| `consul.raft.rpc.installSnapshot` | Measures the time taken to process the installSnapshot RPC call. This metric should only be seen on agents which are currently in the follower state. | ms | timer | +| `consul.raft.rpc.processHeartBeat` | Measures the time taken to process a heartbeat request. | ms | timer | +| `consul.raft.rpc.requestVote` | Measures the time taken to process the request vote RPC call. | ms | timer | +| `consul.raft.snapshot.create` | Measures the time taken to initialize the snapshot process. 
| ms | timer | +| `consul.raft.snapshot.persist` | Measures the time taken to dump the current snapshot taken by the Consul agent to the disk. | ms | timer | +| `consul.raft.snapshot.takeSnapshot` | Measures the total time involved in taking the current snapshot (creating one and persisting it) by the Consul agent. | ms | timer | +| `consul.serf.snapshot.appendLine` | Measures the time taken by the Consul agent to append an entry into the existing log. | ms | timer | +| `consul.serf.snapshot.compact` | Measures the time taken by the Consul agent to compact a log. This operation occurs only when the snapshot becomes large enough to justify the compaction . | ms | timer | +| `consul.raft.state.candidate` | Increments whenever a Consul server starts an election. If this increments without a leadership change occurring it could indicate that a single server is overloaded or is experiencing network connectivity issues. | election attempts / interval | counter | +| `consul.raft.state.leader` | Increments whenever a Consul server becomes a leader. If there are frequent leadership changes this may be indication that the servers are overloaded and aren't meeting the soft real-time requirements for Raft, or that there are networking problems between the servers. | leadership transitions / interval | counter | +| `consul.raft.state.follower` | Counts the number of times an agent has entered the follower mode. This happens when a new agent joins the cluster or after the end of a leader election. | follower state entered / interval | counter | +| `consul.raft.transition.heartbeat_timeout` | The number of times an agent has transitioned to the Candidate state, after receive no heartbeat messages from the last known leader. | timeouts / interval | counter | +| `consul.raft.verify_leader` | This metric doesn't have a direct correlation to the leader change. It just counts the number of times an agent checks if it is still the leader or not. For example, during every consistent read, the check is done. Depending on the load in the system, this metric count can be high as it is incremented each time a consistent read is completed. | checks / interval | Counter | +| `consul.raft.wal.head_truncations` | Counts how many log entries have been truncated from the head - i.e. the oldest entries. by graphing the rate of change over time you can see individual truncate calls as spikes. | logs entries truncated | counter | +| `consul.raft.wal.last_segment_age_seconds` | A gauge that is set each time we rotate a segment and describes the number of seconds between when that segment file was first created and when it was sealed. this gives a rough estimate how quickly writes are filling the disk. | seconds | gauge | +| `consul.raft.wal.log_appends` | Counts the number of calls to StoreLog(s) i.e. number of batches of entries appended. | calls | counter | +| `consul.raft.wal.log_entries_read` | Counts the number of log entries read. | log entries read | counter | +| `consul.raft.wal.log_entries_written` | Counts the number of log entries written. | log entries written | counter | +| `consul.raft.wal.log_entry_bytes_read` | Counts the bytes of log entry read from segments before decoding. actual bytes read from disk might be higher as it includes headers and index entries and possible secondary reads for large entries that don't fit in buffers. | bytes | counter | +| `consul.raft.wal.log_entry_bytes_written` | Counts the bytes of log entry after encoding with Codec. 
Actual bytes written to disk might be slightly higher as it includes headers and index entries. | bytes | counter | +| `consul.raft.wal.segment_rotations` | Counts how many times we move to a new segment file. | rotations | counter | +| `consul.raft.wal.stable_gets` | Counts how many calls are made to StableStore.Get or GetUint64. | calls | counter | +| `consul.raft.wal.stable_sets` | Counts how many calls are made to StableStore.Set or SetUint64. | calls | counter | +| `consul.raft.wal.tail_truncations` | Counts how many log entries have been truncated from the tail - i.e. the newest entries. By graphing the rate of change over time you can see individual truncate calls as spikes. | logs entries truncated | counter | +| `consul.rpc.accept_conn` | Increments when a server accepts an RPC connection. | connections | counter | +| `consul.rpc.rate_limit.exceeded` | Increments whenever an RPC is over a configured rate limit. In permissive mode, the RPC is still allowed to proceed. | RPCs | counter | +| `consul.rpc.rate_limit.log_dropped` | Increments whenever a log that is emitted because an RPC exceeded a rate limit gets dropped because the output buffer is full. | log messages dropped | counter | +| `consul.catalog.register` | Measures the time it takes to complete a catalog register operation. | ms | timer | +| `consul.catalog.deregister` | Measures the time it takes to complete a catalog deregister operation. | ms | timer | +| `consul.server.isLeader` | Tracks whether a server is a leader (1) or not (0). | 1 or 0 | gauge | +| `consul.fsm.register` | Measures the time it takes to apply a catalog register operation to the FSM. | ms | timer | +| `consul.fsm.deregister` | Measures the time it takes to apply a catalog deregister operation to the FSM. | ms | timer | +| `consul.fsm.session` | Measures the time it takes to apply the given session operation to the FSM. | ms | timer | +| `consul.fsm.kvs` | Measures the time it takes to apply the given KV operation to the FSM. | ms | timer | +| `consul.fsm.tombstone` | Measures the time it takes to apply the given tombstone operation to the FSM. | ms | timer | +| `consul.fsm.coordinate.batch-update` | Measures the time it takes to apply the given batch coordinate update to the FSM. | ms | timer | +| `consul.fsm.prepared-query` | Measures the time it takes to apply the given prepared query update operation to the FSM. | ms | timer | +| `consul.fsm.txn` | Measures the time it takes to apply the given transaction update to the FSM. | ms | timer | +| `consul.fsm.autopilot` | Measures the time it takes to apply the given autopilot update to the FSM. | ms | timer | +| `consul.fsm.persist` | Measures the time it takes to persist the FSM to a raft snapshot. | ms | timer | +| `consul.fsm.intention` | Measures the time it takes to apply an intention operation to the state store. | ms | timer | +| `consul.fsm.ca` | Measures the time it takes to apply CA configuration operations to the FSM. | ms | timer | +| `consul.fsm.ca.leaf` | Measures the time it takes to apply an operation while signing a leaf certificate. | ms | timer | +| `consul.fsm.acl.token` | Measures the time it takes to apply an ACL token operation to the FSM. | ms | timer | +| `consul.fsm.acl.policy` | Measures the time it takes to apply an ACL policy operation to the FSM. | ms | timer | +| `consul.fsm.acl.bindingrule` | Measures the time it takes to apply an ACL binding rule operation to the FSM. | ms | timer | +| `consul.fsm.acl.authmethod` | Measures the time it takes to apply an ACL authmethod operation to the FSM.
| ms | timer | +| `consul.fsm.system_metadata` | Measures the time it takes to apply a system metadata operation to the FSM. | ms | timer | +| `consul.kvs.apply` | Measures the time it takes to complete an update to the KV store. | ms | timer | +| `consul.leader.barrier` | Measures the time spent waiting for the raft barrier upon gaining leadership. | ms | timer | +| `consul.leader.reconcile` | Measures the time spent updating the raft store from the serf member information. | ms | timer | +| `consul.leader.reconcileMember` | Measures the time spent updating the raft store for a single serf member's information. | ms | timer | +| `consul.leader.reapTombstones` | Measures the time spent clearing tombstones. | ms | timer | +| `consul.leader.replication.acl-policies.status` | This will only be emitted by the leader in a secondary datacenter. The value will be a 1 if the last round of ACL policy replication was successful or 0 if there was an error. | healthy | gauge | +| `consul.leader.replication.acl-policies.index` | This will only be emitted by the leader in a secondary datacenter. Increments to the index of ACL policies in the primary datacenter that have been successfully replicated. | index | gauge | +| `consul.leader.replication.acl-roles.status` | This will only be emitted by the leader in a secondary datacenter. The value will be a 1 if the last round of ACL role replication was successful or 0 if there was an error. | healthy | gauge | +| `consul.leader.replication.acl-roles.index` | This will only be emitted by the leader in a secondary datacenter. Increments to the index of ACL roles in the primary datacenter that have been successfully replicated. | index | gauge | +| `consul.leader.replication.acl-tokens.status` | This will only be emitted by the leader in a secondary datacenter. The value will be a 1 if the last round of ACL token replication was successful or 0 if there was an error. | healthy | gauge | +| `consul.leader.replication.acl-tokens.index` | This will only be emitted by the leader in a secondary datacenter. Increments to the index of ACL tokens in the primary datacenter that have been successfully replicated. | index | gauge | +| `consul.leader.replication.config-entries.status` | This will only be emitted by the leader in a secondary datacenter. The value will be a 1 if the last round of config entry replication was successful or 0 if there was an error. | healthy | gauge | +| `consul.leader.replication.config-entries.index` | This will only be emitted by the leader in a secondary datacenter. Increments to the index of config entries in the primary datacenter that have been successfully replicated. | index | gauge | +| `consul.leader.replication.federation-state.status` | This will only be emitted by the leader in a secondary datacenter. The value will be a 1 if the last round of federation state replication was successful or 0 if there was an error. | healthy | gauge | +| `consul.leader.replication.federation-state.index` | This will only be emitted by the leader in a secondary datacenter. Increments to the index of federation states in the primary datacenter that have been successfully replicated. | index | gauge | +| `consul.leader.replication.namespaces.status` | This will only be emitted by the leader in a secondary datacenter. The value will be a 1 if the last round of namespace replication was successful or 0 if there was an error. 
| healthy | gauge | +| `consul.leader.replication.namespaces.index` | This will only be emitted by the leader in a secondary datacenter. Increments to the index of namespaces in the primary datacenter that have been successfully replicated. | index | gauge | +| `consul.prepared-query.apply` | Measures the time it takes to apply a prepared query update. | ms | timer | +| `consul.prepared-query.execute_remote` | Measures the time it takes to process a prepared query execute request that was forwarded to another datacenter. | ms | timer | +| `consul.prepared-query.execute` | Measures the time it takes to process a prepared query execute request. | ms | timer | +| `consul.prepared-query.explain` | Measures the time it takes to process a prepared query explain request. | ms | timer | +| `consul.rpc.raft_handoff` | Increments when a server accepts a Raft-related RPC connection. | connections | counter | +| `consul.rpc.request` | Increments when a server receives a Consul-related RPC request. | requests | counter | +| `consul.rpc.request_error` | Increments when a server returns an error from an RPC request. | errors | counter | +| `consul.rpc.query` | Increments when a server receives a read RPC request, indicating the rate of new read queries. See consul.rpc.queries_blocking for the current number of in-flight blocking RPC calls. This metric changed in 1.7.0 to only increment on the start of a query. The rate of queries will appear lower, but is more accurate. | queries | counter | +| `consul.rpc.queries_blocking` | The current number of in-flight blocking queries the server is handling. | queries | gauge | +| `consul.rpc.cross-dc` | Increments when a server sends a (potentially blocking) cross datacenter RPC query. | queries | counter | +| `consul.rpc.consistentRead` | Measures the time spent confirming that a consistent read can be performed. | ms | timer | +| `consul.session.apply` | Measures the time spent applying a session update. | ms | timer | +| `consul.session.renew` | Measures the time spent renewing a session. | ms | timer | +| `consul.session_ttl.invalidate` | Measures the time spent invalidating an expired session. | ms | timer | +| `consul.txn.apply` | Measures the time spent applying a transaction operation. | ms | timer | +| `consul.txn.read` | Measures the time spent returning a read transaction. | ms | timer | +| `consul.grpc.client.request.count` | Counts the number of gRPC requests made by the client agent to a Consul server. Includes a `server_type` label indicating either the `internal` or `external` gRPC server. | requests | counter | +| `consul.grpc.client.connection.count` | Counts the number of new gRPC connections opened by the client agent to a Consul server. Includes a `server_type` label indicating either the `internal` or `external` gRPC server. | connections | counter | +| `consul.grpc.client.connections` | Measures the number of active gRPC connections open from the client agent to any Consul servers. Includes a `server_type` label indicating either the `internal` or `external` gRPC server. | connections | gauge | +| `consul.grpc.server.request.count` | Counts the number of gRPC requests received by the server. Includes a `server_type` label indicating either the `internal` or `external` gRPC server. | requests | counter | +| `consul.grpc.server.connection.count` | Counts the number of new gRPC connections received by the server. Includes a `server_type` label indicating either the `internal` or `external` gRPC server. 
| connections | counter | +| `consul.grpc.server.connections` | Measures the number of active gRPC connections open on the server. Includes a `server_type` label indicating either the `internal` or `external` gRPC server. | connections | gauge | +| `consul.grpc.server.stream.count` | Counts the number of new gRPC streams received by the server. Includes a `server_type` label indicating either the `internal` or `external` gRPC server. | streams | counter | +| `consul.grpc.server.streams` | Measures the number of active gRPC streams handled by the server. Includes a `server_type` label indicating either the `internal` or `external` gRPC server. | streams | gauge | +| `consul.xds.server.streams` | Measures the number of active xDS streams handled by the server split by protocol version. | streams | gauge | +| `consul.xds.server.streamsUnauthenticated` | Measures the number of active xDS streams handled by the server that are unauthenticated because ACLs are not enabled or ACL tokens were missing. | streams | gauge | +| `consul.xds.server.idealStreamsMax` | The maximum number of xDS streams per server, chosen to achieve a roughly even spread of load across servers. | streams | gauge | +| `consul.xds.server.streamDrained` | Counts the number of xDS streams that are drained when rebalancing the load between servers. | streams | counter | +| `consul.xds.server.streamStart` | Measures the time taken to first generate xDS resources after an xDS stream is opened. | ms | timer | + + +## Server Workload + +**Requirements:** +* Consul 1.12.0+ + +Label-based RPC metrics were added in Consul 1.12.0 as a beta feature to provide better insight into the workload on a Consul server and where that workload is coming from. The following metric provides that insight. + +| Metric | Description | Unit | Type | +| ------------------------------------- | --------------------------------------------------------- | ------ | --------- | +| `consul.rpc.server.call` | Measures the elapsed time taken to complete an RPC call. | ms | summary | + +### Labels + +The server workload metrics above come with the following labels: + +| Label Name | Description | Possible values | +| ------------------------------------- | -------------------------------------------------------------------- | --------------------------------------- | +| `method` | The name of the RPC method. | The value of any RPC request in Consul. | +| `errored` | Indicates whether the RPC call errored. | `true` or `false`. | +| `request_type` | Whether it is a `read` or `write` request. | `read`, `write` or `unreported`. | +| `rpc_type` | The RPC implementation. | `net/rpc` or `internal`. | +| `leader` | Whether the server was a `leader` or not at the time of the request. | `true`, `false` or `unreported`. | + +#### Label Explanations + +The `internal` value for the `rpc_type` in the table above refers to leader and cluster management RPC operations that Consul performs. +Historically, `internal` RPC operation metrics were accounted under the same metric names. + +The `unreported` value for the `request_type` in the table above refers to RPC requests within Consul where it is difficult to ascertain whether a request is `read` or `write` type. + +The `unreported` value for the `leader` label in the table above refers to RPC requests where Consul cannot determine the leadership status for a server.
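+
+If you export these metrics to a Prometheus-compatible store, aggregating the summary by its labels shows which RPC methods drive server load. The following query is only a sketch: it assumes the Prometheus naming shown in the example later in this section, where the `consul.rpc.server.call` summary also exposes a `_count` series.
+
+```
+sum by (method, request_type, leader) (rate(consul_rpc_server_call_count[5m]))
+```
+
+A similar aggregation over the `errored` label can highlight methods with elevated failure rates.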
+ +#### Read Request Labels + +In addition to the labels above, for read requests, the following may be populated: + +| Label Name | Description | Possible values | +| ------------------------------------- | ------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------ | +| `blocking` | Whether the read request passed in a `MinQueryIndex`. | `true` if a MinQueryIndex was passed, `false` otherwise. | +| `target_datacenter` | The target datacenter for the read request. | The string value of the target datacenter for the request. | +| `locality` | Gives an indication of whether the RPC request is local or has been forwarded. | `local` if the current server datacenter is the same as `target_datacenter`, otherwise `forwarded`. | + +Here is a Prometheus-style example of an RPC metric and its labels: + + + +``` + ... + consul_rpc_server_call{errored="false",method="Catalog.ListNodes",request_type="read",rpc_type="net/rpc",quantile="0.5"} 255 + ... +``` + + + +Any metric in this section can be turned off with the [`prefix_filter`](/consul/docs/reference/agent/configuration-file/telemetry#telemetry-prefix_filter). + +## Cluster Health + +These metrics give insight into the health of the cluster as a whole. +Queries for the `consul.memberlist.*` and `consul.serf.*` metrics can be appended +with certain labels to further distinguish data between different gossip pools. +The supported label for CE is `network`, while `segment`, `partition`, and `area` +are also allowed for Consul Enterprise. + +| Metric | Description | Unit | Type | +|----------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------|---------| +| `consul.memberlist.degraded.probe` | Counts the number of times the agent has performed failure detection on another agent at a slower probe rate. The agent uses its own health metric as an indicator to perform this action. (A low health score means that the node is healthy, and vice versa.) | probes / interval | counter | +| `consul.memberlist.degraded.timeout` | Counts the number of times an agent was marked as a dead node, whilst not getting enough confirmations from a randomly selected list of agent nodes in an agent's membership. | occurrence / interval | counter | +| `consul.memberlist.msg.dead` | Counts the number of times an agent has marked another agent to be a dead node. | messages / interval | counter | +| `consul.memberlist.health.score` | Describes a node's perception of its own health based on how well it is meeting the soft real-time requirements of the protocol. This metric ranges from 0 to 8, where 0 indicates "totally healthy". This health score is used to scale the time between outgoing probes, and higher scores translate into longer probing intervals. For more details see section IV of the Lifeguard paper: https://arxiv.org/pdf/1707.00788.pdf | score | gauge | +| `consul.memberlist.msg.suspect` | Increments when an agent suspects another as failed when executing random probes as part of the gossip protocol.
These can be an indicator of overloaded agents, network problems, or configuration errors where agents can not connect to each other on the [required ports](/consul/docs/reference/agent/configuration-file/general#ports). | suspect messages received / interval | counter | +| `consul.memberlist.tcp.accept` | Counts the number of times an agent has accepted an incoming TCP stream connection. | connections accepted / interval | counter | +| `consul.memberlist.udp.sent/received` | Measures the total number of bytes sent/received by an agent through the UDP protocol. | bytes sent or bytes received / interval | counter | +| `consul.memberlist.tcp.connect` | Counts the number of times an agent has initiated a push/pull sync with an other agent. | push/pull initiated / interval | counter | +| `consul.memberlist.tcp.sent` | Measures the total number of bytes sent by an agent through the TCP protocol | bytes sent / interval | counter | +| `consul.memberlist.gossip` | Measures the time taken for gossip messages to be broadcasted to a set of randomly selected nodes. | ms | timer | +| `consul.memberlist.msg_alive` | Counts the number of alive messages, that the agent has processed so far, based on the message information given by the network layer. | messages / Interval | counter | +| `consul.memberlist.msg_dead` | The number of dead messages that the agent has processed so far, based on the message information given by the network layer. | messages / Interval | counter | +| `consul.memberlist.msg_suspect` | The number of suspect messages that the agent has processed so far, based on the message information given by the network layer. | messages / Interval | counter | +| `consul.memberlist.node.instances` | Tracks the number of instances in each of the node states: alive, dead, suspect, and left. | nodes | gauge | +| `consul.memberlist.probeNode` | Measures the time taken to perform a single round of failure detection on a select agent. | nodes / Interval | counter | +| `consul.memberlist.pushPullNode` | Measures the number of agents that have exchanged state with this agent. | nodes / Interval | counter | +| `consul.memberlist.queue.broadcasts` | Measures the number of messages waiting to be broadcast to other gossip participants. | messages | sample | +| `consul.memberlist.size.local` | Measures the size in bytes of the memberlist before it is sent to another gossip recipient. | bytes | gauge | +| `consul.memberlist.size.remote` | Measures the size in bytes of incoming memberlists from other gossip participants. | bytes | gauge | +| `consul.serf.member.failed` | Increments when an agent is marked dead. This can be an indicator of overloaded agents, network problems, or configuration errors where agents cannot connect to each other on the [required ports](/consul/docs/reference/agent/configuration-file/general#ports). | failures / interval | counter | +| `consul.serf.member.flap` | Available in Consul 0.7 and later, this increments when an agent is marked dead and then recovers within a short time period. This can be an indicator of overloaded agents, network problems, or configuration errors where agents cannot connect to each other on the [required ports](/consul/docs/reference/agent/configuration-file/general#ports). | flaps / interval | counter | +| `consul.serf.member.join` | Increments when an agent joins the cluster. If an agent flapped or failed this counter also increments when it re-joins. 
| joins / interval | counter | +| `consul.serf.member.left` | Increments when an agent leaves the cluster. | leaves / interval | counter | +| `consul.serf.events` | Increments when an agent processes an [event](/consul/commands/event). Consul uses events internally so there may be additional events showing in telemetry. There are also per-event counters emitted as `consul.serf.events.`. | events / interval | counter | +| `consul.serf.events.` | Breakdown of `consul.serf.events` by type of event. | events / interval | counter | +| `consul.serf.msgs.sent` | This metric is a sample of the number of bytes of messages broadcast to the cluster. In a given time interval, the sum of this metric is the total number of bytes sent and the count is the number of messages sent. | message bytes / interval | counter | +| `consul.autopilot.failure_tolerance` | Tracks the number of voting servers that the cluster can lose while continuing to function. | servers | gauge | +| `consul.autopilot.healthy` | Tracks the overall health of the local server cluster. If all servers are considered healthy by Autopilot, this will be set to 1. If any are unhealthy, this will be 0. | boolean | gauge | +| `consul.session_ttl.active` | Tracks the number of active sessions being tracked. | sessions | gauge | +| `consul.catalog.service.query` | Increments for each catalog query for the given service. | queries | counter | +| `consul.catalog.service.query-tag` | Increments for each catalog query for the given service with the given tag. | queries | counter | +| `consul.catalog.service.query-tags` | Increments for each catalog query for the given service with the given tags. | queries | counter | +| `consul.catalog.service.not-found` | Increments for each catalog query where the given service could not be found. | queries | counter | +| `consul.catalog.connect.query` | Increments for each mesh-based catalog query for the given service. | queries | counter | +| `consul.catalog.connect.query-tag` | Increments for each mesh-based catalog query for the given service with the given tag. | queries | counter | +| `consul.catalog.connect.query-tags` | Increments for each mesh-based catalog query for the given service with the given tags. | queries | counter | +| `consul.catalog.connect.not-found` | Increments for each mesh-based catalog query where the given service could not be found. | queries | counter | + +## Service Mesh Built-in Proxy Metrics + +Consul service mesh's built-in proxy is configured by default to log metrics to the +same sink as the agent that starts it. + +When running in this mode it emits some basic metrics. These will be expanded +upon in the future. + +All metrics are prefixed with `consul.proxy.` to distinguish +between multiple proxies on a given host. The table below uses `web` as an +example service name for brevity. + +### Labels + +Most metrics have a `dst` label and some have a `src` label. When using metrics +sinks and timeseries stores that support labels or tags, these allow aggregating +the connections by service name. + +Assuming all services are using a managed built-in proxy, you can get a complete +overview of both the number of open connections and the bytes sent and received between +all services by aggregating over these metrics. + +For example, by aggregating over all `upstream` (i.e. outbound) connections, which +have both `src` and `dst` labels, you can get a sum of all the bandwidth in and +out of a given service or the total number of connections between two services.
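+
+As an illustration only, if these proxy metrics reach a Prometheus-compatible store, a query such as the following (assuming dots are translated to underscores and the proxy represents a service named `web`, as in the reference table that follows) sums the currently open upstream connections between each pair of services:
+
+```
+sum by (src, dst) (consul_proxy_web_upstream_conns)
+```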
+ +### Metrics Reference + +The standard go runtime metrics are exported by `go-metrics` as with the Consul +agent. The table below describes the additional metrics exported by the proxy. + +| Metric | Description | Unit | Type | +| ----------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------- | ------- | +| `consul.proxy.web.runtime.*` | The same go runtime metrics as documented for the agent above. | mixed | mixed | +| `consul.proxy.web.inbound.conns` | Shows the current number of connections open from inbound requests to the proxy. Where supported a `dst` label is added indicating the service name the proxy represents. | connections | gauge | +| `consul.proxy.web.inbound.rx_bytes` | Increments by the number of bytes received from an inbound client connection. Where supported a `dst` label is added indicating the service name the proxy represents. | bytes | counter | +| `consul.proxy.web.inbound.tx_bytes` | Increments by the number of bytes transferred to an inbound client connection. Where supported a `dst` label is added indicating the service name the proxy represents. | bytes | counter | +| `consul.proxy.web.upstream.conns` | Shows the current number of connections open from a proxy instance to an upstream. Where supported a `src` label is added indicating the service name the proxy represents, and a `dst` label is added indicating the service name the upstream is connecting to. | connections | gauge | +| `consul.proxy.web.upstream.rx_bytes` | Increments by the number of bytes received from an upstream connection. Where supported a `src` label is added indicating the service name the proxy represents, and a `dst` label is added indicating the service name the upstream is connecting to. | bytes | counter | +| `consul.proxy.web.upstream.tx_bytes` | Increments by the number of bytes transferred to an upstream connection. Where supported a `src` label is added indicating the service name the proxy represents, and a `dst` label is added indicating the service name the upstream is connecting to. | bytes | counter | + +## Peering metrics + +**Requirements:** +- Consul 1.13.0+ + +[Cluster peering](/consul/docs/east-west/cluster-peering) refers to Consul clusters that communicate through a peer connection, as opposed to a federated connection. Consul collects metrics that describe the number of services exported to a peered cluster. Peering metrics are only emitted by the leader server. These metrics are emitted every 9 seconds. + +| Metric | Description | Unit | Type | +| ------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------ | ------- | +| `consul.peering.exported_services` | Counts the number of services exported with [exported service configuration entries](/consul/docs/reference/config-entry/exported-services) to a peer cluster. | count | gauge | +| `consul.peering.healthy` | Tracks the health of a peering connection as reported by the server. If Consul detects errors while sending or receiving from a peer which do not recover within a reasonable time, this metric returns 0. Healthy connections return 1.
| health | gauge | + +### Labels + +Consul attaches the following labels to metric values. + +| Label Name | Description | Possible values | +| ------------------------------------- | -------------------------------------------------------------------------------- | ----------------------------------------- | +| `peer_name` | The name of the peering on the reporting cluster or leader. | Any defined peer name in the cluster | +| `peer_id` | The ID of a peer connected to the reporting cluster or leader. | Any UUID | +| `partition` | Name of the partition that the peering is created in. | Any defined partition name in the cluster | + +## Server Host Metrics + +Consul servers can report the following metrics about the host's system resources. +This feature must be enabled in the [agent telemetry configuration](/consul/docs/reference/agent/configuration-file/telemetry#telemetry-enable_host_metrics). +Note that if the Consul server is operating inside a container these metrics +still report host resource usage and do not report any resource limits placed +on the container. + +**Requirements:** +- Consul 1.15.3+ + +| Metric | Description | Unit | Type | +| ----------------------------------- | ------------------------------------------------------------------ | ------- | ----- | +| `consul.host.memory.total` | The total physical memory in bytes | bytes | gauge | +| `consul.host.memory.available` | The available physical memory in bytes | bytes | gauge | +| `consul.host.memory.free` | The free physical memory in bytes | bytes | gauge | +| `consul.host.memory.used` | The used physical memory in bytes | bytes | gauge | +| `consul.host.memory.used_percent` | The used physical memory as a percentage of total physical memory | percent | gauge | +| `consul.host.cpu.total` | The host's total cpu utilization | percent | gauge | +| `consul.host.cpu.user` | The cpu utilization in user space | percent | gauge | +| `consul.host.cpu.idle` | The cpu utilization in idle state | percent | gauge | +| `consul.host.cpu.iowait` | The cpu utilization in iowait state | percent | gauge | +| `consul.host.cpu.system` | The cpu utilization in system space | percent | gauge | +| `consul.host.disk.size` | The size in bytes of the data_dir disk | bytes | gauge | +| `consul.host.disk.used` | The number of bytes used on the data_dir disk | bytes | gauge | +| `consul.host.disk.available` | The number of bytes available on the data_dir disk | bytes | gauge | +| `consul.host.disk.used_percent` | The percentage of disk space used on the data_dir disk | percent | gauge | +| `consul.host.disk.inodes_percent` | The percentage of inode usage on the data_dir disk | percent | gauge | +| `consul.host.uptime` | The uptime of the host in seconds | seconds | gauge | diff --git a/website/content/docs/reference/architecture/capacity.mdx b/website/content/docs/reference/architecture/capacity.mdx new file mode 100644 index 000000000000..2f5538e2c5ff --- /dev/null +++ b/website/content/docs/reference/architecture/capacity.mdx @@ -0,0 +1,192 @@ +--- +layout: docs +page_title: Consul capacity planning +description: >- + Learn how to maintain your Consul cluster in a healthy state by provisioning the correct resources. +--- + +# Consul capacity planning + +This page describes our capacity planning recommendations when deploying and maintaining a Consul cluster in production. When your organization designs a production environment, you should consider your available resources and their impact on network capacity.
+ +## Introduction + +It is important to select the correct size for your server instances. Consul server environments have a standard set of minimum requirements. However, these requirements may vary depending on what you are using Consul for. + +Insufficient resource allocations may cause network issues or degraded performance in general. When a slowdown in performance results in a Consul leader node that is unable to respond to requests in sufficient time, the Consul cluster triggers a new leader election. Consul pauses all network requests and Raft updates until the election ends. + +## Hardware requirements + +The minimum hardware requirements for Consul servers in production clusters as recommended by the [reference architecture](/consul/tutorials/production-deploy/reference-architecture#hardware-sizing-for-consul-servers) are: + +| CPU | Memory | Disk Capacity | Disk IO | Disk Throughput | Avg Round-Trip-Time | 99% Round-Trip-Time | +| --------- | ------------ | ------------- | ----------- | --------------- | ------------------- | ------------------- | +| 8-16 core | 32-64 GB RAM | 200+ GB | 7500+ IOPS | 250+ MB/s | Lower than 50ms | Lower than 100ms | + +For the major cloud providers, we recommend starting with one of the following instances that meet the minimum requirements. Then scale up as needed. We also recommend avoiding "burstable" CPU and storage options where performance may drop after a consistent load. + +| Provider | Size | Instance/VM Types | Disk Volume Specs | +| --------- | ----- | ------------------------------------- | --------------------------------- | +| **AWS** | Large | `m5.2xlarge`, `m5.4xlarge` | 200+GB `gp3`, 10000 IOPS, 250MB/s | +| **Azure** | Large | `Standard_D8s_v3`, `Standard_D16s_v3` | 2048GB `Premium SSD`, 7500 IOPS, 200MB/s | +| **GCP** | Large | `n2-standard-8`, `n2-standard-16` | 1000GB `pd-ssd`, 30000 IOPS, 480MB/s | + + +For HCP Consul Dedicated, cluster size is measured in the number of service instances supported. Find out more information in the [HCP Consul Dedicated pricing page](https://cloud.hashicorp.com/products/consul/pricing). + +## Workload input and output requirements + +Workloads are any actions that interact with the Consul cluster. These actions consist of key/value reads and writes, service registrations and deregistrations, adding or removing Consul client agents, and more. + +Input/output operations per second (IOPS) is a unit of measurement for the amount of reads and writes to non-adjacent storage locations. +For high workloads, ensure that the Consul server disks support a [high number of IOPS](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-io-characteristics.html#ebs-io-iops) to keep up with the rapid Raft log update rate. +Unlike bare-metal environments, IOPS for virtual instances in cloud environments is often tied to storage sizing. More storage GBs typically grants you more IOPS. Therefore, we recommend deploying on [IOPS-optimized instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/provisioned-iops.html). + +Consul server agents are generally I/O bound for writes and CPU bound for reads. For additional tuning recommendations, refer to [raft tuning](#raft-tuning). + +## Memory requirements + +You should allocate RAM for server agents so that they contain 2 to 4 times the working set size. You can determine the working set size of a running cluster by noting the value of `consul.runtime.alloc_bytes` in the leader node's telemetry data. 
Inspect your monitoring solution for the telemetry value, or run the following commands with the [jq](https://stedolan.github.io/jq/download/) tool installed on your Consul leader instance. + + + +For Kubernetes, execute the command from the leader pod. `jq` is available in the Consul server containers. + + + +Set `$CONSUL_HTTP_TOKEN` to an ACL token with valid permissions, then retrieve the working set size. + +```shell-session +$ curl --silent --header "X-Consul-Token: $CONSUL_HTTP_TOKEN" http://127.0.0.1:8500/v1/agent/metrics | jq '.Gauges[] | select(.Name=="consul.runtime.alloc_bytes") | .Value' +616017920 +``` + +## Kubernetes storage requirements + +When you set up persistent volumes (PV) resources, you should define the correct server storage class parameter because the defaults likely provide insufficient performance. To set the [storageClass Helm chart parameter](/consul/docs/reference/k8s/helm#v-server-storageclass), refer to the [Kubernetes documentation on storageClasses](https://kubernetes.io/docs/concepts/storage/storage-classes/) for more information about your specific cloud provider. + +## Read and write heavy workload recommendations + +In production, your use case may lead to Consul performing read-heavy workloads, write-heavy workloads, or both. Refer to the following table for specific resource recommendations for these types of workloads. + +| Workload type | Instance Recommendations | Workload element examples | Enterprise Feature Recommendations | +| ------------- | ------------------------- | ------------------------ | ------------------------ | +| Read-heavy | Instances of type `m5.4xlarge (AWS)`, `Standard_D16s_v3 (Azure)`, `n2-standard-16 (GCP)` | Raft RPC calls, DNS queries, key/value retrieval | [Read replicas](/consul/docs/manage/scale/read-replica) | +| Write-heavy | IOPS performance of `10,000+` | Consul agent joins and leaves, service registration and deregistration, key/value writes | [Network segments](/consul/docs/multi-tenant/network-segment) | + +For recommendations on troubleshooting issues with read-heavy or write-heavy workloads, refer to [Consul at Scale](/consul/docs/architecture/scale#resource-usage-and-metrics-recommendations). + +## Monitor performance + +Monitoring is critical to ensure that your Consul datacenter has sufficient resources to continue operations. A proactive monitoring strategy helps you find problems in your network before they impact your deployments. + +We recommend completing the [Monitor Consul server health and performance with metrics and logs](/consul/tutorials/observe-your-network/server-metrics-and-logs) tutorial as a starting point for Consul metrics and telemetry. The following tutorials guide you through specific monitoring solutions for your Consul cluster. + +- [Monitor Consul server health and performance with metrics and logs](/consul/tutorials/observe-your-network/server-metrics-and-logs) +- [Observe Consul service mesh traffic](/consul/tutorials/get-started-kubernetes/kubernetes-gs-observability) + +### Important metrics + +In production environments, create baselines for your Consul cluster's metrics. After you discover the baselines, you will be able to define alerts and receive notifications when there are unexpected values. For a detailed explanation of the metrics and their values, refer to [Consul Agent telemetry](/consul/docs/reference/agent/telemetry). + +### Transaction metrics + +These metrics indicate how long it takes to complete write operations in various parts of the Consul cluster.
+ +- [`consul.kvs.apply`](/consul/docs/agent/monitor/telemetry#transaction-timing) measures the time it takes to complete an update to the KV store. +- [`consul.txn.apply`](/consul/docs/agent/monitor/telemetry#transaction-timing) measures the time spent applying a transaction operation. +- [`consul.raft.apply`](/consul/docs/agent/monitor/telemetry#transaction-timing) counts the number of Raft transactions applied during the measurement interval. This metric is only reported on the leader. +- [`consul.raft.commitTime`](/consul/docs/agent/monitor/telemetry#transaction-timing) measures the time it takes to commit a new entry to the Raft log on disk on the leader. + +### Memory metrics + +These performance indicators can help you diagnose if the current instance sizing is unable to handle the workload. + +- [`consul.runtime.alloc_bytes`](/consul/docs/agent/monitor/telemetry#memory-usage) measures the number of bytes allocated by the Consul process. +- [`consul.runtime.sys_bytes`](/consul/docs/agent/monitor/telemetry#memory-usage) measures the total number of bytes of memory obtained from the OS. +- [`consul.runtime.heap_objects`](/consul/docs/agent/monitor/telemetry#metrics-reference) measures the number of objects allocated on the heap and is a general memory pressure indicator. + +### Leadership metrics + +Leadership changes are not a cause for concern but frequent changes may be a symptom of a deeper problem. Frequent elections or leadership changes may indicate network issues between the Consul servers, or the Consul servers are unable to keep up with the load. + +- [`consul.raft.leader.lastContact`](/consul/docs/agent/monitor/telemetry#leadership-changes) measures the time since the leader was last able to contact the follower nodes when checking its leader lease. +- [`consul.raft.state.candidate`](/consul/docs/agent/monitor/telemetry#leadership-changes) increments whenever a Consul server starts an election. +- [`consul.raft.state.leader`](/consul/docs/agent/monitor/telemetry#leadership-changes) increments whenever a Consul server becomes a leader. +- [`consul.server.isLeader`](/consul/docs/agent/monitor/telemetry#leadership-changes) tracks whether a server is a leader. + +### Network metrics + +Network activity and RPC count measurements indicate the current load created from a Consul agent, including when the load becomes high enough to be rate limited. If an unusually high RPC count occurs, you should investigate before it overloads the cluster. + +- [`consul.client.rpc`](/consul/docs/agent/monitor/telemetry#network-activity-rpc-count) increments whenever a Consul agent in client mode makes an RPC request to a Consul server. +- [`consul.client.rpc.exceeded`](/consul/docs/agent/monitor/telemetry#network-activity-rpc-count) increments whenever a Consul agent in client mode makes an RPC request to a Consul server gets rate limited by that agent's limits configuration. +- [`consul.client.rpc.failed`](/consul/docs/agent/monitor/telemetry#network-activity-rpc-count) increments whenever a Consul agent in client mode makes an RPC request to a Consul server and fails. + +## Network constraints and alternate approaches + +If it is impossible for you to allocate the required resources, you can make changes to Consul's performance so that it operates with lower speed or resilience. These changes ensure that your cluster remains within its resource capacity. + +- Soft limits prevent your cluster from degrading due to overload. +- Raft tuning lets you compensate for unfavorable environments. 
+ +### Soft limits + +The recommended maximum size for a single datacenter is 5,000 Consul client agents. This recommendation is based on a standard, non-tuned environment and considers a blast radius's risk management factor. The maximum number of agents may be lower, depending on how you use Consul. + +If you require more than 5,000 client agents, you should break up the single Consul datacenter into multiple smaller datacenters. + +- When the nodes are spread across separate physical locations such as different regions, you can model multiple datacenter structures based on physical locations. +- Use [network segments](/consul/docs/multi-tenant/network-segment) in a single available zone or region to lower overall resource usage in a single datacenter. + +When deploying [Consul in Kubernetes](/consul/docs/k8s), we recommend you set both _requests_ and _limits_ in the Helm chart. Refer to the [Helm chart documentation](/consul/docs/reference/k8s/helm#v-server-resources) for more information. + +- Requests allocate the required resources for your Consul workloads. +- Limits prevent your pods from being terminated and restarted if they consume more resources than requested and Kubernetes needs to reclaim these resources. Limits can prevent outage situations where the Consul leader's container gets terminated and redeployed due to resource constraints. + +The following is an example Helm configuration that allocates 16 CPU cores and 64 gigabytes of memory: + + + +```yaml +global: + image: "hashicorp/consul" +## ... +resources: + requests: + memory: '64G' + cpu: '16000m' + limits: + memory: '64G' + cpu: '16000m' +``` + + + +### Raft tuning + +Consul uses the [Raft consensus algorithm](/consul/docs/concept/consensus) to provide consistency. +You may need to adjust Raft to suit your specific environment. Adjust the [`raft_multiplier` configuration](/consul/docs/reference/agent/configuration-file/general#raft_multiplier) to define the trade-off between leader stability and time to recover from a leader failure. + +- A lower multiplier minimizes failure detection and election time, but it may trigger frequently in high latency situations. +- A higher multiplier reduces the chances that failures cause leadership churn, but your cluster takes longer to detect real failures and restore availability. + +The value of `raft_multiplier` has a default value of 5. It is a scaling factor setting that directly affects the following parameters: + +| Parameter name | Default value | Derived from | +| --- | --- | --- | +| HeartbeatTimeout | 5000ms | 5 x 1000ms | +| ElectionTimeout | 5000ms | 5 x 1000ms | +| LeaderLeaseTimeout | 2500ms | 5 x 500ms | + +You can use the telemetry from [`consul.raft.leader.lastContact`](/consul/docs/reference/agent/telemetry#leadership-changes) to observe Raft timing performance. + +Wide networks with more latency perform better with larger values of `raft_multiplier`, but cluster failure detection will take longer. If your network operates with low latency, we recommend that you do not set the Raft multiplier higher than 5. Instead, you should either replace the servers with more powerful ones or minimize the network latency between nodes. + +We recommend you start from a baseline and perform [chaos engineering testing](/consul/tutorials/resiliency/introduction-chaos-engineering?in=consul%2Fresiliency) with different values for the Raft multiplier to find the acceptable time for problem detection and recovery for the cluster. 
Then scale the cluster and its dedicated resources with the number of workloads handled. This approach gives you the best balance between pure resource growth and pure Raft tuning strategies because it lets you use Raft tuning as a backup plan if you cannot scale your resources. + +The types of workloads the Consul cluster handles also play an important role in +Raft tuning. For example, if your Consul clusters are mostly static and do not +handle many events, you should increase your Raft multiplier instead of scaling +your resources because the risk of an important event happening while the +cluster is converging or re-electing a leader is lower. diff --git a/website/content/docs/reference/architecture/ecs.mdx b/website/content/docs/reference/architecture/ecs.mdx new file mode 100644 index 000000000000..e142775b4bc8 --- /dev/null +++ b/website/content/docs/reference/architecture/ecs.mdx @@ -0,0 +1,98 @@ +--- +layout: docs +page_title: Consul on AWS Elastic Container Service (ECS) architecture +description: >- + Learn about the Consul architecture on Amazon Web Services ECS deployments. Learn about how the two work together, including the order tasks and containers startup and shutdown, as well as requirements for the AWS IAM auth method, the ACL controller and tokens, and health check syncing. +--- + +# Consul on AWS Elastic Container Service (ECS) architecture + +This topic provides reference information about the Consul's deployment architecture on AWS ECS. The following diagram shows the main components of the Consul architecture when deployed to an ECS cluster. + +![Diagram that provides an overview of the Consul Architecture on ECS](/img/ecs/consul-on-ecs-architecture-dataplanes.png#light-theme-only) +![Diagram that provides an overview of the Consul Architecture on ECS ](/img/ecs/consul-on-ecs-architecture-dataplanes-dark.png#dark-theme-only) + +## Components + +Consul starts several components and containers inside the ECS cluster. Using a combination of short-lived containers (`mesh-init`) and long-lived containers (`health-sync`) ensures that any long running containers do not have root access to Consul. Refer to [Startup sequence](#startup-sequence) for details about the order of the startup procedure. + +### `mesh-init` container + +The `mesh-init` container is a short-lived container that performs the following actions: + +- Logs into Consul servers +- Communicates directly with Consul server +- Registers proxies and services +- Creates a bootstrap configuration file for Consul dataplane container and stores it in a shared volume +- Invokes the `iptables` SDK to configure traffic redirection rules + +### `health-sync` container + +The `health-sync` container is a long-lived container that performs the following actions: + +- Synchronizes ECS health checks +- Watches the Consul server for changes + +When you stop the ECS task, it performs the following actions: + +- Deregisters service and proxy instances on receiving SIGTERM to support graceful shutdown +- Performs logout from [ACL auth method](/consul/docs/secure/acl/auth-method) + +### `dataplane` container + +The dataplane process runs in the same container as the Envoy proxy and performs the following actions: + +- Consumes and configures itself according to the bootstrap configuration written by the `mesh-init` container. +- Contains and starts up the Envoy sidecar. 
+ +### ECS controller container + +One ECS task in the cluster contains the controller container, which performs the following actions: + +- Creates AWS IAM auth methods +- Creates ACL policies and roles +- Maintains ACL state +- Removes tokens when services exit +- Deregisters services if the ECS task exits without deregistering them +- Registers a _synthetic node_ that enables Consul to register services to the catalog + +## Startup sequence + +Deploying Consul to ECS starts the following process to build the architecture: + +1. The `mesh-init` container starts and logs in to Consul. +1. The `mesh-init` container registers services and proxies with the Consul servers. +1. The `mesh-init` container writes the bootstrap configuration for the Consul dataplane process and stores it in a shared volume. +1. The `mesh-init` container configures Consul DNS and modifies traffic redirection rules. +1. The `dataplane` container starts and configures itself using the bootstrap configuration generated by the `mesh-init` container. +1. The `dataplane` container starts the Envoy sidecar proxy. +1. The `health-sync` container starts listening for ECS health checks. +1. When the ECS task indicates that the application instance is healthy, the `health-sync` container marks the service as healthy and allows traffic to flow. + +## Consul security components + +Consul leverages AWS components to facilitate its own security features. + +### Auth methods + +Consul on ECS uses the AWS IAM auth method so that ECS tasks can automatically obtain Consul ACL tokens during startup. + +When ACLs are enabled, the Terraform modules for Consul on ECS support AWS IAM auth methods by default. The ECS controller sets up the auth method on the Consul servers. The `mesh-task` module configures the ECS task definition to be compatible with the auth method. + +A unique task IAM role is required for each ECS task family. A task family represents only one Consul service and the task IAM role must encode the Consul service name. As a result, task IAM roles must not be shared by different task families. + +By default, the mesh-task module creates and configures the task IAM role for you. + +To pass an existing IAM role to the mesh-task module using the `task_role` input variable, configure the IAM role as described in ECS Task Role Configuration to be compatible with the AWS IAM auth method. + +### ECS task roles + +The [ECS task role](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html) is an IAM role associated with an ECS task. + +When an ECS task starts up, it runs a `consul login` command. The command obtains credentials for the task role from AWS and then uses those credentials to sign the login request to the AWS IAM auth method. The credentials prove the ECS task's identity to the Consul servers. + +You must configure the task role with the following details for it to be compatible with the AWS IAM auth method: + +- An `iam:GetRole` permission to fetch itself. Refer to [IAM Policies](/consul/docs/security/acl/auth-methods/aws-iam#iam-policies) for additional information. +- A `consul.hashicorp.com.service-name` tag on the task role which contains the Consul service name for the application in this task. +- When using Consul Enterprise, add a `consul.hashicorp.com.namespace` tag on the task role indicating the Consul Enterprise namespace where this service is registered. 
\ No newline at end of file diff --git a/website/content/docs/install/ports.mdx b/website/content/docs/reference/architecture/ports.mdx similarity index 78% rename from website/content/docs/install/ports.mdx rename to website/content/docs/reference/architecture/ports.mdx index dea835f51be1..adf31f547f6c 100644 --- a/website/content/docs/install/ports.mdx +++ b/website/content/docs/reference/architecture/ports.mdx @@ -1,14 +1,14 @@ --- layout: docs page_title: Consul ports reference -description: Find information about the ports that Consul requires for its networking functions, including required ports for HCP Consul Dedicated. Required ports differ for Consul servers and clients. +description: Find information about the ports that Consul requires for its networking functions, including required ports for HCP Consul. Required ports differ for Consul servers and clients. --- # Consul ports reference This page provides reference information about the required ports that Consul exposes for its operations. -You can change or disable Consul's default ports in the [agent configuration file](/consul/docs/agent/config/config-files#ports) or using the [`consul agent` CLI command](/consul/docs/agent/config/cli-flags). +You can change or disable Consul's default ports in the [agent configuration file](/consul/docs/reference/agent/configuration-file/general#ports) or using the [`consul agent` CLI command](/consul/commands/agent). ## Overview @@ -16,13 +16,15 @@ The exact ports that Consul requires depend on your network's specific configura There are slight differences between the port requirements for Consul servers and clients. When a Consul server has services, proxies, or gateways registered to it, then it acts as both a server and client. -[HCP Consul Dedicated servers](/hcp/docs/consul) have distinct port assignments. For more information, refer to [cluster management in the HCP documentation](https://developer.hashicorp.com/hcp/docs/consul/concepts/cluster-management#hashicorp-managed-clusters). +HashiCorp-managed servers deployed using [HCP Consul](/hcp/docs/consul) have distinct port assignments. For more information, refer to [cluster management in the HCP documentation](/hcp/docs/consul/concepts/cluster-management#hashicorp-managed-clusters). + +@include 'alerts/hcp-dedicated-eol.mdx' ## Consul servers -The following table lists port names, their function, their network protocols, their default port numbers, whether they are enabled or disabled by default, port assignments for HCP Consul Dedicated server clusters, and the direction of traffic from the Consul server's perspective. +This table lists port names, their function, their network protocols, their default port numbers, whether they are enabled or disabled by default, port assignments for HashiCorp-managed server clusters, and the direction of traffic from the Consul server's perspective. 
-| Port name | Use | Protocol | Default port | Default status | HCP Consul Dedicated port | Direction | +| Port name | Use | Protocol | Default port | Default status | HCP-managed port | Direction | | :------------------------ | :----------------------------------------- | :---------- | :----------- | :------------- | :--------------- | :-------------------- | | [DNS](#dns) | The DNS server | TCP and UDP | `8600` | Enabled | Unsupported | Incoming | | [HTTP](#http) | The HTTP API | TCP | `8500` | Enabled | Unsupported | Incoming | @@ -47,7 +49,7 @@ The server's DNS port does not need to be open when DNS queries are sent to Cons If you configure recursors in Consul to upstream DNS servers, then you need outbound access to those servers on port `53`. -To resolve Consul DNS requests when using HCP Consul Dedicated, we recommend running Consul clients and resolving DNS against the clients. If your use case cannot accommodate this recommendation, open a support ticket. +To resolve Consul DNS requests when using HashiCorp-managed servers on HCP Consul, we recommend running Consul clients and resolving DNS against the clients. If your use case cannot accommodate this recommendation, open a support ticket. ### HTTP @@ -63,13 +65,13 @@ The server's HTTP port does not need to be open when Consul clients service all The Consul CLI uses the HTTP port to interact with Consul by default. -HCP Consul Dedicated does not support the HTTP port. +HCP Consul does not support the HTTP port. ### HTTPS The following table lists information about the Consul server API's HTTPS port defaults: -| Default port | Protocol | Default status | HCP Consul Dedicated server port | +| Default port | Protocol | Default status | Hashicorp-managed server port | | :----------- | :------- | :------------------ | :---------------------------- | | `8501` | TCP | Disabled by default | `443` | @@ -77,9 +79,9 @@ This port receives incoming traffic from workloads that make HTTPS API calls. The server HTTPS port does not need to be open when Consul clients service all HTTPS API calls. Consul does not use this port for internal communication between servers, clients, dataplanes, gateways, and Envoy proxies. -This port is disabled by default. You can enable it in the [agent configuration file](/consul/docs/agent/config/config-files#ports) or using the [`consul agent` CLI command](/consul/docs/agent/config/cli-flags). +This port is disabled by default. You can enable it in the [agent configuration file](/consul/docs/reference/agent/configuration-file/general#ports) or using the [`consul agent` CLI command](/consul/commands/agent). -HCP Consul Dedicated assigns port `443` to HCP Consul Dedicated clusters, instead of the default `8501`. +HCP Consul assigns port `443` to HashiCorp-managed clusters, instead of the default `8501`. ### gRPC @@ -89,23 +91,23 @@ The following table lists information about the Consul API's gRPC port defaults: | :----------- | :------- | :------------------ | | `8502` | TCP | Disabled by default | -When using [Consul Dataplane](/consul/docs/connect/dataplane), this port receives incoming traffic from the dataplanes. +When using [Consul Dataplane](/consul/docs/architecture/control-plane/dataplane), this port receives incoming traffic from the dataplanes. -We recommend using gRPC TLS instead, so this port is disabled by default. You can enable it in the [agent configuration file](/consul/docs/agent/config/config-files#ports) or using the [`consul agent` CLI command](/consul/docs/agent/config/cli-flags). 
+We recommend using gRPC TLS instead, so this port is disabled by default. You can enable it in the [agent configuration file](/consul/docs/reference/agent/configuration-file/general#ports) or using the [`consul agent` CLI command](/consul/commands/agent). -HCP Consul Dedicated does not support the gRPC port. +HCP Consul does not support the gRPC port. ### gRPC TLS The following table lists information about the Consul API's gRPC with TLS port defaults: -| Default port | Protocol | Default status | HCP Consul Dedicated server port | +| Default port | Protocol | Default status | Hashicorp-managed server port | | :----------- | :------- | :------------------ | :---------------------------- | | `8503` | TCP | Enabled by default | `8502` | -This port receives incoming traffic from the dataplanes when using [Consul Dataplane](/consul/docs/connect/dataplane) instead of client agents. We recommend using `8503` as your conventional gRPC port number because it allows some tools to work automatically. +This port receives incoming traffic from the dataplanes when using [Consul Dataplane](/consul/docs/architecture/control-plane/dataplane) instead of client agents. We recommend using `8503` as your conventional gRPC port number because it allows some tools to work automatically. -In deployments with [cluster peering connections](/consul/docs/connect/cluster-peering), this port provides incoming and outgoing access between remote server peers. Specifically, the dialing peer needs outgoing access and the accepting peer needs incoming access. The address dialed depends on whether or not the cluster peering connection uses mesh gateways and whether the mesh gateway is in remote or local mode: +In deployments with [cluster peering connections](/consul/docs/east-west/cluster-peering), this port provides incoming and outgoing access between remote server peers. Specifically, the dialing peer needs outgoing access and the accepting peer needs incoming access. The address dialed depends on whether or not the cluster peering connection uses mesh gateways and whether the mesh gateway is in remote or local mode: - When not using mesh gateways, servers dial the remote server addresses directly. - When using mesh gateways in local mode, servers dial the local mesh gateway. @@ -113,13 +115,14 @@ In deployments with [cluster peering connections](/consul/docs/connect/cluster-p In both local and remote cases, incoming traffic comes from the mesh gateways. -HCP Consul Dedicated assigns port `8502` to clusters, instead of the default `8503`. +HCP Consul assigns port `8502` to HashiCorp-managed clusters, instead of the default `8503`. 
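As an illustration, the following server agent configuration sketch keeps the conventional gRPC TLS port and points the agent at TLS material; the certificate paths are placeholders and should match your own certificate layout.

```json
{
  "ports": {
    "grpc_tls": 8503
  },
  "tls": {
    "defaults": {
      "ca_file": "/etc/consul.d/certs/consul-agent-ca.pem",
      "cert_file": "/etc/consul.d/certs/server-cert.pem",
      "key_file": "/etc/consul.d/certs/server-key.pem"
    }
  }
}
```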
+ ### Server RPC The following table lists information about the Server RPC port defaults: -| Default port | Protocol | Default status | HCP Consul Dedicated server port | +| Default port | Protocol | Default status | Hashicorp-managed server port | | :----------- | :------- | :----------------- | :---------------------------- | | `8300` | TCP | Enabled by default | `8300` | @@ -133,11 +136,11 @@ When using WAN federation with mesh gateways, Consul servers must accept server The following table lists information about the LAN serf port defaults: -| Default port | Protocol | Default status | HCP Consul Dedicated server port | +| Default port | Protocol | Default status | Hashicorp-managed server port | | :----------- | :-----------| :----------------- | :---------------------------- | | `8301` | TCP and UDP | Enabled by default | `8301` | -This port sends and receives traffic from Consul clients and other Consul servers in the same datacenter. Refer to [gossip protocol](/consul/docs/architecture/gossip) for more information. +This port sends and receives traffic from Consul clients and other Consul servers in the same datacenter. Refer to [gossip protocol](/consul/docs/concept/gossip) for more information. When running Enterprise deployments that use multiple admin partitions, all Consul clients across all partitions still require access to this port on all servers. Servers also require access to this port on all clients. @@ -145,11 +148,11 @@ When running Enterprise deployments that use multiple admin partitions, all Cons The following table lists information about the WAN serf port defaults: -| Default port | Protocol | Default status | HCP Consul Dedicated server port | +| Default port | Protocol | Default status | Hashicorp-managed server port | | :----------- | :---------- | :----------------- | :---------------------------- | | `8302` | TCP and UDP | Enabled by default | `8302` | -This port sends and receives traffic between Consul servers in a federated network. WAN-federated networks require one cluster to serve as the primary datacenter while the others function as secondary datacenters. Refer to [Enabling WAN Federation Control Plane Traffic](/consul/docs/connect/gateways/mesh-gateway/wan-federation-via-mesh-gateways) for additional information. +This port sends and receives traffic between Consul servers in a federated network. WAN-federated networks require one cluster to serve as the primary datacenter while the others function as secondary datacenters. Refer to [Enabling WAN Federation Control Plane Traffic](/consul/docs/east-west/mesh-gateway/enable) for additional information. When using WAN federation without mesh gateways, incoming and outgoing traffic on this port is required between all federated servers. @@ -202,7 +205,7 @@ The following table lists information about the Consul client's HTTPS port defau This port receives incoming traffic from workloads that make HTTPS API calls. Consul does not use this port for internal communication between servers, clients, dataplanes, gateways, and Envoy proxies. -This port is disabled by default. You can enable it in the [agent configuration file](/consul/docs/agent/config/config-files#ports) or using the [`consul agent` CLI command](/consul/docs/agent/config/cli-flags). +This port is disabled by default. You can enable it in the [agent configuration file](/consul/docs/reference/agent/configuration-file/general#ports) or using the [`consul agent` CLI command](/consul/commands/agent). 
When this port is enabled, the Consul CLI uses it to interact with Consul. @@ -228,7 +231,7 @@ The following table lists information about the Consul client's gRPC with TLS po This port receives incoming traffic from the gateways and Envoy proxies registered to this client. We recommend using `8503` as your conventional gRPC port number, as it allows some tools to work automatically. -This port is disabled by default. You can enable it in the [agent configuration file](/consul/docs/agent/config/config-files#ports) or using the [`consul agent` CLI command](/consul/docs/agent/config/cli-flags). +This port is disabled by default. You can enable it in the [agent configuration file](/consul/docs/reference/agent/configuration-file/general#ports) or using the [`consul agent` CLI command](/consul/commands/agent). ### Client LAN serf @@ -238,6 +241,6 @@ The following table lists information about the Consul client's LAN serf port de | :----------- | :---------- | :----------------- | | `8301` | TCP and UDP | Enabled by default | -This port sends and receives traffic from Consul clients and Consul servers in the same datacenter. Refer to [gossip protocol](/consul/docs/architecture/gossip) for more information. +This port sends and receives traffic from Consul clients and Consul servers in the same datacenter. Refer to [gossip protocol](/consul/docs/concept/gossip) for more information. When running Enterprise deployments that use network segments or admin partitions, Consul clients _within_ a segment or partition require access to each other's ports. Clients do not require port access _across_ segments or partitions. diff --git a/website/content/docs/reference/architecture/server.mdx b/website/content/docs/reference/architecture/server.mdx new file mode 100644 index 000000000000..a556495ee761 --- /dev/null +++ b/website/content/docs/reference/architecture/server.mdx @@ -0,0 +1,219 @@ +--- +layout: docs +page_title: Server resource requirements reference +description: >- + Learn about the resource requirements for Consul to run on a server. These requirements can change in production or at scale. +--- + +# Server resource requirements reference + +This page provides a reference for the server resources Consul requires for operation. + +## Introduction + +Since Consul servers run a [consensus protocol](/consul/docs/concept/consensus) to +process all write operations and are contacted on nearly all read operations, server +performance is critical for overall throughput and health of a Consul cluster. Servers +are generally I/O bound for writes because the underlying Raft log store performs a sync +to disk every time an entry is appended. Servers are generally CPU bound for reads since +reads work from a fully in-memory data store that is optimized for concurrent access. + +## Minimum Server Requirements ((#minimum)) + +In Consul 0.7, the default server [performance parameters](/consul/docs/reference/agent/configuration-file/general#performance) +were tuned to allow Consul to run reliably (but relatively slowly) on a server cluster of three +[AWS t2.micro](https://aws.amazon.com/ec2/instance-types/) instances. These thresholds +were determined empirically using a leader instance that was under sufficient read, write, +and network load to cause it to permanently be at zero CPU credits, forcing it to the baseline +performance mode for that instance type. Real-world workloads typically have more bursts of +activity, so this is a conservative and pessimistic tuning strategy. 
+ +This default was chosen based on feedback from users, many of whom wanted a low cost way +to run small production or development clusters with low cost compute resources, at the +expense of some performance in leader failure detection and leader election times. + +The default performance configuration is equivalent to this: + +```json +{ + "performance": { + "raft_multiplier": 5 + } +} +``` + +## Production Server Requirements ((#production)) + +When running Consul 0.7 and later in production, it is recommended to configure the server +[performance parameters](/consul/docs/reference/agent/configuration-file/general#performance) back to Consul's original +high-performance settings. This will let Consul servers detect a failed leader and complete +leader elections much more quickly than the default configuration which extends key Raft +timeouts by a factor of 5, so it can be quite slow during these events. + +The high performance configuration is simple and looks like this: + +```json +{ + "performance": { + "raft_multiplier": 1 + } +} +``` + +This value must take into account the network latency between the servers and the read/write load on the servers. + +The value of `raft_multiplier` is a scaling factor and directly affects the following parameters: + +| Param | Value | | +| ------------------ | -----: | ------: | +| HeartbeatTimeout | 1000ms | default | +| ElectionTimeout | 1000ms | default | +| LeaderLeaseTimeout | 500ms | default | + +By default, Consul uses a scaling factor of `5` (i.e. `raft_multiplier: 5`), which results in the following values: + +| Param | Value | Calculation | +| ------------------ | -----: | ----------: | +| HeartbeatTimeout | 5000ms | 5 x 1000ms | +| ElectionTimeout | 5000ms | 5 x 1000ms | +| LeaderLeaseTimeout | 2500ms | 5 x 500ms | + +~> **NOTE** Wide networks with more latency will perform better with larger values of `raft_multiplier`. + +The trade off is between leader stability and time to recover from an actual +leader failure. A short multiplier minimizes failure detection and election time +but may be triggered frequently in high latency situations. This can cause +constant leadership churn and associated unavailability. A high multiplier +reduces the chances that spurious failures will cause leadership churn but it +does this at the expense of taking longer to detect real failures and thus takes +longer to restore cluster availability. + +Leadership instability can also be caused by under-provisioned CPU resources and +is more likely in environments where CPU cycles are shared with other workloads. +In order for a server to remain the leader, it must send frequent heartbeat +messages to all other servers every few hundred milliseconds. If some number of +these are missing or late due to the leader not having sufficient CPU to send +them on time, the other servers will detect it as failed and hold a new +election. + +It's best to benchmark with a realistic workload when choosing a production server for Consul. +Here are some general recommendations: + +- Consul will make use of multiple cores, and at least 2 cores are recommended. + +- Spurious leader elections can be caused by networking + issues between the servers or insufficient CPU resources. 
Users in cloud environments + often bump their servers up to the next instance class with improved networking + and CPU until leader elections stabilize, and in Consul 0.7 or later the [performance + parameters](/consul/docs/reference/agent/configuration-file/general#performance) configuration now gives you tools + to trade off performance instead of upsizing servers. You can use the [`consul.raft.leader.lastContact` + telemetry](/consul/docs/reference/agent/telemetry#leadership-changes) to observe how the Raft timing is + performing and guide the decision to de-tune Raft performance or add more powerful + servers. + +- For DNS-heavy workloads, configuring all Consul agents in a cluster with the + [`allow_stale`](/consul/docs/reference/agent/configuration-file/dns#allow_stale) configuration option will allow reads to + scale across all Consul servers, not just the leader. Consul 0.7 and later enables stale reads + for DNS by default. See [Stale Reads](/consul/docs/services/discovery/dns-cache#stale-reads) in the + [DNS Caching](/consul/docs/discover/dns/scale) guide for more details. It's also good to set + reasonable, non-zero [DNS TTL values](/consul/docs/services/discovery/dns-cache#ttl-values) if your clients will + respect them. + +- In other applications that perform high volumes of reads against Consul, consider using the + [stale consistency mode](/consul/api-docs/features/consistency#stale) available to allow reads to scale + across all the servers and not just be forwarded to the leader. + +- In Consul 0.9.3 and later, a new [`limits`](/consul/docs/reference/agent/configuration-file/general#limits) configuration is + available on Consul clients to limit the RPC request rate they are allowed to make against the + Consul servers. After hitting the limit, requests will start to return rate limit errors until + time has passed and more requests are allowed. Configuring this across the cluster can help with + enforcing a max desired application load level on the servers, and can help mitigate abusive + applications. + +## Memory Requirements + +Consul server agents operate on a working set of data comprised of key/value +entries, the service catalog, prepared queries, access control lists, and +sessions in memory. These data are persisted through Raft to disk in the form +of a snapshot and log of changes since the previous snapshot for durability. + +When planning for memory requirements, you should typically allocate +enough RAM for your server agents to contain between 2 to 4 times the working +set size. You can determine the working set size by noting the value of +`consul.runtime.alloc_bytes` in the [Telemetry data](/consul/docs/reference/agent/telemetry). + +> NOTE: Consul is not designed to serve as a general purpose database, and you +> should keep this in mind when choosing what data are populated to the +> key/value store. + +## Read/Write Tuning + +Consul is write limited by disk I/O and read limited by CPU. Memory requirements will be dependent on the total size of KV pairs stored and should be sized according to that data (as should the hard drive storage). The limit on a key's value size is `512KB`. + +-> Consul is write limited by disk I/O and read limited by CPU. 
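To make the recommendations above concrete, the following client agent configuration sketch enables stale DNS reads, sets a non-zero DNS TTL, and applies a client-side RPC rate limit; the specific values are illustrative rather than prescriptive and should be tuned to your workload.

```json
{
  "dns_config": {
    "allow_stale": true,
    "service_ttl": {
      "*": "10s"
    }
  },
  "limits": {
    "rpc_rate": 100,
    "rpc_max_burst": 200
  }
}
```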
+ +For **write-heavy** workloads, the total RAM available for overhead must approximately be equal to + +``` +RAM NEEDED = number of keys * average key size * 2-3x +``` + +Since writes must be synced to disk (persistent storage) on a quorum of servers before they are committed, deploying a disk with high write throughput (or an SSD) will enhance performance on the write side. ([Documentation](/consul/commands/agent#_data_dir)) + +For a **read-heavy** workload, configure all Consul server agents with the `allow_stale` DNS option, or query the API with the `stale` [consistency mode](/consul/api-docs/features/consistency). By default, all queries made to the server are RPC forwarded to and serviced by the leader. By enabling stale reads, any server will respond to any query, thereby reducing overhead on the leader. Typically, the stale response is `100ms` or less from consistent mode but it drastically improves performance and reduces latency under high load. + +If the leader server is out of memory or the disk is full, the server eventually stops responding, loses its election and cannot move past its last commit time. However, by configuring `max_stale` and setting it to a large value, Consul will continue to respond to queries during such outage scenarios. ([max_stale documentation](/consul/docs/reference/agent/configuration-file/dns#max_stale)). + +It should be noted that `stale` is not appropriate for coordination where strong consistency is important (i.e. locking or application leader election). For critical cases, the optional `consistent` API query mode is required for true linearizability; the trade off is that this turns a read into a full quorum write so requires more resources and takes longer. + +**Read-heavy** clusters may take advantage of the [enhanced reading](/consul/docs/manage/scale/read-replica) feature (Enterprise) for better scalability. This feature allows additional servers to be introduced as non-voters. Being a non-voter, the server will still participate in data replication, but it will not block the leader from committing log entries. + +Consul's agents use network sockets for communicating with the other nodes (gossip) and with the server agent. In addition, file descriptors are also opened for watch handlers, health checks, and log files. For a **write heavy** cluster, the `ulimit` size must be increased from the default value (`1024`) to prevent the leader from running out of file descriptors. + +To prevent any CPU spikes from a misconfigured client, RPC requests to the server should be [rate limited](/consul/docs/reference/agent/configuration-file/general#limits). + +~> **NOTE** Rate limiting is configured on the client agent only. + +In addition, two [performance indicators](/consul/docs/reference/agent/telemetry) — `consul.runtime.alloc_bytes` and `consul.runtime.heap_objects` — can help diagnose if the current sizing is not adequately meeting the load. + +## Service Mesh Certificate Signing CPU Limits + +If you enable [service mesh](/consul/docs/connect), the leader server will need +to perform public key signing operations for every service instance in the +cluster. Typically these operations are fast on modern hardware, however when +the CA is changed or its key rotated, the leader will face an influx of +requests for new certificates for every service instance running. 
+
+While the client agents distribute these randomly over 30 seconds to avoid an
+immediate thundering herd, they don't have enough information to tune that
+period based on the number of certificates in use in the cluster so picking
+longer smearing results in artificially slow rotations for small clusters.
+
+Smearing requests over 30s is sufficient to bring RPC load to a reasonable level
+in all but the very largest clusters, but the extra CPU load from cryptographic
+operations could impact the server's normal work. To limit that, Consul since
+1.4.1 exposes two ways to limit the impact certificate signing has on the leader:
+[`csr_max_per_second`](/consul/docs/reference/agent/configuration-file/service-mesh#ca_csr_max_per_second) and
+[`csr_max_concurrent`](/consul/docs/reference/agent/configuration-file/service-mesh#ca_csr_max_concurrent).
+
+By default we set a limit of 50 per second which is reasonable on modest
+hardware but may be too low and impact rotation times if more than 1500 service
+instances are using service mesh in the cluster. `csr_max_per_second` is likely best
+if you have fewer than four cores available since a whole core being used by
+signing is likely to impact the server stability if it's all or a large portion
+of the cores available. The downside is that you need to capacity plan: how many
+service instances will need service mesh certificates? What CSR rate can your server
+tolerate without impacting stability? How fast do you want CA rotations to
+process?
+
+For larger production deployments, we generally recommend multiple CPU cores for
+servers to handle the normal workload. With four or more cores available, it's
+simpler to limit signing CPU impact with `csr_max_concurrent` rather than tune
+the rate limit. This effectively sets how many CPU cores can be monopolized by
+certificate signing work (although it doesn't pin that work to specific cores).
+In this case `csr_max_per_second` should be disabled (set to `0`).
+
+For example, if you have an 8-core server, setting `csr_max_concurrent` to `1`
+would allow you to process CSRs as fast as a single core can (which is likely
+sufficient for the very large clusters), without consuming all available
+CPU cores and impacting normal server work or stability. diff --git a/website/content/docs/reference/cli/consul-aws.mdx b/website/content/docs/reference/cli/consul-aws.mdx new file mode 100644 index 000000000000..917f1c37d762 --- /dev/null +++ b/website/content/docs/reference/cli/consul-aws.mdx @@ -0,0 +1,48 @@
+---
+layout: docs
+page_title: Consul AWS CLI reference
+description: >-
+  The Consul AWS tool syncs services between AWS Cloud Map and a Consul datacenter and enables service discovery across both AWS Cloud Map and Consul.
+---
+
+# Consul on AWS CLI reference
+
+The [Consul AWS CLI](https://github.com/hashicorp/consul-aws), `consul-aws`, lets you sync services between AWS Cloud Map and a Consul datacenter.
+
+Refer to the [Sync Consul service catalog with AWS Cloud Map page](/consul/docs/register/service/aws) for installation instructions and usage examples.
+
+## Usage
+
+Usage: `consul-aws [options]`
+
+```shell-session
+$ consul-aws
+Usage: consul-aws [--version] [--help] []
+
+Available commands are:
+ sync-catalog Sync AWS services and Consul services.
+ version Prints the version
+```
+
+## Commands
+
+- `sync-catalog`: Syncs AWS services and Consul services.
+- `version`: Prints the version.
+
+### `sync-catalog`
+
+The `sync-catalog` command syncs the services between AWS Cloud Map and Consul. 
It accepts several flags specific to the tool as well as the general flags for connecting to the cluster including `-http-addr`, `-token`, and `-ca-file`. + +|Flag|Description|Default| +|---|---|---| +|`-aws-dns-ttl=`|DNS TTL for services created in AWS CloudMap in seconds.|`60`| +|`-aws-namespace-id=`|The AWS namespace to sync with Consul services.|| +|`-aws-poll-interval=`|The interval between fetching from AWS CloudMap. Accepts a sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "10s", "1.5m".|`30s`| +|`-aws-service-prefix=`|A prefix to prepend to all services written to AWS from Consul. If this is not set then services will have no prefix.|| +|`-consul-service-prefix=`|A prefix to prepend to all services written to Consul from AWS. If this is not set then services will have no prefix.|| +|`-to-aws`|If true, Consul services will be synced to AWS.|`false`| +|`-to-consul`|If true, AWS services will be synced to Consul.|`false`| + +### `version` + +The `version` commands prints out the version of the `consul-aws` tool. \ No newline at end of file diff --git a/website/content/docs/reference/cli/consul-k8s.mdx b/website/content/docs/reference/cli/consul-k8s.mdx new file mode 100644 index 000000000000..d78e91203ab9 --- /dev/null +++ b/website/content/docs/reference/cli/consul-k8s.mdx @@ -0,0 +1,1315 @@ +--- +layout: docs +page_title: Consul on Kubernetes CLI reference +description: >- + The Consul on Kubernetes CLI tool enables you to manage Consul with the `consul-k8s` command instead of direct interaction with Helm, kubectl, or Consul's CLI. Learn about commands, their flags, and review examples in this reference guide. +--- + +# Consul on Kubernetes CLI reference + +The Consul on Kubernetes CLI, `consul-k8s`, lets you manage Consul without interacting directly with Helm, the [Consul CLI](/consul/commands), or `kubectl`. + +This topic describes how to install `consul-k8s`, and lists the commands and available options for using `consul-k8s`. + +## Install the CLI + +The following instructions describe how to install the latest version of `consul-k8s`, as well as earlier versions, so that you can install an appropriate version of tool for your control plane. + + + +You must install the correct version of the CLI for your Consul on Kubernetes deployment. To deploy a previous version of Consul on Kubernetes, download the specific version of the CLI that matches the version of the control plane that you would like to deploy. Refer to the [compatibility matrix](/consul/docs/upgrade/k8s/compatibility) for details. + + + +### Install the latest version + +Complete the following instructions for a fresh installation of Consul on Kubernetes. + + + + +The [Homebrew](https://brew.sh) package manager is required to complete the following installation instructions. The Homebrew formulae always installs the latest version of a binary. + +1. Install the HashiCorp `tap`, which is a repository of all Homebrew packages for HashiCorp: + ```shell-session + $ brew tap hashicorp/tap + ``` + +1. Install the Consul K8s CLI with `hashicorp/tap/consul` formula. + ```shell-session + $ brew install hashicorp/tap/consul-k8s + ``` + +1. (Optional) Issue the `consul-k8s version` command to verify the installation: + + ```shell-session + $ consul-k8s version + consul-k8s 1.0 + ``` + + + + +1. Add the HashiCorp GPG key. + + ```shell-session + $ curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add - + ``` + +1. Add the HashiCorp apt repository. 
+ + ```shell-session + $ sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main" + ``` + +1. Run apt-get install to install the `consul-k8s` CLI. + + ```shell-session + $ sudo apt-get update && sudo apt-get install consul-k8s + ``` + +1. (Optional) Issue the `consul-k8s version` command to verify the installation. + + ```shell-session + $ consul-k8s version + consul-k8s 1.0 + ``` + + + + + +1. Install `yum-config-manager` to manage your repositories. + + ```shell-session + $ sudo yum install -y yum-utils + ``` + +1. Use `yum-config-manager` to add the official HashiCorp Linux repository. + + ```shell-session + $ sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo + ``` + +1. Install the `consul-k8s` CLI. + + ```shell-session + $ sudo yum -y install consul-k8s + ``` + +1. (Optional) Issue the `consul-k8s version` command to verify the installation. + + ```shell-session + $ consul-k8s version + consul-k8s 1.0 + ``` + + + + + +### Install a previous version + +Complete the following instructions to install a specific version of the CLI so that your tool is compatible with your Consul on Kubernetes control plane. Refer to the [compatibility matrix](/consul/docs/upgrade/k8s/compatibility) for additional information. + + + + + +1. Download the appropriate version of Consul K8s CLI using the following `curl` command. Set the `$VERSION` environment variable to the appropriate version for your deployment. + + ```shell-session + $ export VERSION=1.1.1 && \ + curl --location "https://releases.hashicorp.com/consul-k8s/${VERSION}/consul-k8s_${VERSION}_darwin_amd64.zip" --output consul-k8s-cli.zip + ``` + +1. Unzip the zip file output to extract the `consul-k8s` CLI binary. This overwrites existing files and also creates a `.consul-k8s` subdirectory in your `$HOME` folder. + + ```shell-session + $ unzip -o consul-k8s-cli.zip -d ~/consul-k8s + ``` + +1. Add the path to your directory. In order to persist the `$PATH` across sessions, dd it to your shellrc (i.e. shell run commands) file for the shell used by your terminal. + + ```shell-session + $ export PATH=$PATH:$HOME/consul-k8s + ``` + +1. (Optional) Issue the `consul-k8s version` command to verify the installation. + + ```shell-session + $ consul-k8s version + consul-k8s 1.0 + ``` + + + + + +1. Add the HashiCorp GPG key. + + ```shell-session + $ curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add - + ``` + +1. Add the HashiCorp apt repository. + + ```shell-session + $ sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main" + ``` + +1. Run apt-get install to install the `consul-k8s` CLI. + + ```shell-session + $ export VERSION=0.39.0 && \ + sudo apt-get update && sudo apt-get install consul-k8s=${VERSION} + ``` + +1. (Optional) Issue the `consul-k8s version` command to verify the installation. + + ```shell-session + $ consul-k8s version + consul-k8s 1.0 + ``` + + + + + +1. Install `yum-config-manager` to manage your repositories. + + ```shell-session + $ sudo yum install -y yum-utils + ``` + +1. Use `yum-config-manager` to add the official HashiCorp Linux repository. + + ```shell-session + $ sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo + ``` + +1. Install the `consul-k8s` CLI. + + ```shell-session + $ export VERSION=-1.0 && \ + sudo yum -y install consul-k8s-${VERSION}-1 + ``` + +2. (Optional) Issue the `consul-k8s version` command to verify the installation. 
+ + ```shell-session + $ consul-k8s version + consul-k8s 1.0 + ``` + + + + + +## Usage + +The Consul on Kubernetes CLI uses the following syntax: + +```shell-session +$ consul-k8s +``` + +## Commands + +You can use the following commands with `consul-k8s`. + + - [`config`](#config): Interact with helm configuration. + - [`config read`](#config-read): Read helm configuration of a Consul installation. + - [`install`](#install): Install Consul on Kubernetes. + - [`proxy`](#proxy): Inspect Envoy proxies managed by Consul. + - [`proxy list`](#proxy-list): List all Pods running proxies managed by Consul. + - [`proxy read`](#proxy-read): Inspect the Envoy configuration for a given Pod. + - [`proxy log`](#proxy-log): Inspect and modify the Envoy logging configuration for a given Pod. + - [`proxy stats`](#proxy-stats): View the Envoy cluster stats for a given Pod. + - [`status`](#status): Check the status of a Consul installation on Kubernetes. + - [`troubleshoot`](#troubleshoot): Troubleshoot Consul service mesh and networking issues from a given pod. + - [`uninstall`](#uninstall): Uninstall Consul deployment. + - [`upgrade`](#upgrade): Upgrade Consul on Kubernetes from an existing installation. + - [`version`](#version): Print the version of the Consul on Kubernetes CLI. + +### `config` + +The `config` command exposes the `read` subcommand that allows to read the helm configuration of a Consul installation. + +- [`config read`](#config-read): Read helm configuration of a Consul installation. + +### `config read` + +```shell-session +$ consul-k8s config read +``` + +| Flag | Description | Default | +| ------------------------------------ | ----------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------- | +| `-all-namespaces`, `-A` | `Boolean` List pods in all Kubernetes namespaces. | `false` | +| `-namespace`, `-n` | `String` The Kubernetes namespace to list proxies in. | Current [kubeconfig](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) namespace. | + +Refer to the [Global Options](#global-options) for additional options that you can use +when installing Consul on Kubernetes. + +#### Example Commands + +The following example command reads the Helm configuration in the `myNS` namespace. + +```shell-session +$ consul-k8s config read -namespace=myNS +``` + +``` +global: + cloud: + clientId: + secretKey: client-id + secretName: consul-hcp-client-id + clientSecret: + secretKey: client-secret + secretName: consul-hcp-client-secret + enabled: true + resourceId: + secretKey: resource-id + secretName: consul-hcp-resource-id + image: hashicorp/consul:1.14.7 + name: consul +``` + +### `install` + +The `install` command installs Consul on your Kubernetes cluster. + +```shell-session +$ consul-k8s install +``` + +The following options are available. + +| Flag | Description | Default | +| ----------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------| +| `-auto-approve` | Boolean value that enables you to skip the installation confirmation prompt. 
| `false` | +| `-dry-run` | Boolean value that validates the installation and returns a summary. | `false` | +| `-config-file` | String value that specifies the path to a file containing custom installation configurations, e.g., Consul Helm chart values file.
    You can use the `-config-file` flag multiple times to specify multiple files. | none | +| `-namespace` | String value that specifies the namespace of the Consul installation. | `consul` | +| `-preset` | String value that installs Consul based on a preset configuration. You can specify the following values:
    `demo`: Installs a single replica server with sidecar injection enabled; useful for testing service mesh functionality.
    `secure`: Installs a single replica server with sidecar injection, ACLs, and TLS enabled; useful for testing service mesh functionality. | Configuration of the Consul Helm chart. | +| `-set` | String value that enables you to set a customizable value. This flag is comparable to the `helm install --set` flag.
    You can use the `-set` flag multiple times to set multiple values.
    Consul Helm chart values are supported. | none | +| `-set-file` | String value that specifies the name of an arbitrary config file. This flag is comparable to the `helm install --set-file`
    flag. The contents of the file will be used to set a customizable value. You can use the `-set-file` flag multiple times to specify multiple files.
    Consul Helm chart values are supported. | none | +| `-set-string` | String value that enables you to set a customizable string value. This flag is comparable to the `helm install --set-string`
    flag. You can use the `-set-string` flag multiple times to specify multiple strings.
    Consul Helm chart values are supported. | none | +| `-timeout` | Specifies how long to wait for the installation process to complete before timing out. The value is specified with an integer and string value indicating a unit of time.
    The following units are supported:
    `ms` (milliseconds)
    `s` (seconds)
    `m` (minutes)
    In the following example, installation will time out after one minute:&#xD;
    `consul-k8s install -timeout 1m` | `10m` | +| `-wait` | Boolean value that determines if Consul should wait for resources in the installation to be ready before exiting the command. | `true` | +| `-verbose`, `-v` | Boolean value that specifies whether to output verbose logs from the install command with the status of resources being installed. | `false` | +| `-help`, `-h` | Prints usage information for this option. | none | + +See [Global Options](#global-options) for additional commands that you can use when installing Consul on Kubernetes. + +#### Example Commands + +The following example command installs Consul in the `myNS` namespace according to the `secure` preset. + +```shell-session +$ consul-k8s install -preset=secure -namespace=myNS +``` + +The following example commands install Consul on Kubernetes using custom values, files, or strings that are set via flags. The underlying Consul-on-Kubernetes Helm chart uses the flags to customize the installation. The flags are comparable to the `helm install` [flags](https://helm.sh/docs/helm/helm_install/#helm-install). + +```shell-session +$ consul-k8s install -set key=value +``` + +```shell-session +$ consul-k8s install -set key1=value1 -set key2=value2 +``` +```shell-session +$ consul-k8s install -set-file config1=value1.conf +``` + +```shell-session +$ consul-k8s install -set-file config1=value1.conf -set-file config2=value2.conf +``` + +```shell-session +$ consul-k8s install -set-string key=value-bool +``` + +### `proxy` + +The `proxy` command exposes two subcommands for interacting with proxies managed by +Consul in your Kubernetes Cluster. + +- [`proxy list`](#proxy-list): List all Pods running proxies managed by Consul. +- [`proxy read`](#proxy-read): Inspect the Envoy configuration for a given Pod. +- [`proxy log`](#proxy-log): Inspect and modify the Envoy logging configuration for a given Pod. +- [`proxy stats`](#proxy-stats): View the Envoy cluster stats for a given Pod. + +### `proxy list` + +```shell-session +$ consul-k8s proxy list +``` + +| Flag | Description | Default | +| ------------------------------------ | ----------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------- | +| `-all-namespaces`, `-A` | `Boolean` List pods in all Kubernetes namespaces. | `false` | +| `-namespace`, `-n` | `String` The Kubernetes namespace to list proxies in. | Current [kubeconfig](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) namespace. | +| `-output-format`, `-o` | `String` If set to json, outputs the result in json format, else table format | `table` + +Refer to the [Global Options](#global-options) for additional options that you can use +when installing Consul on Kubernetes. + +This command lists proxies and their `Type`. Types of proxies include: + +- `Sidecar`: The majority of pods in the cluster are `Sidecar` types. They run the + proxy as a sidecar to connect the pod as a service in the mesh. +- `API Gateway`: These pods run a proxy to manage connections with networks + outside of the Consul cluster. Read more about [API gateways](/consul/docs/api-gateway). +- `Ingress Gateway`: These pods run a proxy to manage ingress into the + Kubernetes cluster. Read more about [ingress gateways](/consul/docs/north-south/ingress-gateway/k8s). +- `Terminating Gateway`: These pods run a proxy to control connections to + external services. 
Read more about [terminating gateways](/consul/docs/register/external/terminating-gateway/k8s). +- `Mesh Gateway`: These pods run a proxy to manage connections between + Consul clusters connected using mesh federation. Read more about [Consul Mesh Federation](/consul/docs/east-west/wan-federation/k8s). + +#### Example Commands + +Display all pods in the current Kubernetes namespace that run proxies managed +by Consul. + +```shell-session +$ consul-k8s proxy list +``` + +``` +Namespace: default + +Name Type +backend-658b679b45-d5xlb Sidecar +client-767ccfc8f9-6f6gx Sidecar +client-767ccfc8f9-f8nsn Sidecar +client-767ccfc8f9-ggrtx Sidecar +frontend-676564547c-v2mfq Sidecar +``` + +Display all pods in the `consul` Kubernetes namespace that run proxies managed +by Consul. + +```shell-session +$ consul-k8s proxy list -n consul +``` + +``` +Namespace: consul + +Name Type +consul-ingress-gateway-6fb5544485-br6fl Ingress Gateway +consul-ingress-gateway-6fb5544485-m54sp Ingress Gateway +``` + +Display all Pods across all namespaces that run proxies managed by Consul. + +```shell-session +$ consul-k8s proxy list -A +Namespace: All namespaces + +Namespace Name Type +consul consul-ingress-gateway-6fb5544485-br6fl Ingress Gateway +consul consul-ingress-gateway-6fb5544485-m54sp Ingress Gateway +default backend-658b679b45-d5xlb Sidecar +default client-767ccfc8f9-6f6gx Sidecar +default client-767ccfc8f9-f8nsn Sidecar +default client-767ccfc8f9-ggrtx Sidecar +default frontend-676564547c-v2mfq Sidecar +``` + +Display all Pods across all namespaces that run proxies managed by Consul in JSON format + +```shell-session +$ consul-k8s proxy list -A -o json +Namespace: All namespaces + +[ + { + "Name": "frontend-6fd97b8fb5-spqb8", + "Namespace": "default", + "Type": "Sidecar" + }, + { + "Name": "nginx-6d7469694f-p5wrz", + "Namespace": "default", + "Type": "Sidecar" + }, + { + "Name": "payments-667d87bf95-ktb8n", + "Namespace": "default", + "Type": "Sidecar" + }, + { + "Name": "product-api-7c4d77c7c9-g4g2b", + "Namespace": "default", + "Type": "Sidecar" + }, + { + "Name": "product-api-db-685c844cb-k5l8f", + "Namespace": "default", + "Type": "Sidecar" + }, + { + "Name": "public-api-567d949866-cgksl", + "Namespace": "default", + "Type": "Sidecar" + } +] +``` + +### `proxy read` + +The `proxy read` command allows you to inspect the configuration of Envoy proxies running on a given Pod. + +```shell-session +$ consul-k8s proxy read +``` + +The command takes a required value, ``. This should be the full name +of a Kubernetes Pod. If a Pod is running more than one Envoy proxy managed by +Consul, as in the [Multiport configuration](/consul/docs/k8s/connect#kubernetes-pods-with-multiple-ports), +configuration for all proxies in the Pod will be displayed. + +The following options are available. + +| Flag | Description | Default | +| ------------------------------- | ------------------------------------------------------------------------------------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------- | +| `-namespace`, `-n` | `String` The namespace where the target Pod can be found. | Current [kubeconfig](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) namespace. | +| `-output`, `-o` | `String` Output the Envoy configuration as 'table', 'json', or 'raw'. | `'table'` | +| `-clusters` | `Boolean` Filter output to only show clusters. 
| `false` | +| `-endpoints` | `Boolean` Filter output to only show endpoints. | `false` | +| `-listeners` | `Boolean` Filter output to only show listeners. | `false` | +| `-routes` | `Boolean` Filter output to only show routes. | `false` | +| `-secrets` | `Boolean` Filter output to only show secrets. | `false` | +| `-address` | `String` Filter clusters, endpoints, and listeners output to only those with endpoint addresses which contain the given value. | `""` | +| `-fqdn` | `String` Filter cluster output to only clusters with a fully qualified domain name which contains the given value. | `""` | +| `-port` | `Int` Filter endpoints output to only endpoints with the given port number. | `-1` which does not filter by port | + +#### Example commands + +Get the configuration summary for the Envoy proxy running on the Pod +`backend-658b679b45-d5xlb`. + +```shell-session +$ consul-k8s proxy read backend-658b679b45-d5xlb +Envoy configuration for backend-658b679b45-d5xlb in namespace default: + +==> Clusters (5) +Name FQDN Endpoints Type Last Updated +local_agent local_agent 192.168.79.187:8502 STATIC 2022-05-13T04:22:39.553Z +client client.default.dc1.internal.bc3815c2-1a0f-f3ff-a2e9-20d791f08d00.consul 192.168.18.110:20000, 192.168.52.101:20000, 192.168.65.131:20000 EDS 2022-08-08T12:02:07.471Z +frontend frontend.default.dc1.internal.bc3815c2-1a0f-f3ff-a2e9-20d791f08d00.consul 192.168.63.120:20000 EDS 2022-08-08T12:02:07.354Z +local_app local_app 127.0.0.1:8080 STATIC 2022-05-13T04:22:39.655Z +original-destination original-destination ORIGINAL_DST 2022-05-13T04:22:39.743Z + + +==> Endpoints (6) +Address:Port Cluster Weight Status +192.168.79.187:8502 local_agent 1.00 HEALTHY +192.168.18.110:20000 client.default.dc1.internal.bc3815c2-1a0f-f3ff-a2e9-20d791f08d00.consul 1.00 HEALTHY +192.168.52.101:20000 client.default.dc1.internal.bc3815c2-1a0f-f3ff-a2e9-20d791f08d00.consul 1.00 HEALTHY +192.168.65.131:20000 client.default.dc1.internal.bc3815c2-1a0f-f3ff-a2e9-20d791f08d00.consul 1.00 HEALTHY +192.168.63.120:20000 frontend.default.dc1.internal.bc3815c2-1a0f-f3ff-a2e9-20d791f08d00.consul 1.00 HEALTHY +127.0.0.1:8080 local_app 1.00 HEALTHY + +==> Listeners (2) +Name Address:Port Direction Filter Chain Match Filters Last Updated +public_listener 192.168.69.179:20000 INBOUND Any * to local_app/ 2022-08-08T12:02:22.261Z +outbound_listener 127.0.0.1:15001 OUTBOUND 10.100.134.173/32, 240.0.0.3/32 to client.default.dc1.internal.bc3815c2-1a0f-f3ff-a2e9-20d791f08d00.consul 2022-07-18T15:31:03.246Z + 10.100.31.2/32, 240.0.0.5/32 to frontend.default.dc1.internal.bc3815c2-1a0f-f3ff-a2e9-20d791f08d00.consul + Any to original-destination + +==> Routes (1) +Name Destination Cluster Last Updated +public_listener local_app/ 2022-08-08T12:02:22.260Z + +==> Secrets (0) +Name Type Last Updated + + +``` + +Get the Envoy configuration summary for all clusters with a fully qualified +domain name that includes `"default"`. Display only clusters and listeners. 
+ +```shell-session +$ consul-k8s proxy read backend-658b679b45-d5xlb -fqdn default -clusters -listeners +==> Filters applied + Fully qualified domain names containing: default + +Envoy configuration for backend-658b679b45-d5xlb in namespace default: + +==> Clusters (2) +Name FQDN Endpoints Type Last Updated +client client.default.dc1.internal.bc3815c2-1a0f-f3ff-a2e9-20d791f08d00.consul 192.168.18.110:20000, 192.168.52.101:20000, 192.168.65.131:20000 EDS 2022-08-08T12:02:07.471Z +frontend frontend.default.dc1.internal.bc3815c2-1a0f-f3ff-a2e9-20d791f08d00.consul 192.168.63.120:20000 EDS 2022-08-08T12:02:07.354Z + + +==> Listeners (2) +Name Address:Port Direction Filter Chain Match Filters Last Updated +public_listener 192.168.69.179:20000 INBOUND Any * to local_app/ 2022-08-08T12:02:22.261Z +outbound_listener 127.0.0.1:15001 OUTBOUND 10.100.134.173/32, 240.0.0.3/32 to client.default.dc1.internal.bc3815c2-1a0f-f3ff-a2e9-20d791f08d00.consul 2022-07-18T15:31:03.246Z + 10.100.31.2/32, 240.0.0.5/32 to frontend.default.dc1.internal.bc3815c2-1a0f-f3ff-a2e9-20d791f08d00.consul + Any to original-destination + +``` + +Get the Envoy configuration summary in a JSON format. Note that this is not the +same as the raw configuration dump from the admin API. This information is the +same as what is displayed in the table output above, but in a JSON format. + +```shell-session +$ consul-k8s proxy read backend-658b679b45-d5xlb -o json +{ + "backend-658b679b45-d5xlb": { + "clusters": [ + { + "Name": "local_agent", + "FullyQualifiedDomainName": "local_agent", + "Endpoints": [ + "192.168.79.187:8502" + ], + "Type": "STATIC", + "LastUpdated": "2022-05-13T04:22:39.553Z" + }, + { + "Name": "client", + "FullyQualifiedDomainName": "client.default.dc1.internal.bc3815c2-1a0f-f3ff-a2e9-20d791f08d00.consul", + "Endpoints": [ + "192.168.18.110:20000", + "192.168.52.101:20000", + "192.168.65.131:20000" + ], + "Type": "EDS", + "LastUpdated": "2022-08-08T12:02:07.471Z" + }, + { + "Name": "frontend", + "FullyQualifiedDomainName": "frontend.default.dc1.internal.bc3815c2-1a0f-f3ff-a2e9-20d791f08d00.consul", + "Endpoints": [ + "192.168.63.120:20000" + ], + "Type": "EDS", + "LastUpdated": "2022-08-08T12:02:07.354Z" + }, + { + "Name": "local_app", + "FullyQualifiedDomainName": "local_app", + "Endpoints": [ + "127.0.0.1:8080" + ], + "Type": "STATIC", + "LastUpdated": "2022-05-13T04:22:39.655Z" + }, + { + "Name": "original-destination", + "FullyQualifiedDomainName": "original-destination", + "Endpoints": [], + "Type": "ORIGINAL_DST", + "LastUpdated": "2022-05-13T04:22:39.743Z" + } + ], + "endpoints": [ + { + "Address": "192.168.79.187:8502", + "Cluster": "local_agent", + "Weight": 1, + "Status": "HEALTHY" + }, + { + "Address": "192.168.18.110:20000", + "Cluster": "client.default.dc1.internal.bc3815c2-1a0f-f3ff-a2e9-20d791f08d00.consul", + "Weight": 1, + "Status": "HEALTHY" + }, + { + "Address": "192.168.52.101:20000", + "Cluster": "client.default.dc1.internal.bc3815c2-1a0f-f3ff-a2e9-20d791f08d00.consul", + "Weight": 1, + "Status": "HEALTHY" + }, + { + "Address": "192.168.65.131:20000", + "Cluster": "client.default.dc1.internal.bc3815c2-1a0f-f3ff-a2e9-20d791f08d00.consul", + "Weight": 1, + "Status": "HEALTHY" + }, + { + "Address": "192.168.63.120:20000", + "Cluster": "frontend.default.dc1.internal.bc3815c2-1a0f-f3ff-a2e9-20d791f08d00.consul", + "Weight": 1, + "Status": "HEALTHY" + }, + { + "Address": "127.0.0.1:8080", + "Cluster": "local_app", + "Weight": 1, + "Status": "HEALTHY" + } + ], + "listeners": [ + { + "Name": 
"public_listener", + "Address": "192.168.69.179:20000", + "FilterChain": [ + { + "Filters": [ + "* to local_app/" + ], + "FilterChainMatch": "Any" + } + ], + "Direction": "INBOUND", + "LastUpdated": "2022-08-08T12:02:22.261Z" + }, + { + "Name": "outbound_listener", + "Address": "127.0.0.1:15001", + "FilterChain": [ + { + "Filters": [ + "to client.default.dc1.internal.bc3815c2-1a0f-f3ff-a2e9-20d791f08d00.consul" + ], + "FilterChainMatch": "10.100.134.173/32, 240.0.0.3/32" + }, + { + "Filters": [ + "to frontend.default.dc1.internal.bc3815c2-1a0f-f3ff-a2e9-20d791f08d00.consul" + ], + "FilterChainMatch": "10.100.31.2/32, 240.0.0.5/32" + }, + { + "Filters": [ + "to original-destination" + ], + "FilterChainMatch": "Any" + } + ], + "Direction": "OUTBOUND", + "LastUpdated": "2022-07-18T15:31:03.246Z" + } + ], + "routes": [ + { + "Name": "public_listener", + "DestinationCluster": "local_app/", + "LastUpdated": "2022-08-08T12:02:22.260Z" + } + ], + "secrets": [] + } +} +``` + +Get the raw Envoy configuration dump and clusters information for the Envoy +proxy running on the Pod `backend-658b679b45-d5xlb`. The example command returns +the raw configuration for each service as JSON. You can use the +[JQ command line tool](https://stedolan.github.io/jq/) to index into +the configuration for the service you want to inspect. + +Refer to the [Envoy config dump documentation](https://www.envoyproxy.io/docs/envoy/latest/api-v3/admin/v3/config_dump.proto) +for more information on the structure of the config dump. + +The following output is truncated for brevity. + +```shell-session +$ consul-k8s proxy read backend-658b679b45-d5xlb -o raw +{ + "backend-658b679b45-d5xlb": { + "clusters": { + // [-- snip 372 lines --] output from the Envoy admin interface's /clusters endpoint. + }, + "config_dump": { + // [-- snip 1816 lines --] output from the Envoy admin interface's /config_dump?include_eds endpoint. + } +} +``` + +### `proxy log` + +The `proxy log` command allows you to inspect and modify the logging configuration of Envoy proxies running on a given Pod. + +```shell-session +$ consul-k8s proxy log +``` + +The command takes a required value, ``. This should be the full name +of a Kubernetes Pod. If a Pod is running more than one Envoy proxy managed by +Consul, as in the [Multiport configuration](/consul/docs/k8s/connect#kubernetes-pods-with-multiple-ports), +the terminal displays configuration information for all proxies in the pod. + +The following options are available. + +| Flag | Description | Default | +| ---------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------- | +| `-namespace`, `-n` | `String` Specifies the namespace containing the target Pod. | Current [kubeconfig](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) namespace. | +| `-update-level`, `-u` | `String` Specifies the logger (optional) and the level to update.

    Use the following format to configure the same level for all loggers: `-update-level <level>`.

    You can also specify a comma-separated list to configure levels for specific loggers, for example: `-update-level grpc:warning,http:info`.

    | none | +| `-reset`, `-r` | `String` Reset the log levels for all loggers back to the default of `info` | `info` | + +#### Example commands +In the following example, Consul returns the log levels for all of an Envoy proxy's loggers in a pod with the ID `server-697458b9f8-4vr29`: + +```shell-session +$ consul-k8s proxy log server-697458b9f8-4vr29 +Envoy log configuration for server-697458b9f8-4vr29 in namespace default: + +==> Log Levels for server-697458b9f8-4vr29 +Name Level +rds info +backtrace info +hc info +http info +io info +jwt info +rocketmq info +matcher info +runtime info +redis info +stats info +tap info +alternate_protocols_cache info +grpc info +init info +quic info +thrift info +wasm info +aws info +conn_handler info +ext_proc info +hystrix info +tracing info +dns info +oauth2 info +connection info +health_checker info +kafka info +mongo info +config info +admin info +forward_proxy info +misc info +websocket info +dubbo info +happy_eyeballs info +main info +client info +lua info +udp info +cache_filter info +filter info +multi_connection info +quic_stream info +router info +http2 info +key_value_store info +secret info +testing info +upstream info +assert info +ext_authz info +rbac info +decompression info +envoy_bug info +file info +pool info +``` + +The following command updates the log levels for all loggers of an Envoy proxy to `warning`. +```shell-session +$ consul-k8s proxy log server-697458b9f8-4vr29 -update-level warning +Envoy log configuration for server-697458b9f8-4vr29 in namespace default: + +==> Log Levels for server-697458b9f8-4vr29 +Name Level +pool warning +rbac warning +tracing warning +aws warning +cache_filter warning +decompression warning +init warning +assert warning +client warning +misc warning +udp warning +config warning +hystrix warning +key_value_store warning +runtime warning +admin warning +dns warning +jwt warning +redis warning +quic warning +alternate_protocols_cache warning +conn_handler warning +ext_proc warning +http warning +oauth2 warning +ext_authz warning +http2 warning +kafka warning +mongo warning +router warning +thrift warning +grpc warning +matcher warning +hc warning +multi_connection warning +wasm warning +dubbo warning +filter warning +upstream warning +backtrace warning +connection warning +io warning +main warning +happy_eyeballs warning +rds warning +tap warning +envoy_bug warning +rocketmq warning +file warning +forward_proxy warning +stats warning +health_checker warning +lua warning +secret warning +quic_stream warning +testing warning +websocket warning +``` +The following command updates the `grpc` log level to `error`, the `http` log level to `critical`, and the `runtime` log level to `debug` for pod ID `server-697458b9f8-4vr29` +```shell-session +$ consul-k8s proxy log server-697458b9f8-4vr29 -update-level grpc:error,http:critical,runtime:debug +Envoy log configuration for server-697458b9f8-4vr29 in namespace default: + +==> Log Levels for server-697458b9f8-4vr29 +Name Level +assert info +dns info +http critical +pool info +thrift info +udp info +grpc error +hc info +stats info +wasm info +alternate_protocols_cache info +ext_authz info +filter info +http2 info +key_value_store info +tracing info +cache_filter info +quic_stream info +aws info +io info +matcher info +rbac info +tap info +connection info +conn_handler info +rocketmq info +hystrix info +oauth2 info +redis info +backtrace info +file info +forward_proxy info +kafka info +config info +router info +runtime debug +testing info +happy_eyeballs info 
+ext_proc info +init info +lua info +health_checker info +misc info +envoy_bug info +jwt info +main info +quic info +upstream info +websocket info +client info +decompression info +mongo info +multi_connection info +rds info +secret info +admin info +dubbo info +``` +The following command resets the log levels for all loggers of an Envoy proxy in pod `server-697458b9f8-4vr29` to the default level of `info`. +```shell-session +$ consul-k8s proxy log server-697458b9f8-4vr29 -r +Envoy log configuration for server-697458b9f8-4vr29 in namespace default: + +==> Log Levels for server-697458b9f8-4vr29 +Name Level +ext_proc info +secret info +thrift info +tracing info +dns info +rocketmq info +happy_eyeballs info +hc info +io info +misc info +conn_handler info +key_value_store info +rbac info +hystrix info +wasm info +admin info +cache_filter info +client info +health_checker info +oauth2 info +runtime info +testing info +grpc info +upstream info +forward_proxy info +matcher info +pool info +aws info +decompression info +jwt info +tap info +assert info +redis info +http info +quic info +rds info +connection info +envoy_bug info +stats info +alternate_protocols_cache info +backtrace info +filter info +http2 info +init info +multi_connection info +quic_stream info +dubbo info +ext_authz info +main info +udp info +websocket info +config info +mongo info +router info +file info +kafka info +lua info +``` + +### `proxy stats` + +The `proxy stats` command allows you to inspect the Envoy cluster stats for Envoy proxies running on a given Pod. + +```shell-session +$ consul-k8s proxy stats +``` +| Flag | Description | Default | +| ------------------------------------ | ----------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------- | +| `-namespace`, `-n` | `String` The Kubernetes namespace to list proxies in. | Current [kubeconfig](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) namespace. | + +Refer to the [Global Options](#global-options) for additional options that you can use +when installing Consul on Kubernetes. + +#### Example Commands + +Display the Envoy cluster stats in a given pod in default namespace. 
+ +```shell-session +$ consul-k8s proxy stats product-api-7c4d77c7c9-6slnl +cluster.frontend.default.dc1.internal.4b11bae3-b8ca-ee63-89bc-428cbfa6ef60.consul.version_text: "" +cluster.nginx.default.dc1.internal.4b11bae3-b8ca-ee63-89bc-428cbfa6ef60.consul.version_text: "" +cluster.payments.default.dc1.internal.4b11bae3-b8ca-ee63-89bc-428cbfa6ef60.consul.version_text: "" +cluster.product-api-db.default.dc1.internal.4b11bae3-b8ca-ee63-89bc-428cbfa6ef60.consul.version_text: "" +cluster.public-api.default.dc1.internal.4b11bae3-b8ca-ee63-89bc-428cbfa6ef60.consul.version_text: "" +cluster_manager.cds.version_text: "" +control_plane.identifier: "" +listener_manager.lds.version_text: "" +cluster.consul-dataplane.assignment_stale: 0 +cluster.consul-dataplane.assignment_timeout_received: 0 +cluster.consul-dataplane.bind_errors: 0 +cluster.consul-dataplane.circuit_breakers.default.cx_open: 0 +cluster.consul-dataplane.circuit_breakers.default.cx_pool_open: 0 +cluster.consul-dataplane.circuit_breakers.default.rq_open: 0 +cluster.consul-dataplane.circuit_breakers.default.rq_pending_open: 0 +cluster.consul-dataplane.circuit_breakers.default.rq_retry_open: 0 +cluster.consul-dataplane.circuit_breakers.high.cx_open: 0 +cluster.consul-dataplane.circuit_breakers.high.cx_pool_open: 0 +cluster.consul-dataplane.circuit_breakers.high.rq_open: 0 +cluster.consul-dataplane.circuit_breakers.high.rq_pending_open: 0 +cluster.consul-dataplane.circuit_breakers.high.rq_retry_open: 0 +cluster.consul-dataplane.default.total_match_count: 1 +cluster.consul-dataplane.http2.deferred_stream_close: 0 +cluster.consul-dataplane.http2.dropped_headers_with_underscores: 0 +cluster.consul-dataplane.http2.header_overflow: 0 +cluster.consul-dataplane.http2.headers_cb_no_stream: 0 +cluster.consul-dataplane.http2.inbound_empty_frames_flood: 0 +cluster.consul-dataplane.http2.inbound_priority_frames_flood: 0 +......... +``` + +Display the Envoy cluster stats in a given pod in different namespace. + +```shell-session +$ consul-k8s proxy stats public-api-567d949866-452xc -n consul +cluster.frontend.default.dc1.internal.4b11bae3-b8ca-ee63-89bc-428cbfa6ef60.consul.version_text: "" +cluster.nginx.default.dc1.internal.4b11bae3-b8ca-ee63-89bc-428cbfa6ef60.consul.version_text: "" +cluster.payments.default.dc1.internal.4b11bae3-b8ca-ee63-89bc-428cbfa6ef60.consul.version_text: "" +cluster.product-api-db.default.dc1.internal.4b11bae3-b8ca-ee63-89bc-428cbfa6ef60.consul.version_text: "" +cluster.product-api.default.dc1.internal.4b11bae3-b8ca-ee63-89bc-428cbfa6ef60.consul.version_text: "" +cluster_manager.cds.version_text: "" +control_plane.identifier: "" +listener_manager.lds.version_text: "" +cluster.consul-dataplane.assignment_stale: 0 +cluster.consul-dataplane.assignment_timeout_received: 0 +cluster.consul-dataplane.bind_errors: 0 +cluster.consul-dataplane.circuit_breakers.default.cx_open: 0 +cluster.consul-dataplane.circuit_breakers.default.cx_pool_open: 0 +cluster.consul-dataplane.circuit_breakers.default.rq_open: 0 +cluster.consul-dataplane.circuit_breakers.default.rq_pending_open: 0 +cluster.consul-dataplane.circuit_breakers.default.rq_retry_open: 0 +cluster.consul-dataplane.circuit_breakers.high.cx_open: 0 +cluster.consul-dataplane.circuit_breakers.high.cx_pool_open: 0 +cluster.consul-dataplane.circuit_breakers.high.rq_open: 0 +cluster.consul-dataplane.circuit_breakers.high.rq_pending_open: 0 +cluster.consul-dataplane.circuit_breakers.high.rq_retry_open: 0 +......... 
+``` + +### `status` + +The `status` command provides an overall status summary of the Consul on Kubernetes installation. It also provides the configuration that was used to deploy Consul K8s and information about the health of Consul servers and clients. This command does not take in any flags. + +```shell-session +$ consul-k8s status +``` + +#### Example Command + +```shell-session +$ consul-k8s status + +==> Consul-K8s Status Summary + NAME | NAMESPACE | STATUS | CHARTVERSION | APPVERSION | REVISION | LAST UPDATED +---------+-----------+----------+--------------+------------+----------+-------------------------- + consul | consul | deployed | 0.41.1 | 1.11.4 | 1 | 2022/03/10 07:48:58 MST + +==> Config: + connectInject: + enabled: true + metrics: + defaultEnableMerging: true + defaultEnabled: true + enableGatewayMetrics: true + global: + metrics: + enableAgentMetrics: true + enabled: true + name: consul + prometheus: + enabled: true + server: + replicas: 1 + ui: + enabled: true + service: + enabled: true + + ✓ Consul servers healthy (1/1) + ✓ Consul clients healthy (3/3) +``` + +### `troubleshoot` + +The `troubleshoot` command exposes two subcommands for troubleshooting Consul +service mesh and network issues from a given pod. + +- [`troubleshoot upstreams`](#troubleshoot-upstreams): List all Envoy upstreams in Consul service mesh from the given pod. +- [`troubleshoot proxy`](#troubleshoot-proxy): Troubleshoot Consul service mesh configuration and network issues between the given pod and the given upstream. + +### `troubleshoot upstreams` + +```shell-session +$ consul-k8s troubleshoot upstreams -pod +``` + +| Flag | Description | Default | +| ------------------------------------ | ----------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------- | +| `-namespace`, `-n` | `String` The Kubernetes namespace to list proxies in. | Current [kubeconfig](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) namespace. | + +#### Example Commands + +The following example displays all transparent proxy upstreams in Consul service mesh from the given pod. + + ```shell-session + $ consul-k8s troubleshoot upstreams -pod frontend-767ccfc8f9-6f6gx + + ==> Upstreams (explicit upstreams only) (0) + + ==> Upstreams IPs (transparent proxy only) (1) + [10.4.6.160 240.0.0.3] true map[backend.default.dc1.internal.e08fa6d6-e91e-dfe0-f6e1-ba097a828e31.consul backend2.default.dc1.internal.e08fa6d6-e91e-dfe0-f6e1-ba097a828e31.consul] + + If you cannot find the upstream address or cluster for a transparent proxy upstream: + - Check intentions: Tproxy upstreams are configured based on intentions. Make sure you have configured intentions to allow traffic to your upstream. + - To check that the right cluster is being dialed, run a DNS lookup for the upstream you are dialing. For example, run `dig backend.svc.consul` to return the IP address for the `backend` service. If the address you get from that is missing from the upstream IPs, it means that your proxy may be misconfigured. + ``` + +The following example displays all explicit upstreams from the given pod in the Consul service mesh. 
+ + ```shell-session + $ consul-k8s troubleshoot upstreams -pod client-767ccfc8f9-6f6gx + + ==> Upstreams (explicit upstreams only) (1) + server + counting + + ==> Upstreams IPs (transparent proxy only) (0) + + If you cannot find the upstream address or cluster for a transparent proxy upstream: + - Check intentions: Tproxy upstreams are configured based on intentions. Make sure you have configured intentions to allow traffic to your upstream. + - To check that the right cluster is being dialed, run a DNS lookup for the upstream you are dialing. For example, run `dig backend.svc.consul` to return the IP address for the `backend` service. If the address you get from that is missing from the upstream IPs, it means that your proxy may be misconfigured. + ``` + +### `troubleshoot proxy` + +```shell-session +$ consul-k8s troubleshoot proxy -pod -upstream-ip +$ consul-k8s troubleshoot proxy -pod -upstream-envoy-id +``` + +| Flag | Description | Default | +| ------------------------------------ | ----------------------------------------------------------| ---------------------------------------------------------------------------------------------------------------------- | +| `-namespace`, `-n` | `String` The Kubernetes namespace to list proxies in. | Current [kubeconfig](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) namespace. | +| `-upstream-ip` | `String` The IP address of the upstream transparent proxy | | +| `-upstream-envoy-id` | `String` The Envoy identifier of the upstream | | + +#### Example Commands + +The following example troubleshoots the Consul service mesh configuration and network issues between the given pod and the given upstream IP. + + ```shell-session + $ consul-k8s troubleshoot proxy -pod frontend-767ccfc8f9-6f6gx -upstream-ip 10.4.6.160 + + ==> Validation + ✓ certificates are valid + ✓ Envoy has 0 rejected configurations + ✓ Envoy has detected 0 connection failure(s) + ✓ listener for upstream "backend" found + ✓ route for upstream "backend" found + ✓ cluster "backend.default.dc1.internal..consul" for upstream "backend" found + ✓ healthy endpoints for cluster "backend.default.dc1.internal.e08fa6d6-e91e-dfe0-f6e1-ba097a828e31.consul" for upstream "backend" found + ✓ cluster "backend2.default.dc1.internal..consul" for upstream "backend" found + ! no healthy endpoints for cluster "backend2.default.dc1.internal.e08fa6d6-e91e-dfe0-f6e1-ba097a828e31.consul" for upstream "backend" found + ``` + +The following example troubleshoots the Consul service mesh configuration and network issues between the given pod and the given upstream. + + ```shell-session + $ consul-k8s troubleshoot proxy -pod frontend-767ccfc8f9-6f6gx -upstream-envoy-id db + + ==> Validation + ✓ certificates are valid + ✓ Envoy has 0 rejected configurations + ✓ Envoy has detected 0 connection failure(s) + ! no listener for upstream "db" found + ! no route for upstream "backend" found + ! no cluster "db.default.dc1.internal.e08fa6d6-e91e-dfe0-f6e1-ba097a828e31.consul" for upstream "db" found + ! no healthy endpoints for cluster "db.default.dc1.internal.e08fa6d6-e91e-dfe0-f6e1-ba097a828e31.consul" for upstream "db" found + ``` + +### `uninstall` + +The `uninstall` command removes Consul from Kubernetes. + +```shell-session +$ consul-k8s uninstall +``` + +The following options are available. 
+ +| Flag | Description | Default | +| ---------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------- | +| `-auto-approve` | Boolean value that enables you to skip the removal confirmation prompt. | `false` | +| `-name` | String value for the name of the installation to remove. | none | +| `-namespace` | String value that specifies the namespace of the Consul installation to remove. | `consul` | +| `-timeout` | Specifies how long to wait for the removal process to complete before timing out. The value is specified with an integer and string value indicating a unit of time.
    The following units are supported:
    `ms` (milliseconds)
    `s` (seconds)
    `m` (minutes)
    `h` (hours)
    In the following example, the removal will time out after one minute:
    `consul-k8s uninstall -timeout 1m` | `10m` | +| `-wipe-data` | Boolean value that deletes PVCs and secrets associated with the Consul installation during the uninstall process.
    Data will be removed without a verification prompt if the `-auto-approve` flag is set to `true`. | `false`
    Instructions for removing data will be printed to the console. | +| `--help` | Prints usage information for this option. | none | + +See [Global Options](#global-options) for additional commands that you can use when uninstalling Consul from Kubernetes. + +#### Example Command + +The following example command immediately uninstalls Consul from the `my-ns` namespace with the name `my-consul` and removes PVCs and secrets associated with the installation without asking for verification: + +```shell-session +$ consul-k8s uninstall -namespace=my-ns -name=my-consul -wipe-data=true -auto-approve=true +``` + +### `upgrade` + +The `upgrade` command upgrades the Consul on Kubernetes components to the current version of the `consul-k8s` cli. Prior to running `consul-k8s upgrade`, the `consul-k8s` CLI should first be upgraded to the latest version as described [Upgrade the Consul K8s CLI](#upgrade-the-consul-k8s-cli) + +```shell-session +$ consul-k8s upgrade +``` + +The following options are available. + +| Flag | Description | Default | +| ----------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------- | +| `-auto-approve` | Boolean value that enables you to skip the upgrade confirmation prompt. | `false` | +| `-dry-run` | Boolean value that allows you to run pre-upgrade checks and returns a summary of the upgrade. | `false` | +| `-config-file` | String value that specifies the path to a file containing custom upgrade configurations, e.g., Consul Helm chart values file.
    You can use the `-config-file` flag multiple times to specify multiple files. | none | +| `-namespace` | String value that specifies the namespace of the Consul installation. | `consul` | +| `-preset` | String value that upgrades Consul based on a preset configuration. | Configuration of the Consul Helm chart. | +| `-set` | String value that enables you to set a customizable value. This flag is comparable to the `helm upgrade --set` flag.
    You can use the `-set` flag multiple times to set multiple values.
    Consul Helm chart values are supported. | none | +| `-set-file` | String value that specifies the name of an arbitrary config file. This flag is comparable to the `helm upgrade --set-file`
    flag. The contents of the file will be used to set a customizable value. You can use the `-set-file` flag multiple times to specify multiple files.
    Consul Helm chart values are supported. | none | +| `-set-string` | String value that enables you to set a customizable string value. This flag is comparable to the `helm upgrade --set-string`
    flag. You can use the `-set-string` flag multiple times to specify multiple strings.
    Consul Helm chart values are supported. | none | +| `-timeout` | Specifies how long to wait for the upgrade process to complete before timing out. The value is specified with an integer and string value indicating a unit of time.
    The following units are supported:
    `ms` (milliseconds)
    `s` (seconds)
    `m` (minutes)
    In the following example, the upgrade will time out after one minute:
    `consul-k8s upgrade -timeout 1m` | `10m` | +| `-wait` | Boolean value that determines if Consul should wait for resources in the upgrade to be ready before exiting the command. | `true` | +| `-verbose`, `-v` | Boolean value that specifies whether to output verbose logs from the upgrade command with the status of resources being upgraded. | `false` | +| `--help` | Prints usage information for this option. | none | + +See [Global Options](#global-options) for additional commands that you can use when installing Consul on Kubernetes. + +### `version` + +The `version` command prints the Consul on Kubernetes version. This command does not take any options. + +```shell-session +$ consul-k8s version +``` + +You can also print the version with the `--version` flag. + +```shell-session +$ consul-k8s --version +``` + +## Global Options + +The following global options are available. + +| Flag | Description | Default | +| -------------------------------- | ----------------------------------------------------------------------------------- | ------- | +| `-context` | String value that sets the Kubernetes context to use for Consul K8s CLI operations. | none | +| `-kubeconfig`, `-c` | String value that specifies the path to the `kubeconfig` file.
    | none | diff --git a/website/content/docs/reference/cli/cts/index.mdx b/website/content/docs/reference/cli/cts/index.mdx new file mode 100644 index 000000000000..a9b7ce9e6d06 --- /dev/null +++ b/website/content/docs/reference/cli/cts/index.mdx @@ -0,0 +1,98 @@ +--- +layout: docs +page_title: Consul-Terraform-Sync CLI +description: >- + How to use the Consul-Terraform-Sync CLI +--- + +# Consul-Terraform-Sync Command (CLI) + +Consul-Terraform-Sync (CTS) is controlled via an easy to use command-line interface (CLI). CTS is only a single command-line application: `consul-terraform-sync`. CTS can be run as a daemon and execute CLI commands. When CTS is run as a daemon, it acts as a server to the CLI commands. Users can use the commands to interact and modify the daemon as it is running. The complete list of commands is in the navigation to the left. Both the daemon and commands return a non-zero exit status on error. + +## Daemon + + + +Running CTS as a daemon without using a command is deprecated in CTS 0.6.0 and will be removed at a much later date in a major release. For information on the preferred way to run CTS as a daemon review the [`start` command docs](/consul/docs/reference/cli/cts/start) + + + +When CTS runs as a daemon, there is no default configuration to start CTS. You must set a configuration flag -config-file or -config-dir. For example: + +```shell-session +$ consul-terraform-sync start -config-file=config.hcl +``` + +To review a list of available flags, use the `-help` or `-h` flag. + +## Commands + +In addition to running the daemon, CTS has a set of commands that act as a client to the daemon server. The commands provide a user-friendly experience interacting with the daemon. The commands use the CTS APIs but does not correspond one-to-one with it. Please review the individual commands in the left navigation for more details. + +To get help for a command, run: `consul-terraform-sync -h` + +### CLI structure + +CTS commands follow the below structure + +```shell-session +$ consul-terraform-sync [options] [args] +``` + +- `options`: Flags to specify additional settings. There are general options that can be used across all commands and command-specific options. +- `args`: Required arguments specific to a commands + +Example: + +```shell-session +$ consul-terraform-sync task disable -http-addr=http://localhost:2000 task_a +``` + +### Autocompletion + +The `consul-terraform-sync` command features opt-in autocompletion for flags, subcommands, and +arguments (where supported). + +To enable autocompletion, run: + +```shell-session +$ consul-terraform-sync -autocomplete-install +``` + +After you install autocomplete, you must restart your shell for the change to take effect. + +When you start typing a CTS command, press the `` key to show a +list of available completions. To show available flag completes, type `-`. + +Autocompletion will query the running CTS server to return helpful argument suggestions. For example, for the `task disable` command, autocompletion will return the names of all enabled tasks that can be disabled. + +When autocompletion makes the query to the running CTS server, it will also use any `CTS_*` environment variables (for example `CTS_ADDRESS`) set on the CTS server. 
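+ +For example, if the CTS API listens somewhere other than the default `http://localhost:8558`, you can export the `CTS_ADDRESS` environment variable so that CLI commands and their autocompletion queries reach the daemon. The following is a minimal sketch; the address shown is a placeholder for illustration only: + +```shell-session +$ export CTS_ADDRESS=http://cts.example.internal:8558 +$ consul-terraform-sync task disable task_a +```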
+ +#### Example: Use autocomplete to discover how to disable a task + +Assume a tab is typed at the end of each prompt line: + +```shell-session hideClipboard +$ consul-terraform-sync +start task + +$ consul-terraform-sync task +create delete disable enable + +$ consul-terraform-sync task disable +task_name_a task_name_b task_name_c +``` + +### General Options + +Below are options that can be used across all commands: + +| Option | Required | Type | Description | Default | +| ---------------------- | -------- | ------- | -------------------------------------------------------------- | ----------------------- | +| `-port` | Optional | integer | **Deprecated in Consul-Terraform-Sync 0.5.0 and will be removed in a later version.** Use the `-http-addr` option instead to specify the address and port of the Consul-Terraform-Sync API.

    Port from which the CTS daemon serves its API.
    The value is prepended with `http://localhost:`, but you can specify a different scheme or address with the `-http-addr` option if necessary. | `8558` | +| `-http-addr` | Optional | string | Address and port of the CTS API. You can specify an IP or DNS address.

    Alternatively, you can specify a value using the `CTS_ADDRESS` environment variable. | `http://localhost:8558` | +| `-ssl-verify` | Optional | boolean | Enables verification for TLS/SSL connections to the API if set to true. This does not affect insecure HTTP connections.

    Alternatively, you can specify the value using the `CTS_SSL_VERIFY` environment variable. | `true` | +| `-ca-cert` | Optional | string | Path to a PEM-encoded certificate authority file that is used to verify TLS/SSL connections. Takes precedence over `-ca-path` if both are provided.

    Alternatively, you can specify the value using the `CTS_CACERT` environment variable. | none | +| `-ca-path` | Optional | string | Path to a directory containing a PEM-encoded certificate authority file that is used to verify TLS/SSL connections.

    Alternatively, you can specify the value using the `CTS_CAPATH` environment variable. | none | +| `-client-cert` | Optional | string | Path to a PEM-encoded client certificate that the CTS API requires when [`verify_incoming`](/consul/docs/automate/infrastructure/configure#verify_incoming) is set to `true` on the API.

    Alternatively, you can specify the value using the `CTS_CLIENT_CERT` environment variable. | none | +| `-client-key` | Optional | string | Path to a PEM-encoded client key for the certificate configured with the `-client-cert` option. This is required if `-client-cert` is set and if [`verify_incoming`](/consul/docs/automate/infrastructure/configure#verify_incoming) is set to `true` on the CTS API.

    Alternatively, you can specify the value using the `CTS_CLIENT_KEY` environment variable. | none | diff --git a/website/content/docs/nia/cli/start.mdx b/website/content/docs/reference/cli/cts/start.mdx similarity index 88% rename from website/content/docs/nia/cli/start.mdx rename to website/content/docs/reference/cli/cts/start.mdx index 68b3f6bd7ac4..f111ee2647a6 100644 --- a/website/content/docs/nia/cli/start.mdx +++ b/website/content/docs/reference/cli/cts/start.mdx @@ -15,7 +15,7 @@ The `start` command starts the Consul-Terraform-Sync (CTS) daemon. $ consul-terraform-sync start -config-file [OPTIONS] ``` -The `-config-file` or `-config-dir` flag is required. Use the flag to specify the [CTS instance configuration file](/consul/docs/nia/configuration) or directory containing several configuration files to start a CTS cluster. +The `-config-file` or `-config-dir` flag is required. Use the flag to specify the [CTS instance configuration file](/consul/docs/automate/infrastructure/configure) or directory containing several configuration files to start a CTS cluster. ## Options @@ -28,7 +28,7 @@ The following table describes all of the available flags. | `-inspect` | Optional | boolean | Starts CTS in inspect mode . In inspect mode, CTS displays the proposed state changes for all tasks once and exits. No changes are applied. Refer to [Modes](#modes) for additional information. If an error occurs before displaying all changes, CTS exits with a non-zero status. | `false` | | `-inspect-task` | Optional | string | Starts CTS in inspect mode for the specified task. CTS displays the proposed state changes for the specified task and exits. No changes are applied.
    You can specify the flag multiple times to display more than one task.
    If an error occurs before displaying all changes, CTS exits with a non-zero status. | none | | `-once` | Optional | boolean | Starts CTS in once-mode. In once-mode, CTS renders templates, runs tasks once, and disables buffer periods. Refer to [Modes](#modes) for additional information. | `false` | -| `-reset-storage` | Optional | boolean | Directs CTS to overwrite the state storage with new state information when the instance you are starting is elected the cluster leader.
    Only use this flag when running CTS in [high availability mode](/consul/docs/nia/usage/run-ha). | `false` | +| `-reset-storage` | Optional | boolean | Directs CTS to overwrite the state storage with new state information when the instance you are starting is elected the cluster leader.
    Only use this flag when running CTS in [high availability mode](/consul/docs/automate/infrastructure/high-availability). | `false` | | `-h`, `-help` | Optional | boolean | Prints the CTS command line help. | `false` | ## Modes @@ -40,4 +40,4 @@ By default, CTS starts in long-running mode. The following table describes all a | Long-running mode | CTS starts in once-mode and switches to a long-running process.

    During the once-mode phase, the daemon exits with a non-zero status if it encounters an error.

    After successfully operating in once-mode, CTS begins a long-running process.

    When the long-running process begins, the CTS daemon serves API and command requests.

    If an error occurs, CTS logs it and continues running.

    | No additional flags.
    This is the default mode. | | Once-mode | In once-mode, CTS renders templates and runs tasks once. CTS does not start the process in long-running mode and disables buffer periods.

    Use once-mode before starting CTS in long-running mode to verify that your configuration is accurate and tasks update network infrastructure as expected.

    | Add the `-once` flag when starting CTS. | | Inspect mode | CTS displays the proposed state changes for all tasks once and exits. No changes are applied. If an error occurs before displaying all changes, CTS exits with a non-zero status.

    Use inspect mode before starting CTS in long-running mode to debug one or more tasks and to verify that your tasks will update network infrastructure as expected.

    | Add the `-inspect` flag to verify all tasks.

    Add the `-inspect-task` flag to inspect a single task. Use multiple flags to verify more than one task.

    | -| High availability mode | A long-running process that ensures that all changes to Consul that occur during a failover transition are processed and that CTS continues to operate as expected. CTS logs the errors and continues to operate without interruption. Refer to [Run Consul-Terraform-Sync with High Availability](/consul/docs/nia/usage/run-ha) for additional information. | Add the `high_availability` block to your CTS instance configuration.

    Refer to [Run Consul-Terraform-Sync with High Availability](/consul/docs/nia/usage/run-ha) for additional information.

    | +| High availability mode | A long-running process that ensures that all changes to Consul that occur during a failover transition are processed and that CTS continues to operate as expected. CTS logs the errors and continues to operate without interruption. Refer to [Run Consul-Terraform-Sync with High Availability](/consul/docs/automate/infrastructure/high-availability) for additional information. | Add the `high_availability` block to your CTS instance configuration.

    Refer to [Run Consul-Terraform-Sync with High Availability](/consul/docs/automate/infrastructure/high-availability) for additional information.

    | diff --git a/website/content/docs/reference/cli/cts/task.mdx b/website/content/docs/reference/cli/cts/task.mdx new file mode 100644 index 000000000000..d3f27955d24b --- /dev/null +++ b/website/content/docs/reference/cli/cts/task.mdx @@ -0,0 +1,216 @@ +--- +layout: docs +page_title: Task Command +description: >- + Consul-Terraform-Sync supports task commands for users to modify tasks while the daemon is running +--- + +# task + +## `task create` + +`task create` command creates a new task so that it will run and update task resources. The command generates and outputs a Terraform plan, similar to [inspect-mode](/consul/docs/reference/cli/cts/start#modes), of how resources will be modified if the task is created. The command will then ask for user approval before creating the task. + +It is not to be used for updating a task and will not create a task if the task name already exists. + +### Usage + +`consul-terraform-sync task create [options] -task-file=` + +**Options:** + +In addition to [general options](/consul/docs/reference/cli/cts#general-options) this command also supports the following: + +| Name | Required | Type | Description | +| ------------ | -------- | ------ | ------------------------------------------------------------------------------------------------------------------- | +| `-task-file` | Required | string | The path to the task configuration file. Refer to [configuration information](/consul/docs/automate/infrastructure/configure#task) for details. | +| `-auto-approve`   | Optional | boolean | Whether to skip the interactive approval of the plan before creating. | + +### Example + +task_example.hcl: + +```hcl +task { + name = "task_a" + description = "" + enabled = true + providers = [] + module = "org/example/module" + version = "1.0.0" + variable_files = [] + condition "services" { + names = ["web", "api"] + } +} +``` + +Shell Session: + +```shell-session +$ consul-terraform-sync task create -task-file=task_example.hcl +==> Inspecting changes to resource if creating task 'task_a'... + + Generating plan that Consul-Terraform-Sync will use Terraform to execute + + Request ID: 1da3e8e0-87c3-069b-51a6-46903e794a76 + Request Payload: + + // + + Plan: + +Terraform used the selected providers to generate the following execution +plan. Resource actions are indicated with the following symbols: + + create + +Terraform will perform the following actions: + +// + +Plan: 1 to add, 0 to change, 0 to destroy. + +==> Creating the task will perform the actions described above. + Do you want to perform these actions for 'task_a'? + - This action cannot be undone. + - Consul-Terraform-Sync cannot guarantee Terraform will perform + these exact actions if monitored services have changed. + + Only 'yes' will be accepted to approve, enter 'no' or leave blank to reject. + +Enter a value: yes + +==> Creating and running task 'api-task'... + The task creation request has been sent to the CTS server. + Please be patient as it may take some time to refer to a confirmation that this task has completed. + Warning: Terminating this process will not stop task creation. + +==> Task 'task_a' created + Request ID: '78eddd74-0f08-83d6-72b2-6aaac1424fba' +``` + +## `task enable` + +`task enable` command enables an existing task so that it will run and update task resources. If the task is already enabled, it will return a success. 
The command generates and outputs a Terraform plan, similar to [inspect-mode](/consul/docs/reference/cli/cts#inspect-mode), of how resources will be modified if task becomes enabled. If resources changes are detected, the command will ask for user approval before enabling the task. If no resources are detected, the command will go ahead and enable the task. + +### Usage + +`consul-terraform-sync task enable [options] ` + +**Arguments:** + +| Name | Required | Type | Description | +| ----------- | -------- | ------ | ------------------------------- | +| `task_name` | Required | string | The name of the task to enable. | + +**Options:** + +In addition to [general options](/consul/docs/reference/cli/cts/#general-options) this command also supports the following: + +| Name | Required | Type | Description | +| --------------- | -------- | ------- | ------------------------------- | +| `-auto-approve` | Optional | boolean | Whether to skip the interactive approval of the plan before enabling. | + +### Example + +```shell-session +$ consul-terraform-sync task enable task_a +==> Inspecting changes to resource if enabling 'task_a'... + + Generating plan that Consul-Terraform-Sync will use Terraform to execute + +An execution plan has been generated and is shown below. +Resource actions are indicated with the following symbols: + + create + +Terraform will perform the following actions: + +// + +Plan: 3 to add, 0 to change, 0 to destroy. + +==> Enabling the task will perform the actions described above. + Do you want to perform these actions for 'task_a'? + - This action cannot be undone. + - Consul-Terraform-Sync cannot guarantee Terraform will perform + these exact actions if monitored services have changed. + + Only 'yes' will be accepted to approve. + +Enter a value: yes + +==> Enabling and running 'task_a'... + +==> 'task_a' enable complete! +``` + +## `task disable` + +`task disable` command disables an existing task so that it will no longer run and update task resources. If the task is already disabled, it will return a success. + +### Usage + +`consul-terraform-sync task disable [options] ` + +**Arguments:** + +| Name | Required | Type | Description | +| ----------- | -------- | ------ | -------------------------------- | +| `task_name` | Required | string | The name of the task to disable. | + +**Options:** Currently only supporting [general options](/consul/docs/reference/cli/cts#general-options) + +### Example + +```shell-session +$ consul-terraform-sync task disable task_b +==> Waiting to disable 'task_b'... + +==> 'task_b' disable complete! +``` + +## `task delete` + +`task delete` command deletes an existing task. The command will ask the user for approval before deleting the task. The task will be marked for deletion and will be deleted immediately if it is not running. Otherwise, the task will be deleted once it has completed. + + + +Deleting a task will not destroy the infrastructure managed by the task. + + + +### Usage + +`consul-terraform-sync task delete [options] ` + +**Arguments:** + +| Name | Required | Type | Description | +| ----------- | -------- | ------ | ------------------------------- | +| `task_name` | Required | string | The name of the task to delete. 
| + +**Options:** + +In addition to [general options](/consul/docs/reference/cli/cts#general-options) this command also supports the following: + +| Name | Required | Type | Description | +| --------------- | -------- | ------- | ------------------------------- | +| `-auto-approve` | Optional | boolean | Whether to skip the interactive approval of the task deletion. | + +### Example + +```shell-session +$ consul-terraform-sync task delete task_a +==> Do you want to delete 'task_a'? + - This action cannot be undone. + - Deleting a task will not destroy the infrastructure managed by the task. + - If the task is not running, it will be deleted immediately. + - If the task is running, it will be deleted once it has completed. + Only 'yes' will be accepted to approve, enter 'no' or leave blank to reject. + +Enter a value: yes + +==> Marking task 'task_a' for deletion... + +==> Task 'task_a' has been marked for deletion and will be deleted when not running. +``` diff --git a/website/content/docs/reference/config-entry/api-gateway.mdx b/website/content/docs/reference/config-entry/api-gateway.mdx new file mode 100644 index 000000000000..a0954ced5369 --- /dev/null +++ b/website/content/docs/reference/config-entry/api-gateway.mdx @@ -0,0 +1,562 @@ +--- +layout: docs +page_title: API Gateway configuration reference +description: Learn how to configure a Consul API gateway on VMs. +--- + +# API gateway configuration reference + +This topic provides reference information for the API gateway configuration entry that you can deploy to networks in virtual machine (VM) environments. For reference information about configuring Consul API gateways on Kubernetes, refer to [Gateway Resource Configuration](/consul/docs/reference/k8s/api-gateway/gateway). + +## Introduction + +A gateway is a type of network infrastructure that determines how service traffic should be handled. Gateways contain one or more listeners that bind to a set of hosts and ports. An HTTP Route or TCP Route can then attach to a gateway listener to direct traffic from the gateway to a service. + +## Configuration model + +The following list outlines field hierarchy, language-specific data types, and requirements in an `api-gateway` configuration entry. Click on a property name to view additional details, including default values. 
+ +- [`Kind`](#kind): string | must be `"api-gateway"` +- [`Name`](#name): string | no default +- [`Namespace`](#namespace): string | no default +- [`Partition`](#partition): string | no default +- [`Meta`](#meta): map | no default +- [`Listeners`](#listeners): list of objects | no default + - [`Name`](#listeners-name): string | no default + - [`Port`](#listeners-port): number | no default + - [`Hostname`](#listeners-hostname): string | `"*"` + - [`Protocol`](#listeners-protocol): string | `"tcp"` + - [`TLS`](#listeners-tls): map | none + - [`MinVersion`](#listeners-tls-minversion): string | no default + - [`MaxVersion`](#listeners-tls-maxversion): string | no default + - [`CipherSuites`](#listeners-tls-ciphersuites): list of strings | Envoy default cipher suites + - [`Certificates`](#listeners-tls-certificates): list of objects | no default + - [`Kind`](#listeners-tls-certificates-kind): string | no default + - [`Name`](#listeners-tls-certificates-name): string | no default + - [`Namespace`](#listeners-tls-certificates-namespace): string | no default + - [`Partition`](#listeners-tls-certificates-partition): string | no default + - [`default`](#listeners-default): map + - [`JWT`](#listeners-default-jwt): map + - [`Providers`](#listeners-default-jwt-providers): list + - [`Name`](#listeners-default-jwt-providers): string + - [`VerifyClaims`](#listeners-default-jwt-providers): map + - [`Path`](#listeners-default-jwt-providers): list + - [`Value`](#listeners-default-jwt-providers): string + - [`override`](#listeners-override): map + - [`JWT`](#listeners-override-jwt): map + - [`Providers`](#listeners-override-jwt-providers): list + - [`Name`](#listeners-override-jwt-providers): string + - [`VerifyClaims`](#listeners-override-jwt-providers): map + - [`Path`](#listeners-override-jwt-providers): list + - [`Value`](#listeners-override-jwt-providers): string + + + +## Complete configuration + +When every field is defined, an `api-gateway` configuration entry has the following form: + + + +```hcl +Kind = "api-gateway" +Name = "" +Namespace = "" +Partition = "" + +Meta = { + = "" +} + +Listeners = [ + { + Port = + Name = "" + Protocol = "" + TLS = { + MaxVersion = "" + MinVersion = "" + CipherSuites = [ + "" + ] + Certificates = [ + { + Kind = "file-system-certificate" + Name = "" + Namespace = "" + Partition = "" + } + ] + } + default = { + JWT = { + Providers = [ + Name = "" + VerifyClaims = { + Path = [""] + Value = "" + } + ] + } + } + override = { + JWT = { + Providers = [ + Name = "" + VerifyClaims = { + Path = [""] + Value = "" + } + ] + } + } + } +] +``` + +```json +{ + "Kind": "api-gateway", + "Name": "", + "Namespace": "", + "Partition": "", + "Meta": { + "": "" + }, + "Listeners": [ + { + "Name": "", + "Port": , + "Protocol": "", + "TLS": { + "MaxVersion": "", + "MinVersion": "", + "CipherSuites": [ + "" + ], + "Certificates": [ + { + "Kind": "file-system-certificate", + "Name": "", + "Namespace": "", + "Partition": "" + } + ] + } + }, + { + "default": { + "JWT": { + "Providers": [ + { + "Name": "", + "VerifyClaims": { + "Path": [""], + "Value": "" + } + } + ] + } + } + }, + { + "override": { + "JWT": { + "Providers": [ + { + "Name": "", + "VerifyClaims": { + "Path": [""], + "Value": "" + } + } + ] + } + } + } + ] +} +``` + + + +## Specification + +This section provides details about the fields you can configure in the +`api-gateway` configuration entry. + +### `Kind` + +Specifies the type of configuration entry to implement. This must be +`api-gateway`. 
+ +#### Values + +- Default: none +- This field is required. +- Data type: string value that must be set to `"api-gateway"`. + +### `Name` + +Specifies a name for the configuration entry. The name is metadata that you can +use to reference the configuration entry when performing Consul operations, +such as applying a configuration entry to a specific cluster. + +#### Values + +- Default: none +- This field is required. +- Data type: string + +### `Namespace` + +Specifies the Enterprise [namespace](/consul/docs/multi-tenant/namespace) to apply to the configuration entry. + +#### Values + +- Default: `"default"` in Enterprise +- Data type: string + +### `Partition` + +Specifies the Enterprise [admin partition](/consul/docs/multi-tenant/admin-partition) to apply to the configuration entry. + +#### Values + +- Default: `"default"` in Enterprise +- Data type: string + +### `Meta` + +Specifies an arbitrary set of key-value pairs to associate with the gateway. + +#### Values + +- Default: none +- Data type: map containing one or more keys and string values. + +### `Listeners[]` + +Specifies a list of listeners that gateway should set up. Listeners are +uniquely identified by their port number. + +#### Values + +- Default: none +- This field is required. +- Data type: List of maps. Each member of the list contains the following fields: + - [`Name`](#listeners-name) + - [`Port`](#listeners-port) + - [`Hostname`](#listeners-hostname) + - [`Protocol`](#listeners-protocol) + - [`TLS`](#listeners-tls) + +### `Listeners[].Name` + +Specifies the unique name for the listener. This field accepts letters, numbers, and hyphens. + +#### Values + +- Default: none +- This field is required. +- Data type: string + +### `Listeners[].Port` + +Specifies the port number that the listener receives traffic on. + +#### Values + +- Default: `0` +- This field is required. +- Data type: integer + +### `Listeners[].Hostname` + +Specifies the hostname that the listener receives traffic on. + +#### Values + +- Default: `"*"` +- This field is optional. +- Data type: string + +### `Listeners[].Protocol` + +Specifies the protocol associated with the listener. + +#### Values + +- Default: none +- This field is required. +- The data type is one of the following string values: `"tcp"` or `"http"`. + +### `Listeners[].TLS` + +Specifies the TLS configurations for the listener. + +#### Values + +- Default: none +- Map that contains the following fields: + - [`MaxVersion`](#listeners-tls-maxversion) + - [`MinVersion`](#listeners-tls-minversion) + - [`CipherSuites`](#listeners-tls-ciphersuites) + - [`Certificates`](#listeners-tls-certificates) + +### `Listeners[].TLS.MaxVersion` + +Specifies the maximum TLS version supported for the listener. + +#### Values + +- Default depends on the version of Envoy: + - Envoy 1.22.0 and later default to `TLSv1_2` + - Older versions of Envoy default to `TLSv1_0` +- Data type is one of the following string values: + - `TLS_AUTO` + - `TLSv1_0` + - `TLSv1_1` + - `TLSv1_2` + - `TLSv1_3` + +### `Listeners[].TLS.MinVersion` + +Specifies the minimum TLS version supported for the listener. + +#### Values + +- Default: none +- Data type is one of the following string values: + - `TLS_AUTO` + - `TLSv1_0` + - `TLSv1_1` + - `TLSv1_2` + - `TLSv1_3` + +### `Listeners[].TLS.CipherSuites[]` + +Specifies a list of cipher suites that the listener supports when negotiating connections using TLS 1.2 or older. + +#### Values + +- Defaults to the ciphers supported by the version of Envoy in use. 
Refer to the + [Envoy documentation](https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/transport_sockets/tls/v3/common.proto#envoy-v3-api-field-extensions-transport-sockets-tls-v3-tlsparameters-cipher-suites) + for details. +- Data type: List of string values. Refer to the + [Consul repository](https://github.com/hashicorp/consul/blob/v1.11.2/types/tls.go#L154-L169) + for a list of supported ciphers. + +### `Listeners[].TLS.Certificates[]` + +The list of references to [file system](/consul/docs/reference/config-entry/file-system-certificate) or [inline certificates](/consul/docs/reference/config-entry/inline-certificate) that the listener uses for TLS termination. You should create the configuration entry for the certificate separately and then reference the configuration entry in the `Name` field. + +#### Values + +- Default: None +- Data type: List of maps. Each member of the list has the following fields: + - [`Kind`](#listeners-tls-certificates-kind) + - [`Name`](#listeners-tls-certificates-name) + - [`Namespace`](#listeners-tls-certificates-namespace) + - [`Partition`](#listeners-tls-certificates-partition) + +### `Listeners[].TLS.Certificates[].Kind` + +The list of references to certificates that the listener uses for TLS termination. + +#### Values + +- Default: None +- This field is required. +- The data type is one of the following string values: `"file-system-certificate"` or `"inline-certificate"`. + +### `Listeners[].TLS.Certificates[].Name` + +Specifies the name of the [file system certificate](/consul/docs/reference/config-entry/file-system-certificate) or [inline certificate](/consul/docs/reference/config-entry/inline-certificate) that the listener uses for TLS termination. + +#### Values + +- Default: None +- This field is required. +- Data type: string + +### `Listeners[].TLS.Certificates[].Namespace` + +Specifies the Enterprise [namespace](/consul/docs/multi-tenant/namespace) where the certificate can be found. + +#### Values + +- Default: `"default"` in Enterprise +- Data type: string + +### `Listeners[].TLS.Certificates[].Partition` + +Specifies the Enterprise [admin partition](/consul/docs/multi-tenant/admin-partition) where the certificate can be found. + +#### Values + +- Default: `"default"` in Enterprise +- Data type: string + +### `Listeners[].default` + +Specifies a block of default configurations to apply to the gateway listener. All routes attached to the listener inherit the default configurations. You can specify override configurations that have precedence over default configurations in the [`override` block](#listeners-override) as well as in the `JWT` block in the [HTTP route configuration entry](/consul/docs/reference/config-entry/http-route). + +#### Values + +- Default: None +- Data type: Map + +### `Listeners[].default{}.JWT` + +Specifies a block of default JWT verification configurations to apply to the gateway listener. Specify configurations that have precedence over the defaults in either the [`override.JWT` block](#listeners-override) or in the [`JWT` block](/consul/docs/reference/config-entry/http-route#rules-filters-jwt) in the HTTP route configuration. Refer to [Use JWTs to verify requests to API gateways](/consul/docs/north-south/api-gateway/secure-traffic/jwt/vm) for order of precedence and other details about using JWT verification in API gateways. 
+ +#### Values + +- Default: None +- Data type: Map + +### `Listeners[].default{}.JWT{}.Providers` + +Specifies a list of default JWT provider configurations to apply to the gateway listener. A provider configuration contains the name of the provider and claims. Specify configurations that have precedence over the defaults in either the [`override.JWT.Providers` block](#listeners-override-providers) or in the [`JWT` block](/consul/docs/reference/config-entry/http-route#rules-filters-jwt-providers) of the HTTP route configuration. Refer to [Use JWTs to verify requests to API gateways](/consul/docs/north-south/api-gateway/secure-traffic/jwt/vm) for order of precedence and other details about using JWT verification in API gateways. + +#### Values + +- Default: None +- Data type: List of maps + +The following table describes the parameters you can specify in a member of the `Providers` list: + +| Parameter | Description | Data type | Default | +| --- | --- | --- | --- | +| `Name` | Specifies the name of the provider. | String | None | +| `VerifyClaims` | Specifies a list of paths and a value that define the claim that Consul verifies when it receives a request. The `VerifyClaims` map specifies the following settings:
    • `Path`: Specifies a list of one or more registered or custom claims.
    • `Value`: Specifies the expected value of the claim.
    | Map | None | + +Refer to [Configure JWT verification settings](#configure-jwt-verification-settings) for an example configuration. + +### `Listeners[].override` + +Specifies a block of configurations to apply to the gateway listener. The override settings have precedence over the configurations in the [`Listeners[].default` block](#listeners-default). + +#### Values + +- Default: None +- Data type: Map + +### `Listeners[].override{}.JWT` + +Specifies a block of JWT verification configurations to apply to the gateway listener. The override settings have precedence over the [`Listeners[].default` configurations](#listeners-default) as well as any route-specific JWT configurations. + +#### Values + +- Default: None +- Data type: Map + +### `Listeners[].override{}.JWT{}.Providers` + +Specifies a list of JWT provider configurations to apply to the gateway listener. A provider configuration contains the name of the provider and claims. The override settings have precedence over `Listeners[].defaults{}.JWT{}.Providers` as well as any listener-specific configuration. + +#### Values + +- Default: None +- Data type: List of maps + +The following table describes the parameters you can specify in a member of the `Providers` list: + +| Parameter | Description | Data type | Default | +| --- | --- | --- | --- | +| `Name` | Specifies the name of the provider. | String | None | +| `VerifyClaims` | Specifies a list of paths and a value that define the claim that Consul verifies when it receives a request. The `VerifyClaims` map specifies the following settings:
    • `Path`: Specifies a list of one or more registered or custom claims.
    • `Value`: Specifies the expected value of the claim.
    | Map | None | + +Refer to [Configure JWT verification settings](#configure-jwt-verification-settings) for an example configuration. + +## Examples + +The following examples demonstrate common API gateway configuration patterns for specific use cases. + +### Configure JWT verification settings + +The following example configures `listener-one` to verify that requests include a token with Okta user permissions by default. The listener also verifies that the token has an audience of `api.apps.organization.com`. + + + + +```hcl +Kind = "api-gateway" +Name = "api-gateway" +Listeners = [ + { + name = "listener-one" + port = 9001 + protocol = "http" + # override and default are backed by the same type of data structure, see the following section for more on how they interact + override = { + JWT = { + Providers = [ + { + Name = "okta", + VerifyClaims = { + Path = ["aud"], + Value = "api.apps.organization.com", + } + }, + ] + } + } + default = { + JWT = { + Providers = [ + { + Name = "okta", + VerifyClaims = { + Path = ["perms", "role"], + Value = "user", + } + } + ] + } + } + } +] +``` + + + +```json +{ + "Kind": "api-gateway", + "Name": "api-gateway", + "Listeners": [ + { + "name": "listener-one", + "port": 9001, + "protocol": "http", + "override": { + "JWT": { + "Providers": [{ + "Name": "okta", + "VerifyClaims": { + "Path": ["aud"], + "Value": "api.apps.organization.com" + } + }] + } + }, + "default": { + "JWT": { + "Providers": [{ + "Name": "okta", + "VerifyClaims": { + "Path": ["perms", "role"], + "Value": "user" + } + }] + } + } + } + ] +} +``` + + + \ No newline at end of file diff --git a/website/content/docs/connect/config-entries/control-plane-request-limit.mdx b/website/content/docs/reference/config-entry/control-plane-request-limit.mdx similarity index 87% rename from website/content/docs/connect/config-entries/control-plane-request-limit.mdx rename to website/content/docs/reference/config-entry/control-plane-request-limit.mdx index 53404c0d3f6a..fbbc8caf6351 100644 --- a/website/content/docs/connect/config-entries/control-plane-request-limit.mdx +++ b/website/content/docs/reference/config-entry/control-plane-request-limit.mdx @@ -18,20 +18,20 @@ This feature requires Consul Enterprise. Refer to the [feature compatibility mat The following list outlines field hierarchy, language-specific data types, and requirements in a control plane request limit configuration entry. Click on a property name to view additional details, including default values. 
-- [`kind`](#kind): string | required | must be set to `control-plane-request-limit` -- [`mode`](#mode): string | required | default is `permissive` -- [`name`](#name): string | required -- [`read_rate`](#read-rate): number | `100` -- [`write_rate`](#write-rate): number | `100` -- [`kv`](#kv): map | no default - - [`read_rate`](#kv-read-rate): number | `100` - - [`write_rate`](#kv-write-rate): number | `100` -- [`acl`](#acl): map | no default - - [`read_rate`](#acl-read-rate): number | `100` - - [`write_rate`](#acl-write-rate): number | `100` -- [`catalog`](#catalog): map - - [`read_rate`](#catalog-read-rate): number | default is `100` - - [`write_rate`](#catalog-write-rate): number | default is `100` +- [`Kind`](#kind): string | required | must be set to `control-plane-request-limit` +- [`Mode`](#mode): string | required | default is `permissive` +- [`Name`](#name): string | required +- [`ReadRate`](#read-rate): number | `100` +- [`WriteRate`](#write-rate): number | `100` +- [`KV`](#kv): map | no default + - [`ReadRate`](#kv-read-rate): number | `100` + - [`WriteRate`](#kv-write-rate): number | `100` +- [`ACL`](#acl): map | no default + - [`ReadRate`](#acl-read-rate): number | `100` + - [`WriteRate`](#acl-write-rate): number | `100` +- [`Catalog`](#catalog): map + - [`ReadRate`](#catalog-read-rate): number | default is `100` + - [`WriteRate`](#catalog-write-rate): number | default is `100` ## Complete configuration diff --git a/website/content/docs/connect/config-entries/exported-services.mdx b/website/content/docs/reference/config-entry/exported-services.mdx similarity index 97% rename from website/content/docs/connect/config-entries/exported-services.mdx rename to website/content/docs/reference/config-entry/exported-services.mdx index ba613a09e35e..a7f3e255b345 100644 --- a/website/content/docs/connect/config-entries/exported-services.mdx +++ b/website/content/docs/reference/config-entry/exported-services.mdx @@ -1,13 +1,13 @@ --- layout: docs -page_title: Exported Services configuration reference +page_title: Exported services configuration entry reference description: >- - An exported services configuration entry defines the availability of a cluster's services to cluster peers and local admin partitions. Learn about `""exported-services""` config entry parameters and exporting services to other datacenters. + An exported services configuration entry defines the availability of a cluster's services to cluster peers and local admin partitions. Learn about `exported-services` config entry parameters and exporting services to other datacenters. --- -# Exported Services configuration reference +# Exported services configuration entry reference -This topic describes the `exported-services` configuration entry type. The `exported-services` configuration entry enables Consul to export service instances to other clusters from a single file and connect services across clusters. For additional information, refer to [Cluster Peering](/consul/docs/connect/cluster-peering) and [Admin Partitions](/consul/docs/enterprise/admin-partitions). +This topic describes the `exported-services` configuration entry type. The `exported-services` configuration entry enables Consul to export service instances to other clusters from a single file and connect services across clusters. For additional information, refer to [Cluster Peering](/consul/docs/east-west/cluster-peering) and [Admin Partitions](/consul/docs/multi-tenant/admin-partition). 
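+
+As a quick orientation before the reference details that follow, the next sketch shows the general shape of an `exported-services` entry that exports a single service to a cluster peer. The entry name `default`, the service name `api`, and the peer name `cluster-02` are illustrative placeholders only:
+
+```hcl
+Kind = "exported-services"
+Name = "default"
+
+Services = [
+  {
+    Name = "api"
+    Consumers = [
+      {
+        Peer = "cluster-02"
+      }
+    ]
+  }
+]
+```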
## Introduction @@ -22,9 +22,9 @@ You can configure the settings defined in the `exported-services` configuration ## Usage 1. Verify that your datacenter meets the conditions specified in the [Requirements](#requirements). -1. Specify the `exported-services` configuration in the agent configuration file (see [`config_entries`](/consul/docs/agent/config/config-files#config_entries)) as described in [Configuration](#configuration). +1. Specify the `exported-services` configuration in the agent configuration file (refer to [`config_entries`](/consul/docs/reference/agent/configuration-file/general#config_entries)) as described in [Configuration](#configuration). 1. Apply the configuration using one of the following methods: - - Kubernetes CRD: Refer to the [Custom Resource Definitions](/consul/docs/k8s/crds) documentation for details. + - Kubernetes CRD: Refer to the [Custom Resource Definitions](/consul/docs/fundamentals/config-entry) documentation for details. - Issue the `consul config write` command: Refer to the [Consul Config Write](/consul/commands/config/write) documentation for details. ## Configuration @@ -527,7 +527,7 @@ spec: ### Exporting a service to a sameness group -The following example configures Consul to export a service named `api` to a defined group of partitions that belong to a separately defined [sameness group](/consul/docs/connect/config-entries/sameness-group) named `monitoring`. +The following example configures Consul to export a service named `api` to a defined group of partitions that belong to a separately defined [sameness group](/consul/docs/reference/config-entry/sameness-group) named `monitoring`. diff --git a/website/content/docs/connect/config-entries/file-system-certificate.mdx b/website/content/docs/reference/config-entry/file-system-certificate.mdx similarity index 92% rename from website/content/docs/connect/config-entries/file-system-certificate.mdx rename to website/content/docs/reference/config-entry/file-system-certificate.mdx index d633139a7760..b42f1d50c80f 100644 --- a/website/content/docs/connect/config-entries/file-system-certificate.mdx +++ b/website/content/docs/reference/config-entry/file-system-certificate.mdx @@ -7,9 +7,9 @@ description: Learn how to configure a file system certificate bound to an API Ga # File system certificate configuration reference This topic provides reference information for the file system certificate -configuration entry. The file system certificate is a more secure alternative to the [inline certificate configuration entry](/consul/docs/connect/config-entries/inline-certificate) when using Consul API Gateway on VMs because it references a local filepath instead of including sensitive information in the configuration entry itself. File system certificates also include a file system watch that implements certificate and key changes without restarting the gateway. +configuration entry. The file system certificate is a more secure alternative to the [inline certificate configuration entry](/consul/docs/reference/config-entry/inline-certificate) when using Consul API Gateway on VMs because it references a local filepath instead of including sensitive information in the configuration entry itself. File system certificates also include a file system watch that implements certificate and key changes without restarting the gateway. -Consul on Kubernetes deployments that use `consul-k8s` Helm chart v1.5.0 or later use file system certificates without additional configuration. 
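+
+A minimal sketch of such an entry, assuming the sameness group `monitoring` is already defined and using `default` as the entry name, could look like the following:
+
+```hcl
+Kind = "exported-services"
+Name = "default"
+
+Services = [
+  {
+    Name = "api"
+    Consumers = [
+      {
+        SamenessGroup = "monitoring"
+      }
+    ]
+  }
+]
+```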
To learn about configuring certificates for Kubernetes environments, refer to [Gateway Resource Configuration](/consul/docs/connect/gateways/api-gateway/configuration/gateway). +Consul on Kubernetes deployments that use `consul-k8s` Helm chart v1.5.0 or later use file system certificates without additional configuration. To learn about configuring certificates for Kubernetes environments, refer to [Gateway Resource Configuration](/consul/docs/reference/k8s/api-gateway/gateway). ## Configuration model @@ -93,7 +93,7 @@ as applying a configuration entry to a specific cluster. ### `Namespace` -Specifies the Enterprise [namespace](/consul/docs/enterprise/namespaces) to apply to the configuration entry. +Specifies the Enterprise [namespace](/consul/docs/multi-tenant/namespace) to apply to the configuration entry. #### Values @@ -102,7 +102,7 @@ Specifies the Enterprise [namespace](/consul/docs/enterprise/namespaces) to appl ### `Partition` -Specifies the Enterprise [admin partition](/consul/docs/enterprise/admin-partitions) to apply to the configuration entry. +Specifies the Enterprise [admin partition](/consul/docs/multi-tenant/admin-partition) to apply to the configuration entry. #### Values diff --git a/website/content/docs/connect/config-entries/http-route.mdx b/website/content/docs/reference/config-entry/http-route.mdx similarity index 97% rename from website/content/docs/connect/config-entries/http-route.mdx rename to website/content/docs/reference/config-entry/http-route.mdx index 6265930b3e57..cb4b7def597d 100644 --- a/website/content/docs/connect/config-entries/http-route.mdx +++ b/website/content/docs/reference/config-entry/http-route.mdx @@ -1,12 +1,13 @@ --- layout: docs -page_title: HTTP Route configuration reference -description: Learn how to configure an HTTP Route bound to an API Gateway on VMs. +page_title: HTTP route configuration entry reference +description: >- + Learn how to configure an HTTP Route bound to an API Gateway on VMs. --- -# HTTP route configuration reference +# HTTP route configuration entry reference -This topic provides reference information for the gateway routes configuration entry. Refer to [Route Resource Configuration](/consul/docs/connect/gateways/api-gateway/configuration/routes) for information about configuring API gateway routes in Kubernetes environments. +This topic provides reference information for the gateway routes configuration entry. Refer to [Route Resource Configuration](/consul/docs/reference/k8s/api-gateway/routes) for information about configuring API gateway routes in Kubernetes environments. ## Configuration model @@ -389,7 +390,7 @@ such as applying a configuration entry to a specific cluster. ### `Namespace` -Specifies the Enterprise [namespace](/consul/docs/enterprise/namespaces) to apply to the configuration entry. +Specifies the Enterprise [namespace](/consul/docs/multi-tenant/namespace) to apply to the configuration entry. #### Values @@ -398,7 +399,7 @@ Specifies the Enterprise [namespace](/consul/docs/enterprise/namespaces) to appl ### `Partition` -Specifies the Enterprise [admin partition](/consul/docs/enterprise/admin-partitions) to apply to the configuration entry. +Specifies the Enterprise [admin partition](/consul/docs/multi-tenant/admin-partition) to apply to the configuration entry. #### Values @@ -451,7 +452,7 @@ Specifies the name of the api-gateway to bind to. ### `Parents[].Namespace` -Specifies the Enterprise [namespace](/consul/docs/enterprise/namespaces) to apply to the configuration entry. 
+Specifies the Enterprise [namespace](/consul/docs/multi-tenant/namespace) to apply to the configuration entry. #### Values @@ -460,7 +461,7 @@ Specifies the Enterprise [namespace](/consul/docs/enterprise/namespaces) to appl ### `Parents[].Partition` -Specifies the Enterprise [admin partition](/consul/docs/enterprise/admin-partitions) to apply to the configuration entry. +Specifies the Enterprise [admin partition](/consul/docs/multi-tenant/admin-partition) to apply to the configuration entry. #### Values @@ -527,7 +528,7 @@ Specifies rule for rewriting the URL of incoming requests when an incoming reque ### `Rules[].Filters[].URLRewrite.Path` -Specifies a path that determines how Consul API Gateway rewrites a URL path. Refer to [Reroute HTTP requests](/consul/docs/connect/gateways/api-gateway/define-routes/reroute-http-requests) for additional information. +Specifies a path that determines how Consul API Gateway rewrites a URL path. Refer to [Reroute HTTP requests](/consul/docs/north-south/api-gateway/k8s/reroute) for additional information. #### Values @@ -633,7 +634,7 @@ The following table describes the settings you can configure in the `TimeoutFilt ### `Rules[].Filters{}.JWT` -Specifies a block of JWT verification configurations to apply to the route. These route-specific settings have precedence over default configurations defined for listeners the route attaches to on the API gateway. Refer to [`Listeners[].default{}.JWT`](/consul/docs/connect/config-entries/api-gateway#listeners-default-jwt) for additional information. +Specifies a block of JWT verification configurations to apply to the route. These route-specific settings have precedence over default configurations defined for listeners the route attaches to on the API gateway. Refer to [`Listeners[].default{}.JWT`](/consul/docs/reference/config-entry/api-gateway#listeners-default-jwt) for additional information. #### Values @@ -854,7 +855,7 @@ Specifies the name of an HTTP-based service to route to. ### `Rules[].Services[].Namespace` -Specifies the Enterprise [namespace](/consul/docs/enterprise/namespaces) to apply to the configuration entry. +Specifies the Enterprise [namespace](/consul/docs/multi-tenant/namespace) to apply to the configuration entry. #### Values @@ -863,7 +864,7 @@ Specifies the Enterprise [namespace](/consul/docs/enterprise/namespaces) to appl ### `Rules[].Services.Partition` -Specifies the Enterprise [admin partition](/consul/docs/enterprise/admin-partitions) to apply to the configuration entry. +Specifies the Enterprise [admin partition](/consul/docs/multi-tenant/admin-partition) to apply to the configuration entry. #### Values @@ -1131,4 +1132,4 @@ rules = [ ```
    - + \ No newline at end of file diff --git a/website/content/docs/reference/config-entry/ingress-gateway.mdx b/website/content/docs/reference/config-entry/ingress-gateway.mdx new file mode 100644 index 000000000000..66a690b9b927 --- /dev/null +++ b/website/content/docs/reference/config-entry/ingress-gateway.mdx @@ -0,0 +1,1876 @@ +--- +layout: docs +page_title: Ingress gateway configuration entry reference +description: >- + The ingress gateway configuration entry kind defines behavior for securing incoming communication between the service mesh and external sources. Learn about `ingress-gateway` config entry parameters for exposing TCP and HTTP listeners. +--- + +# Ingress gateway configuration entry reference + + + +Ingress gateway is deprecated and will not be enhanced beyond its current capabilities. Ingress gateway is fully supported in this version but will be removed in a future release of Consul. + +Consul's API gateway is the recommended alternative to ingress gateway. + + + +This topic provides configuration reference information for the ingress gateway configuration entry. An ingress gateway is a type of proxy you register as a service in Consul to enable network connectivity from external services to services inside of the service mesh. Refer to [Ingress gateways overview](/consul/docs/north-south/ingress-gateway) for additional information. + +## Configuration model + +The following list describes the configuration hierarchy, language-specific data types, default values if applicable, and requirements for the configuration entry. Click on a property name to view additional details. + + + + +- [`Kind`](#kind): string | must be `ingress-gateway` | required +- [`Name`](#name): string | required +- [`Namespace`](#namespace): string | `default` | +- [`Meta`](#meta): map of strings +- [`Partition`](#partition): string | `default` | +- [`TLS`](#tls): map + - [`Enabled`](#tls-enabled): boolean | `false` + - [`TLSMinVersion`](#tls-tlsminversion): string | `TLSv1_2` + - [`TLSMaxVersion`](#tls-tlsmaxversion): string + - [`CipherSuites`](#tls-ciphersuites): list of strings + - [`SDS`](#tls-sds): map of strings + - [`ClusterName`](#tls-sds): string + - [`CertResource`](#tls-sds): string +- [`Defaults`](#defaults): map + - [`MaxConnections`](#defaults-maxconnections): number + - [`MaxPendingRequests`](#defaults-maxpendingrequests): number + - [`MaxConcurrentRequests`](#defaults-maxconcurrentrequests): number + - [`PassiveHealthCheck`](#defaults-passivehealthcheck): map + - [`Interval`](#defaults-passivehealthcheck): number + - [`MaxFailures`](#defaults-passivehealthcheck): number + - [`EnforcingConsecutive5xx`](#defaults-passivehealthcheck): number + - [`MaxEjectionPercent`](#defaults-passivehealthcheck): number + - [`BaseEjectionTime`](#defaults-passivehealthcheck): string +- [`Listeners`](#listeners): list of maps + - [`Port`](#listeners-port): number | `0` + - [`Protocol`](#listeners-protocol): number | `tcp` + - [`Services`](#listeners-services): list of objects + - [`Name`](#listeners-services-name): string + - [`Namespace`](#listeners-services-namespace): string | + - [`Partition`](#listeners-services-partition): string | + - [`Hosts`](#listeners-services-hosts): List of strings | `.ingress.*` + - [`RequestHeaders`](#listeners-services-requestheaders): map + - [`Add`](#listeners-services-requestheaders): map of strings + - [`Set`](#listeners-services-requestheaders): map of strings + - [`Remove`](#listeners-services-requestheaders): list of strings + - 
[`ResponseHeaders`](#listeners-services-responseheaders): map + - [`Add`](#listeners-services-responseheaders): map of strings + - [`Set`](#listeners-services-responseheaders): map of strings + - [`Remove`](#listeners-services-responseheaders): list of strings + - [`TLS`](#listeners-services-tls): map + - [`SDS`](#listeners-services-tls-sds): map of strings + - [`ClusterName`](#listeners-services-tls-sds): string + - [`CertResource`](#listeners-services-tls-sds): string + - [`MaxConnections`](#listeners-services-maxconnections): number | `0` + - [`MaxPendingRequests`](#listeners-services-maxconnections): number | `0` + - [`MaxConcurrentRequests`](#listeners-services-maxconnections): number | `0` + - [`PassiveHealthCheck`](#listeners-services-passivehealthcheck): map + - [`Interval`](#listeners-services-passivehealthcheck): number + - [`MaxFailures`](#listeners-services-passivehealthcheck): number + - [`EnforcingConsecutive5xx`](#listeners-services-passivehealthcheck): number + - [`MaxEjectionPercent`](#listeners-services-passivehealthcheck): number + - [`BaseEjectionTime`](#listeners-services-passivehealthcheck): string + - [`TLS`](#listeners-tls): map + - [`Enabled`](#listeners-tls-enabled): boolean | `false` + - [`TLSMinVersion`](#listeners-tls-tlsminversion): string | `TLSv1_2` + - [`TLSMaxVersion`](#listeners-tls-tlsmaxversion): string + - [`CipherSuites`](#listeners-tls-ciphersuites): list of strings + - [`SDS`](#listeners-tls-sds): map of strings + - [`ClusterName`](#listeners-tls-sds): string + - [`CertResource`](#listeners-tls-sds): string + + + + + +- [ `apiVersion`](#apiversion): string | must be set to `consul.hashicorp.com/v1alpha1` | required +- [`kind`](#kind): string | must be `IngressGateway` | required +- [`metadata`](#metadata): map of strings + - [`name`](#metadata-name): string | required + - [`namespace`](#metadata-namespace): string | `default` | +- [`spec`](#spec): map + - [`tls`](#spec-tls): map + - [`enabled`](#spec-tls-enabled): boolean | `false` + - [`tlsMinVersion`](#spec-tls-tlsminversion): string | `TLSv1_2` + - [`tlsMaxVersion`](#spec-tls-tlsmaxversion): string + - [`cipherSuites`](#spec-tls-ciphersuites): list of strings + - [`sds`](#spec-tls-sds): map of strings + - [`clusterName`](#spec-tls-sds): string + - [`certResource`](#spec-tls-sds): string + - [`defaults`](#spec-defaults): map + - [`maxConnections`](#spec-defaults-maxconnections): number + - [`maxPendingRequests`](#spec-defaults-maxpendingrequests): number + - [`maxConcurrentRequests`](#spec-defaults-maxconcurrentrequests): number + - [`passiveHealthCheck`](#spec-defaults-passivehealthcheck): map + - [`interval`](#spec-defaults-passivehealthcheck): string + - [`maxFailures`](#spec-defaults-passivehealthcheck): integer + - [`enforcingConsecutive5xx`](#spec-defaults-passivehealthcheck): number + - [`maxEjectionPercent`](#spec-defaults-passivehealthcheck): number + - [`baseEjectionTime`](#spec-defaults-passivehealthcheck): string + - [`listeners`](#spec-listeners): list of maps + - [`port`](#spec-listeners-port): number | `0` + - [`protocol`](#spec-listeners-protocol): number | `tcp` + - [`services`](#spec-listeners-services): list of maps + - [`name`](#spec-listeners-services-name): string + - [`namespace`](#spec-listeners-services-namespace): string | current namespace | + - [`partition`](#spec-listeners-services-partition): string | current partition | + - [`hosts`](#spec-listeners-services-hosts): list of strings | `.ingress.*` + - [`requestHeaders`](#spec-listeners-services-requestheaders): map + 
- [`add`](#spec-listeners-services-requestheaders): map of strings + - [`set`](#spec-listeners-services-requestheaders): map of strings + - [`remove`](#spec-listeners-services-requestheaders): list of strings + - [`responseHeaders`](#spec-listeners-services-responseheaders): map + - [`add`](#spec-listeners-services-responseheaders): map of strings + - [`set`](#spec-listeners-services-responseheaders): map of strings + - [`remove`](#spec-listeners-services-responseheaders): list of strings + - [`tls`](#spec-listeners-services-tls): map + - [`sds`](#spec-listeners-services-tls-sds): map of strings + - [`clusterName`](#spec-listeners-services-tls-sds): string + - [`certResource`](#spec-listeners-services-tls-sds): string + - [`maxConnections`](#spec-listeners-services-maxconnections): number | `0` + - [`maxPendingRequests`](#spec-listeners-services-maxconnections): number | `0` + - [`maxConcurrentRequests`](#spec-listeners-services-maxconnections): number | `0` + - [`passiveHealthCheck`](#spec-listeners-services-passivehealthcheck): map + - [`interval`](#spec-listeners-services-passivehealthcheck): string + - [`maxFailures`](#spec-listeners-services-passivehealthcheck): number + - [`enforcingConsecutive5xx`](#spec-listeners-services-passivehealthcheck): number + - [`maxEjectionPercent`](#spec-listeners-services-passivehealthcheck): integer + - [`baseEjectionTime`](#spec-listeners-services-passivehealthcheck): string + - [`tls`](#spec-listeners-tls): map + - [`enabled`](#spec-listeners-tls-enabled): boolean | `false` + - [`tlsMinVersion`](#spec-listeners-tls-tlsminversion): string | `TLSv1_2` + - [`tlsMaxVersion`](#spec-listeners-tls-tlsmaxversion): string + - [`cipherSuites`](#spec-listeners-tls-ciphersuites): list of strings + - [`sds`](#spec-listeners-tls-sds): map of strings + - [`clusterName`](#spec-listeners-tls-sds): string + - [`certResource`](#spec-listeners-tls-sds): string + + + + + +## Complete configuration + +When every field is defined, an ingress gateway configuration entry has the following form: + + + + + +```hcl +Kind = "ingress-gateway" +Name = "" +Namespace = "" +Partition = "" +Meta = { + = "" +} +TLS = { + Enabled = false + TLSMinVersion = "TLSv1_2" + TLSMaxVersion = "" + CipherSuites = [ + "" + ] + SDS = { + ClusterName = "" + CertResource = "" + } +} +Defaults = { + MaxConnections = + MaxPendingRequests = + MaxConcurrentRequests = + PassiveHealthCheck = { + Interval = "" + MaxFailures = + EnforcingConsecutive5xx = + MaxEjectionPercent = + BaseEjectionTime = "" + } +} +Listeners = [ + { + Port = 0 + Protocol = "tcp" + Services = [ + { + Name = "" + Namespace = "" + Partition = "" + Hosts = [ + ".ingress.*" + ] + RequestHeaders = { + Add = { + RequestHeaderName = "" + } + Set = { + RequestHeaderName = "" + } + Remove = [ + "" + ] + } + ResponseHeaders = { + Add = { + ResponseHeaderName = "" + } + Set = { + ResponseHeaderName = "" + } + Remove = [ + "" + ] + } + TLS = { + SDS = { + ClusterName = "" + CertResource = "" + } + } + MaxConnections = + MaxPendingRequests = + MaxConcurrentRequests = + PassiveHealthCheck = { + Interval = "" + MaxFailures = + EnforcingConsecutive5xx = + MaxEjectionPercent = + BaseEjectionTime = "" + } + }] + TLS = { + Enabled = false + TLSMinVersion = "TLSv1_2" + TLSMaxVersion = "" + CipherSuites = [ + "" + ] + SDS = { + ClusterName = "" + CertResource = "" + } + } + } +] +``` + + + + + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: IngressGateway +metadata: + name: + namespace: "" +spec: + tls: + enabled: false + tlsSMinVersion: 
TLSv1_2 + tlsMaxVersion: "" + cipherSuites: + - + sds: + clusterName: + certResource: + defaults: + maxConnections: + maxPendingRequests: + maxConcurrentRequests: + passiveHealthCheck: + interval: "" + maxFailures: + enforcingConsecutive5xx: + maxEjectionPercent: + baseEjectionTime: "" + listeners: + - port: 0 + protocol: tcp + services: + - name: + namespace: + partition: + hosts: + - .ingress.* + requestHeaders: + add: + requestHeaderName: + set: + requestHeaderName: + remove: + - + responseHeaders: + add: + responseHeaderName: + set: + responseHeaderName: + remove: + - + tls: + sds: + clusterName: + certResource: + maxConnections: + maxPendingRequests: + maxConcurrentRequests: + passiveHealthCheck: + interval: "" + maxFailures: + enforcingConsecutive5xx: + maxEjectionPercent: + baseEjectionTime: "" + tls: + enabled: false + tlsMinVersion: TLSv1_2 + tlsMaxVersion: + cipherSuites: + - + sds: + clusterName: + certResource: +``` + + + + + +```json +{ + "Kind" : "ingress-gateway", + "Name" : "", + "Namespace" : "", + "Partition" : "", + "Meta": { + "" : "" + }, + "TLS" : { + "Enabled" : false, + "TLSMinVersion" : "TLSv1_2", + "TLSMaxVersion" : "", + "CipherSuites" : [ + "" + ], + "SDS": { + "ClusterName" : "", + "CertResource" : "" + } + }, + "Defaults" : { + "MaxConnections" : , + "MaxPendingRequests" : , + "MaxConcurrentRequests": , + "PassiveHealthCheck" : { + "interval": "", + "maxFailures": , + "enforcingConsecutive5xx": , + "maxEjectionPercent": , + "baseEjectionTime": "" + } + }, + "Listeners" : [ + { + "Port" : 0, + "Protocol" : "tcp", + "Services" : [ + { + "Name" : "", + "Namespace" : "", + "Partition" : "", + "Hosts" : [ + ".ingress.*" + ], + "RequestHeaders" : { + "Add" : { + "RequestHeaderName" : "" + }, + "Set" : { + "RequestHeaderName" : "" + }, + "Remove" : [ + "" + ] + }, + "ResponseHeaders" : { + "Add" : { + "ResponseHeaderName" : "" + }, + "Set" : { + "ResponseHeaderName" : "" + }, + "Remove" : [ + "" + ] + }, + "TLS" : { + "SDS" : { + "ClusterName" : "", + "CertResource" : "" + } + }, + "MaxConnections" : , + "MaxPendingRequests" : , + "MaxConcurrentRequests" : , + "PassiveHealthCheck" : { + "interval": "", + "maxFailures": , + "enforcingConsecutive5xx":, + "maxEjectionPercent": , + "baseEjectionTime": "" + } + } + ], + "TLS" : { + "Enabled" : false, + "TLSMinVersion" : "TLSv1_2", + "TLSMaxVersion" : "", + "CipherSuites" : [ + "" + ], + "SDS" : { + "ClusterName" : "", + "CertResource" : "" + } + } + } + ] +} +``` + + + + + +## Specification + +This section provides details about the fields you can configure in the ingress gateway configuration entry. + + + + + +### `Kind` + +Specifies the type of configuration entry. Must be set to `ingress-gateway`. + +#### Values + +- Default: None +- This field is required. +- Data type: String value that must be set to `ingress-gateway`. + +### `Name` + +Specifies a name for the gateway. The name is metadata that you can use to reference the configuration entry when performing Consul operations with the [`consul config` command](/consul/commands/config). + +#### Values + +- Default: None +- This field is required. +- Data type: String + +### `Namespace` + +Specifies the namespace to apply the configuration entry in. Refer to [Namespaces](/consul/docs/multi-tenant/namespace) for additional information about Consul namespaces. + +If unspecified, the ingress gateway is applied to the `default` namespace. 
You can override the namespace when using the [`/config` API endpoint](/consul/api-docs/config) to register the configuration entry by specifying the `ns` query parameter. + +#### Values + +- Default: `default`, +- Data type: String + +### `Partition` + +Specifies the admin partition that the ingress gateway applies to. The value must match the partition in which the gateway is registered. Refer to [Admin partitions](/consul/docs/multi-tenant/admin-partition) for additional information. + +If unspecified, the ingress gateway is applied to the `default` partition. You can override the partition when using the [`/config` API endpoint](/consul/api-docs/config) to register the configuration entry by specifying the `partition` query parameter. + +#### Values + +- Default: `default +- Data type: String + +### `Meta` + +Defines an arbitrary set of key-value pairs to store in the Consul KV. + +#### Values + +- Default: None +- Data type: Map of one or more key-value pairs. + - keys: String + - values: String, integer, or float + +### `TLS` + +Specifies the TLS configuration settings for the gateway. + +#### Values + +- Default: No default +- Data type: Object that can contain the following fields: + - [`Enabled`](#tls-enabled) + - [`TLSMinVersion`](#tls-tlsminversion) + - [`TLSMaxVersion`](#tls-tlsmaxversion) + - [`CipherSuites`](#tls-ciphersuites) + - [`SDS`](#tls-sds) + +### `TLS.Enabled` + +Enables and disables TLS for the configuration entry. Set to `true` to enable built-in TLS for every listener on the gateway. TLS is disabled by default. + +When enabled, Consul adds each host defined in every service's `Hosts` field to the gateway's x509 certificate as a DNS subject alternative name (SAN). + +#### Values + + - Default: `false` + - Data type: boolean + +### `TLS.TLSMinVersion` + +Specifies the minimum TLS version supported for gateway listeners. + +#### Values + +- Default: Depends on the version of Envoy: + - Envoy v1.22.0 and later: `TLSv1_2` + - Older versions: `TLSv1_0` +- Data type: String with one of the following values: + - `TLS_AUTO` + - `TLSv1_0` + - `TLSv1_1` + - `TLSv1_2` + - `TLSv1_3` + +### `TLS.TLSMaxVersion` + +Specifies the maximum TLS version supported for gateway listeners. + +#### Values + +- Default: Depends on the version of Envoy: + - Envoy v1.22.0 and later: `TLSv1_2` + - Older versions: `TLSv1_0` +- Data type: String with one of the following values: + - `TLS_AUTO` + - `TLSv1_0` + - `TLSv1_1` + - `TLSv1_2` + - `TLSv1_3` + +### `TLS.CipherSuites[]` + +Specifies a list of cipher suites that gateway listeners support when negotiating connections using TLS 1.2 or older. If unspecified, the Consul applies the default for the version of Envoy in use. Refer to the [Envoy documentation](https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/transport_sockets/tls/v3/common.proto#envoy-v3-api-field-extensions-transport-sockets-tls-v3-tls parameters-cipher-suites) for details. + +#### Values + +- Default: None +- Data type: List of string values. Refer to the [Consul repository](https://github.com/hashicorp/consul/blob/v1.11.2/types/tls.go#L154-L169) for a list of supported ciphers. + +### `TLS.SDS` + +Specifies parameters for loading the TLS certificates from an external SDS service. Refer to [Serve custom TLS certificates from an external service](/consul/docs/north-south/ingress-gateway/external) for additional information. + +Consul applies the SDS configuration specified in this field as defaults for all listeners defined in the gateway. 
You can override the SDS settings for per listener or per service defined in the listener. Refer to the following configurations for additional information: + +- [`Listeners.TLS.SDS`](#listeners-tls-sds): Configures SDS settings for all services in the listener. +- [`Listeners.Services.TLS.SDS`](#listeners-services-tls-sds): Configures SDS settings for a specific service defined in the listener. + +#### Values + +- Default: None +- Data type: Map containing the following fields: + - `ClusterName` + - `CertResource` + +The following table describes how to configure SDS parameters. + +| Parameter | Description | Data type | +| --- | --- | --- | +| `ClusterName` | Specifies the name of the SDS cluster where Consul should retrieve certificates. The cluster must be specified in the gateway's bootstrap configuration. | String | +| `CertResource` | Specifies an SDS resource name. Consul requests the SDS resource name when fetching the certificate from the SDS service. When set, Consul serves the certificate to all listeners over TLS unless a listener-specific TLS configuration overrides the SDS configuration. | String | + +### `Defaults` + +Specifies default configurations for connecting upstream services. + +#### Values + +- Default: None +- The data type is a map containing the following parameters: + + - [`MaxConnections`](#defaults-maxconnections) + - [`MaxPendingRequests`](#defaults-maxpendingrequests) + - [`MaxConcurrentRequests`](#defaults-maxconcurrentrequests) + +### `Defaults.MaxConnections` + +Specifies the maximum number of HTTP/1.1 connections a service instance is allowed to establish against the upstream. + +#### Values + +- Default value is `0`, which instructs Consul to use the proxy's configuration. For Envoy, the default is `1024`. +- Data type: Integer + +### `Defaults.MaxPendingRequests` + +Specifies the maximum number of requests that are allowed to queue while waiting to establish a connection. Listeners must use an L7 protocol for this configuration to take effect. Refer to [`Listeners.Protocol`](#listeners-protocol). + +#### Values + +- Default value is `0`, which instructs Consul to use the proxy's configuration. For Envoy, the default is `1024`. +- Data type: Integer + +### `Defaults.MaxConcurrentRequests` + +Specifies the maximum number of concurrent HTTP/2 traffic requests that are allowed at a single point in time. Listeners must use an L7 protocol for this configuration to take effect. Refer to [`Listeners.Protocol`](#listeners-protocol). + +#### Values + +- Default value is `0`, which instructs Consul to use the proxy's configuration. For Envoy, the default is `1024`. +- Data type: Integer + +### `Defaults.PassiveHealthCheck` + +Defines a passive health check configuration. Passive health checks remove hosts from the upstream cluster when they are unreachable or return errors. + +#### Values + +- Default: None +- Data type: Map + +The following table describes the configurations for passive health checks: + +| Parameter | Description | Data type | Default | +| --- | --- | --- | --- | + | `Interval` | Specifies the time between checks. | string | `0s` | + | `MaxFailures` | Specifies the number of consecutive failures allowed per check interval. If exceeded, Consul removes the host from the load balancer. | integer | `0` | + | `EnforcingConsecutive5xx` | Specifies a percentage that indicates how many times out of 100 that Consul ejects the host when it detects an outlier status. The outlier status is determined by consecutive errors in the 500-599 response range. 
| integer | `100` | + | `MaxEjectionPercent` | Specifies the maximum percentage of an upstream cluster that Consul ejects when the proxy reports an outlier. Consul ejects at least one host when an outlier is detected regardless of the value. | integer | `10` | + | `BaseEjectionTime` | Specifies the minimum amount of time that an ejected host must remain outside the cluster before rejoining. The real time is equal to the value of the `BaseEjectionTime` multiplied by the number of times the host has been ejected. | string | `30s` | + +### `Listeners[]` + +Specifies a list of listeners in the mesh for the gateway. Listeners are uniquely identified by their port number. + +#### Values + +- Default: None +- Data type: List of maps containing the following fields: + - [`Port`](#listeners-port) + - [`Protocol`](#listeners-protocol) + - [`Services[]`](#listeners-services) + - [`TLS`](#listeners-tls) + +### `Listeners[].Port` + +Specifies the port that the listener receives traffic on. The port is bound to the IP address specified in the [`-address`](/consul/commands/connect/envoy#address) flag when starting the gateway. The listener port must not conflict with the health check port. + +#### Values + +- Default: `0` +- Data type: Integer + +### `Listeners[].Protocol` + +Specifies the protocol associated with the listener. To enable L7 network management capabilities, specify one of the following values: + +- `http` +- `http2` +- `grpc` + +#### Values + +- Default: `tcp` +- Data type: String that contains one of the following values: + + - `tcp` + - `http` + - `http2` + - `grpc` + +### `Listeners[].Services[]` + +Specifies a list of services that the listener exposes to services outside the mesh. Each service must have a unique name. The `Namespace` field is required for Consul Enterprise datacenters. If the [`Listeners.Protocol`] field is set to `tcp`, then Consul can only expose one service. You can expose multiple services if the listener uses any other supported protocol. + +#### Values + +- Default: None +- Data type: List of maps that can contain the following fields: + - [`Name`](#listeners-services-name) + - [`Namespace`](#listeners-services-namespace) + - [`Partition`](#listeners-services-partition) + - [`Hosts`](#listeners-services-hosts) + - [`RequestHeaders`](#listeners-services-requestheaders) + - [`ResponseHeaders`](#listeners-services-responseheaders)` + - [`TLS`](#listeners-services-tls) + - [`MaxConnections`](#listeners-services-maxconnections) + - [`MaxPendingRequests`](#listeners-services-maxpendingrequests) + - [`MaxConcurrentRequests`](#listeners-services-maxconcurrentrequests) + - [`PassiveHealthCheck`](#listeners-services-passivehealthcheck) + +### `Listeners[].Services[].Name` + +Specifies the name of a service to expose to the listener. You can specify services in the following ways: + +- Provide the name of a service registered in the Consul catalog. +- Provide the name of a service defined in other configuration entries. Refer to [Service Mesh Traffic Management Overview](/consul/docs/manage-traffic) for additional information. +- Provide a `*` wildcard to expose all services in the datacenter. Wild cards are not supported for listeners configured for TCP. Refer to [`Listeners[].Protocol`](#listeners-protocol) for additional information. + +#### Values + +- Default: None +- Data type: String + +### `Listeners[].Services[].Namespace` + +Specifies the namespace to use when resolving the location of the service. 
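+
+For example, the following sketch exposes a service registered in a non-default namespace. The gateway name `ingress-service`, the service name `frontend`, and the namespace `team-a` are illustrative placeholders, not requirements:
+
+```hcl
+Kind = "ingress-gateway"
+Name = "ingress-service"
+
+Listeners = [
+  {
+    Port     = 8080
+    Protocol = "http"
+    Services = [
+      {
+        Name      = "frontend"
+        Namespace = "team-a"
+      }
+    ]
+  }
+]
+```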
+ +#### Values + +- Default: Current namespace +- Data type: String + +### `Listeners[].Services[].Partition` + +Specifies the admin partition to use when resolving the location of the service. + +#### Values + +- Default: Current partition +- Data type: String + +### `Listeners[].Services[].Hosts[]` + +Specifies one or more hosts that the listening services can receive requests on. The ingress gateway proxies external traffic to the specified services when external requests include `host` headers that match a host specified in this field. + +If unspecified, Consul matches requests to services using the `.ingress.*` domain. You cannot specify a host for listeners that communicate over TCP. You cannot specify a host when service names are specified with a `*` wildcard. Requests must include the correct host for Consul to proxy traffic to the service. + +When TLS is disabled, you can use the `*` wildcard to match all hosts. Disabling TLS may be suitable for testing and learning purposes, but we recommend enabling TLS in production environments. + +You can use the wildcard in the left-most DNS label to match a set of hosts. For example, `*.example.com` is valid, but `example.*` and `*-suffix.example.com` are invalid. + +#### Values + +- Default: None +- Data type: List of strings or `*` + +### `Listeners[].Services[].RequestHeaders` + +Specifies a set of HTTP-specific header modification rules applied to requests routed through the gateway. You cannot configure request headers if the listener protocol is set to `tcp`. Refer to [HTTP listener with Path-based Routing](#http-listener-with-path-based-routing) for an example configuration. + +#### Values + +- Default: None +- Data type: Object containing one or more fields that define header modification rules: + + - `Add`: Map of one or more key-value pairs + - `Set`: Map of one or more key-value pairs + - `Remove`: Map of one or more key-value pairs + +The following table describes how to configure values for request headers: + +| Rule | Description | Data type | +| --- | --- | --- | +| `Add` | Defines a set of key-value pairs to add to the header. Use header names as the keys. Header names are not case-sensitive. If header values with the same name already exist, the value is appended and Consul applies both headers. You can [use variable placeholders](#use-variable-placeholders). | Map of strings | +| `Set` | Defines a set of key-value pairs to add to the request header or to replace existing header values with. Use header names as the keys. Header names are not case-sensitive. If header values with the same names already exist, Consul replaces the header values. You can [use variable placeholders](#use-variable-placeholders). | Map of strings | +| `Remove` | Defines a list of headers to remove. Consul removes only headers containing exact matches. Header names are not case-sensitive. | List of strings | + +##### Use variable placeholders + +For `Add` and `Set`, if the service is configured to use Envoy as the proxy, the value may contain variables to interpolate dynamic metadata into the value. For example, using the variable `%DOWNSTREAM_REMOTE_ADDRESS%` in your configuration entry allows you to pass a value that is generated at runtime. + +### `Listeners[].Services[].ResponseHeaders` + +Specifies a set of HTTP-specific header modification rules applied to responses routed through the gateway. You cannot configure response headers if the listener protocol is set to `tcp`. 
Refer to [HTTP listener with Path-based Routing](#http-listener-with-path-based-routing) for an example configuration.
+
+#### Values
+
+- Default: None
+- Data type: Map containing one or more fields that define header modification rules:
+
+  - `Add`: Map of one or more key-value pairs
+  - `Set`: Map of one or more key-value pairs
+  - `Remove`: List of one or more strings
+
+The following table describes how to configure values for response headers:
+
+| Rule | Description | Data type |
+| --- | --- | --- |
+| `Add` | Defines a set of key-value pairs to add to the header. Use header names as the keys. Header names are not case-sensitive. If header values with the same name already exist, the value is appended and Consul applies both headers. You can [use variable placeholders](#use-variable-placeholders). | Map of strings |
+| `Set` | Defines a set of key-value pairs to add to the response header or to replace existing header values with. Use header names as the keys. Header names are not case-sensitive. If header values with the same names already exist, Consul replaces the header values. You can [use variable placeholders](#use-variable-placeholders). | Map of strings |
+| `Remove` | Defines a list of headers to remove. Consul removes only headers containing exact matches. Header names are not case-sensitive. | List of strings |
+
+##### Use variable placeholders
+
+For `Add` and `Set`, if the service is configured to use Envoy as the proxy, the value may contain variables to interpolate dynamic metadata into the value. For example, using the variable `%DOWNSTREAM_REMOTE_ADDRESS%` in your configuration entry allows you to pass a value that is generated at runtime.
+
+### `Listeners[].Services[].TLS`
+
+Specifies a TLS configuration for a specific service. The settings in this configuration override the main [`TLS`](#tls) settings for the configuration entry.
+
+#### Values
+
+- Default: None
+- Data type: Map
+
+### `Listeners[].Services[].TLS.SDS`
+
+Specifies parameters that configure the listener to load TLS certificates from an external SDS. Refer to [Serve custom TLS certificates from an external service](/consul/docs/north-south/ingress-gateway/external) for additional information.
+
+This configuration overrides the main [`TLS.SDS`](#tls-sds) settings for the configuration entry. If unspecified, Consul applies the top-level [`TLS.SDS`](#tls-sds) settings.
+
+#### Values
+
+- Default: None
+- Data type: Map containing the following fields:
+
+  - `ClusterName`
+  - `CertResource`
+
+The following table describes how to configure SDS parameters. Refer to [Configure static SDS clusters](/consul/docs/connect/gateways/ingress-gateway/tls-external-service#configure-static-sds-clusters) for usage information:
+
+| Parameter | Description | Data type |
+| --- | --- | --- |
+| `ClusterName` | Specifies the name of the SDS cluster where Consul should retrieve certificates. The cluster must be specified in the gateway's bootstrap configuration. | String |
+| `CertResource` | Specifies an SDS resource name. Consul requests the SDS resource name when fetching the certificate from the SDS service. When set, Consul serves the certificate to all listeners over TLS unless a listener-specific TLS configuration overrides the SDS configuration. | String |
+
+### `Listeners[].Services[].MaxConnections`
+
+Specifies the maximum number of HTTP/1.1 connections a service instance is allowed to establish against the upstream.
+ +When defined, this field overrides the [`Defaults.MaxConnections`](#defaults-maxconnections) configuration. + +#### Values + +- Default: None +- Data type: Integer + +### `Listeners[].Services.MaxPendingRequests` + +Specifies the maximum number of requests that are allowed to queue while waiting to establish a connection. When defined, this field overrides the value specified in the [`Defaults.MaxPendingRequests`](#defaults-maxpendingrequests) field of the configuration entry. + +Listeners must use an L7 protocol for this configuration to take effect. Refer to [`Listeners.Protocol`](#listeners-protocol) for more information. + +#### Values + +- Default: None +- Data type: Integer + +### `Listeners[].Services[].MaxConcurrentRequests` + +Specifies the maximum number of concurrent HTTP/2 traffic requests that the service is allowed at a single point in time. This field overrides the value set in the [`Defaults.MaxConcurrentRequests`](#defaults-maxconcurrentrequests) field of the configuration entry. + +Listeners must use an L7 protocol for this configuration to take effect. Refer to [`Listeners.Protocol`](#listeners-protocol) for more information. + +#### Values + +- Default: None +- Data type: Integer + +### `Listeners[].Services[].PassiveHealthCheck` + +Defines a passive health check configuration for the service. Passive health checks remove hosts from the upstream cluster when the service is unreachable or returns errors. This field overrides the value set in the [`Defaults.PassiveHealthCheck`](#defaults-passivehealthcheck) field of the configuration entry. + +#### Values + +- Default: None +- Data type: Map + +The following table describes the configurations for passive health checks: + +| Parameter | Description | Data type | Default | +| --- | --- | --- | --- | + | `Interval` | Specifies the time between checks. | string | `0s` | + | `MaxFailures` | Specifies the number of consecutive failures allowed per check interval. If exceeded, Consul removes the host from the load balancer. | integer | `0` | + | `EnforcingConsecutive5xx` | Specifies a percentage that indicates how many times out of 100 that Consul ejects the host when it detects an outlier status. The outlier status is determined by consecutive errors in the 500-599 response range. | integer | `100` | + | `MaxEjectionPercent` | Specifies the maximum percentage of an upstream cluster that Consul ejects when the proxy reports an outlier. Consul ejects at least one host when an outlier is detected regardless of the value. | integer | `10` | + | `BaseEjectionTime` | Specifies the minimum amount of time that an ejected host must remain outside the cluster before rejoining. The real time is equal to the value of the `BaseEjectionTime` multiplied by the number of times the host has been ejected. | string | `30s` | + +### `Listeners[].TLS` + +Specifies the TLS configuration for the listener. If unspecified, Consul applies any [service-specific TLS configurations](#listeners-services-tls). If neither the listener- nor service-specific TLS configurations are specified, Consul applies the main [`TLS`](#tls) settings for the configuration entry. 
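+
+The following sketch enables TLS on a single listener while leaving the rest of the entry unchanged. The gateway name `ingress-service`, the port, and the service name `web` are illustrative placeholders:
+
+```hcl
+Kind = "ingress-gateway"
+Name = "ingress-service"
+
+Listeners = [
+  {
+    Port     = 8443
+    Protocol = "http"
+    Services = [
+      {
+        Name = "web"
+      }
+    ]
+    TLS = {
+      Enabled       = true
+      TLSMinVersion = "TLSv1_2"
+    }
+  }
+]
+```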
+ +#### Values + +- Default: None +- Data type: Map that can contain the following fields: + - [`Enabled`](#listeners-tls-enabled) + - [`TLSMinVersion`](#listeners-tls-tlsminversion) + - [`TLSMaxVersion`](#listeners-tls-tlsmaxversion) + - [`CipherSuites`](#listeners-tls-ciphersuites) + - [`SDS`](#listeners-tls-sds) + +### `Listeners[].TLS.Enabled` + +Set to `true` to enable built-in TLS for the listener. If enabled, Consul adds each host defined in every service's `Hosts` field to the gateway's x509 certificate as a DNS subject alternative name (SAN). + +#### Values + + - Default: `false` + - Data type: boolean + +### `Listeners[].TLS.TLSMinVersion` + +Specifies the minimum TLS version supported for the listener. + +#### Values + +- Default: Depends on the version of Envoy: + - Envoy v1.22.0 and later: `TLSv1_2` + - Older versions: `TLSv1_0` +- Data type: String with one of the following values: + - `TLS_AUTO` + - `TLSv1_0` + - `TLSv1_1` + - `TLSv1_2` + - `TLSv1_3` + +### `Listeners[].TLS.TLSMaxVersion` + +Specifies the maximum TLS version supported for the listener. + +#### Values + +- Default: Depends on the version of Envoy: + - Envoy v1.22.0 and later: `TLSv1_2` + - Older versions: `TLSv1_0` +- Data type: String with one of the following values: + - `TLS_AUTO` + - `TLSv1_0` + - `TLSv1_1` + - `TLSv1_2` + - `TLSv1_3` + +### `Listeners[].TLS.CipherSuites` + +Specifies a list of cipher suites that the listener supports when negotiating connections using TLS 1.2 or older. If unspecified, the Consul applies the default for the version of Envoy in use. Refer to the [Envoy documentation](https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/transport_sockets/tls/v3/common.proto#envoy-v3-api-field-extensions-transport-sockets-tls-v3-tls parameters-cipher-suites) for details. + +#### Values + +- Default: None +- Data type: List of string values. Refer to the [Consul repository](https://github.com/hashicorp/consul/blob/v1.11.2/types/tls.go#L154-L169) for a list of supported ciphers. + +### `Listeners[].TLS.SDS` + +Specifies parameters for loading the TLS certificates from an external SDS service. Refer to [Serve custom TLS certificates from an external service](/consul/docs/north-south/ingress-gateway/external) for additional information. + +Consul applies the SDS configuration specified in this field to all services in the listener. You can override the `Listeners.TLS.SDS` configuration per service by configuring the [`Listeners.Services.TLS.SDS`](#listeners-services-tls-sds) settings for each service. + +#### Values + +- Default: None +- The data type is a map containing `ClusterName` and `CertResource` fields. + +The following table describes how to configure SDS parameters. Refer to [Configure static SDS clusters](/consul/docs/connect/gateways/ingress-gateway/tls-external-service#configure-static-sds-clusters) for usage information: + +| Parameter | Description | Data type | +| --- | --- | --- | +| `ClusterName` | Specifies the name of the SDS cluster where Consul should retrieve certificates. The cluster must be specified in the gateway's bootstrap configuration. | String | +| `CertResource` | Specifies an SDS resource name. Consul requests the SDS resource name when fetching the certificate from the SDS service. When set, Consul serves the certificate to all listeners over TLS unless a listener-specific TLS configuration overrides the SDS configuration. 
| String |
+
+
+
+
+
+### `apiVersion`
+
+Kubernetes-only parameter that specifies the version of the Consul API that the configuration entry maps to Kubernetes configurations. The value must be `consul.hashicorp.com/v1alpha1`.
+
+### `kind`
+
+Specifies the type of configuration entry to implement. Must be set to `IngressGateway`.
+
+#### Values
+
+- Default: None
+- This field is required.
+- Data type: String value that must be set to `IngressGateway`.
+
+### `metadata`
+
+Specifies metadata for the gateway.
+
+#### Values
+
+- Default: None
+- This field is required.
+- Data type: Map that contains the following fields:
+  - [`name`](#metadata-name)
+  - [`namespace`](#metadata-namespace)
+
+### `metadata.name`
+
+Specifies a name for the gateway. The name is metadata that you can use to reference the configuration entry when performing Consul operations with the [`consul config` command](/consul/commands/config).
+
+#### Values
+
+- Default: None
+- This field is required.
+- Data type: String
+
+### `metadata.namespace`
+
+Specifies the namespace to apply the configuration entry in. Refer to [Namespaces](/consul/docs/multi-tenant/namespace) for additional information about Consul namespaces.
+
+If unspecified, the ingress gateway is applied to the `default` namespace. You can override the namespace when using the [`/config` API endpoint](/consul/api-docs/config) to register the configuration entry by specifying the `ns` query parameter.
+
+#### Values
+
+- Default: `default`
+- Data type: String
+
+### `spec`
+
+Kubernetes-only field that contains all of the configurations for ingress gateway pods.
+
+#### Values
+
+- Default: None
+- This field is required.
+- Data type: Map containing the following fields:
+  - [`tls`](#spec-tls)
+  - [`defaults`](#spec-defaults)
+  - [`listeners`](#spec-listeners)
+
+### `spec.tls`
+
+Specifies the TLS configuration settings for the gateway.
+
+#### Values
+
+- Default: No default
+- Data type: Object that can contain the following fields:
+  - [`enabled`](#spec-tls-enabled)
+  - [`tlsMinVersion`](#spec-tls-tlsminversion)
+  - [`tlsMaxVersion`](#spec-tls-tlsmaxversion)
+  - [`cipherSuites`](#spec-tls-ciphersuites)
+  - [`sds`](#spec-tls-sds)
+
+### `spec.tls.enabled`
+
+Enables and disables TLS for the configuration entry. Set to `true` to enable built-in TLS for every listener on the gateway. TLS is disabled by default.
+
+When enabled, Consul adds each host defined in every service's `Hosts` field to the gateway's x509 certificate as a DNS subject alternative name (SAN).
+
+#### Values
+
+ - Default: `false`
+ - Data type: boolean
+
+### `spec.tls.tlsMinVersion`
+
+Specifies the minimum TLS version supported for gateway listeners.
+
+#### Values
+
+- Default: Depends on the version of Envoy:
+  - Envoy v1.22.0 and later: `TLSv1_2`
+  - Older versions: `TLSv1_0`
+- Data type: String with one of the following values:
+  - `TLS_AUTO`
+  - `TLSv1_0`
+  - `TLSv1_1`
+  - `TLSv1_2`
+  - `TLSv1_3`
+
+### `spec.tls.tlsMaxVersion`
+
+Specifies the maximum TLS version supported for gateway listeners.
+
+#### Values
+
+- Default: Depends on the version of Envoy:
+  - Envoy v1.22.0 and later: `TLSv1_2`
+  - Older versions: `TLSv1_0`
+- Data type: String with one of the following values:
+  - `TLS_AUTO`
+  - `TLSv1_0`
+  - `TLSv1_1`
+  - `TLSv1_2`
+  - `TLSv1_3`
+
+### `spec.tls.cipherSuites[]`
+
+Specifies a list of cipher suites that gateway listeners support when negotiating connections using TLS 1.2 or older. If unspecified, Consul applies the default for the version of Envoy in use.
Refer to the [Envoy documentation](https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/transport_sockets/tls/v3/common.proto#envoy-v3-api-field-extensions-transport-sockets-tls-v3-tls parameters-cipher-suites) for details. + +#### Values + +- Default: None +- Data type: List of string values. Refer to the [Consul repository](https://github.com/hashicorp/consul/blob/v1.11.2/types/tls.go#L154-L169) for a list of supported ciphers. + +### `spec.tls.sds` + +Specifies parameters for loading the TLS certificates from an external SDS service. Refer to [Serve custom TLS certificates from an external service](/consul/docs/north-south/ingress-gateway/external) for additional information. + +Consul applies the SDS configuration specified in this field as defaults for all listeners defined in the gateway. You can override the SDS settings for per listener or per service defined in the listener. Refer to the following configurations for additional information: + +- [`spec.listeners.tls.sds`](#spec-listeners-tls-sds): Configures SDS settings for all services in the listener. +- [`spec.listeners.services.tls.sds`](#spec-listeners-services-tls-sds): Configures SDS settings for a specific service defined in the listener. + +#### Values + +- Default: None +- Data type: Map containing the following fields: + - [`clusterName`] + - [`certResource`] + +The following table describes how to configure SDS parameters. + +| Parameter | Description | Data type | +| --- | --- | --- | +| `clusterName` | Specifies the name of the SDS cluster where Consul should retrieve certificates. The cluster must be specified in the gateway's bootstrap configuration. | String | +| `certResource` | Specifies an SDS resource name. Consul requests the SDS resource name when fetching the certificate from the SDS service. When set, Consul serves the certificate to all listeners over TLS unless a listener-specific TLS configuration overrides the SDS configuration. | String | + +### `spec.defaults` + +Specifies default configurations for upstream connections. + +#### Values + +- Default: None +- The data type is a map containing the following parameters: + + - [`maxConnections`](#spec-defaults-maxconnections) + - [`maxPendingRequests`](#spec-defaults-maxpendingrequests) + - [`maxConcurrentRequests`](#spec-defaults-maxconcurrentrequests) + +### `spec.defaults.maxConnections` + +Specifies the maximum number of HTTP/1.1 connections a service instance is allowed to establish against the upstream. If unspecified, Consul uses Envoy's configuration. The default configuration for Envoy is `1024`. + +#### Values + +- Default: `0` +- Data type: Integer + +### `spec.defaults.maxPendingRequests` + +Specifies the maximum number of requests that are allowed to queue while waiting to establish a connection. Listeners must use an L7 protocol for this configuration to take effect. Refer to [`spec.listeners.Protocol`](#spec-listeners-protocol). + +If unspecified, Consul uses Envoy's configuration. The default for Envoy is `1024`. + +#### Values + +- Default: `0` +- Data type: Integer + +### `spec.defaults.maxConcurrentRequests` + +Specifies the maximum number of concurrent HTTP/2 traffic requests that are allowed at a single point in time. Listeners must use an L7 protocol for this configuration to take effect. Refer to [`spec.listeners.protocol`](#spec-listeners-protocol). + +If unspecified, Consul uses Envoy's configuration. The default for Envoy is `1024`. 
+ +#### Values + +- Default: `0` +- Data type: Integer + +### `spec.defaults.passiveHealthCheck` + +Defines a passive health check configuration. Passive health checks remove hosts from the upstream cluster when they are unreachable or return errors. + +#### Values + +- Default: None +- Data type: Map + +The following table describes the configurations for passive health checks: + +| Parameter | Description | Data type | Default | +| --- | --- | --- | --- | + | `Interval` | Specifies the time between checks. | string | `0s` | + | `MaxFailures` | Specifies the number of consecutive failures allowed per check interval. If exceeded, Consul removes the host from the load balancer. | integer | `0` | + | `EnforcingConsecutive5xx` | Specifies a percentage that indicates how many times out of 100 that Consul ejects the host when it detects an outlier status. The outlier status is determined by consecutive errors in the 500-599 response range. | integer | `100` | + | `MaxEjectionPercent` | Specifies the maximum percentage of an upstream cluster that Consul ejects when the proxy reports an outlier. Consul ejects at least one host when an outlier is detected regardless of the value. | integer | `10` | + | `BaseEjectionTime` | Specifies the minimum amount of time that an ejected host must remain outside the cluster before rejoining. The real time is equal to the value of the `BaseEjectionTime` multiplied by the number of times the host has been ejected. | string | `30s` | + +### `spec.listeners[]` + +Specifies a list of listeners in the mesh for the gateway. Listeners are uniquely identified by their port number. + +#### Values + +- Default: None +- Data type: List of maps containing the following fields: + - [`port`](#spec-listeners-port) + - [`protocol`](#spec-listeners-protocol) + - [`services[]`](#spec-listeners-services) + - [`tls`](#spec-listeners-tls) + +### `spec.listeners[].port` + +Specifies the port that the listener receives traffic on. The port is bound to the IP address specified in the [`-address`](/consul/commands/connect/envoy#address) flag when starting the gateway. The listener port must not conflict with the health check port. + +#### Values + +- Default: `0` +- Data type: Integer + +### `spec.listeners[].protocol` + +Specifies the protocol associated with the listener. To enable L7 network management capabilities, specify one of the following values: + +- `http` +- `http2` +- `grpc` + +#### Values + +- Default: `tcp` +- Data type: String that contains one of the following values: + + - `tcp` + - `http` + - `http2` + - `grpc` + +### `spec.listeners[].services[]` + +Specifies a list of services that the listener exposes to services outside the mesh. Each service must have a unique name. The `namespace` field is required for Consul Enterprise datacenters. If the listener's [`protocol`](#spec-listeners-protocol) field is set to `tcp`, then Consul can only expose one service. You can expose multiple services if the listener uses any other supported protocol. 
+
+#### Values
+
+- Default: None
+- Data type: List of maps that can contain the following fields:
+  - [`name`](#spec-listeners-services-name)
+  - [`namespace`](#spec-listeners-services-namespace)
+  - [`partition`](#spec-listeners-services-partition)
+  - [`hosts`](#spec-listeners-services-hosts)
+  - [`requestHeaders`](#spec-listeners-services-requestheaders)
+  - [`responseHeaders`](#spec-listeners-services-responseheaders)
+  - [`tls`](#spec-listeners-services-tls)
+  - [`maxConnections`](#spec-listeners-services-maxconnections)
+  - [`maxPendingRequests`](#spec-listeners-services-maxpendingrequests)
+  - [`maxConcurrentRequests`](#spec-listeners-services-maxconcurrentrequests)
+  - [`passiveHealthCheck`](#spec-listeners-services-passivehealthcheck)
+
+### `spec.listeners[].services[].name`
+
+Specifies the name of a service to expose to the listener. You can specify services in the following ways:
+
+- Provide the name of a service registered in the Consul catalog.
+- Provide the name of a service defined in other configuration entries. Refer to [Service Mesh Traffic Management Overview](/consul/docs/manage-traffic) for additional information. Refer to [HTTP listener with path-based routes](#http-listener-with-path-based-routes) for an example.
+- Provide a `*` wildcard to expose all services in the datacenter. Wildcards are not supported for listeners configured for TCP. Refer to [`spec.listeners.protocol`](#spec-listeners-protocol) for additional information.
+
+#### Values
+
+- Default: None
+- Data type: String
+
+### `spec.listeners[].services[].namespace`
+
+Specifies the namespace to use when resolving the location of the service.
+
+#### Values
+
+- Default: Current namespace
+- Data type: String
+
+### `spec.listeners[].services[].partition`
+
+Specifies the admin partition to use when resolving the location of the service.
+
+#### Values
+
+- Default: Current partition
+- Data type: String
+
+### `spec.listeners[].services[].hosts[]`
+
+Specifies one or more hosts that the listening services can receive requests on. The ingress gateway proxies external traffic to the specified services when external requests include `host` headers that match a host specified in this field.
+
+If unspecified, Consul matches requests to services using the `.ingress.*` domain. You cannot specify a host for listeners that communicate over TCP. You cannot specify a host when service names are specified with a `*` wildcard. Requests must include the correct host for Consul to proxy traffic to the service.
+
+When TLS is disabled, you can use the `*` wildcard to match all hosts. Disabling TLS may be suitable for testing and learning purposes, but we recommend enabling TLS in production environments.
+
+You can use the wildcard in the left-most DNS label to match a set of hosts. For example, `*.example.com` is valid, but `example.*` and `*-suffix.example.com` are invalid.
+
+#### Values
+
+- Default: None
+- Data type: List of strings or `*`
+
+### `spec.listeners[].services[].requestHeaders`
+
+Specifies a set of HTTP-specific header modification rules applied to requests routed through the gateway. You cannot configure request headers if the listener protocol is set to `tcp`. Refer to [HTTP listener with path-based routes](#http-listener-with-path-based-routes) for an example configuration.
+
+#### Values
+
+- Default: None
+- Data type: Map containing one or more fields that define header modification rules:
+
+  - `add`: Map of one or more key-value pairs
+  - `set`: Map of one or more key-value pairs
+  - `remove`: List of one or more header names
+
+The following table describes how to configure values for request headers:
+
+| Rule | Description | Data type |
+| --- | --- | --- |
+| `add` | Defines a set of key-value pairs to add to the header. Use header names as the keys. Header names are not case-sensitive. If header values with the same name already exist, the value is appended and Consul applies both headers. You can [use variable placeholders](#use-variable-placeholders). | Map of strings |
+| `set` | Defines a set of key-value pairs to add to the request header or to replace existing header values with. Use header names as the keys. Header names are not case-sensitive. If header values with the same names already exist, Consul replaces the header values. You can [use variable placeholders](#use-variable-placeholders). | Map of strings |
+| `remove` | Defines a list of headers to remove. Consul removes only headers containing exact matches. Header names are not case-sensitive. | List of strings |
+
+##### Use variable placeholders
+
+For `add` and `set`, if the service is configured to use Envoy as the proxy, the value may contain variables to interpolate dynamic metadata into the value. For example, using the variable `%DOWNSTREAM_REMOTE_ADDRESS%` in your configuration entry allows you to pass a value that is generated at runtime.
+
+### `spec.listeners[].services[].responseHeaders`
+
+Specifies a set of HTTP-specific header modification rules applied to responses routed through the gateway. You cannot configure response headers if the listener protocol is set to `tcp`. Refer to [HTTP listener with path-based routes](#http-listener-with-path-based-routes) for an example configuration.
+
+#### Values
+
+- Default: None
+- Data type: Map containing one or more fields that define header modification rules:
+
+  - `add`: Map of one or more key-value pairs
+  - `set`: Map of one or more key-value pairs
+  - `remove`: List of one or more header names
+
+The following table describes how to configure values for response headers:
+
+| Rule | Description | Data type |
+| --- | --- | --- |
+| `add` | Defines a set of key-value pairs to add to the header. Use header names as the keys. Header names are not case-sensitive. If header values with the same name already exist, the value is appended and Consul applies both headers. You can [use variable placeholders](#use-variable-placeholders). | Map of strings |
+| `set` | Defines a set of key-value pairs to add to the response header or to replace existing header values with. Use header names as the keys. Header names are not case-sensitive. If header values with the same names already exist, Consul replaces the header values. You can [use variable placeholders](#use-variable-placeholders). | Map of strings |
+| `remove` | Defines a list of headers to remove. Consul removes only headers containing exact matches. Header names are not case-sensitive. | List of strings |
+
+##### Use variable placeholders
+
+For `add` and `set`, if the service is configured to use Envoy as the proxy, the value may contain variables to interpolate dynamic metadata into the value. For example, using the variable `%DOWNSTREAM_REMOTE_ADDRESS%` in your configuration entry allows you to pass a value that is generated at runtime.
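+
+As a minimal sketch of the request and response header rules described above, the following configuration adds, overwrites, and strips headers for a single service exposed by an HTTP listener. The gateway, service, and header names are assumptions for illustration only.
+
+```yaml
+apiVersion: consul.hashicorp.com/v1alpha1
+kind: IngressGateway
+metadata:
+  name: us-east-ingress
+spec:
+  listeners:
+    - port: 8080
+      protocol: http
+      services:
+        - name: web
+          requestHeaders:
+            add:
+              # Forward the caller address using an Envoy runtime variable.
+              x-client-ip: '%DOWNSTREAM_REMOTE_ADDRESS%'
+            set:
+              x-gateway: us-east-ingress
+            remove:
+              - x-debug-token
+          responseHeaders:
+            set:
+              cache-control: no-store
+            remove:
+              - x-internal-trace-id
+```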
+
+### `spec.listeners[].services[].tls`
+
+Specifies a TLS configuration for a specific service. The settings in this configuration override the main [`tls`](#spec-tls) settings for the configuration entry.
+
+#### Values
+
+- Default: None
+- Data type: Map
+
+### `spec.listeners[].services[].tls.sds`
+
+Specifies parameters that configure the listener to load TLS certificates from an external SDS service. Refer to [Serve custom TLS certificates from an external service](/consul/docs/north-south/ingress-gateway/external) for additional information.
+
+If unspecified, Consul applies the [`sds`](#spec-tls-sds) settings configured for the ingress gateway. If both are specified, this configuration overrides the settings for the configuration entry.
+
+#### Values
+
+- Default: None
+- Data type: Map containing the following fields:
+
+  - `clusterName`
+  - `certResource`
+
+The following table describes how to configure SDS parameters. Refer to [Serve custom TLS certificates from an external service](/consul/docs/north-south/ingress-gateway/external) for usage information:
+
+| Parameter | Description | Data type |
+| --- | --- | --- |
+| `clusterName` | Specifies the name of the SDS cluster where Consul should retrieve certificates. The cluster must be specified in the gateway's bootstrap configuration. | String |
+| `certResource` | Specifies an SDS resource name. Consul requests the SDS resource name when fetching the certificate from the SDS service. When set, Consul serves the certificate to all listeners over TLS unless a listener-specific TLS configuration overrides the SDS configuration. | String |
+
+### `spec.listeners[].services[].maxConnections`
+
+Specifies the maximum number of HTTP/1.1 connections a service instance is allowed to establish against the upstream.
+
+A value specified in this field overrides the [`maxConnections`](#spec-defaults-maxconnections) field specified in the `defaults` configuration.
+
+#### Values
+
+- Default: None
+- Data type: Integer
+
+### `spec.listeners[].services[].maxPendingRequests`
+
+Specifies the maximum number of requests that are allowed to queue while waiting to establish a connection. A value specified in this field overrides the [`maxPendingRequests`](#spec-defaults-maxpendingrequests) field specified in the `defaults` configuration.
+
+Listeners must use an L7 protocol for this configuration to take effect. Refer to [`spec.listeners.protocol`](#spec-listeners-protocol) for more information.
+
+#### Values
+
+- Default: None
+- Data type: Integer
+
+### `spec.listeners[].services[].maxConcurrentRequests`
+
+Specifies the maximum number of concurrent HTTP/2 traffic requests that the service is allowed at a single point in time. A value specified in this field overrides the [`maxConcurrentRequests`](#spec-defaults-maxconcurrentrequests) field specified in the `defaults` configuration entry.
+
+Listeners must use an L7 protocol for this configuration to take effect. Refer to [`spec.listeners.protocol`](#spec-listeners-protocol) for more information.
+
+#### Values
+
+- Default: None
+- Data type: Integer
+
+### `spec.listeners[].services[].passiveHealthCheck`
+
+Defines a passive health check configuration for the service. Passive health checks remove hosts from the upstream cluster when the service is unreachable or returns errors. Health checks specified for services override the health checks defined in the [`spec.defaults.passiveHealthCheck`](#spec-defaults-passivehealthcheck) configuration.
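+
+The following sketch shows a passive health check applied to a single service. It assumes the camelCase field casing used by the other CRD fields on this page; the available parameters are described in the table below, and the interval and thresholds shown are illustrative values only.
+
+```yaml
+apiVersion: consul.hashicorp.com/v1alpha1
+kind: IngressGateway
+metadata:
+  name: us-east-ingress
+spec:
+  listeners:
+    - port: 8080
+      protocol: http
+      services:
+        - name: api
+          passiveHealthCheck:
+            # Check outlier status every 10 seconds.
+            interval: 10s
+            # Eject a host after 5 consecutive failures in an interval.
+            maxFailures: 5
+            # Keep an ejected host out for at least 30 seconds.
+            baseEjectionTime: 30s
+```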
+ +#### Values + +- Default: None +- Data type: Map + +The following table describes the configurations for passive health checks: + +| Parameter | Description | Data type | Default | +| --- | --- | --- | --- | + | `Interval` | Specifies the time between checks. | string | `0s` | + | `MaxFailures` | Specifies the number of consecutive failures allowed per check interval. If exceeded, Consul removes the host from the load balancer. | integer | `0` | + | `EnforcingConsecutive5xx` | Specifies a percentage that indicates how many times out of 100 that Consul ejects the host when it detects an outlier status. The outlier status is determined by consecutive errors in the 500-599 response range. | integer | `100` | + | `MaxEjectionPercent` | Specifies the maximum percentage of an upstream cluster that Consul ejects when the proxy reports an outlier. Consul ejects at least one host when an outlier is detected regardless of the value. | integer | `10` | + | `BaseEjectionTime` | Specifies the minimum amount of time that an ejected host must remain outside the cluster before rejoining. The real time is equal to the value of the `BaseEjectionTime` multiplied by the number of times the host has been ejected. | string | `30s` | + +### `spec.listeners[].tls` + +Specifies the TLS configuration for the listener. If unspecified, Consul applies any [service-specific TLS configurations](#spec-listeners-services-tls). If neither the listener- nor service-specific TLS configurations are specified, Consul applies the main [`tls`](#tls) settings for the configuration entry. + +#### Values + +- Default: None +- Data type: Map that can contain the following fields: + - [`enabled`](#spec-listeners-tls-enabled) + - [`tlsMinVersion`](#spec-listeners-tls-tlsminversion) + - [`tlsMaxVersion`](#spec-listeners-tls-tlsmaxversion) + - [`cipherSuites`](#spec-listeners-tls-ciphersuites) + - [`sds`](#spec-listeners-tls-sds) + +### `spec.listeners[].tls.enabled` + +Set to `true` to enable built-in TLS for the listener. If enabled, Consul adds each host defined in every service's `Hosts` field to the gateway's x509 certificate as a DNS subject alternative name (SAN). + +#### Values + + - Default: `false` + - Data type: boolean + +### `spec.listeners[].tls.tlsMinVersion` + +Specifies the minimum TLS version supported for the listener. + +#### Values + +- Default: Depends on the version of Envoy: + - Envoy v1.22.0 and later: `TLSv1_2` + - Older versions: `TLSv1_0` +- Data type: String with one of the following values: + - `TLS_AUTO` + - `TLSv1_0` + - `TLSv1_1` + - `TLSv1_2` + - `TLSv1_3` + +### `spec.listeners[].tls.tlsMaxVersion` + +Specifies the maximum TLS version supported for the listener. + +#### Values + +- Default: Depends on the version of Envoy: + - Envoy v1.22.0 and later: `TLSv1_2` + - Older versions: `TLSv1_0` +- Data type: String with one of the following values: + - `TLS_AUTO` + - `TLSv1_0` + - `TLSv1_1` + - `TLSv1_2` + - `TLSv1_3` + +### `spec.listeners[].tls.cipherSuites` + +Specifies a list of cipher suites that the listener supports when negotiating connections using TLS 1.2 or older. If unspecified, the Consul applies the default for the version of Envoy in use. Refer to the [Envoy documentation](https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/transport_sockets/tls/v3/common.proto#envoy-v3-api-field-extensions-transport-sockets-tls-v3-tls parameters-cipher-suites) for details. + +#### Values + +- Default: None +- Data type: List of string values. 
Refer to the [Consul repository](https://github.com/hashicorp/consul/blob/v1.11.2/types/tls.go#L154-L169) for a list of supported ciphers. + +### `spec.listeners[].tls.sds` + +Specifies parameters for loading the TLS certificates from an external SDS service. Refer to [Serve custom TLS certificates from an external service](/consul/docs/north-south/ingress-gateway/external) for additional information. + +Consul applies the SDS configuration specified in this field to all services in the listener. You can override the `spec.listeners[].tls.sds` configuration per service by configuring the [`spec.listeners.services.tls.sds`](#spec-listeners-services-tls-sds) settings for each service. + +#### Values + +- Default: None +- Data type: Map containing the following fields + - `clusterName` + - `certResource` + +The following table describes how to configure SDS parameters. Refer to [Configure static SDS clusters](/consul/docs/connect/gateways/ingress-gateway/tls-external-service#configure-static-sds-clusters) for usage information: + +| Parameter | Description | Data type | +| --- | --- | --- | +| `clusterName` | Specifies the name of the SDS cluster where Consul should retrieve certificates. The cluster must be specified in the gateway's bootstrap configuration. | String | +| `certResource` | Specifies an SDS resource name. Consul requests the SDS resource name when fetching the certificate from the SDS service. When set, Consul serves the certificate to all listeners over TLS unless a listener-specific TLS configuration overrides the SDS configuration. | String | + + + + + +## Examples + +Refer to the following examples for common ingress gateway configuration patterns: +- [Define a TCP listener](#define-a-tcp-listener) +- [Use wildcards to define listeners](#use-wildcards-to-define-an-http-listener) +- [HTTP listener with path-based routes](#http-listener-with-path-based-routes) + +### Define a TCP listener + +The following example sets up a TCP listener on an ingress gateway named `us-east-ingress` that proxies traffic to the `db` service. 
For Consul Enterprise, the `db` service can only listen for traffic in the `default` namespace inside the `team-frontend` admin partition: + +#### Consul CE + + + +```hcl +Kind = "ingress-gateway" +Name = "us-east-ingress" + +Listeners = [ + { + Port = 3456 + Protocol = "tcp" + Services = [ + { + Name = "db" + } + ] + } +] +``` + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: IngressGateway +metadata: + name: us-east-ingress +spec: + listeners: + - port: 3456 + protocol: tcp + services: + - name: db +``` + +```json +{ + "Kind": "ingress-gateway", + "Name": "us-east-ingress", + "Listeners": [ + { + "Port": 3456, + "Protocol": "tcp", + "Services": [ + { + "Name": "db" + } + ] + } + ] +} +``` + + + +#### Consul Enterprise + + + +```hcl +Kind = "ingress-gateway" +Name = "us-east-ingress" +Namespace = "default" +Partition = "team-frontend" + +Listeners = [ + { + Port = 3456 + Protocol = "tcp" + Services = [ + { + Namespace = "ops" + Name = "db" + } + ] + } +] +``` + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: IngressGateway +metadata: + name: us-east-ingress + namespace: default +spec: + listeners: + - port: 3456 + protocol: tcp + services: + - name: db + namespace: ops +``` + +```json +{ + "Kind": "ingress-gateway", + "Name": "us-east-ingress", + "Namespace": "default", + "Partition": "team-frontend", + "Listeners": [ + { + "Port": 3456, + "Protocol": "tcp", + "Services": [ + { + "Namespace": "ops", + "Name": "db" + } + ] + } + ] +} +``` + + + +### Use wildcards to define an HTTP listener + +The following example gateway is named `us-east-ingress` and defines two listeners. The first listener is configured to listen on port `8080` and uses a wildcard (`*`) to proxy traffic to all services in the datacenter. The second listener exposes the `api` and `web` services on port `4567` at user-provided hosts. + +TLS is enabled on every listener. The `max_connections` of the ingress gateway proxy to each upstream cluster is set to `4096`. + +The Consul Enterprise version implements the following additional configurations: + +- The ingress gateway is set up in the `default` [namespace](/consul/docs/multi-tenant/namespace) and proxies traffic to all services in the `frontend` namespace. 
+- The `api` and `web` services are proxied to team-specific [admin partitions](/consul/docs/multi-tenant/admin-partition): + +#### Consul CE + + + +```hcl +Kind = "ingress-gateway" +Name = "us-east-ingress" + +TLS { + Enabled = true +} + +Defaults { + MaxConnections = 4096 +} + +Listeners = [ + { + Port = 8080 + Protocol = "http" + Services = [ + { + Name = "*" + } + ] + }, + { + Port = 4567 + Protocol = "http" + Services = [ + { + Name = "api" + Hosts = ["foo.example.com"] + }, + { + Name = "web" + Hosts = ["website.example.com"] + } + ] + } +] +``` + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: IngressGateway +metadata: + name: us-east-ingress +spec: + tls: + enabled: true + listeners: + - port: 8080 + protocol: http + services: + - name: '*' + - port: 4567 + protocol: http + services: + - name: api + hosts: ['foo.example.com'] + - name: web + hosts: ['website.example.com'] +``` + +```json +{ + "Kind": "ingress-gateway", + "Name": "us-east-ingress", + "TLS": { + "Enabled": true + }, + "Listeners": [ + { + "Port": 8080, + "Protocol": "http", + "Services": [ + { + "Name": "*" + } + ] + }, + { + "Port": 4567, + "Protocol": "http", + "Services": [ + { + "Name": "api", + "Hosts": ["foo.example.com"] + }, + { + "Name": "web", + "Hosts": ["website.example.com"] + } + ] + } + ] +} +``` + + + +#### Consul Enterprise + + + +```hcl +Kind = "ingress-gateway" +Name = "us-east-ingress" +Namespace = "default" + +TLS { + Enabled = true +} + +Listeners = [ + { + Port = 8080 + Protocol = "http" + Services = [ + { + Namespace = "frontend" + Name = "*" + } + ] + }, + { + Port = 4567 + Protocol = "http" + Services = [ + { + Namespace = "frontend" + Name = "api" + Hosts = ["foo.example.com"] + Partition = "api-team" + }, + { + Namespace = "frontend" + Name = "web" + Hosts = ["website.example.com"] + Partition = "web-team" + } + ] + } +] +``` + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: IngressGateway +metadata: + name: us-east-ingress + namespace: default +spec: + tls: + enabled: true + listeners: + - port: 8080 + protocol: http + services: + - name: '*' + namespace: frontend + - port: 4567 + protocol: http + services: + - name: api + namespace: frontend + hosts: ['foo.example.com'] + partition: api-team + - name: web + namespace: frontend + hosts: ['website.example.com'] + partition: web-team +``` + +```json +{ + "Kind": "ingress-gateway", + "Name": "us-east-ingress", + "Namespace": "default", + "TLS": { + "Enabled": true + }, + "Listeners": [ + { + "Port": 8080, + "Protocol": "http", + "Services": [ + { + "Namespace": "frontend", + "Name": "*" + } + ] + }, + { + "Port": 4567, + "Protocol": "http", + "Services": [ + { + "Namespace": "frontend", + "Name": "api", + "Hosts": ["foo.example.com"], + "Partition": "api-team" + }, + { + "Namespace": "frontend", + "Name": "web", + "Hosts": ["website.example.com"], + "Partition": "web-team" + } + ] + } + ] +} +``` + + \ No newline at end of file diff --git a/website/content/docs/connect/config-entries/inline-certificate.mdx b/website/content/docs/reference/config-entry/inline-certificate.mdx similarity index 86% rename from website/content/docs/connect/config-entries/inline-certificate.mdx rename to website/content/docs/reference/config-entry/inline-certificate.mdx index abf68dc9b2db..e8017fa9dafb 100644 --- a/website/content/docs/connect/config-entries/inline-certificate.mdx +++ b/website/content/docs/reference/config-entry/inline-certificate.mdx @@ -1,15 +1,16 @@ --- layout: docs -page_title: Inline certificate configuration reference 
-description: Learn how to configure an inline certificate bound to an API Gateway on VMs. +page_title: Inline certificate configuration entry reference +description: >- + Learn how to configure an inline certificate bound to an API Gateway on VMs. --- -# Inline certificate configuration reference +# Inline certificate configuration entry reference This topic provides reference information for the inline certificate -configuration entry. The inline certificate secures TLS for the Consul API gateway on VMs. In production environments, we recommend you use the more secure [file system certificate configuration entry](/consul/docs/connect/config-entries/file-system-certificate) instead. +configuration entry. The inline certificate secures TLS for the Consul API gateway on VMs. In production environments, we recommend you use the more secure [file system certificate configuration entry](/consul/docs/reference/config-entry/file-system-certificate) instead. -The inline certificate configuration entry is not used for Consul on Kubernetes deployments. To learn about configuring certificates for Kubernetes environments, refer to [Gateway Resource Configuration](/consul/docs/connect/gateways/api-gateway/configuration/gateway). +The inline certificate configuration entry is not used for Consul on Kubernetes deployments. To learn about configuring certificates for Kubernetes environments, refer to [Gateway Resource Configuration](/consul/docs/reference/k8s/api-gateway/gateway). ## Configuration model @@ -90,7 +91,7 @@ as applying a configuration entry to a specific cluster. ### `Namespace` -Specifies the Enterprise [namespace](/consul/docs/enterprise/namespaces) to apply to the configuration entry. +Specifies the Enterprise [namespace](/consul/docs/multi-tenant/namespace) to apply to the configuration entry. #### Values @@ -99,7 +100,7 @@ Specifies the Enterprise [namespace](/consul/docs/enterprise/namespaces) to appl ### `Partition` -Specifies the Enterprise [admin partition](/consul/docs/enterprise/admin-partitions) to apply to the configuration entry. +Specifies the Enterprise [admin partition](/consul/docs/multi-tenant/admin-partition) to apply to the configuration entry. #### Values diff --git a/website/content/docs/connect/config-entries/jwt-provider.mdx b/website/content/docs/reference/config-entry/jwt-provider.mdx similarity index 98% rename from website/content/docs/connect/config-entries/jwt-provider.mdx rename to website/content/docs/reference/config-entry/jwt-provider.mdx index d465b4d8c2b6..ecaea3137b06 100644 --- a/website/content/docs/connect/config-entries/jwt-provider.mdx +++ b/website/content/docs/reference/config-entry/jwt-provider.mdx @@ -1,12 +1,13 @@ --- -page_title: JWT provider configuration reference -description: |- - JWT provider configuration entries add JSON Web Token token validation to intentions in the service mesh. Learn how to write `jwt-provider` config entries in HCL or YAML with a specification reference, configuration model, a complete example, and example code by use case. +layout: docs +page_title: JSON web token (JWT) provider configuration entry reference +description: >- + JWT provider configuration entries add JSON Web Token token validation to intentions in the service mesh. Learn how to write `jwt-provider` config entries in HCL or YAML with a specification reference, configuration model, a complete example, and example code by use case. 
--- -# JWT provider configuration reference +# JSON web token (JWT) provider configuration entry reference -This page provides reference information for the JWT provider configuration entry, which configures Consul to use a JSON Web Token (JWT) and JSON Web Key Set (JWKS) in order to add JWT validation to proxies in the service mesh. Refer to [Use JWT authorization with service intentions](/consul/docs/connect/intentions/jwt-authorization) for more information. +This page provides reference information for the JWT provider configuration entry, which configures Consul to use a JSON Web Token (JWT) and JSON Web Key Set (JWKS) in order to add JWT validation to proxies in the service mesh. Refer to [Use JWT authorization with service intentions](/consul/docs/secure-mesh/intention/jwt) for more information. ## Configuration model @@ -821,7 +822,7 @@ Specifies a name for the configuration entry. The name is metadata that you can ### `metadata.namespace` -Specifies the namespace that the configuration applies to. Refer to [namespaces](/consul/docs/enterprise/namespaces) for more information. +Specifies the namespace that the configuration applies to. Refer to [namespaces](/consul/docs/multi-tenant/namespace) for more information. #### Values @@ -1324,4 +1325,4 @@ spec: ```
    - + \ No newline at end of file diff --git a/website/content/docs/reference/config-entry/mesh.mdx b/website/content/docs/reference/config-entry/mesh.mdx new file mode 100644 index 000000000000..de58cc405657 --- /dev/null +++ b/website/content/docs/reference/config-entry/mesh.mdx @@ -0,0 +1,590 @@ +--- +layout: docs +page_title: Mesh configuration entry reference +description: >- + The mesh configuration entry kind defines global default settings like TLS version requirements for proxies inside the service mesh. Use the reference guide to learn about `mesh` config entry parameters and how to control communication with services outside of the mesh. +--- + +# Mesh configuration entry reference + +This page provides reference information for the `mesh` configuration entry, which allows you to define a global default configuration that applies to all service mesh proxies. + +Settings in this configuration entry apply across all namespaces and federated datacenters. + +## Sample Configuration Entries + +The following examples demonstrate common configuration patterns for the `mesh` configuration entry. + +### Mesh-wide TLS Min Version + +Enforce that service mesh mTLS traffic uses TLS v1.2 or newer. + + + + + + +```hcl +Kind = "mesh" +TLS { + Incoming { + TLSMinVersion = "TLSv1_2" + } +} +``` + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: Mesh +metadata: + name: mesh +spec: + tls: + incoming: + tlsMinVersion: TLSv1_2 +``` + +```json +{ + "Kind": "mesh", + "TLS": { + "Incoming": { + "TLSMinVersion": "TLSv1_2" + } + } +} +``` + + + + + + +The `mesh` configuration entry can only be created in the `default` namespace and will apply to proxies across **all** namespaces. + + + +```hcl +Kind = "mesh" + +TLS { + Incoming { + TLSMinVersion = "TLSv1_2" + } +} +``` + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: Mesh +metadata: + name: mesh + namespace: default +spec: + tls: + incoming: + tlsMinVersion: TLSv1_2 +``` + +```json +{ + "Kind": "mesh", + "Namespace": "default", + "Partition": "default", + "TLS": { + "Incoming": { + "TLSMinVersion": "TLSv1_2" + } + } +} +``` + + + + + + +Note that the Kubernetes example does not include a `partition` field. Configuration entries are applied on Kubernetes using [custom resource definitions (CRD)](/consul/docs/fundamentals/config-entry), which can only be scoped to their own partition. + +### Mesh Destinations Only + +Only allow transparent proxies to dial addresses in the mesh. + + + + + + +```hcl +Kind = "mesh" +TransparentProxy { + MeshDestinationsOnly = true +} +``` + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: Mesh +metadata: + name: mesh +spec: + transparentProxy: + meshDestinationsOnly: true +``` + +```json +{ + "Kind": "mesh", + "TransparentProxy": { + "MeshDestinationsOnly": true + } +} +``` + + + + + + +The `mesh` configuration entry can only be created in the `default` namespace and will apply to proxies across **all** namespaces. + + + +```hcl +Kind = "mesh" + +TransparentProxy { + MeshDestinationsOnly = true +} +``` + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: Mesh +metadata: + name: mesh + namespace: default +spec: + transparentProxy: + meshDestinationsOnly: true +``` + +```json +{ + "Kind": "mesh", + "Namespace": "default", + "Partition": "default", + "TransparentProxy": { + "MeshDestinationsOnly": true + } +} +``` + + + + + + +Note that the Kubernetes example does not include a `partition` field. 
Configuration entries are applied on Kubernetes using [custom resource definitions (CRD)](/consul/docs/fundamentals/config-entry), which can only be scoped to their own partition. + +### Peer Through Mesh Gateways + +Set the `PeerThroughMeshGateways` parameter to `true` to route peering control plane traffic through mesh gateways. + + + + + + +```hcl +Kind = "mesh" +Peering { + PeerThroughMeshGateways = true +} +``` + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: Mesh +metadata: + name: mesh +spec: + peering: + peerThroughMeshGateways: true +``` + +```json +{ + "Kind": "mesh", + "Peering": { + "PeerThroughMeshGateways": true + } +} +``` + + + + + + +You can only set the `PeerThroughMeshGateways` attribute on `mesh` configuration entries in the `default` partition. +The `default` partition owns the traffic routed through the mesh gateway control plane to Consul servers. + + + +```hcl +Kind = "mesh" + +Peering { + PeerThroughMeshGateways = true +} +``` + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: Mesh +metadata: + name: mesh + namespace: default +spec: + peering: + peerThroughMeshGateways: true +``` + +```json +{ + "Kind": "mesh", + "Peering": { + "PeerThroughMeshGateways": true + } +} +``` + + + + + + +Note that the Kubernetes example does not include a `partition` field. Configuration entries are applied on Kubernetes using [custom resource definitions (CRD)](/consul/docs/fundamentals/config-entry), which can only be scoped to their own partition. + +### Request Normalization + +Enable options under `HTTP.Incoming.RequestNormalization` to apply normalization to all inbound traffic to mesh proxies. + +~> **Compatibility warning**: This feature is available as of Consul CE 1.20.1 and Consul Enterprise 1.20.1, 1.19.2, 1.18.3, and 1.15.15. We recommend upgrading to the latest version of Consul to take advantage of the latest features and improvements. + + + +```hcl +Kind = "mesh" +HTTP { + Incoming { + RequestNormalization { + InsecureDisablePathNormalization = false // default false, shown for completeness + MergeSlashes = true + PathWithEscapedSlashesAction = "UNESCAPE_AND_FORWARD" + HeadersWithUnderscoresAction = "REJECT_REQUEST" + } + } +} +``` + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: Mesh +metadata: + name: mesh +spec: + http: + incoming: + requestNormalization: + insecureDisablePathNormalization: false # default false, shown for completeness + mergeSlashes: true + pathWithEscapedSlashesAction: UNESCAPE_AND_FORWARD + headersWithUnderscoresAction: REJECT_REQUEST +``` + +```json +{ + "Kind": "mesh", + "HTTP": { + "Incoming": { + "RequestNormalization": { + "InsecureDisablePathNormalization": false, + "MergeSlashes": true, + "PathWithEscapedSlashesAction": "UNESCAPE_AND_FORWARD", + "HeadersWithUnderscoresAction": "REJECT_REQUEST" + } + } + } +} +``` + + + +## Available Fields + +: nil', + description: + 'Specifies arbitrary KV metadata pairs. Added in Consul 1.8.4.', + yaml: false, + }, + { + name: 'metadata', + children: [ + { + name: 'name', + description: 'Must be set to `mesh`', + }, + { + name: 'namespace', + enterprise: true, + description: + 'Must be set to `default`. If running Consul Community Edition, the namespace is ignored (see [Kubernetes Namespaces in Consul CE](/consul/docs/k8s/crds#consul-ce)). 
If running Consul Enterprise see [Kubernetes Namespaces in Consul Enterprise](/consul/docs/k8s/crds#consul-enterprise) for additional information.', + }, + ], + hcl: false, + }, + { + name: 'TransparentProxy', + type: 'TransparentProxyConfig: ', + description: + 'Controls configuration specific to proxies in `transparent` [mode](/consul/docs/reference/config-entry/service-defaults#mode). Added in v1.10.0.', + children: [ + { + name: 'MeshDestinationsOnly', + type: 'bool: false', + description: `Determines whether sidecar proxies operating in transparent mode can + proxy traffic to IP addresses not registered in Consul's mesh. If enabled, traffic will only be proxied + to upstream proxies or mesh-native services. If disabled, requests will be proxied as-is to the + original destination IP address. Consul will not encrypt the connection.`, + }, + ], + }, + { + name: 'AllowEnablingPermissiveMutualTLS', + type: 'bool: false', + description: + 'Controls whether `MutualTLSMode=permissive` can be set in the `proxy-defaults` and `service-defaults` configuration entries. ' + }, + { + name: 'ValidateClusters', + type: 'bool: false', + description: + `Controls whether the clusters the route table refers to are validated. The default value is false. When set to + false and a route refers to a cluster that does not exist, the route table loads and routing to a non-existent + cluster results in a 404. When set to true and the route is set to a cluster that do not exist, the route table + will not load. For more information, refer to + [HTTP route configuration in the Envoy docs](https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/route/v3/route.proto#envoy-v3-api-field-config-route-v3-routeconfiguration-validate-clusters) + for more details. `, + }, + { + name: 'TLS', + type: 'TLSConfig: ', + description: 'TLS configuration for the service mesh.', + children: [ + { + name: 'Incoming', + type: 'TLSDirectionConfig: ', + description: `TLS configuration for inbound mTLS connections targeting + the public listener on \`connect-proxy\` and \`terminating-gateway\` + proxy kinds.`, + children: [ + { + name: 'TLSMinVersion', + type: 'string: ""', + description: + "Set the default minimum TLS version supported. One of `TLS_AUTO`, `TLSv1_0`, `TLSv1_1`, `TLSv1_2`, or `TLSv1_3`. If unspecified, Envoy v1.22.0 and newer [will default to TLS 1.2 as a min version](https://github.com/envoyproxy/envoy/pull/19330), while older releases of Envoy default to TLS 1.0.", + }, + { + name: 'TLSMaxVersion', + type: 'string: ""', + description: { + hcl: + "Set the default maximum TLS version supported. Must be greater than or equal to `TLSMinVersion`. One of `TLS_AUTO`, `TLSv1_0`, `TLSv1_1`, `TLSv1_2`, or `TLSv1_3`. If unspecified, Envoy will default to TLS 1.3 as a max version for incoming connections.", + yaml: + "Set the default maximum TLS version supported. Must be greater than or equal to `tls_min_version`. One of `TLS_AUTO`, `TLSv1_0`, `TLSv1_1`, `TLSv1_2`, or `TLSv1_3`. If unspecified, Envoy will default to TLS 1.3 as a max version for incoming connections.", + }, + }, + { + name: 'CipherSuites', + type: 'array: ', + description: `Set the default list of TLS cipher suites + to support when negotiating connections using + TLS 1.2 or earlier. If unspecified, Envoy will use a + [default server cipher list](https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/transport_sockets/tls/v3/common.proto#envoy-v3-api-field-extensions-transport-sockets-tls-v3-tlsparameters-cipher-suites). 
+ The list of supported cipher suites can seen in + [\`consul/types/tls.go\`](https://github.com/hashicorp/consul/blob/v1.11.2/types/tls.go#L154-L169) + and is dependent on underlying support in Envoy. Future + releases of Envoy may remove currently-supported but + insecure cipher suites, and future releases of Consul + may add new supported cipher suites if any are added to + Envoy.`, + }, + ], + }, + { + name: 'Outgoing', + type: 'TLSDirectionConfig: ', + description: `TLS configuration for outbound mTLS connections dialing upstreams + from \`connect-proxy\` and \`ingress-gateway\` + proxy kinds.`, + children: [ + { + name: 'TLSMinVersion', + type: 'string: ""', + description: + "Set the default minimum TLS version supported. One of `TLS_AUTO`, `TLSv1_0`, `TLSv1_1`, `TLSv1_2`, or `TLSv1_3`. If unspecified, Envoy v1.22.0 and newer [will default to TLS 1.2 as a min version](https://github.com/envoyproxy/envoy/pull/19330), while older releases of Envoy default to TLS 1.0.", + }, + { + name: 'TLSMaxVersion', + type: 'string: ""', + description: { + hcl: + "Set the default maximum TLS version supported. Must be greater than or equal to `TLSMinVersion`. One of `TLS_AUTO`, `TLSv1_0`, `TLSv1_1`, `TLSv1_2`, or `TLSv1_3`. If unspecified, Envoy will default to TLS 1.2 as a max version for outgoing connections, but future Envoy releases [may change this to TLS 1.3](https://github.com/envoyproxy/envoy/issues/9300).", + yaml: + "Set the default maximum TLS version supported. Must be greater than or equal to `tls_min_version`. One of `TLS_AUTO`, `TLSv1_0`, `TLSv1_1`, `TLSv1_2`, or `TLSv1_3`. If unspecified, Envoy will default to TLS 1.2 as a max version for outgoing connections, but future Envoy releases [may change this to TLS 1.3](https://github.com/envoyproxy/envoy/issues/9300).", + }, + }, + { + name: 'CipherSuites', + type: 'array: ', + description: `Set the default list of TLS cipher suites + to support when negotiating connections using + TLS 1.2 or earlier. If unspecified, Envoy will use a + [default server cipher list](https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/transport_sockets/tls/v3/common.proto#envoy-v3-api-field-extensions-transport-sockets-tls-v3-tlsparameters-cipher-suites). + The list of supported cipher suites can seen in + [\`consul/types/tls.go\`](https://github.com/hashicorp/consul/blob/v1.11.2/types/tls.go#L154-L169) + and is dependent on underlying support in Envoy. Future + releases of Envoy may remove currently-supported but + insecure cipher suites, and future releases of Consul + may add new supported cipher suites if any are added to + Envoy.`, + }, + ], + }, + ], + }, + { + name: 'HTTP', + type: 'HTTPConfig: ', + description: 'HTTP configuration for the service mesh.', + children: [ + { + name: 'SanitizeXForwardedClientCert', + type: 'bool: ', + description: `If configured to \`true\`, the \`forward_client_cert_details\` option will be set to \`SANITIZE\` + for all Envoy proxies. As a result, Consul will not include the \`x-forwarded-client-cert\` header in the next hop. 
+ If set to \`false\` (default), the XFCC header is propagated to upstream applications.`, + }, + { + name: 'Incoming', + type: 'DirectionalHTTPConfig: ', + description: `HTTP configuration for inbound traffic to mesh proxies.`, + children: [ + { + name: 'RequestNormalization', + type: 'RequestNormalizationConfig: ', + description: `Request normalization configuration for inbound traffic to mesh proxies.`, + children: [ + { + name: 'InsecureDisablePathNormalization', + type: 'bool: false', + description: `Sets the value of the \`normalize_path\` option in the Envoy listener's \`HttpConnectionManager\`. The default value is \`false\`. + When set to \`true\` in Consul, \`normalize_path\` is set to \`false\` for the Envoy proxy. + This parameter disables the normalization of request URL paths according to RFC 3986, + conversion of \`\\\` to \`/\`, and decoding non-reserved %-encoded characters. When using L7 + intentions with path match rules, we recommend enabling path normalization in order + to avoid match rule circumvention with non-normalized path values.`, + }, + { + name: 'MergeSlashes', + type: 'bool: false', + description: `Sets the value of the \`merge_slashes\` option in the Envoy listener's \`HttpConnectionManager\`. The default value is \`false\`. + This option controls the normalization of request URL paths by merging consecutive \`/\` characters. This normalization is not part + of RFC 3986. When using L7 intentions with path match rules, we recommend enabling this setting to avoid match rule circumvention through non-normalized path values, unless legitimate service + traffic depends on allowing for repeat \`/\` characters, or upstream services are configured to + differentiate between single and multiple slashes.`, + }, + { + name: 'PathWithEscapedSlashesAction', + type: 'string: ""', + description: `Sets the value of the \`path_with_escaped_slashes_action\` option in the Envoy listener's + \`HttpConnectionManager\`. The default value of this option is empty, which is + equivalent to \`IMPLEMENTATION_SPECIFIC_DEFAULT\`. This parameter controls the action taken in response to request URL paths with escaped + slashes in the path. When using L7 intentions with path match rules, we recommend enabling this setting to avoid match rule circumvention through non-normalized path values, unless legitimate service + traffic depends on allowing for escaped \`/\` or \`\\\` characters, or upstream services are configured to + differentiate between escaped and unescaped slashes. Refer to the Envoy documentation for more information on available + options.`, + }, + { + name: 'HeadersWithUnderscoresAction', + type: 'string: ""', + description: `Sets the value of the \`headers_with_underscores_action\` option in the Envoy listener's + \`HttpConnectionManager\` under \`common_http_protocol_options\`. The default value of this option is + empty, which is equivalent to \`ALLOW\`. Refer to the Envoy documentation for more information on available options.`, + }, + ], + }, + ], + } + ], + }, + { + name: 'Peering', + type: 'PeeringMeshConfig: ', + description: + 'Controls configuration specific to [peering connections](/consul/docs/east-west/cluster-peering).', + children: [ + { + name: 'PeerThroughMeshGateways', + type: 'bool: ', + description: `Determines if peering control-plane traffic should be routed through mesh gateways. + When enabled, dialing cluster attempt to contact peers through their mesh gateway. 
+ Clusters that accept calls advertise the address of their mesh gateways, rather than the address of their Consul servers.`, + }, + ], + }, + ]} +/> + +## ACLs + +Configuration entries may be protected by [ACLs](/consul/docs/secure/acl). + +Reading a `mesh` config entry requires no specific privileges. + +Creating, updating, or deleting a `mesh` config entry requires +`operator:write`. diff --git a/website/content/docs/connect/config-entries/proxy-defaults.mdx b/website/content/docs/reference/config-entry/proxy-defaults.mdx similarity index 92% rename from website/content/docs/connect/config-entries/proxy-defaults.mdx rename to website/content/docs/reference/config-entry/proxy-defaults.mdx index 824ce835998c..b5a259a36d8e 100644 --- a/website/content/docs/connect/config-entries/proxy-defaults.mdx +++ b/website/content/docs/reference/config-entry/proxy-defaults.mdx @@ -2,16 +2,16 @@ layout: docs page_title: Proxy defaults configuration entry reference description: >- - The proxy defaults configuration entry kind defines default behaviors for proxies in the service mesh. Use the reference guide to learn about `""proxy-defaults""` config entry parameters. + The proxy defaults configuration entry kind defines default behaviors for proxies in the service mesh. Use the reference guide to learn about `proxy-defaults` config entry parameters. --- -# Proxy defaults configuration reference +# Proxy defaults configuration entry reference -This topic provides reference information for proxy defaults configuration entries. Refer to [Service mesh proxy overview](/consul/docs/connect/proxies) for information about using proxies in Consul. +This topic provides reference information for proxy defaults configuration entries. Refer to [Service mesh proxy overview](/consul/docs/connect/proxy) for information about using proxies in Consul. ## Introduction -Proxy defaults configuration entries set global passthrough Envoy settings for proxies in the service mesh, including sidecars and gateways. Proxy defaults configuration entries do not control features for peered clusters, transparent proxy, or TLS behavior. For information about configuring Consul settings that affect service mesh behavior, refer to the [mesh configuration entry reference](/consul/docs/connect/config-entries/mesh). +Proxy defaults configuration entries set global passthrough Envoy settings for proxies in the service mesh, including sidecars and gateways. Proxy defaults configuration entries do not control features for peered clusters, transparent proxy, or TLS behavior. For information about configuring Consul settings that affect service mesh behavior, refer to the [mesh configuration entry reference](/consul/docs/reference/config-entry/mesh). Consul only supports one global proxy defaults configuration entry at a time. If multiple configuration entries are defined in Consul Enterprise, Consul implements the configuration entry in the `default` partition. @@ -305,7 +305,7 @@ Specifies the namespace that the proxy defaults apply to. You can only specify t ### `Partition` -Specifies the local admin partition that the proxy defaults apply to. Refer to [admin partitions](/consul/docs/enterprise/admin-partitions) for more information. +Specifies the local admin partition that the proxy defaults apply to. Refer to [admin partitions](/consul/docs/multi-tenant/admin-partition) for more information. #### Values @@ -337,7 +337,7 @@ Specifies an arbitrary map of configuration values used by service mesh proxies. 
### `EnvoyExtensions` -Specifies a list of extensions that modify Envoy proxy configurations. Refer to [Envoy extensions](/consul/docs/connect/proxies/envoy-extensions) for additional information. +Specifies a list of extensions that modify Envoy proxy configurations. Refer to [Envoy extensions](/consul/docs/envoy-extension) for additional information. #### Values @@ -374,7 +374,7 @@ Specifies a mode for how proxies direct inbound and outbound traffic. You can sp ### `TransparentProxy` -Contains configurations for proxies that are running in transparent proxy mode. This mode enables permissive mTLS for Consul so that you can use your Kubernetes cluster's DNS service instead of Consul DNS. Refer to [Transparent proxy mode](/consul/docs/k8s/connect/transparent-proxy) for additional information. +Contains configurations for proxies that are running in transparent proxy mode. This mode enables permissive mTLS for Consul so that you can use your Kubernetes cluster's DNS service instead of Consul DNS. Refer to [Transparent proxy mode](/consul/docs/connect/proxy/transparent-proxy) for additional information. #### Values @@ -388,7 +388,7 @@ The following table describes how to configure values in the `TransparentProxy` | Parameter | Description | Data type | Default | | --- | --- | --- | --- | | `OutboundListenerPort` | Specifies the port that the proxy listens on for outbound traffic. Outbound application traffic must be captured and redirected to this port. | Integer | `15001` | -| `DialedDirectly` | Determines whether other proxies in transparent mode can directly dial this proxy instance's IP address. Proxies in transparent mode commonly dial upstreams at the [`virtual` tagged address](/consul/docs/services/configuration/services-configuration-reference#tagged_addresses-virtual), which load balances across instances. Dialing individual instances can be helpful when sending requests to stateful services, such as database clusters with a leader. | Boolean | `false` | +| `DialedDirectly` | Determines whether other proxies in transparent mode can directly dial this proxy instance's IP address. Proxies in transparent mode commonly dial upstreams at the [`virtual` tagged address](/consul/docs/reference/service#tagged_addresses-virtual), which load balances across instances. Dialing individual instances can be helpful when sending requests to stateful services, such as database clusters with a leader. | Boolean | `false` | ### `MutualTLSMode` @@ -423,7 +423,7 @@ Sets the default mesh gateway `mode` field for all proxies. You can specify the Specifies default configurations for exposing HTTP paths through Envoy. Exposing paths through Envoy enables services to protect themselves by only listening on `localhost`. Applications that are not Consul service mesh-enabled are still able to contact an HTTP endpoint. -Example use-cases include exposing the `/metrics` endpoint to a monitoring system, such as Prometheus, and exposing the `/healthz` endpoint to the kubelet for liveness checks. Refer to [Expose Paths Configuration Reference](/consul/docs/connect/proxy-config-reference#expose-paths-configuration-reference) for additional information. +Example use-cases include exposing the `/metrics` endpoint to a monitoring system, such as Prometheus, and exposing the `/healthz` endpoint to the kubelet for liveness checks. Refer to [Expose Paths Configuration Reference](/consul/docs/reference/proxy/connect-proxy#expose-paths-configuration-reference) for additional information. 
#### Values @@ -434,7 +434,7 @@ Example use-cases include exposing the `/metrics` endpoint to a monitoring syste ### `Expose{}.Checks` -Exposes all HTTP and gRPC checks registered with the agent when set to `true`. Envoy exposes listeners for the checks and only accepts connections originating from localhost or the [Consul agent's `advertise_addr`](/consul/docs/agent/config/config-files#advertise). The ports for the listeners are dynamically allocated from the [agent's `expose_min_port`](/consul/docs/agent/config/config-files#expose_min_port) and [`expose_max_port`](/consul/docs/agent/config/config-files#expose_max_port) configurations. +Exposes all HTTP and gRPC checks registered with the agent when set to `true`. Envoy exposes listeners for the checks and only accepts connections originating from localhost or the [Consul agent's `advertise_addr`](/consul/docs/reference/agent/configuration-file/address#advertise_addr). The ports for the listeners are dynamically allocated from the [agent's `expose_min_port`](/consul/docs/reference/agent/configuration-file/general#expose_min_port) and [`expose_max_port`](/consul/docs/reference/agent/configuration-file/general#expose_max_port) configurations. We recommend enabling the `Checks` configuration when a Consul client cannot reach registered services over localhost. @@ -465,7 +465,7 @@ The following table describes the parameters for each map you can define in the Sets a mode for the service that allows instances to prioritize upstream targets that are in the same network region and zone. You can specify the following string values for the `mode` field: -- `failover`: If the upstream targets that a service is connected to become unreachable, the service prioritizes healthy upstream instances with matching `Locality` configuration. Refer to [Route traffic to local upstreams](/consul/docs/connect/manage-traffic/route-to-local-upstreams) for additional information. +- `failover`: If the upstream targets that a service is connected to become unreachable, the service prioritizes healthy upstream instances with matching `Locality` configuration. Refer to [Route traffic to local upstreams](/consul/docs/manage-traffic/route-local) for additional information. #### Values @@ -566,7 +566,7 @@ Specifies an arbitrary map of configuration values used by service mesh proxies. ### `spec.envoyExtensions` -Specifies a list of extensions that modify Envoy proxy configurations. Refer to [Envoy extensions](/consul/docs/connect/proxies/envoy-extensions) for additional information. +Specifies a list of extensions that modify Envoy proxy configurations. Refer to [Envoy extensions](/consul/docs/envoy-extension) for additional information. #### Values @@ -603,7 +603,7 @@ Specifies a mode for how proxies direct inbound and outbound traffic. You can sp ### `spec.transparentProxy` -Contains configurations for proxies that are running in transparent proxy mode. This mode enables permissive mTLS for Consul so that you can use your Kubernetes cluster's DNS service instead of Consul DNS. Refer to [Transparent proxy mode](/consul/docs/k8s/connect/transparent-proxy) for additional information. +Contains configurations for proxies that are running in transparent proxy mode. This mode enables permissive mTLS for Consul so that you can use your Kubernetes cluster's DNS service instead of Consul DNS. Refer to [Transparent proxy mode](/consul/docs/connect/proxy/transparent-proxy) for additional information. 
#### Values @@ -617,7 +617,7 @@ The following table describes how to configure values in the `TransparentProxy` | Parameter | Description | Data type | Default | | --- | --- | --- | --- | | `outboundListenerPort` | Specifies the port that the proxy listens on for outbound traffic. Outbound application traffic must be captured and redirected to this port. | Integer | `15001` | -| `dialedDirectly` | Determines whether other proxies in transparent mode can directly dial this proxy instance's IP address. Proxies in transparent mode commonly dial upstreams at the [`virtual` tagged address](/consul/docs/services/configuration/services-configuration-reference#tagged_addresses-virtual), which load balances across instances. Dialing individual instances can be helpful when sending requests to stateful services, such as database clusters with a leader. | Boolean | `false` | +| `dialedDirectly` | Determines whether other proxies in transparent mode can directly dial this proxy instance's IP address. Proxies in transparent mode commonly dial upstreams at the [`virtual` tagged address](/consul/docs/reference/service#tagged_addresses-virtual), which load balances across instances. Dialing individual instances can be helpful when sending requests to stateful services, such as database clusters with a leader. | Boolean | `false` | ### `spec.mutualTLSMode` @@ -652,7 +652,7 @@ Sets the default mesh gateway `mode` field for all proxies. You can specify the Specifies default configurations for exposing HTTP paths through Envoy. Exposing paths through Envoy enables services to protect themselves by only listening on `localhost`. Applications that are not Consul service mesh-enabled are still able to contact an HTTP endpoint. -Example use-cases include exposing the `/metrics` endpoint to a monitoring system, such as Prometheus, and exposing the `/healthz` endpoint to the kubelet for liveness checks. Refer to [Expose Paths Configuration Reference](/consul/docs/connect/proxy-config-reference#expose-paths-configuration-reference) for additional information. +Example use-cases include exposing the `/metrics` endpoint to a monitoring system, such as Prometheus, and exposing the `/healthz` endpoint to the kubelet for liveness checks. Refer to [Expose Paths Configuration Reference](/consul/docs/reference/proxy/connect-proxy#expose-paths-configuration-reference) for additional information. #### Values @@ -663,9 +663,7 @@ Example use-cases include exposing the `/metrics` endpoint to a monitoring syste ### `spec.expose{}.checks` -Exposes all HTTP and gRPC checks registered with the agent when set to `true`. Envoy exposes listeners for the checks and only accepts connections originating from localhost or the [Consul agent's `advertise_addr`](/consul/docs/agent/config/config-files#advertise). The ports for the listeners are dynamically allocated from the [agent's `expose_min_port`](/consul/docs/agent/config/config-files#expose_min_port) and [`expose_max_port`](/consul/docs/agent/config/config-files#expose_max_port) configurations. - -We recommend enabling the `Checks` configuration when a Consul client cannot reach registered services over localhost, such as when Consul agents run in their own pods in Kubernetes. +@include 'text/reference/config-entry/spec-expose-checks.mdx' #### Values @@ -694,7 +692,7 @@ The following table describes the parameters for each map you can define in the Sets a mode for the service that allows instances to prioritize upstream targets that are in the same network region and zone. 
You can specify the following string values for the `mode` field: -- `failover`: If the upstream targets that a service is connected to become unreachable, the service prioritizes healthy upstream instances with matching `locality` configuration. Refer to [Route traffic to local upstreams](/consul/docs/connect/manage-traffic/route-to-local-upstreams) for additional information. +- `failover`: If the upstream targets that a service is connected to become unreachable, the service prioritizes healthy upstream instances with matching `locality` configuration. Refer to [Route traffic to local upstreams](/consul/docs/manage-traffic/route-local) for additional information. #### Values @@ -874,7 +872,7 @@ spec: ### Access Logs -The following example enables access logs for all proxies. Refer to [access logs](/consul/docs/connect/observability/access-logs) for more detailed examples. +The following example enables access logs for all proxies. Refer to [access logs](/consul/docs/observe/access-log) for more detailed examples. diff --git a/website/content/docs/connect/config-entries/registration.mdx b/website/content/docs/reference/config-entry/registration.mdx similarity index 96% rename from website/content/docs/connect/config-entries/registration.mdx rename to website/content/docs/reference/config-entry/registration.mdx index 95740cb084f8..ea898d66d023 100644 --- a/website/content/docs/connect/config-entries/registration.mdx +++ b/website/content/docs/reference/config-entry/registration.mdx @@ -6,7 +6,7 @@ description: The Registration CRD enables Consul on Kubernetes to register an ex # Registration CRD configuration reference -This topic provides reference information for `Registration` custom resource definitions (CRDs). You can use this CRD to register an external service with Consul on Kubernetes. For more information. Refer to [Register services running on external nodes to Consul on Kubernetes](/consul/docs/k8s/deployment-configurations/external-service) for more information. +This topic provides reference information for `Registration` custom resource definitions (CRDs). You can use this CRD to register an external service with Consul on Kubernetes. For more information. Refer to [Register services running on external nodes to Consul on Kubernetes](/consul/docs/register/external/esm/k8s) for more information. ## Configuration model @@ -221,7 +221,7 @@ Specifies the IP address where the external service is available. Specifies details for a health check. Health checks perform several safety functions, such as allowing a web balancer to gracefully remove failing nodes and allowing a database to replace a failed secondary. You can configure health checks to monitor the health of the entire node. -For more information about configuring health checks for Consul, refer to [health check configuration reference](/consul/docs/services/configuration/checks-configuration-reference). +For more information about configuring health checks for Consul, refer to [health check configuration reference](/consul/docs/reference/service/health-check). #### Values @@ -262,7 +262,7 @@ Requires child parameters `deregisterCriticalServiceAfterDuration`, `intervalDur | `spec.check.definition.tcp` | Specifies an IP address or host and port number that the check establishes a TCP connection with. | String | None | | `spec.check.definition.tcpUseTLS` | Enables TLS for TCP checks when set to `true`. 
| Boolean | `false` | | `spec.check.definition.timeoutDuration` | Specifies how long unsuccessful requests take to end with a timeout. This parameter is required to configure `spec.check.definition`. | String | `"10s"` | -| `spec.check.definition.tlsServerName` | Specifies the server name used to verify the hostname on the returned certificates unless `tls_skip_verify` is configured. This value is also included in the client's handshake to support SNI. It is recommended that this field be left unspecified. Refer to [health check configuration reference](/consul/docs/services/configuration/checks-configuration-reference#check-block) for more information. | String | None | +| `spec.check.definition.tlsServerName` | Specifies the server name used to verify the hostname on the returned certificates unless `tls_skip_verify` is configured. This value is also included in the client's handshake to support SNI. It is recommended that this field be left unspecified. Refer to [health check configuration reference](/consul/docs/reference/service/health-check#check-block) for more information. | String | None | | `spec.check.definition.tlsSkipVerify` | Determines if the check verifies the chain and hostname of the certificate that the server presents. Set to `true` to disable verification. We recommend setting to `false` in production environments. | Boolean | `false` | | `spec.check.definition.udp` | Specifies an IP address or host and port number for the check to send UDP datagrams to. | String | None | @@ -286,7 +286,7 @@ Specifies a name for the health check. Defaults to [`spec.service.id`](#spec-ser ### `spec.check.namespace` -Specifies the Consul namespace the health check applies to. Refer to [namespaces](/consul/docs/enterprise/namespace) for more information. +Specifies the Consul namespace the health check applies to. Refer to [namespaces](/consul/docs/multi-tenant/namespace) for more information. #### Values @@ -322,7 +322,7 @@ Specifies human readable output in response to a health check. ### `spec.check.partition` -Specifies the Consul admin partition the health check applies to. Refer to [admin partitions](/consul/docs/enterprise/admin-partitions) for more information. +Specifies the Consul admin partition the health check applies to. Refer to [admin partitions](/consul/docs/multi-tenant/admin-partition) for more information. #### Values @@ -432,7 +432,7 @@ Specifies the admin partition of the node to register. Specifies the service to register. The `Service.Service` field is required. If `Service.ID` is not provided, the default is the `Service.Service`. -You can only specify one service with a given `ID` per node. We recommend using valid DNS labels for service definition names. Refer to [Internet Engineering Task Force's RFC 1123](https://datatracker.ietf.org/doc/html/rfc1123#page-72) for additional information. Service names that conform to standard usage ensures compatibility with external DNSs. Refer to [Services Configuration Reference](/consul/docs/services/configuration/services-configuration-reference#name) for more information. +You can only specify one service with a given `ID` per node. We recommend using valid DNS labels for service definition names. Refer to [Internet Engineering Task Force's RFC 1123](https://datatracker.ietf.org/doc/html/rfc1123#page-72) for additional information. Service names that conform to standard usage ensures compatibility with external DNSs. Refer to [Services Configuration Reference](/consul/docs/reference/service#name) for more information. 
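As a point of reference, the following hypothetical sketch registers an external service under a DNS-label-friendly name. Field names follow this reference where they appear above; `node` and `port` are assumed fields, and all values are illustrative.

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: Registration
metadata:
  name: external-db
spec:
  # Illustrative external node and address for the service.
  node: legacy-node-01
  address: 10.0.0.10
  service:
    name: external-db   # use a valid DNS label for compatibility with external DNS
    id: external-db-1   # must be unique per node; defaults to the service name
    port: 5432
```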
#### Values @@ -474,7 +474,7 @@ Specifies an ID for the service. Services on the same node must have unique IDs. ### `spec.service.locality` -Specifies the region and zone in the cloud service provider (CSP) where the service is available. Configure this field to enable Consul to route traffic to the nearest physical service instance. Services inherit the `locality` configuration of the Consul agent they are registered with, but you can explicitly define locality for your service instances if an override is needed. Refer to [Route traffic to local upstreams](/consul/docs/connect/manage-traffic/route-to-local-upstreams) for additional information. +Specifies the region and zone in the cloud service provider (CSP) where the service is available. Configure this field to enable Consul to route traffic to the nearest physical service instance. Services inherit the `locality` configuration of the Consul agent they are registered with, but you can explicitly define locality for your service instances if an override is needed. Refer to [Route traffic to local upstreams](/consul/docs/manage-traffic/route-local) for additional information. #### Values @@ -512,7 +512,7 @@ Specifies a name for the service. We recommend using valid DNS labels for servic ### `spec.service.namespace` -Specifies the Consul namespace to register the service in. Refer to [namespaces](/consul/docs/enterprise/namespaces) for more information. +Specifies the Consul namespace to register the service in. Refer to [namespaces](/consul/docs/multi-tenant/namespace) for more information. #### Values @@ -521,7 +521,7 @@ Specifies the Consul namespace to register the service in. Refer to [namespaces] ### `spec.service.partition` -Specifies the Consul admin partition to register the service in. Refer to [admin partitions](/consul/docs/enterprise/admin-partitions) for more information. +Specifies the Consul admin partition to register the service in. Refer to [admin partitions](/consul/docs/multi-tenant/admin-partition) for more information. #### Values diff --git a/website/content/docs/reference/config-entry/sameness-group.mdx b/website/content/docs/reference/config-entry/sameness-group.mdx new file mode 100644 index 000000000000..ed38cf2104aa --- /dev/null +++ b/website/content/docs/reference/config-entry/sameness-group.mdx @@ -0,0 +1,398 @@ +--- +layout: docs +page_title: Sameness group configuration entry reference +description: >- + Sameness groups enable Consul to associate service instances with the same name deployed to the same namespace as identical services. Learn how to configure a `sameness-group` configuration entry to enable failover between partitions and cluster peers in non-federated networks. +--- + +# Sameness group configuration entry reference + +This page provides reference information for sameness group configuration entries. Sameness groups associate identical admin partitions to facilitate traffic between identical services. When partitions are part of the same Consul datacenter, you can create a sameness group by listing them in the `Members[].Partition` field. When partitions are located on remote clusters, you must establish cluster peering connections between remote partitions in order to add them to a sameness group in the `Members[].Peer` field. + +To learn more about creating a sameness group, refer to [Create sameness groups](/consul/docs/multi-tenant/sameness-group/vm) or [Create sameness groups on Kubernetes](/consul/docs/multi-tenant/sameness-group/k8s). 
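After writing a definition such as the ones shown below, you apply it like any other configuration entry. For example, on VMs a command along these lines writes the entry with the Consul CLI (the filename is illustrative); on Kubernetes, you apply the equivalent `SamenessGroup` custom resource with `kubectl apply`.

```shell-session
$ consul config write products-api-sameness-group.hcl
```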
+ +## Configuration model + +The following list outlines field hierarchy, language-specific data types, and requirements in the sameness group configuration entry. Click on a property name to view additional details, including default values. + + + + + +- [`Kind`](#kind): string | required | must be set to `sameness-group` +- [`Name`](#name): string | required +- [`Partition`](#partition): string | `default` +- [`DefaultForFailover`](#defaultforfailover): boolean | `false` +- [`IncludeLocal`](#includelocal): boolean | `false` +- [`Members`](#members): list of maps | required + - [`Partition`](#members-partition): string + - [`Peer`](#members-peer): string + + + + + +- [`apiVersion`](#apiversion): string | required | must be set to `consul.hashicorp.com/v1alpha1` +- [`kind`](#kind): string | required | must be set to `SamenessGroup` +- [`metadata`](#metadata): map | required + - [`name`](#metadata-name): string | required +- [`spec`](#spec): map | required + - [`defaultForFailover`](#spec-defaultforfailover): boolean | `false` + - [`includeLocal`](#spec-includelocal): boolean | `false` + - [`members`](#spec-members): list of maps | required + - [`partition`](#spec-members-partition): string + - [`peer`](#spec-members-peer): string + + + + +## Complete configuration + +When every field is defined, a sameness group configuration entry has the following form: + + + + + +```hcl +Kind = "sameness-group" # required +Name = "" # required +Partition = "" +DefaultForFailover = false +IncludeLocal = true +Members = [ # required + { Partition = "" }, + { Peer = "" } +] +``` + + + + + +```json +{ + "Kind": "sameness-group", // required + "Name": "", // required + "Partition": "", + "DefaultForFailover": false, + "IncludeLocal": true, + "Members": [ // required + { + "Partition": "" + }, + { + "Peer": "" + } + ] +} +``` + + + + + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 # required +kind: SamenessGroup # required +metadata: + name: +spec: + defaultForFailover: false + includeLocal: true + members: # required + - partition: + - peer: +``` + + + + +## Specifications + +This section provides details about the fields you can configure in the sameness group configuration entry. + + + + + +### `Kind` + +Specifies the type of configuration entry to implement. Must be set to `sameness-group`. + +#### Values + +- Default: None +- This field is required. +- Data type: String value that must be set to `sameness-group`. + +### `Name` + +Specifies a name for the configuration entry that is used to identify the sameness group. To ensure consistency, use descriptive names and make sure that the same name is used when creating configuration entries to add each member to the sameness group. + +#### Values + +- Default: None +- This field is required. +- Data type: String + +### `Partition` + +Specifies the local admin partition that the sameness group applies to. Refer to [admin partitions](/consul/docs/multi-tenant/admin-partition) for more information. + +#### Values + +- Default: `default` +- Data type: String + +### `DefaultForFailover` + +Determines whether the sameness group should be used to establish connections to services with the same name during failover scenarios. + +When this field is set to `true`, upstream requests automatically fail over to services in the sameness group according to the order of the members in the `Members` list. It impacts all services on the partition. 
+ +When this field is set to `false`, you can use a sameness group for failover by configuring the `Failover` block of a [service resolver configuration entry](/consul/docs/reference/config-entry/service-resolver). + +When you [query Consul DNS](/consul/docs/discover/service/static) using sameness groups, `DefaultForFailover` must be set to `true`. Otherwise, Consul DNS returns an error. + +#### Values + +- Default: `false` +- Data type: Boolean + +### `IncludeLocal` + +Determines whether the local partition should be considered the first member of the sameness group. When this field is set to `true`, DNS queries, upstream requests, and failover traffic return a healthy instance from the local partition unless none exists. + +If you enable this parameter, you do not need to list the local partition as the first member in the group. + +#### Values + +- Default: `false` +- Data type: Boolean + +### `Members` + +Specifies the partitions and cluster peers that are members of the sameness group from the perspective of the local partition. + +The local partition should be the first member listed unless `IncludeLocal=true`. The order of the members determines their precedence during failover scenarios. If a member is listed but Consul cannot connect to it, failover proceeds with the next healthy member in the list. For an example demonstrating how to configure this parameter, refer to [Failover between sameness groups](#failover-between-members-of-a-sameness-group). + +Each partition can belong to a single sameness group. You cannot associate a partition or cluster peer with multiple sameness groups. + +#### Values + +- Default: None +- This field is required. +- Data type: List that can contain maps of the following parameters: + - [`Partition`](#members-partition) + - [`Peer`](#members-peer) + +### `Members[].Partition` + +Specifies a partition in the local datacenter that is a member of the sameness group. Local partitions do not require cluster peering connections before they are added to a sameness group. + +#### Values + +- Default: None +- Data type: String + +### `Members[].Peer` + +Specifies the name of a cluster peer that is a member of the sameness group. + +Cluster peering connections must be established before adding a remote partition to the list of members. Refer to [establish cluster peering connections](/consul/docs/east-west/cluster-peering/establish/vm) for more information. + +#### Values + +- Default: None +- Data type: String + + + + + +### `apiVersion` + +Specifies the version of the Consul API for integrating with Kubernetes. The value must be `consul.hashicorp.com/v1alpha1`. + +#### Values + +- Default: None +- This field is required. +- String value that must be set to `consul.hashicorp.com/v1alpha1`. + +### `kind` + +Specifies the type of configuration entry to implement. Must be set to `SamenessGroup`. + +#### Values + +- Default: None +- This field is required. +- Data type: String value that must be set to `SamenessGroup`. + +### `metadata` + +Map that contains an arbitrary name for the configuration entry and the namespace it applies to. + +#### Values + +- Default: None +- Data type: Map + +### `metadata.name` + +Specifies a name for the configuration entry that is used to identify the sameness group. To ensure consistency, use descriptive names and make sure that the same name is used when creating configuration entries to add each member to the sameness group. + +#### Values + +- Default: None +- This field is required. 
+- Data type: String + +### `spec` + +Map that contains the details about the `SamenessGroup` configuration entry. The `apiVersion`, `kind`, and `metadata` fields are siblings of the spec field. All other configurations are children. + +#### Values + +- Default: None +- This field is required. +- Data type: Map + +### `spec.defaultForFailover` + +Determines whether the sameness group should be used to establish connections to services with the same name during failover scenarios. When this field is set to `true`, upstream requests automatically fail over to services in the sameness group according to the order of the members in the `spec.members` list. This setting affects all services on the partition. + +When this field is set to `false`, you can use a sameness group for failover by configuring the `spec.failover` block of a [service resolver CRD](/consul/docs/reference/config-entry/service-resolver). + +#### Values + +- Default: `false` +- Data type: Boolean + +### `spec.includeLocal` + +Determines whether the local partition should be considered the first member of the sameness group. When this field is set to `true`, DNS queries, upstream requests, and failover traffic return a healthy instance from the local partition unless none exists. + +If you enable this parameter, you do not need to list the local partition as the first member in the group. + +#### Values + +- Default: `false` +- Data type: Boolean + +### `spec.members` + +Specifies the local partitions and cluster peers that are members of the sameness group from the perspective of the local partition. + +The local partition should be the first member listed unless `spec.includeLocal: true`. The order of the members determines their precedence during failover scenarios. If a member is listed but Consul cannot connect to it, failover proceeds with the next healthy member in the list. For an example demonstrating how to configure this parameter, refer to [Failover between sameness groups](#failover-between-members-of-a-sameness-group). + +Each partition can belong to a single sameness group. You cannot associate a partition or cluster peer with multiple sameness groups. + +#### Values + +- Default: None +- This field is required. +- Data type: List that can contain maps of the following parameters: + + - [`partition`](#spec-members-partition) + - [`peer`](#spec-members-peer) + +### `spec.members[].partition` + +Specifies a partition in the local datacenter that is a member of the sameness group. Local partitions do not require cluster peering connections before they are added to a sameness group. + +#### Values + +- Default: None +- Data type: String + +### `spec.members[].peer` + +Specifies the name of a cluster peer that is a member of the sameness group. + +Cluster peering connections must be established before adding a peer to the list of members. Refer to [establish cluster peering connections](/consul/docs/east-west/cluster-peering/establish/vm) for more information. + +#### Values + +- Default: None +- Data type: String + + + + +## Examples + +The following examples demonstrate common sameness group configuration patterns for specific use cases. + +### Failover between members of a sameness group + +In the following example, the configuration entry defines a sameness group named `products-api` that applies to the `store-east` partition in the local datacenter. 
The sameness group is configured so that when a service instance in `store-east` fails, Consul attempts to establish a failover connection in the following order: + +- Services with the same name in the `store-east` partition +- Services with the same name in the `inventory-east` partition in the same datacenter +- Services with the same name in the `store-west` partition of datacenter `dc2`, which has an established cluster peering connection. +- Services with the same name in the `inventory-west` partition of `dc2`, which has an established cluster peering connection. + + + + + +```hcl +Kind = "sameness-group" +Name = "products-api" +Partition = "store-east" +Members = [ + { Partition = "store-east" }, + { Partition = "inventory-east" }, + { Peer = "dc2-store-west" }, + { Peer = "dc2-inventory-west" } +] +``` + + + + + +```json +{ + "Kind": "sameness-group", + "Name": "products-api", + "Partition": "store-east", + "Members": [ + { + "Partition": "store-east" + }, + { + "Partition": "inventory-east" + }, + { + "Peer": "dc2-store-west" + }, + { + "Peer": "dc2-inventory-west" + } + ] +} +``` + + + + + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: SamenessGroup +metadata: + name: products-api +spec: + members: + - partition: store-east + - partition: inventory-east + - peer: dc2-store-west + - peer: dc2-inventory-west +``` + + + \ No newline at end of file diff --git a/website/content/docs/connect/config-entries/service-defaults.mdx b/website/content/docs/reference/config-entry/service-defaults.mdx similarity index 94% rename from website/content/docs/connect/config-entries/service-defaults.mdx rename to website/content/docs/reference/config-entry/service-defaults.mdx index 271256bbae93..95b6b481dcfd 100644 --- a/website/content/docs/connect/config-entries/service-defaults.mdx +++ b/website/content/docs/reference/config-entry/service-defaults.mdx @@ -1,11 +1,11 @@ --- layout: docs -page_title: Service defaults configuration reference -description: -> +page_title: Service defaults configuration entry reference +description: >- Use the service-defaults configuration entry to set default configurations for services, such as upstreams, protocols, and namespaces. Learn how to configure service-defaults. --- -# Service defaults configuration reference +# Service defaults configuration entry reference This topic describes how to configure service defaults configuration entries. The service defaults configuration entry contains common configuration settings for service mesh services, such as upstreams and gateways. Refer to [Define service defaults](/consul/docs/services/usage/define-services#define-service-defaults) for usage information. @@ -547,7 +547,7 @@ Specifies the Consul namespace that the configuration entry applies to. ### `Partition` -Specifies the name of the Consul admin partition that the configuration entry applies to. Refer to [Admin Partitions](/consul/docs/enterprise/admin-partitions) for additional information. +Specifies the name of the Consul admin partition that the configuration entry applies to. Refer to [Admin Partitions](/consul/docs/multi-tenant/admin-partition) for additional information. #### Values @@ -556,7 +556,7 @@ Specifies the name of the Consul admin partition that the configuration entry ap ### `Meta` -Specifies a set of custom key-value pairs to add to the [Consul KV](/consul/docs/dynamic-app-config/kv) store. +Specifies a set of custom key-value pairs to add to the [Consul KV](/consul/docs/automate/kv) store. 
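For illustration, here is a sketch of how `Meta` might be set on a `service-defaults` entry; the service name and key-value pairs are arbitrary examples, not required values.

```hcl
Kind     = "service-defaults"
Name     = "billing"            # illustrative service name
Protocol = "http"

Meta = {
  "owner"       = "team-payments"
  "cost-center" = "cc-1234"
}
```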
#### Values @@ -569,12 +569,12 @@ Specifies a set of custom key-value pairs to add to the [Consul KV](/consul/docs Specifies the default protocol for the service. In service mesh use cases, the `protocol` configuration is required to enable the following features and components: -- [observability](/consul/docs/connect/observability) -- [service splitter configuration entry](/consul/docs/connect/config-entries/service-splitter) -- [service router configuration entry](/consul/docs/connect/config-entries/service-router) +- [observability](/consul/docs/observe/tech-specs) +- [service splitter configuration entry](/consul/docs/reference/config-entry/service-splitter) +- [service router configuration entry](/consul/docs/reference/config-entry/service-router) - [L7 intentions](/consul/docs/connect/intentions#l7-traffic-intentions) -You can set the global protocol for proxies in the [`proxy-defaults`](/consul/docs/connect/config-entries/proxy-defaults#default-protocol) configuration entry, but the protocol specified in the `service-defaults` configuration entry overrides the `proxy-defaults` configuration. +You can set the global protocol for proxies in the [`proxy-defaults`](/consul/docs/reference/config-entry/proxy-defaults#default-protocol) configuration entry, but the protocol specified in the `service-defaults` configuration entry overrides the `proxy-defaults` configuration. #### Values @@ -662,7 +662,7 @@ The following table describes the parameters you can specify in the `Routes` map | --- | --- | --- | --- | | `PathExact` | Specifies the exact path to match on the request path. When using this field, do not configure `PathPrefix` or `PathRegex` in the same `Routes` map. | String | None | | `PathPrefix` | Specifies the path prefix to match on the request path. When using this field, do not configure `PathExact` or `PathRegex` in the same `Routes` map. | String | None | -| `PathRegex` | Specifies a regular expression to match on the request path. When using this field, do not configure `PathExact` or `PathPrefix` in the same `Routes` map. The syntax is proxy-specific. When [using Envoy](/consul/docs/connect/proxies/envoy), refer to [the documentation for Envoy v1.11.2 or newer](https://github.com/google/re2/wiki/Syntax) or [the documentation for Envoy v1.11.1 or older](https://en.cppreference.com/w/cpp/regex/ecmascript), depending on the version of Envoy you use. | String | None | +| `PathRegex` | Specifies a regular expression to match on the request path. When using this field, do not configure `PathExact` or `PathPrefix` in the same `Routes` map. The syntax is proxy-specific. When [using Envoy](/consul/docs/reference/proxy/envoy), refer to [the documentation for Envoy v1.11.2 or newer](https://github.com/google/re2/wiki/Syntax) or [the documentation for Envoy v1.11.1 or older](https://en.cppreference.com/w/cpp/regex/ecmascript), depending on the version of Envoy you use. | String | None | | `RequestsPerSecond` | Specifies the average number of requests per second allowed to the service. Overrides the [`RequestsPerSecond`](#ratelimits-instancelevel-requestspersecond) parameter specified for the service. | Integer | None | | `RequestsMaxBurst` | Specifies the maximum number of concurrent requests temporarily allowed to the service. When the limit is reached, Consul blocks additional requests. You must specify a value equal to or greater than the `Routes.RequestsPerSecond` parameter. 
Overrides the [`RequestsMaxBurst`](#ratelimits-instancelevel-requestsmaxburst) parameter specified for the service. | Integer | None | @@ -717,7 +717,7 @@ Specifies the peer name of the upstream service that the configuration applies t ### `UpstreamConfig.Overrides[].Protocol` Specifies the protocol to use for requests to the upstream listener. -We recommend configuring the protocol in the main [`Protocol`](#protocol) field of the configuration entry so that you can leverage [L7 features](/consul/docs/connect/manage-traffic). Setting the protocol in an upstream configuration limits L7 management functionality. +We recommend configuring the protocol in the main [`Protocol`](#protocol) field of the configuration entry so that you can leverage [L7 features](/consul/docs/manage-traffic). Setting the protocol in an upstream configuration limits L7 management functionality. #### Values @@ -729,7 +729,7 @@ We recommend configuring the protocol in the main [`Protocol`](#protocol) field Specifies how long in milliseconds that the service should attempt to establish an upstream connection before timing out. -We recommend configuring the upstream timeout in the [`connection_timeout`](/consul/docs/connect/config-entries/service-resolver#connecttimeout) field of the `service-resolver` configuration entry for the upstream destination service. Doing so enables you to leverage [L7 features](/consul/docs/connect/manage-traffic). Configuring the timeout in the `service-defaults` upstream configuration limits L7 management functionality. +We recommend configuring the upstream timeout in the [`connection_timeout`](/consul/docs/reference/config-entry/service-resolver#connecttimeout) field of the `service-resolver` configuration entry for the upstream destination service. Doing so enables you to leverage [L7 features](/consul/docs/manage-traffic). Configuring the timeout in the `service-defaults` upstream configuration limits L7 management functionality. #### Values @@ -805,7 +805,7 @@ Specifies configurations that set default upstream settings. For information abo Specifies default protocol for upstream listeners. -We recommend configuring the protocol in the main [`Protocol`](#protocol) field of the configuration entry so that you can leverage [L7 features](/consul/docs/connect/manage-traffic). Setting the protocol in an upstream configuration limits L7 management functionality. +We recommend configuring the protocol in the main [`Protocol`](#protocol) field of the configuration entry so that you can leverage [L7 features](/consul/docs/manage-traffic). Setting the protocol in an upstream configuration limits L7 management functionality. - Default: None - Data type: String @@ -814,7 +814,7 @@ We recommend configuring the protocol in the main [`Protocol`](#protocol) field Specifies how long in milliseconds that all services should continue attempting to establish an upstream connection before timing out. -For non-Kubernetes environments, we recommend configuring the upstream timeout in the [`connection_timeout`](/consul/docs/connect/config-entries/service-resolver#connecttimeout) field of the `service-resolver` configuration entry for the upstream destination service. Doing so enables you to leverage [L7 features](/consul/docs/connect/manage-traffic). Configuring the timeout in the `service-defaults` upstream configuration limits L7 management functionality. 
+For non-Kubernetes environments, we recommend configuring the upstream timeout in the [`connection_timeout`](/consul/docs/reference/config-entry/service-resolver#connecttimeout) field of the `service-resolver` configuration entry for the upstream destination service. Doing so enables you to leverage [L7 features](/consul/docs/manage-traffic). Configuring the timeout in the `service-defaults` upstream configuration limits L7 management functionality. - Default: `5000` - Data type: Integer @@ -860,7 +860,7 @@ Map that specifies a set of rules that enable Consul to remove hosts from the up ### `TransparentProxy` -Controls configurations specific to proxies in transparent mode. Refer to [Transparent Proxy Mode](/consul/docs/k8s/connect/transparent-proxy) for additional information. +Controls configurations specific to proxies in transparent mode. Refer to [Transparent Proxy Mode](/consul/docs/connect/proxy/transparent-proxy) for additional information. You can configure the following parameters in the `TransparentProxy` block: @@ -884,7 +884,7 @@ You can specify the following string values for the `MutualTLSMode` field: ### `EnvoyExtensions` -List of extensions to modify Envoy proxy configuration. Refer to [Envoy Extensions](/consul/docs/connect/proxies/envoy-extensions) for additional information. +List of extensions to modify Envoy proxy configuration. Refer to [Envoy Extensions](/consul/docs/envoy-extension) for additional information. The following table describes how to configure values in the `EnvoyExtensions` map: @@ -898,9 +898,9 @@ The following table describes how to configure values in the `EnvoyExtensions` m ### `Destination{}` -Configures the destination for service traffic through terminating gateways. Refer to [Terminating Gateway](/consul/docs/connect/gateways/terminating-gateway) for additional information. +Configures the destination for service traffic through terminating gateways. Refer to [Terminating Gateway](/consul/docs/north-south/terminating-gateway) for additional information. -To use the `Destination` block, proxy services must be in transparent proxy mode. Refer to [Enable transparent proxy mode](/consul/docs/k8s/connect/transparent-proxy/enable-transparent-proxy) for additional information. +To use the `Destination` block, proxy services must be in transparent proxy mode. Refer to [Enable transparent proxy mode](/consul/docs/connect/proxy/transparent-proxy/k8s) for additional information. You can configure the following parameters in the `Destination` block: @@ -949,14 +949,14 @@ Specifies the TLS server name indication (SNI) when federating with an external ### `Expose` -Specifies default configurations for exposing HTTP paths through Envoy. Exposing paths through Envoy enables services to listen on `localhost` only. Applications that are not Consul service mesh-enabled can still contact an HTTP endpoint. Refer to [Expose Paths Configuration Reference](/consul/docs/proxies/proxy-config-reference#expose-paths-configuration-reference) for additional information and example configurations. +Specifies default configurations for exposing HTTP paths through Envoy. Exposing paths through Envoy enables services to listen on `localhost` only. Applications that are not Consul service mesh-enabled can still contact an HTTP endpoint. Refer to [Expose Paths Configuration Reference](/consul/docs/reference/proxy/connect-proxy#expose-paths-configuration-reference) for additional information and example configurations. 
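As a sketch of the shape this block takes in a `service-defaults` entry, the following example exposes a metrics path alongside agent checks; the service name, path, and port values are illustrative.

```hcl
Kind = "service-defaults"
Name = "web"

Expose {
  Checks = true
  Paths = [
    {
      Path          = "/metrics"  # HTTP path to expose
      LocalPathPort = 9090        # port the local service listens on
      ListenerPort  = 21500       # port Envoy listens on for external access
      Protocol      = "http"
    }
  ]
}
```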
- Default: None - Data type: Map ### `Expose.Checks` -Exposes all HTTP and gRPC checks registered with the agent if set to `true`. Envoy exposes listeners for the checks and only accepts connections originating from localhost or Consul's [`advertise_addr`](/consul/docs/agent/config/config-files#advertise_addr). The ports for the listeners are dynamically allocated from the agent's [`expose_min_port`](/consul/docs/agent/config/config-files#expose_min_port) and [`expose_max_port`](/consul/docs/agent/config/config-files#expose_max_port) configurations. +Exposes all HTTP and gRPC checks registered with the agent if set to `true`. Envoy exposes listeners for the checks and only accepts connections originating from localhost or Consul's [`advertise_addr`](/consul/docs/reference/agent/configuration-file/address#advertise_addr). The ports for the listeners are dynamically allocated from the agent's [`expose_min_port`](/consul/docs/reference/agent/configuration-file/general#expose_min_port) and [`expose_max_port`](/consul/docs/reference/agent/configuration-file/general#expose_max_port) configurations. We recommend enabling the `Checks` configuration when a Consul client cannot reach registered services over localhost, such as when Consul agents run in their own pods in Kubernetes. @@ -1031,12 +1031,12 @@ Map that contains the details about the `ServiceDefaults` configuration entry. T Specifies the default protocol for the service. In service service mesh use cases, the `protocol` configuration is required to enable the following features and components: -- [observability](/consul/docs/connect/observability) -- [`service-splitter` configuration entry](/consul/docs/connect/config-entries/service-splitter) -- [`service-router` configuration entry](/consul/docs/connect/config-entries/service-router) +- [observability](/consul/docs/observe/tech-specs) +- [`service-splitter` configuration entry](/consul/docs/reference/config-entry/service-splitter) +- [`service-router` configuration entry](/consul/docs/reference/config-entry/service-router) - [L7 intentions](/consul/docs/connect/intentions#l7-traffic-intentions) -You can set the global protocol for proxies in the [`ProxyDefaults` configuration entry](/consul/docs/connect/config-entries/proxy-defaults#default-protocol), but the protocol specified in the `ServiceDefaults` configuration entry overrides the `ProxyDefaults` configuration. +You can set the global protocol for proxies in the [`ProxyDefaults` configuration entry](/consul/docs/reference/config-entry/proxy-defaults#default-protocol), but the protocol specified in the `ServiceDefaults` configuration entry overrides the `ProxyDefaults` configuration. #### Values @@ -1128,7 +1128,7 @@ The following table describes the parameters you can specify in the `routes` map | --- | --- | --- | --- | | `pathExact` | Specifies the exact path to match on the request path. When using this field, do not configure `pathPrefix` or `pathRegex` in the same `routes` map. | String | None | | `pathPrefix` | Specifies the path prefix to match on the request path. When using this field, do not configure `pathExact` or `pathRegex` in the same `routes` map. | String | None | -| `pathRegex` | Specifies a regular expression to match on the request path. When using this field, do not configure `pathExact` or `pathPrefix` in the same `routes` map. The syntax is proxy-specific. 
When [using Envoy](/consul/docs/connect/proxies/envoy), refer to [the documentation for Envoy v1.11.2 or newer](https://github.com/google/re2/wiki/Syntax) or [the documentation for Envoy v1.11.1 or older](https://en.cppreference.com/w/cpp/regex/ecmascript), depending on the version of Envoy you use. | String | None | +| `pathRegex` | Specifies a regular expression to match on the request path. When using this field, do not configure `pathExact` or `pathPrefix` in the same `routes` map. The syntax is proxy-specific. When [using Envoy](/consul/docs/reference/proxy/envoy), refer to [the documentation for Envoy v1.11.2 or newer](https://github.com/google/re2/wiki/Syntax) or [the documentation for Envoy v1.11.1 or older](https://en.cppreference.com/w/cpp/regex/ecmascript), depending on the version of Envoy you use. | String | None | | `requestsPerSecond` | Specifies the average number of requests per second allowed to the service. Overrides the [`requestsPerSecond`](#spec-ratelimits-instancelevel-requestspersecond) parameter specified for the service. | Integer | None | | `requestsMaxBurst` | Specifies the maximum number of concurrent requests temporarily allowed to the service. When the limit is reached, Consul blocks additional requests. You must specify a value equal to or greater than the `routes.requestsPerSecond` parameter. Overrides the [`requestsMaxBurst`](#spec-ratelimits-instancelevel-requestsmaxburst) parameter specified for the service. | Integer | None | @@ -1181,7 +1181,7 @@ Specifies the peer name of the upstream service that the configuration applies t ### `spec.upstreamConfig.overrides[].protocol` -Specifies the protocol to use for requests to the upstream listener. We recommend configuring the protocol in the main [`protocol`](#protocol) field of the configuration entry so that you can leverage [L7 features](/consul/docs/connect/manage-traffic). Setting the protocol in an upstream configuration limits L7 management functionality. +Specifies the protocol to use for requests to the upstream listener. We recommend configuring the protocol in the main [`protocol`](#protocol) field of the configuration entry so that you can leverage [L7 features](/consul/docs/manage-traffic). Setting the protocol in an upstream configuration limits L7 management functionality. #### Values @@ -1193,7 +1193,7 @@ Specifies the protocol to use for requests to the upstream listener. We recommen Specifies how long in milliseconds that the service should attempt to establish an upstream connection before timing out. -We recommend configuring the upstream timeout in the [`connectTimeout`](/consul/docs/connect/config-entries/service-resolver#connecttimeout) field of the `ServiceResolver` CRD for the upstream destination service. Doing so enables you to leverage [L7 features](/consul/docs/connect/manage-traffic). Configuring the timeout in the `ServiceDefaults` upstream configuration limits L7 management functionality. +We recommend configuring the upstream timeout in the [`connectTimeout`](/consul/docs/reference/config-entry/service-resolver#connecttimeout) field of the `ServiceResolver` CRD for the upstream destination service. Doing so enables you to leverage [L7 features](/consul/docs/manage-traffic). Configuring the timeout in the `ServiceDefaults` upstream configuration limits L7 management functionality. #### Values @@ -1262,7 +1262,7 @@ Map of configurations that set default upstream configurations for the service. 
### `spec.upstreamConfig.defaults.protocol` -Specifies default protocol for upstream listeners. We recommend configuring the protocol in the main [`Protocol`](#protocol) field of the configuration entry so that you can leverage [L7 features](/consul/docs/connect/manage-traffic). Setting the protocol in an upstream configuration limits L7 management functionality. +Specifies default protocol for upstream listeners. We recommend configuring the protocol in the main [`Protocol`](#protocol) field of the configuration entry so that you can leverage [L7 features](/consul/docs/manage-traffic). Setting the protocol in an upstream configuration limits L7 management functionality. #### Values @@ -1273,7 +1273,7 @@ Specifies default protocol for upstream listeners. We recommend configuring the Specifies how long in milliseconds that all services should continue attempting to establish an upstream connection before timing out. -We recommend configuring the upstream timeout in the [`connectTimeout`](/consul/docs/connect/config-entries/service-resolver#connecttimeout) field of the `ServiceResolver` CRD for upstream destination services. Doing so enables you to leverage [L7 features](/consul/docs/connect/manage-traffic). Configuring the timeout in the `ServiceDefaults` upstream configuration limits L7 management functionality. +We recommend configuring the upstream timeout in the [`connectTimeout`](/consul/docs/reference/config-entry/service-resolver#connecttimeout) field of the `ServiceResolver` CRD for upstream destination services. Doing so enables you to leverage [L7 features](/consul/docs/manage-traffic). Configuring the timeout in the `ServiceDefaults` upstream configuration limits L7 management functionality. #### Values @@ -1360,7 +1360,7 @@ You can specify the following string values for the `MutualTLSMode` field: ### `spec.envoyExtensions` -List of extensions to modify Envoy proxy configuration. Refer to [Envoy Extensions](/consul/docs/connect/proxies/envoy-extensions) for additional information. +List of extensions to modify Envoy proxy configuration. Refer to [Envoy Extensions](/consul/docs/envoy-extension) for additional information. #### Values @@ -1376,9 +1376,9 @@ The following table describes how to configure values in the `envoyExtensions` m ### `spec.destination` -Map of configurations that specify one or more destinations for service traffic routed through terminating gateways. Refer to [Terminating Gateway](/consul/docs/connect/gateways/terminating-gateway) for additional information. +Map of configurations that specify one or more destinations for service traffic routed through terminating gateways. Refer to [Terminating Gateway](/consul/docs/north-south/terminating-gateway) for additional information. -To use the `destination` block, proxy services must be in transparent proxy mode. Refer to [Enable transparent proxy mode](/consul/docs/k8s/connect/transparent-proxy/enable-transparent-proxy) for additional information. +To use the `destination` block, proxy services must be in transparent proxy mode. Refer to [Enable transparent proxy mode](/consul/docs/connect/proxy/transparent-proxy/k8s) for additional information. #### Values @@ -1447,9 +1447,7 @@ Specifies default configurations for exposing HTTP paths through Envoy. Exposing ### `spec.expose.checks` -Exposes all HTTP and gRPC checks registered with the agent if set to `true`. 
Envoy exposes listeners for the checks and only accepts connections originating from localhost or Consul's [`advertise_addr`](/consul/docs/agent/config/config-files#advertise_addr). The ports for the listeners are dynamically allocated from the agent's [`expose_min_port`](/consul/docs/agent/config/config-files#expose_min_port) and [`expose_max_port`](/consul/docs/agent/config/config-files#expose_max_port) configurations. - -We recommend enabling the `Checks` configuration when a Consul client cannot reach registered services over localhost, such as when Consul agents run in their own pods in Kubernetes. +@include 'text/reference/config-entry/spec-expose-checks.mdx' #### Values @@ -1511,7 +1509,7 @@ spec:
    -You can also set the global default protocol for all proxies in the [`proxy-defaults` configuration entry](/consul/docs/connect/config-entries/proxy-defaults#default-protocol), but the protocol specified for individual service instances in the `service-defaults` configuration entry takes precedence over the globally-configured value set in the `proxy-defaults`. +You can also set the global default protocol for all proxies in the [`proxy-defaults` configuration entry](/consul/docs/reference/config-entry/proxy-defaults#default-protocol), but the protocol specified for individual service instances in the `service-defaults` configuration entry takes precedence over the globally-configured value set in the `proxy-defaults`. ### Upstream configuration @@ -1695,7 +1693,7 @@ spec: The following examples creates a default destination assigned to a terminating gateway. A destination represents a location outside the Consul cluster. Services can dial destinations dialed directly when transparent proxy mode is enabled. -Proxy services must be in transparent proxy mode to configure destinations. Refer to [Enable transparent proxy mode](/consul/docs/k8s/connect/transparent-proxy/enable-transparent-proxy) for additional information. +Proxy services must be in transparent proxy mode to configure destinations. Refer to [Enable transparent proxy mode](/consul/docs/connect/proxy/transparent-proxy/k8s) for additional information. diff --git a/website/content/docs/connect/config-entries/service-intentions.mdx b/website/content/docs/reference/config-entry/service-intentions.mdx similarity index 95% rename from website/content/docs/connect/config-entries/service-intentions.mdx rename to website/content/docs/reference/config-entry/service-intentions.mdx index 47d980f9a820..bc4ef64bace7 100644 --- a/website/content/docs/connect/config-entries/service-intentions.mdx +++ b/website/content/docs/reference/config-entry/service-intentions.mdx @@ -1,13 +1,13 @@ --- layout: docs -page_title: Service intentions configuration reference +page_title: Service intentions configuration entry reference description: >- - Use the service intentions configuration entry to allow or deny traffic to services in the mesh from specific sources. Learn how to configure `service-intention` config entries + Use the service intentions configuration entry to allow or deny traffic to services in the mesh from specific sources. Learn how to configure `service-intention` config entries. --- -# Service intentions configuration reference +# Service intentions configuration entry reference -This topic provides reference information for the service intentions configuration entry. Intentions are configurations for controlling access between services in the service mesh. A single service intentions configuration entry specifies one destination service and one or more L4 traffic sources, L7 traffic sources, or combination of traffic sources. Refer to [Service mesh intentions overview](/consul/docs/connect/intentions) for additional information. +This topic provides reference information for the service intentions configuration entry. Intentions are configurations for controlling access between services in the service mesh. A single service intentions configuration entry specifies one destination service and one or more L4 traffic sources, L7 traffic sources, or combination of traffic sources. Refer to [Service mesh intentions overview](/consul/docs/secure-mesh/intention) for additional information. 
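For orientation, the following is a minimal sketch of an L4 intention that allows one source service to reach a destination; the service names are illustrative.

```hcl
Kind = "service-intentions"
Name = "web"            # destination service

Sources = [
  {
    Name   = "api"      # source service allowed to connect
    Action = "allow"
  }
]
```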
## Configuration model @@ -399,7 +399,7 @@ You can also specify a wildcard character (`*`) to match all services without in ### `Namespace` -Specifies the [namespace](/consul/docs/enterprise/namespaces) that the configuration entry applies to. Services in the namespace are the traffic destinations that the intentions allow or deny traffic to. +Specifies the [namespace](/consul/docs/multi-tenant/namespace) that the configuration entry applies to. Services in the namespace are the traffic destinations that the intentions allow or deny traffic to. #### Values @@ -410,7 +410,7 @@ You can also specify a wildcard character (`*`) to match all namespaces. Intenti ### `Partition` -Specifies the [admin partition](/consul/docs/enterprise/admin-partitions) to apply the configuration entry. Services in the specified partition are the traffic destinations that the intentions allow or deny traffic to. +Specifies the [admin partition](/consul/docs/multi-tenant/admin-partition) to apply the configuration entry. Services in the specified partition are the traffic destinations that the intentions allow or deny traffic to. #### Values @@ -430,7 +430,7 @@ Specifies key-value pairs to add to the KV store when the configuration entry is ### `JWT` -Specifies a JSON Web Token provider configured in a [JWT provider configuration entry](/consul/docs/connect/config-entries/jwt-provider), as well as additional configurations for verifying a service's JWT before authorizing communication between services +Specifies a JSON Web Token provider configured in a [JWT provider configuration entry](/consul/docs/reference/config-entry/jwt-provider), as well as additional configurations for verifying a service's JWT before authorizing communication between services #### Values @@ -439,7 +439,7 @@ Specifies a JSON Web Token provider configured in a [JWT provider configuration ### `JWT{}.Providers` -Specifies the names of one or more previously configured [JWT provider configuration entries](/consul/docs/connect/config-entries/jwt-provider), which include the information necessary to validate a JSON web token. +Specifies the names of one or more previously configured [JWT provider configuration entries](/consul/docs/reference/config-entry/jwt-provider), which include the information necessary to validate a JSON web token. #### Values @@ -448,7 +448,7 @@ Specifies the names of one or more previously configured [JWT provider configura ### `JWT{}.Providers[].Name` -Specifies the name of a JWT provider defined in the `Name` field of the [`jwt-provider` configuration entry](/consul/docs/connect/config-entries/jwt-provider). You must write the JWT Provider to Consul before referencing it in a service intention. +Specifies the name of a JWT provider defined in the `Name` field of the [`jwt-provider` configuration entry](/consul/docs/reference/config-entry/jwt-provider). You must write the JWT Provider to Consul before referencing it in a service intention. #### Values @@ -520,7 +520,7 @@ Specifies the name of the source that the intention allows or denies traffic fro ### `Sources[].Peer` -Specifies the name of a peered Consul cluster that the intention allows or denies traffic from. Refer to [Cluster peering overview](/consul/docs/connect/cluster-peering) for additional information about peers. +Specifies the name of a peered Consul cluster that the intention allows or denies traffic from. Refer to [Cluster peering overview](/consul/docs/east-west/cluster-peering) for additional information about peers. 
The `Peer` and `Partition` fields are mutually exclusive. @@ -540,7 +540,7 @@ Specifies the traffic source namespace that the intention allows or denies traff ### `Sources[].Partition` -Specifies the name of an admin partition that the intention allows or denies traffic from. Refer to [Admin Partitions](/consul/docs/enterprise/admin-partitions) for additional information about partitions. +Specifies the name of an admin partition that the intention allows or denies traffic from. Refer to [Admin Partitions](/consul/docs/multi-tenant/admin-partition) for additional information about partitions. The `Peer` and `Partition` fields are mutually exclusive. @@ -551,7 +551,7 @@ The `Peer` and `Partition` fields are mutually exclusive. ### `Sources[].SamenessGroup` -Specifies the name of a sameness group that the intention allows or denies traffic from. Refer to [create sameness groups](/consul/docs/connect/cluster-peering/usage/create-sameness-groups) for additional information. +Specifies the name of a sameness group that the intention allows or denies traffic from. Refer to [create sameness groups](/consul/docs/multi-tenant/sameness-group/vm) for additional information. #### Values @@ -582,7 +582,7 @@ Specifies a list of permissions for L7 traffic sources. The list contains one or Consul applies permissions in the order specified in the configuration. Beginning at the top of the list, Consul applies the first matching request and stops evaluating against the remaining configurations. -For requests that do not match any of the defined permissions, Consul applies the intention behavior defined in the [`acl_default_policy`](/consul/docs/agent/config/config-files#acl_default_policy) configuration. +For requests that do not match any of the defined permissions, Consul applies the intention behavior defined in the [`acl_default_policy`](/consul/docs/reference/agent/configuration-file/acl#acl_default_policy) configuration. Do not configure this field for L4 intentions. Use the [`Sources.Action`](#sources-action) parameter instead. @@ -685,7 +685,7 @@ Read-only unique user ID (UUID) for the intention in the system. Consul generate Read-only set of arbitrary key-value pairs to attach to the intention. Consul generates the metadata and exposes it in the configuration entry so that legacy intention API endpoints continue to function. Refer to [Read Specific Intention by ID](/consul/api-docs/connect/intentions#read-specific-intention-by-id) for additional information. -### `Sources[].CreateTime` +### `Sources[].LegacyCreateTime` Read-only timestamp for the intention creation. Consul exposes the timestamp in the configuration entry to allow legacy intention API endpoints to continue functioning. Refer to [Read Specific Intention by ID](/consul/api-docs/connect/intentions#read-specific-intention-by-id) for additional information. @@ -740,7 +740,7 @@ Specifies an arbitrary name for the configuration entry. Note that in other conf ### `metadata.namespace` -Specifies the [namespace](/consul/docs/enterprise/namespaces) that the configuration entry applies to. Refer to [Consul Enterprise](/consul/docs/k8s/crds#consul-enterprise) for information about how Consul namespaces map to Kubernetes Namespaces. Consul Community Edition (Consul CE) ignores the `metadata.namespace` configuration. +Specifies the [namespace](/consul/docs/multi-tenant/namespace) that the configuration entry applies to. 
Refer to [Consul Enterprise](/consul/docs/k8s/crds#consul-enterprise) for information about how Consul namespaces map to Kubernetes Namespaces. Consul Community Edition (Consul CE) ignores the `metadata.namespace` configuration. #### Values @@ -780,7 +780,7 @@ You can also specify a wildcard character (`*`) to match all services that are m ### `spec.jwt` -Specifies a JSON Web Token provider configured in a [JWT provider configuration entry](/consul/docs/connect/config-entries/jwt-provider), as well as additional configurations for verifying a service's JWT before authorizing communication between services +Specifies a JSON Web Token provider configured in a [JWT provider configuration entry](/consul/docs/reference/config-entry/jwt-provider), as well as additional configurations for verifying a service's JWT before authorizing communication between services #### Values @@ -789,7 +789,7 @@ Specifies a JSON Web Token provider configured in a [JWT provider configuration ### `spec.jwt.providers` -Specifies the names of one or more previously configured [JWT provider configuration entries](/consul/docs/connect/config-entries/jwt-provider), which include the information necessary to validate a JSON web token. +Specifies the names of one or more previously configured [JWT provider configuration entries](/consul/docs/reference/config-entry/jwt-provider), which include the information necessary to validate a JSON web token. #### Values @@ -798,7 +798,7 @@ Specifies the names of one or more previously configured [JWT provider configura ### `spec.jwt.providers[].name` -Specifies the name of a JWT provider defined in the `metadata.name` field of the [JWT provider configuration entry](/consul/docs/connect/config-entries/jwt-provider). You must write the JWT Provider to Consul before referencing it in a service intention. +Specifies the name of a JWT provider defined in the `metadata.name` field of the [JWT provider configuration entry](/consul/docs/reference/config-entry/jwt-provider). You must write the JWT Provider to Consul before referencing it in a service intention. #### Values @@ -865,7 +865,7 @@ Specifies the name of the source that the intention allows or denies traffic fro ### `spec.sources[].peer` -Specifies the name of a peered Consul cluster that the intention allows or denies traffic from. Refer to [Cluster peering overview](/consul/docs/connect/cluster-peering) for additional information about peers. The `peer` and `partition` fields are mutually exclusive. +Specifies the name of a peered Consul cluster that the intention allows or denies traffic from. Refer to [Cluster peering overview](/consul/docs/east-west/cluster-peering) for additional information about peers. The `peer` and `partition` fields are mutually exclusive. #### Values - Default: None @@ -882,7 +882,7 @@ Specifies the traffic source namespace that the intention allows or denies traff ### `spec.sources[].partition` -Specifies the name of an admin partition that the intention allows or denies traffic from. Refer to [Admin Partitions](/consul/docs/enterprise/admin-partitions) for additional information about partitions. The `peer` and `partition` fields are mutually exclusive. +Specifies the name of an admin partition that the intention allows or denies traffic from. Refer to [Admin Partitions](/consul/docs/multi-tenant/admin-partition) for additional information about partitions. The `peer` and `partition` fields are mutually exclusive. 
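To show how these source fields fit together, here is a hedged sketch of a `ServiceIntentions` resource; the service names and partition value are illustrative, and `peer` and `partition` cannot be combined on the same source.

```yaml
apiVersion: consul.hashicorp.com/v1alpha1
kind: ServiceIntentions
metadata:
  name: web
spec:
  destination:
    name: web
  sources:
    - name: api
      partition: store-east   # use either `partition` or `peer`, not both
      action: allow
```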
#### Values @@ -891,7 +891,7 @@ Specifies the name of an admin partition that the intention allows or denies tra ### `spec.sources[].samenessGroup` -Specifies the name of a sameness group that the intention allows or denies traffic from. Refer to [create sameness groups](/consul/docs/k8s/connect/cluster-peering/usage/create-sameness-groups) for additional information. +Specifies the name of a sameness group that the intention allows or denies traffic from. Refer to [create sameness groups](/consul/docs/multi-tenant/sameness-group/k8s) for additional information. #### Values @@ -914,7 +914,7 @@ Specifies a list of permissions for L7 traffic sources. The list contains one or Consul applies permissions in the order specified in the configuration. Starting at the beginning of the list, Consul applies the first matching request and stops evaluating against the remaining configurations. -For requests that do not match any of the defined permissions, Consul applies the intention behavior defined in the [`acl_default_policy`](/consul/docs/agent/config/config-files#acl_default_policy) configuration. +For requests that do not match any of the defined permissions, Consul applies the intention behavior defined in the [`acl_default_policy`](/consul/docs/reference/agent/configuration-file/acl#acl_default_policy) configuration. Do not configure this field for L4 intentions. Use the [`spec.sources.action`](#sources-action) parameter instead. diff --git a/website/content/docs/connect/config-entries/service-resolver.mdx b/website/content/docs/reference/config-entry/service-resolver.mdx similarity index 98% rename from website/content/docs/connect/config-entries/service-resolver.mdx rename to website/content/docs/reference/config-entry/service-resolver.mdx index bce568514d6c..d64e02e3ba0c 100644 --- a/website/content/docs/connect/config-entries/service-resolver.mdx +++ b/website/content/docs/reference/config-entry/service-resolver.mdx @@ -1,15 +1,15 @@ --- layout: docs -page_title: Service Resolver configuration reference +page_title: Service resolver configuration entry reference description: >- Service resolver configuration entries are L7 traffic management tools for defining sets of service instances that resolve upstream requests and Consul’s behavior when resolving them. Learn how to write `service-resolver` config entries in HCL or YAML with a specification reference, configuration model, a complete example, and example code by use case. --- -# Service resolver configuration reference +# Service resolver configuration entry reference This page provides reference information for service resolver configuration entries. Configure and apply service resolvers to create named subsets of service instances and define their behavior when satisfying upstream requests. -Refer to [L7 traffic management overview](/consul/docs/connect/manage-traffic) for additional information. +Refer to [L7 traffic management overview](/consul/docs/manage-traffic) for additional information. ## Configuration model @@ -399,7 +399,7 @@ Specifies a name for the configuration entry. The name is metadata that you can ### `Namespace` -Specifies the namespace that the service resolver applies to. Refer to [namespaces](/consul/docs/enterprise/namespaces) for more information. +Specifies the namespace that the service resolver applies to. Refer to [namespaces](/consul/docs/multi-tenant/namespace) for more information. #### Values @@ -408,7 +408,7 @@ Specifies the namespace that the service resolver applies to. 
Refer to [namespac ### `Partition` -Specifies the admin partition that the service resolver applies to. Refer to [admin partitions](/consul/docs/enterprise/admin-partitions) for more information. +Specifies the admin partition that the service resolver applies to. Refer to [admin partitions](/consul/docs/multi-tenant/admin-partition) for more information. #### Values @@ -554,7 +554,7 @@ Specifies the cluster with an active cluster peering connection at the redirect Sets a mode for the service that allows instances to prioritize upstream targets that are in the same network region and zone. You can specify the following string values for the `mode` field: -- `failover`: If the upstream targets that a service is connected to become unreachable, the service prioritizes healthy upstream instances with matching `Locality` configuration. Refer to [Route traffic to local upstreams](/consul/docs/connect/manage-traffic/route-to-local-upstreams) for additional information. +- `failover`: If the upstream targets that a service is connected to become unreachable, the service prioritizes healthy upstream instances with matching `Locality` configuration. Refer to [Route traffic to local upstreams](/consul/docs/manage-traffic/route-local) for additional information. #### Values @@ -881,7 +881,7 @@ Specifies a name for the configuration entry. The name is metadata that you can ### `metadata.namespace` -Specifies the namespace that the service resolver applies to. Refer to [namespaces](/consul/docs/enterprise/namespaces) for more information. +Specifies the namespace that the service resolver applies to. Refer to [namespaces](/consul/docs/multi-tenant/namespace) for more information. #### Values @@ -1027,7 +1027,7 @@ Specifies the cluster with an active cluster peering connection at the redirect Sets a mode for the service that allows instances to prioritize upstream targets that are in the same network region and zone. You can specify the following string values for the `mode` field: -- `failover`: If the upstream targets that a service is connected to become unreachable, the service prioritizes healthy upstream instances with matching `locality` configuration. Refer to [Route traffic to local upstreams](/consul/docs/connect/manage-traffic/route-to-local-upstreams) for additional information. +- `failover`: If the upstream targets that a service is connected to become unreachable, the service prioritizes healthy upstream instances with matching `locality` configuration. Refer to [Route traffic to local upstreams](/consul/docs/manage-traffic/route-local) for additional information. #### Values @@ -1728,4 +1728,4 @@ spec: ``` - + \ No newline at end of file diff --git a/website/content/docs/connect/config-entries/service-router.mdx b/website/content/docs/reference/config-entry/service-router.mdx similarity index 98% rename from website/content/docs/connect/config-entries/service-router.mdx rename to website/content/docs/reference/config-entry/service-router.mdx index d384b83be39c..58b390469efc 100644 --- a/website/content/docs/connect/config-entries/service-router.mdx +++ b/website/content/docs/reference/config-entry/service-router.mdx @@ -1,15 +1,15 @@ --- layout: docs -page_title: Service Router configuration reference +page_title: Service router configuration entry reference description: >- Service router configuration entries are L7 traffic management tools for redirecting requests for a service to a particular instance or set of instances. 
Learn how to write `service-router` config entries in HCL or YAML with a specification reference, configuration model, a complete example, and example code by use case. --- -# Service router configuration reference +# Service router configuration entry reference This page provides reference information for service router configuration entries. Service routers use L7 network information to redirect a traffic request for a service to one or more specific service instances. -Refer to [L7 traffic management overview](/consul/docs/connect/manage-traffic) for additional information. +Refer to [L7 traffic management overview](/consul/docs/manage-traffic) for additional information. ## Configuration model @@ -373,7 +373,7 @@ Specifies a name for the configuration entry. The name is metadata that you can ### `Namespace` -Specifies the namespace to apply the configuration entry to. Refer to [Namespaces](/consul/docs/enterprise/namespaces) for additional information about Consul namespaces. +Specifies the namespace to apply the configuration entry to. Refer to [Namespaces](/consul/docs/multi-tenant/namespace) for additional information about Consul namespaces. #### Values @@ -382,7 +382,7 @@ Specifies the namespace to apply the configuration entry to. Refer to [Namespace ### `Partition` -Specifies the admin partition to apply the configuration entry to. Refer to [Admin partitions](/consul/docs/enterprise/admin-partitions) for additional information. +Specifies the admin partition to apply the configuration entry to. Refer to [Admin partitions](/consul/docs/multi-tenant/admin-partition) for additional information. #### Values @@ -468,7 +468,7 @@ Specifies the path prefix to match on the HTTP request path must be case insensi ### `Routes[].Match{}.HTTP{}.PathRegex` -Specifies a regular expression to match on the HTTP request path. When using this field, do not configure `PathExact` or `PathPrefix` in the same HTTP map. The syntax for the regular expression field is proxy-specific. When [using Envoy](/consul/docs/connect/proxies/envoy), refer to [the documentation for Envoy v1.11.2 or newer](https://github.com/google/re2/wiki/Syntax) or [the documentation for Envoy v1.11.1 or older](https://en.cppreference.com/w/cpp/regex/ecmascript), depending on the version of Envoy you use. +Specifies a regular expression to match on the HTTP request path. When using this field, do not configure `PathExact` or `PathPrefix` in the same HTTP map. The syntax for the regular expression field is proxy-specific. When [using Envoy](/consul/docs/reference/proxy/envoy), refer to [the documentation for Envoy v1.11.2 or newer](https://github.com/google/re2/wiki/Syntax) or [the documentation for Envoy v1.11.1 or older](https://en.cppreference.com/w/cpp/regex/ecmascript), depending on the version of Envoy you use. #### Values @@ -561,7 +561,7 @@ Specifies that a request matches when the header with the given name has this su ### `Routes[].Match{}.HTTP{}.Header[].Regex` -Specifies that a request matches when the header with the given name matches this regular expression. When using this field, do not configure `Present`, `Exact`, `Prefix`, or `Suffix` in the same HTTP map . The syntax for the regular expression field is proxy-specific. When [using Envoy](/consul/docs/connect/proxies/envoy), refer to [the documentation for Envoy v1.11.2 or newer](https://github.com/google/re2/wiki/Syntax) or [the documentation for Envoy v1.11.1 or older](https://en.cppreference.com/w/cpp/regex/ecmascript), depending on the version of Envoy you use. 
+Specifies that a request matches when the header with the given name matches this regular expression. When using this field, do not configure `Present`, `Exact`, `Prefix`, or `Suffix` in the same HTTP map . The syntax for the regular expression field is proxy-specific. When [using Envoy](/consul/docs/reference/proxy/envoy), refer to [the documentation for Envoy v1.11.2 or newer](https://github.com/google/re2/wiki/Syntax) or [the documentation for Envoy v1.11.1 or older](https://en.cppreference.com/w/cpp/regex/ecmascript), depending on the version of Envoy you use. #### Values @@ -622,7 +622,7 @@ Specifies that a request matches when the query parameter with the given name is ### `Routes[].Match{}.HTTP{}.QueryParam[].Regex` -Specifies that a request matches when the query parameter with the given name matches this regular expression. When using this field, do not configure `Present` or `Exact` in the same map. The syntax for the regular expression field is proxy-specific. When [using Envoy](/consul/docs/connect/proxies/envoy), refer to [the documentation for Envoy v1.11.2 or newer](https://github.com/google/re2/wiki/Syntax) or [the documentation for Envoy v1.11.1 or older](https://en.cppreference.com/w/cpp/regex/ecmascript), depending on the version of Envoy you use. +Specifies that a request matches when the query parameter with the given name matches this regular expression. When using this field, do not configure `Present` or `Exact` in the same map. The syntax for the regular expression field is proxy-specific. When [using Envoy](/consul/docs/reference/proxy/envoy), refer to [the documentation for Envoy v1.11.2 or newer](https://github.com/google/re2/wiki/Syntax) or [the documentation for Envoy v1.11.1 or older](https://en.cppreference.com/w/cpp/regex/ecmascript), depending on the version of Envoy you use. #### Values @@ -663,7 +663,7 @@ Specifies the name of the service to resolve. If this parameter is not specified ### `Routes[].Destination{}.ServiceSubset` -Specifies a named subset of the given service to resolve instead of the one defined as that service's `DefaultSubset` in the [service resolver configuration entry](/consul/docs/connect/config-entries/service-resolver). If this parameter is not specified, the default subset is used. +Specifies a named subset of the given service to resolve instead of the one defined as that service's `DefaultSubset` in the [service resolver configuration entry](/consul/docs/reference/config-entry/service-resolver). If this parameter is not specified, the default subset is used. #### Values @@ -945,7 +945,7 @@ Specifies the path prefix to match on the HTTP request path. When using this fie ### `spec.routes[].match.http.pathRegex` -Specifies a regular expression to match on the HTTP request path. When using this field, do not configure `pathExact` or `pathPrefix` in the same HTTP map. The syntax for the regular expression field is proxy-specific. When [using Envoy](/consul/docs/connect/proxies/envoy), refer to [the documentation for Envoy v1.11.2 or newer](https://github.com/google/re2/wiki/Syntax) or [the documentation for Envoy v1.11.1 or older](https://en.cppreference.com/w/cpp/regex/ecmascript), depending on the version of Envoy you use. +Specifies a regular expression to match on the HTTP request path. When using this field, do not configure `pathExact` or `pathPrefix` in the same HTTP map. The syntax for the regular expression field is proxy-specific. 
When [using Envoy](/consul/docs/reference/proxy/envoy), refer to [the documentation for Envoy v1.11.2 or newer](https://github.com/google/re2/wiki/Syntax) or [the documentation for Envoy v1.11.1 or older](https://en.cppreference.com/w/cpp/regex/ecmascript), depending on the version of Envoy you use. #### Values @@ -1038,7 +1038,7 @@ Specifies that a request matches when the header with the given name has this su ### `spec.routes[].match.http.header.regex` -Specifies that a request matches when the header with the given name matches this regular expression. When using this field, do not configure `present`, `exact`, `prefix`, or `suffix` in the same HTTP map . The syntax for the regular expression field is proxy-specific. When [using Envoy](/consul/docs/connect/proxies/envoy), refer to [the documentation for Envoy v1.11.2 or newer](https://github.com/google/re2/wiki/Syntax) or [the documentation for Envoy v1.11.1 or older](https://en.cppreference.com/w/cpp/regex/ecmascript), depending on the version of Envoy you use. +Specifies that a request matches when the header with the given name matches this regular expression. When using this field, do not configure `present`, `exact`, `prefix`, or `suffix` in the same HTTP map . The syntax for the regular expression field is proxy-specific. When [using Envoy](/consul/docs/reference/proxy/envoy), refer to [the documentation for Envoy v1.11.2 or newer](https://github.com/google/re2/wiki/Syntax) or [the documentation for Envoy v1.11.1 or older](https://en.cppreference.com/w/cpp/regex/ecmascript), depending on the version of Envoy you use. #### Values @@ -1098,7 +1098,7 @@ Specifies that a request matches when the query parameter with the given name is ### `spec.routes[].match.http.queryParam[].regex` -Specifies that a request matches when the query parameter with the given name matches this regular expression. When using this field, do not configure `present` or `exact` in the same map. The syntax for the regular expression field is proxy-specific. When [using Envoy](/consul/docs/connect/proxies/envoy), refer to [the documentation for Envoy v1.11.2 or newer](https://github.com/google/re2/wiki/Syntax) or [the documentation for Envoy v1.11.1 or older](https://en.cppreference.com/w/cpp/regex/ecmascript), depending on the version of Envoy you use. +Specifies that a request matches when the query parameter with the given name matches this regular expression. When using this field, do not configure `present` or `exact` in the same map. The syntax for the regular expression field is proxy-specific. When [using Envoy](/consul/docs/reference/proxy/envoy), refer to [the documentation for Envoy v1.11.2 or newer](https://github.com/google/re2/wiki/Syntax) or [the documentation for Envoy v1.11.1 or older](https://en.cppreference.com/w/cpp/regex/ecmascript), depending on the version of Envoy you use. #### Values @@ -1139,7 +1139,7 @@ Specifies the name of the service to resolve. If this parameter is not specified ### `spec.routes[].destination.serviceSubset` -Specifies a named subset of the given service to resolve instead of the one defined as that service's `defaultSubset` in the [service resolver configuration entry](/consul/docs/connect/config-entries/service-resolver). If this parameter is not specified, the default subset is used. +Specifies a named subset of the given service to resolve instead of the one defined as that service's `defaultSubset` in the [service resolver configuration entry](/consul/docs/reference/config-entry/service-resolver). 
If this parameter is not specified, the default subset is used. #### Values @@ -1767,4 +1767,4 @@ spec: ``` - + \ No newline at end of file diff --git a/website/content/docs/connect/config-entries/service-splitter.mdx b/website/content/docs/reference/config-entry/service-splitter.mdx similarity index 96% rename from website/content/docs/connect/config-entries/service-splitter.mdx rename to website/content/docs/reference/config-entry/service-splitter.mdx index f7247be924c2..a8a081bed77f 100644 --- a/website/content/docs/connect/config-entries/service-splitter.mdx +++ b/website/content/docs/reference/config-entry/service-splitter.mdx @@ -1,13 +1,13 @@ --- layout: docs -page_title: Service Splitter configuration reference -description: >- +page_title: Service splitter configuration entry reference +description: >- Service splitter configuration entries are L7 traffic management tools for redirecting requests for a service to multiple instances. Learn how to write `service-splitter` config entries in HCL or YAML with a specification reference, configuration model, a complete example, and example code by use case. --- -# Service Splitter configuration reference +# Service splitter configuration entry reference This reference page describes the structure and contents of service splitter configuration entries. Configure and apply service splitters to redirect a percentage of incoming traffic requests for a service to one or more specific service instances. @@ -244,7 +244,7 @@ Specifies a name for the configuration entry. The name is metadata that you can ### `Namespace` -Specifies the [namespace](/consul/docs/enterprise/namespaces) to apply the configuration entry. +Specifies the [namespace](/consul/docs/multi-tenant/namespace) to apply the configuration entry. #### Values @@ -253,7 +253,7 @@ Specifies the [namespace](/consul/docs/enterprise/namespaces) to apply the confi ### `Partition` -Specifies the [admin partition](/consul/docs/enterprise/admin-partitions) to apply the configuration entry. +Specifies the [admin partition](/consul/docs/multi-tenant/admin-partition) to apply the configuration entry. #### Values @@ -311,7 +311,7 @@ Specifies the name of the service to resolve. Specifies a subset of the service to resolve. A service subset assigns a name to a specific subset of discoverable service instances within a datacenter, such as `version2` or `canary`. All services have an unnamed default subset that returns all healthy instances. -You can define service subsets in a [service resolver configuration entry](/consul/docs/connect/config-entries/service-resolver), which are referenced by their names throughout the other configuration entries. This field overrides the default subset value in the service resolver configuration entry. +You can define service subsets in a [service resolver configuration entry](/consul/docs/reference/config-entry/service-resolver), which are referenced by their names throughout the other configuration entries. This field overrides the default subset value in the service resolver configuration entry. #### Values @@ -320,7 +320,7 @@ You can define service subsets in a [service resolver configuration entry](/cons ### `Splits[].Namespace` -Specifies the [namespace](/consul/docs/enterprise/namespaces) to use in the FQDN when resolving the service. +Specifies the [namespace](/consul/docs/multi-tenant/namespace) to use in the FQDN when resolving the service. 
#### Values @@ -329,7 +329,7 @@ Specifies the [namespace](/consul/docs/enterprise/namespaces) to use in the FQDN ### `Splits[].Partition` -Specifies the [admin partition](/consul/docs/enterprise/admin-partitions) to use in the FQDN when resolving the service. +Specifies the [admin partition](/consul/docs/multi-tenant/admin-partition) to use in the FQDN when resolving the service. #### Values @@ -491,7 +491,7 @@ Specifies a subset of the service to resolve. This field overrides the `DefaultS ### `spec.splits[].namespace` -Specifies the [namespace](/consul/docs/enterprise/namespaces) to use when resolving the service. +Specifies the [namespace](/consul/docs/multi-tenant/namespace) to use when resolving the service. #### Values @@ -500,7 +500,7 @@ Specifies the [namespace](/consul/docs/enterprise/namespaces) to use when resolv ### `spec.splits[].partition` -Specifies which [admin partition](/consul/docs/enterprise/admin-partitions) to use in the FQDN when resolving the service. +Specifies which [admin partition](/consul/docs/multi-tenant/admin-partition) to use in the FQDN when resolving the service. #### Values @@ -784,4 +784,4 @@ spec: - + \ No newline at end of file diff --git a/website/content/docs/connect/config-entries/tcp-route.mdx b/website/content/docs/reference/config-entry/tcp-route.mdx similarity index 83% rename from website/content/docs/connect/config-entries/tcp-route.mdx rename to website/content/docs/reference/config-entry/tcp-route.mdx index e7eda8c1ccb2..1db8d403f614 100644 --- a/website/content/docs/connect/config-entries/tcp-route.mdx +++ b/website/content/docs/reference/config-entry/tcp-route.mdx @@ -1,13 +1,14 @@ --- layout: docs -page_title: TCP Route configuration reference -description: Learn how to configure a TCP Route that is bound to an API gateway on VMs. +page_title: TCP route configuration entry reference +description: >- + Learn how to configure a TCP Route that is bound to an API gateway on VMs. --- -# TCP route configuration reference +# TCP route configuration entry reference This topic provides reference information for the gateway TCP routes configuration -entry. Refer to [Route Resource Configuration](/consul/docs/connect/gateways/api-gateway/configuration/routes) for information +entry. Refer to [Route Resource Configuration](/consul/docs/reference/k8s/api-gateway/routes) for information about configuring API gateways in Kubernetes environments. ## Configuration model @@ -128,7 +129,7 @@ such as applying a configuration entry to a specific cluster. ### `Namespace` -Specifies the Enterprise [namespace](/consul/docs/enterprise/namespaces) to apply to the configuration entry. +Specifies the Enterprise [namespace](/consul/docs/multi-tenant/namespace) to apply to the configuration entry. #### Values @@ -137,7 +138,7 @@ Specifies the Enterprise [namespace](/consul/docs/enterprise/namespaces) to appl ### `Partition` -Specifies the Enterprise [admin partition](/consul/docs/enterprise/admin-partitions) to apply to the configuration entry. +Specifies the Enterprise [admin partition](/consul/docs/multi-tenant/admin-partition) to apply to the configuration entry. #### Values @@ -177,7 +178,7 @@ Specifies the list of TCP-based services to route to. You can specify a maximum ### `Services.Namespace` -Specifies the Enterprise [namespace](/consul/docs/enterprise/namespaces) where the service is located. +Specifies the Enterprise [namespace](/consul/docs/multi-tenant/namespace) where the service is located. 
#### Values @@ -186,7 +187,7 @@ Specifies the Enterprise [namespace](/consul/docs/enterprise/namespaces) where t ### `Services.Partition` -Specifies the Enterprise [admin partition](/consul/docs/enterprise/admin-partitions) where the service is located. +Specifies the Enterprise [admin partition](/consul/docs/multi-tenant/admin-partition) where the service is located. #### Values @@ -230,7 +231,7 @@ Specifies the name of the API gateway to bind to. ### `Parents.Namespace` -Specifies the Enterprise [namespace](/consul/docs/enterprise/namespaces) where the parent is located. +Specifies the Enterprise [namespace](/consul/docs/multi-tenant/namespace) where the parent is located. #### Values @@ -239,7 +240,7 @@ Specifies the Enterprise [namespace](/consul/docs/enterprise/namespaces) where t ### `Parents.Partition` -Specifies the Enterprise [admin partition](/consul/docs/enterprise/admin-partitions) where the parent is located. +Specifies the Enterprise [admin partition](/consul/docs/multi-tenant/admin-partition) where the parent is located. #### Values @@ -248,9 +249,9 @@ Specifies the Enterprise [admin partition](/consul/docs/enterprise/admin-partiti ### `Parents.SectionName` -Specifies the name of the listener defined in the [`api-gateway` configuration](/consul/docs/connect/gateways/api-gateway/configuration/api-gateway) that the route binds to. If the field is configured to an empty string, the route binds to all listeners on the parent gateway. +Specifies the name of the listener defined in the [`api-gateway` configuration](/consul/docs/reference/config-entry/api-gateway) that the route binds to. If the field is configured to an empty string, the route binds to all listeners on the parent gateway. #### Values - Default: `""` -- Data type: string +- Data type: string \ No newline at end of file diff --git a/website/content/docs/reference/config-entry/terminating-gateway.mdx b/website/content/docs/reference/config-entry/terminating-gateway.mdx new file mode 100644 index 000000000000..29942c853b13 --- /dev/null +++ b/website/content/docs/reference/config-entry/terminating-gateway.mdx @@ -0,0 +1,701 @@ +--- +layout: docs +page_title: Terminating gateway configuration entry reference +description: >- + The terminating gateway configuration entry kind defines behavior to secure outgoing communication between the service mesh and non-mesh services. Use the reference guide to learn about `terminating-gateway` config entry parameters and connecting from your service mesh to external or non-mesh services registered with Consul. +--- + +# Terminating gateway configuration entry reference + +The `terminating-gateway` config entry kind (`TerminatingGateway` on Kubernetes) allows you to configure terminating gateways +to proxy traffic from services in the Consul service mesh to services registered with Consul that do not have a +[service mesh sidecar proxy](/consul/docs/connect/proxy). The configuration is associated with the name of a gateway service +and will apply to all instances of the gateway with that name. + +~> [Configuration entries](/consul/docs/fundamentals/config-entry) are global in scope. A configuration entry for a gateway name applies +across all federated Consul datacenters. If terminating gateways in different Consul datacenters need to route to different +sets of services within their datacenter then the terminating gateways **must** be registered with different names. + +See [Terminating Gateway](/consul/docs/north-south/terminating-gateway) for more information. 
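+
+For example (illustrative values only; adjust the service name, address, and port for your environment), a gateway instance can be registered and started so that its service name matches the `Name` field of this config entry:
+
+```shell-session
+$ consul connect envoy -gateway=terminating -register \
+    -service us-west-gateway \
+    -address '10.0.0.10:8443'
+```
+
+A `terminating-gateway` config entry with `Name = "us-west-gateway"` then applies to every instance of the gateway registered under that service name.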
+ +## TLS Origination + +By specifying a path to a [CA file](/consul/docs/reference/config-entry/terminating-gateway#cafile) connections +from the terminating gateway will be encrypted using one-way TLS authentication. If a path to a +[client certificate](/consul/docs/reference/config-entry/terminating-gateway#certfile) +and [private key](/consul/docs/reference/config-entry/terminating-gateway#keyfile) are also specified connections +from the terminating gateway will be encrypted using mutual TLS authentication. + +~> Setting the `SNI` field is strongly recommended when enabling TLS to a service. If this field is not set, +Consul will not attempt to verify the Subject Alternative Name fields in the service's certificate. + +If none of these are provided, Consul will **only** encrypt connections to the gateway and not +from the gateway to the destination service. + +## Wildcard service specification + +Terminating gateways can optionally target all services within a Consul namespace by specifying a wildcard "\*" +as the service name. Configuration options set on the wildcard act as defaults that can be overridden +by options set on a specific service name. + +Note that if the wildcard specifier is used, and some services in that namespace have a service mesh sidecar proxy, +traffic from the mesh to those services will be evenly load-balanced between the gateway and their sidecars. + +## Sample Config Entries + +### Access an external service + + + + +Link gateway named "us-west-gateway" with the billing service. + +Connections to the external service will be unencrypted. + + + +```hcl +Kind = "terminating-gateway" +Name = "us-west-gateway" + +Services = [ + { + Name = "billing" + } +] +``` + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: TerminatingGateway +metadata: + name: us-west-gateway +spec: + services: + - name: billing +``` + +```json +{ + "Kind": "terminating-gateway", + "Name": "us-west-gateway", + "Services": [ + { + "Name": "billing" + } + ] +} +``` + + + + + + +Link gateway named "us-west-gateway" in the default namespace with the billing service in the finance namespace. + +Connections to the external service will be unencrypted. + + + +```hcl +Kind = "terminating-gateway" +Name = "us-west-gateway" +Namespace = "default" + +Services = [ + { + Namespace = "finance" + Name = "billing" + } +] +``` + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: TerminatingGateway +metadata: + name: us-west-gateway +spec: + services: + - name: billing + namespace: finance +``` + +```json +{ + "Kind": "terminating-gateway", + "Name": "us-west-gateway", + "Namespace": "default", + "Services": [ + { + "Namespace": "finance", + "Name": "billing" + } + ] +} +``` + + + + + + +### Access an external service over TLS + + + + +Link gateway named "us-west-gateway" with the billing service, and specify a CA +file to be used for one-way TLS authentication. + +-> **Note**: When not using destinations in transparent proxy mode, you must specify the `CAFile` parameter +and point to a valid CA bundle in order to properly initiate a TLS +connection to the destination service. For more information about configuring a gateway for destinations, refer to [Register an External Service as a Destination](/consul/docs/k8s/connect/terminating-gateways#register-an-external-service-as-a-destination). 
+ + + +```hcl +Kind = "terminating-gateway" +Name = "us-west-gateway" + +Services = [ + { + Name = "billing" + CAFile = "/etc/certs/ca-chain.cert.pem" + SNI = "billing.service.com" + } +] +``` + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: TerminatingGateway +metadata: + name: us-west-gateway +spec: + services: + - name: billing + caFile: /etc/certs/ca-chain.cert.pem + sni: billing.service.com +``` + +```json +{ + "Kind": "terminating-gateway", + "Name": "us-west-gateway", + "Services": [ + { + "Name": "billing", + "CAFile": "/etc/certs/ca-chain.cert.pem", + "SNI": "billing.service.com" + } + ] +} +``` + + + + + + +Link gateway named "us-west-gateway" in the default namespace with the billing service in the finance namespace, +and specify a CA file to be used for one-way TLS authentication. + +-> **Note**: The `CAFile` parameter must be specified _and_ point to a valid CA +bundle in order to properly initiate a TLS connection to the destination service. + + + +```hcl +Kind = "terminating-gateway" +Name = "us-west-gateway" +Namespace = "default" + +Services = [ + { + Namespace = "finance" + Name = "billing" + CAFile = "/etc/certs/ca-chain.cert.pem" + SNI = "billing.service.com" + } +] +``` + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: TerminatingGateway +metadata: + name: us-west-gateway +spec: + services: + - name: billing + namespace: finance + caFile: /etc/certs/ca-chain.cert.pem + sni: billing.service.com +``` + +```json +{ + "Kind": "terminating-gateway", + "Name": "us-west-gateway", + "Namespace": "default", + "Services": [ + { + "Namespace": "finance", + "Name": "billing", + "CAFile": "/etc/certs/ca-chain.cert.pem", + "SNI": "billing.service.com" + } + ] +} +``` + + + + + + +### Access an external service over mutual TLS + + + + +Link gateway named "us-west-gateway" with the billing service, and specify a CA +file, key file, and cert file to be used for mutual TLS authentication. + +-> **Note**: The `CAFile` parameter must be specified _and_ point to a valid CA +bundle in order to properly initiate a TLS connection to the destination service. + + + +```hcl +Kind = "terminating-gateway" +Name = "us-west-gateway" + +Services = [ + { + Name = "billing" + CAFile = "/etc/certs/ca-chain.cert.pem" + KeyFile = "/etc/certs/gateway.key.pem" + CertFile = "/etc/certs/gateway.cert.pem" + SNI = "billing.service.com" + } +] +``` + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: TerminatingGateway +metadata: + name: us-west-gateway +spec: + services: + - name: billing + caFile: /etc/certs/ca-chain.cert.pem + keyFile: /etc/certs/gateway.key.pem + certFile: /etc/certs/gateway.cert.pem + sni: billing.service.com +``` + +```json +{ + "Kind": "terminating-gateway", + "Name": "us-west-gateway", + "Services": [ + { + "Name": "billing", + "CAFile": "/etc/certs/ca-chain.cert.pem", + "KeyFile": "/etc/certs/gateway.key.pem", + "CertFile": "/etc/certs/gateway.cert.pem", + "SNI": "billing.service.com" + } + ] +} +``` + + + + + + +Link gateway named "us-west-gateway" in the default namespace with the billing service in the finance namespace. +Also specify a CA file, key file, and cert file to be used for mutual TLS authentication. + +-> **Note**: The `CAFile` parameter must be specified _and_ point to a valid CA +bundle in order to properly initiate a TLS connection to the destination service. 
+ + + +```hcl +Kind = "terminating-gateway" +Name = "us-west-gateway" +Namespace = "default" + +Services = [ + { + Namespace = "finance" + Name = "billing" + CAFile = "/etc/certs/ca-chain.cert.pem" + KeyFile = "/etc/certs/gateway.key.pem" + CertFile = "/etc/certs/gateway.cert.pem" + SNI = "billing.service.com" + } +] +``` + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: TerminatingGateway +metadata: + name: us-west-gateway +spec: + services: + - name: billing + namespace: finance + caFile: /etc/certs/ca-chain.cert.pem + keyFile: /etc/certs/gateway.key.pem + certFile: /etc/certs/gateway.cert.pem + sni: billing.service.com +``` + +```json +{ + "Kind": "terminating-gateway", + "Name": "us-west-gateway", + "Namespace": "default", + "Services": [ + { + "Namespace": "finance", + "Name": "billing", + "CAFile": "/etc/certs/ca-chain.cert.pem", + "KeyFile": "/etc/certs/gateway.key.pem", + "CertFile": "/etc/certs/gateway.cert.pem", + "SNI": "billing.service.com" + } + ] +} +``` + + + + + + +### Override connection parameters for a specific service + + + + +Link gateway named "us-west-gateway" with all services in the datacenter, and configure default certificates for mutual TLS. + +Override the SNI and CA file used for connections to the billing service. + + + + + +```hcl +Kind = "terminating-gateway" +Name = "us-west-gateway" + +Services = [ + { + Name = "*" + CAFile = "/etc/common-certs/ca-chain.cert.pem" + KeyFile = "/etc/common-certs/gateway.key.pem" + CertFile = "/etc/common-certs/gateway.cert.pem" + }, + { + Name = "billing" + CAFile = "/etc/billing-ca/ca-chain.cert.pem" + SNI = "billing.service.com" + } +] +``` + + + + + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: TerminatingGateway +metadata: + name: us-west-gateway +spec: + services: + - name: '*' + caFile: /etc/common-certs/ca-chain.cert.pem + keyFile: /etc/common-certs/gateway.key.pem + certFile: /etc/common-certs/gateway.cert.pem + - name: billing + caFile: /etc/billing-ca/ca-chain.cert.pem + sni: billing.service.com +``` + + + + + +```json +{ + "Kind": "terminating-gateway", + "Name": "us-west-gateway", + "Services": [ + { + "Name": "*", + "CAFile": "/etc/common-certs/ca-chain.cert.pem", + "KeyFile": "/etc/common-certs/gateway.key.pem", + "CertFile": "/etc/common-certs/gateway.cert.pem" + }, + { + "Name": "billing", + "CAFile": "/etc/billing-ca/ca-chain.cert.pem", + "SNI": "billing.service.com" + } + ] +} +``` + + + + + + + + +Link gateway named "us-west-gateway" in the default namespace with all services in the finance namespace, +and configure default certificates for mutual TLS. 
+ +Override the SNI and CA file used for connections to the billing service: + + + + + +```hcl +Kind = "terminating-gateway" +Name = "us-west-gateway" +Namespace = "default" + +Services = [ + { + Namespace = "finance" + Name = "*" + CAFile = "/etc/common-certs/ca-chain.cert.pem" + KeyFile = "/etc/common-certs/gateway.key.pem" + CertFile = "/etc/common-certs/gateway.cert.pem" + }, + { + Namespace = "finance" + Name = "billing" + CAFile = "/etc/billing-ca/ca-chain.cert.pem" + SNI = "billing.service.com" + } +] +``` + + + + + +```yaml +apiVersion: consul.hashicorp.com/v1alpha1 +kind: TerminatingGateway +metadata: + name: us-west-gateway +spec: + services: + - name: '*' + namespace: finance + caFile: /etc/common-certs/ca-chain.cert.pem + keyFile: /etc/common-certs/gateway.key.pem + certFile: /etc/common-certs/gateway.cert.pem + - name: billing + namespace: finance + caFile: /etc/billing-ca/ca-chain.cert.pem + sni: billing.service.com +``` + + + + + +```json +{ + "Kind": "terminating-gateway", + "Name": "us-west-gateway", + "Namespace": "default", + "Services": [ + { + "Namespace": "finance", + "Name": "*", + "CAFile": "/etc/common-certs/ca-chain.cert.pem", + "KeyFile": "/etc/common-certs/gateway.key.pem", + "CertFile": "/etc/common-certs/gateway.cert.pem" + }, + { + "Namespace": "finance", + "Name": "billing", + "CAFile": "/etc/billing-ca/ca-chain.cert.pem", + "SNI": "billing.service.com" + } + ] +} +``` + + + + + + + + +## Available Fields + +', + yaml: false, + }, + { + name: 'Namespace', + type: `string: "default"`, + enterprise: true, + description: + 'Specifies the namespace to which the configuration entry will apply. This must match the namespace in which the gateway is registered.' + + ' If omitted, the namespace will be inherited from [the request](/consul/api-docs/config#ns)' + + ' or will default to the `default` namespace.', + yaml: false, + }, + { + name: 'Partition', + type: `string: "default"`, + enterprise: true, + description: + 'Specifies the admin partition to which the configuration entry will apply. This must match the partition in which the gateway is registered.' + + ' If omitted, the partition will be inherited from [the request](/consul/api-docs/config)' + + ' or will default to the `default` partition.', + yaml: false, + }, + { + name: 'Meta', + type: 'map: nil', + description: + 'Specifies arbitrary KV metadata pairs. Added in Consul 1.8.4.', + yaml: false, + }, + { + name: 'metadata', + children: [ + { + name: 'name', + description: 'Set to the name of the gateway being configured.', + }, + { + name: 'namespace', + description: + 'If running Consul Community Edition, the namespace is ignored (see [Kubernetes Namespaces in Consul CE](/consul/docs/k8s/crds#consul-ce)). If running Consul Enterprise see [Kubernetes Namespaces in Consul Enterprise](/consul/docs/k8s/crds#consul-enterprise) for more details.', + }, + ], + hcl: false, + }, + { + name: 'Services', + type: 'array: ', + description: `A list of services or destinations to link + with the gateway. The gateway will proxy traffic to these services. These linked services + must be registered with Consul for the gateway to discover their addresses. They must also + be registered in the same Consul datacenter as the terminating gateway. + Destinations are an exception to this requirement, and only need to be defined as a service-defaults configuration entry in the same datacenter. 
+ If Consul ACLs are enabled, the Terminating Gateway's ACL token must grant service:write for all linked services.`, + children: [ + { + name: 'Name', + type: 'string: ""', + description: + 'The name of the service to link with the gateway. If the wildcard specifier, `*`, is provided, then ALL services within the namespace will be linked with the gateway.', + }, + { + name: 'Namespace', + enterprise: true, + type: 'string: ""', + description: + 'The namespace of the service. If omitted, the namespace will be inherited from the config entry.', + }, + { + name: 'CAFile', + type: 'string: ""', + description: `A file path to a PEM-encoded certificate authority. + The file must be present on the proxy's filesystem. + The certificate authority is used to verify the authenticity of the service linked with the gateway. + It can be provided along with a CertFile and KeyFile for mutual TLS authentication, or on its own + for one-way TLS authentication. If none is provided the gateway will not encrypt the traffic to the destination.`, + }, + { + name: 'CertFile', + type: 'string: ""', + description: { + hcl: `A file path to a PEM-encoded certificate. + The file must be present on the proxy's filesystem. + The certificate is provided servers to verify the gateway's authenticity. It must be provided if a \`KeyFile\` was specified.`, + yaml: `A file path to a PEM-encoded certificate. + The file must be present on the proxy's filesystem. + The certificate is provided servers to verify the gateway's authenticity. It must be provided if a \`keyFile\` was specified.`, + }, + }, + { + name: 'KeyFile', + type: 'string: ""', + description: { + hcl: `A file path to a PEM-encoded private key. + The file must be present on the proxy's filesystem. + The key is used with the certificate to verify the gateway's authenticity. It must be provided along if a \`CertFile\` was specified.`, + yaml: `A file path to a PEM-encoded private key. + The file must be present on the proxy's filesystem. + The key is used with the certificate to verify the gateway's authenticity. It must be provided along if a \`certFile\` was specified.`, + }, + }, + { + name: 'SNI', + type: 'string: ""', + description: + `An optional hostname or domain name to specify during the TLS handshake. This option will also configure [strict SAN matching](https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/transport_sockets/tls/v3/common.proto#envoy-v3-api-field-extensions-transport-sockets-tls-v3-certificatevalidationcontext-match-typed-subject-alt-names), which requires + the external services to have certificates with SANs, not having which will result in \`CERTIFICATE_VERIFY_FAILED\` error.`, + }, + { + name: 'DisableAutoHostRewrite', + type: 'bool: ""', + description: + 'When set to true, Terminating Gateway will not modify the incoming requests host header for this service.', + }, + ], + }, + ]} +/> + +## ACLs + +Configuration entries may be protected by [ACLs](/consul/docs/secure/acl). + +Reading a `terminating-gateway` config entry requires `service:read` on the `Name` +field of the config entry. + +Creating, updating, or deleting a `terminating-gateway` config entry requires +`operator:write`. 
\ No newline at end of file diff --git a/website/content/docs/reference/consul-template/cli.mdx b/website/content/docs/reference/consul-template/cli.mdx new file mode 100644 index 000000000000..cbbe53114b2c --- /dev/null +++ b/website/content/docs/reference/consul-template/cli.mdx @@ -0,0 +1,109 @@ +--- +layout: docs +page_title: Consul Template CLI reference +description: >- + Consul Template is a tool available as a distinct binary that enables dynamic application configuration and secrets rotation for Consul deployments based on Go templates. +--- + +# Consul Template CLI reference + +This topic describes the available command-line options for the Consul Template binary. + +## General + +These options are top level values to configure Consul Template. + +- `-help` - Prints the full reference for the `consul-template` command. + +- `-config=` - Sets the path to a configuration file or folder on disk. This can be specified multiple times to load multiple files or folders. If multiple values are given, they are merged left-to-right, and CLI arguments take the top-most precedence. +- `-default-left-delimiter` - The default left delimiter for templating. +- `-default-right-delimiter` - The default right delimiter for templating. +- `-dry` - Print generated templates to stdout instead of rendering. +- `-kill-signal=` - Signal to listen to gracefully terminate the process. +- `-log-level=` - Set the logging level. Allowed values are `debug`, `info`, `warn`, and `err`. +- `-parse-only` - Do not process templates. Parse them for structure. +- `-pid-file=` - Path on disk to write the PID of the process. +- `-reload-signal=` - Signal to listen to reload configuration. +- `-syslog` - Send the output to syslog instead of standard error and standard out. The syslog facility defaults to `LOCAL0` and can be changed using a configuration file. +- `-syslog-facility=` - Set the facility where syslog should log - if this attribute is supplied, the `-syslog` flag must also be supplied. +- `-syslog-name=` - Set the name of the application which will appear in syslog, if this attribute is supplied, the `-syslog` flag must also be supplied. +- `-template=